Bias in Text Generative Open AI
Sai Asrith Devisetti
Sai Asrith Devisetti, Department of Computer Science and Engineering, International Institute of Information Technology, Hyderabad (Telangana), India.
Manuscript received on 08 January 2024 | Revised Manuscript received on 17 January 2024 | Manuscript Accepted on 15 February 2024 | Manuscript published on 30 May 2024 | PP: 8-10 | Volume-4 Issue-2, February 2024 | Retrieval Number: 100.1/ijainn.B108404020224 | DOI: 10.54105/ijainn.B1084.04020224
© The Authors. Published by Lattice Science Publication (LSP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: The rise of text generation models, especially those powered by advanced deep learning architectures such as OpenAI's GPT-3, has unquestionably transformed a wide range of natural language processing applications. However, these models have recently come under scrutiny because of the inherent biases that are often evident in the text they generate. This paper critically examines the issue of bias in text generation models, exploring the challenges it poses, the ethical implications it entails, and potential strategies for mitigating it. We first discuss the origins of this bias, ways to minimize it, and a mathematical representation of bias.
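As context for the mathematical representation of bias mentioned above, a common statistical definition (which may differ from the specific formulation developed in the body of the paper) is Bias(θ̂) = E[θ̂] − θ, where θ̂ is an estimator of a true quantity θ; the estimator is unbiased when this difference is zero.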
Keywords: Deep Learning, Generated Text, AI
Scope of the Article: AI