Machine learning, neural networks
Causes of content distortion: analysis and classification of hallucinations in large GPT language models
M. Sh. Madzhumder, D. D. Begunova
Moscow State Linguistic University, Moscow, Russia
Abstract:
The article examines hallucinations produced by two versions of the GPT large language model: GPT-3.5-turbo and GPT-4. The primary aim of the study is to investigate the possible origins of hallucinations, to classify them, and to develop strategies for addressing them. The work identifies challenges that can lead to the generation of content that does not correspond to factual data and misleads users. Detecting and eliminating hallucinations plays an important role in the development of artificial intelligence by improving natural language processing capabilities. The results of the study are of practical relevance to developers and users of language models, as the proposed approaches improve the quality and reliability of the generated content.
Keywords:
AI system hallucinations, GPT, large language models, artificial intelligence.
Citation:
M. Sh. Madzhumder, D. D. Begunova, “Causes of content distortion: analysis and classification of hallucinations in large GPT language models”, Artificial Intelligence and Decision Making, 2024, no. 3, 32–41
Linking options:
https://www.mathnet.ru/eng/iipr596
https://www.mathnet.ru/eng/iipr/y2024/i3/p32