Artificial Intelligence and Decision Making

Artificial Intelligence and Decision Making, 2024, Issue 3, Pages 32–41
DOI: https://doi.org/10.14357/20718594240303
(Mi iipr596)
 

Machine learning, neural networks

Causes of content distortion: analysis and classification of hallucinations in large GPT language models

M. Sh. Madzhumder, D. D. Begunova

Moscow State Linguistic University, Moscow, Russia
Abstract: The article examines hallucinations produced by two versions of the GPT large language model – GPT-3.5-turbo and GPT-4. The primary aim of the study is to investigate the possible origins of hallucinations, to classify them, and to develop strategies for addressing them. The work identifies challenges that can lead to the generation of content that does not correspond to factual data and misleads users. Detecting and eliminating hallucinations plays an important role in the development of artificial intelligence by improving natural language processing capabilities. The results of the study are of practical relevance for developers and users of language models, as the proposed approaches improve the quality and reliability of generated content.
Keywords: AI system hallucinations, GPT, large language models, artificial intelligence.
Document Type: Article
Language: Russian
Citation: M. Sh. Madzhumder, D. D. Begunova, “Causes of content distortion: analysis and classification of hallucinations in large GPT language models”, Artificial Intelligence and Decision Making, 2024, no. 3, 32–41
Citation in format AMSBIB
\Bibitem{MadBeg24}
\by M.~Sh.~Madzhumder, D.~D.~Begunova
\paper Causes of content distortion: analysis and classification of hallucinations in large GPT language models
\jour Artificial Intelligence and Decision Making
\yr 2024
\issue 3
\pages 32--41
\mathnet{http://mi.mathnet.ru/iipr596}
\crossref{https://doi.org/10.14357/20718594240303}
\elib{https://elibrary.ru/item.asp?id=69556042}
Linking options:
  • https://www.mathnet.ru/eng/iipr596
  • https://www.mathnet.ru/eng/iipr/y2024/i3/p32