Zapiski Nauchnykh Seminarov POMI
Zapiski Nauchnykh Seminarov POMI, 2023, Volume 529, Pages 102–122 (Mi znsl7422)  

IMAD: IMage-Augmented multi-modal dialogue

V. Moskvoretskii, A. Frolov, D. Kuznetsov

DeepPavlov.ai
Abstract: Dialogue systems currently achieve high performance on text-based communication, but they have not yet effectively incorporated visual information, which remains a significant challenge. Moreover, existing models that incorporate images in dialogue generation focus on discussing the image itself. Our approach offers a novel perspective on multi-modal dialogue systems: it interprets the image in the context of the dialogue. In doing so, we aim to expand the capabilities of current dialogue systems and move them from a single modality (text) to multi-modality. However, validated English datasets containing both images and dialogue contexts for this task are lacking. We therefore propose a two-stage approach to automatically construct a multi-modal dialogue dataset. In the first stage, we use text-to-image similarity and sentence similarity to identify utterances that could be replaced with an image. In the second stage, we replace those utterances by selecting a subset of relevant images and filtering them with a visual question answering model. Using this approach, together with additional labeling, we create the IMage-Augmented multi-modal Dialogue dataset (IMAD), which can serve as a validated dataset for this task. We also propose a baseline model trained on this dataset, which outperforms both a model trained on the same data without images and BlenderBot.
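The first stage of the pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper uses learned text-to-image similarity and sentence similarity models, whereas here a simple bag-of-words cosine over hypothetical image captions stands in for the learned scores, and the threshold value is an assumption.

```python
import math
from collections import Counter

def bow(text):
    # Bag-of-words vector as a Counter (stand-in for a learned embedding).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_replaceable(utterances, image_captions, sim_threshold=0.4):
    """Stage-1 sketch: flag utterances whose best-matching image caption
    exceeds a similarity threshold, i.e. utterances that could plausibly
    be replaced with an image. Returns (utterance_index, score) pairs."""
    flagged = []
    for i, utt in enumerate(utterances):
        best = max(cosine(bow(utt), bow(cap)) for cap in image_captions)
        if best >= sim_threshold:
            flagged.append((i, best))
    return flagged
```

In the actual dataset construction, the flagged utterances would then go through the second stage: retrieving a subset of candidate images and filtering them with a visual question answering model before the replacement is accepted.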
Key words and phrases: natural language processing, deep learning, machine learning, IMAD, dialogue dataset, multi-modal dataset, dialogue systems, multi-modality.
Received: 06.09.2023
Document Type: Article
UDC: 81.322.2
Language: English
Citation: V. Moskvoretskii, A. Frolov, D. Kuznetsov, “IMAD: IMage-Augmented multi-modal dialogue”, Investigations on applied mathematics and informatics. Part II–1, Zap. Nauchn. Sem. POMI, 529, POMI, St. Petersburg, 2023, 102–122
Citation in format AMSBIB
\Bibitem{MosFroKuz23}
\by V.~Moskvoretskii, A.~Frolov, D.~Kuznetsov
\paper IMAD: IMage-Augmented multi-modal dialogue
\inbook Investigations on applied mathematics and informatics. Part~II--1
\serial Zap. Nauchn. Sem. POMI
\yr 2023
\vol 529
\pages 102--122
\publ POMI
\publaddr St.~Petersburg
\mathnet{http://mi.mathnet.ru/znsl7422}
Linking options:
  • https://www.mathnet.ru/eng/znsl7422
  • https://www.mathnet.ru/eng/znsl/v529/p102