Abstract:
In this talk, I will provide a brief survey of neural entity linking (EL) systems developed since 2015 as a result of the "deep learning revolution" in NLP. I will try to systematize the design features of neural entity linking systems and compare their performance to the best classic methods on common benchmarks. I will distill the generic architectural components of a neural EL system, such as candidate generation and entity ranking, and summarize the prominent methods for each of them, including approaches to mention encoding based on the self-attention architecture.
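As a rough illustration of this generic architecture, the sketch below shows a two-stage pipeline: cheap candidate generation by alias-table lookup, followed by neural ranking of candidates against a context-aware mention embedding. It is only a minimal sketch under assumed components; the names (alias_table, entity_vectors, the priors) are hypothetical placeholders and not code from the survey.

```python
# Minimal sketch of the generic two-stage neural EL pipeline:
# (1) candidate generation via alias-table lookup, (2) neural candidate ranking.
# All names below (alias_table, entity_vectors) are hypothetical placeholders.
from typing import Dict, List, Tuple
import numpy as np

def generate_candidates(mention: str,
                        alias_table: Dict[str, List[Tuple[str, float]]],
                        top_k: int = 30) -> List[Tuple[str, float]]:
    """Return up to top_k (entity_id, prior) pairs for a surface form,
    e.g. with priors estimated from Wikipedia hyperlink counts."""
    candidates = alias_table.get(mention.lower(), [])
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:top_k]

def rank_candidates(mention_vec: np.ndarray,
                    candidates: List[Tuple[str, float]],
                    entity_vectors: Dict[str, np.ndarray]) -> str:
    """Score each candidate by the dot product between a context-aware mention
    embedding (e.g. from a self-attention encoder) and a pre-trained entity
    embedding, combined with the candidate prior; return the best entity."""
    scores = {eid: float(mention_vec @ entity_vectors[eid]) + prior
              for eid, prior in candidates}
    return max(scores, key=scores.get)

# Toy usage with random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
alias_table = {"paris": [("Paris_France", 0.9), ("Paris_Texas", 0.1)]}
entity_vectors = {eid: rng.normal(size=128) for eid, _ in alias_table["paris"]}
mention_vec = rng.normal(size=128)
print(rank_candidates(mention_vec,
                      generate_candidates("Paris", alias_table),
                      entity_vectors))
```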
In addition, I will show how various modifications of this general neural entity linking architecture can be grouped by several common themes: joint entity recognition and linking, models for global linking, domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches. Since many neural models take advantage of pre-trained entity embeddings to improve their generalization capabilities, I will also briefly discuss several types of entity embeddings. Finally, I will briefly discuss classic applications of entity linking, focusing on the recently emerged use case of enhancing deep pre-trained masked language models such as BERT. The materials are based on the following survey: Özge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, and Chris Biemann (2021): Neural Entity Linking: A Survey of Models Based on Deep Learning. CoRR abs/2006.00575 (https://arxiv.org/abs/2006.00575).
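To make the point about entity embeddings and zero-shot generalization concrete, here is a small sketch of the bi-encoder idea: entities are represented by encoding their textual descriptions, so a previously unseen entity can be scored without retraining. The encode_text function is a trivial hashing stand-in for a real self-attention encoder, and all names are hypothetical, not code from the survey.

```python
# Sketch of description-based entity embeddings for zero-shot linking:
# an entity unseen during training can still be scored, because its
# embedding is computed from its textual description on the fly.
import hashlib
import numpy as np

def encode_text(text: str, dim: int = 128) -> np.ndarray:
    """Toy deterministic encoder; a real system would use a BERT-like
    self-attention encoder for both mention contexts and descriptions."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    vec = np.random.default_rng(seed).normal(size=dim)
    return vec / np.linalg.norm(vec)

def link_zero_shot(mention_in_context: str,
                   candidate_descriptions: Dict[str, str]) -> str:
    """Bi-encoder scoring: cosine similarity between the encoded mention
    context and each encoded entity description."""
    m = encode_text(mention_in_context)
    scores = {eid: float(m @ encode_text(desc))
              for eid, desc in candidate_descriptions.items()}
    return max(scores, key=scores.get)

# Usage example; with the hashing stand-in the scores are meaningless,
# the point is only that unseen entities need nothing but a description.
from typing import Dict
print(link_zero_shot(
    "He moved to [Paris], the capital of France.",
    {"Paris_France": "Paris is the capital and largest city of France.",
     "Paris_Texas": "Paris is a city in Lamar County, Texas."}))
```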