Linguistic Embeddings as a Common-Sense Knowledge Repository: Challenges and Opportunities
Nancy Fulda

Abstract
Many applications of pre-trained linguistic embedding models use them as inputs for end-to-end tasks such as dialog modeling, machine translation, and question answering. We present an alternate paradigm: Rather than treating pre-trained embeddings as input features, we treat them as common-sense knowledge repositories that can be queried using simple mathematical operations within the embedding space, without the need for additional training. To validate this paradigm, we apply simple distance metrics to reasoning tasks such as threat detection, emotional classification, and sentiment analysis. The results provide a valuable proof of concept that this form of common-sense reasoning, or "reasoning in the linguistic domain", lies within the grasp of the research community.
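The querying paradigm described above can be illustrated with a minimal sketch. The three-dimensional vectors and the anchor words below are hypothetical stand-ins chosen for illustration; in practice the vectors would come from a pre-trained embedding model, and the paper's actual tasks and metrics may differ.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings; real applications would load pre-trained
# vectors (e.g. word2vec or GloVe) instead of these toy values.
embeddings = {
    "attack": (0.9, 0.1, 0.0),
    "hug":    (0.1, 0.9, 0.2),
    "stab":   (0.8, 0.2, 0.1),
}

def is_threat(word, threat_anchor="attack", safe_anchor="hug"):
    """Toy threat detector: a word counts as threatening if its
    embedding lies closer (by cosine similarity) to the threat
    anchor than to the safe anchor -- no additional training needed."""
    v = embeddings[word]
    return (cosine_similarity(v, embeddings[threat_anchor])
            > cosine_similarity(v, embeddings[safe_anchor]))
```

With these toy vectors, `is_threat("stab")` returns `True` because "stab" is nearer the threat anchor than the safe one; the same nearest-anchor pattern extends naturally to sentiment or emotion labels by swapping in different anchor words.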