Abstract by Annie Larkins

Personal Information

Presenter's Name

Annie Larkins

Degree Level



Kolby Nottingham
Christopher Rytting
Chris Sypherd

Abstract Information


Computer Science

Faculty Advisor

David Wingate


Commonsense Inference Using Large Language Models


Intelligent agents need to reason abstractly about concrete environments. However, traditional AI has long struggled to ground symbols in the real world. In response, we propose that introducing the inductive bias of language models can provide a conceptually grounded understanding of the world, helping models learn more quickly and generalize more broadly. The structure of language encodes the structure of the world it describes, and can lend a logic to the knowledge that learners acquire. This implies faster training for agents whose task and environment can be modeled by language, along with the additional benefit of interpretable decision-making expressed in natural language. We demonstrate that a learner leveraging language comprehension can learn and predict the behavior of a symbolic dynamical system more effectively than a baseline agent trained from scratch.