Abstract by Zachary Brown

Personal Information

Presenter's Name

Zachary Brown


Nathan Robinson

Degree Level




Abstract Information


Computer Science

Faculty Advisor

Nancy Fulda


Towards Neural Program Interfaces



The research community has begun to investigate how natural language models produced by deep learning might be 'controlled' to yield desired output. However, current solutions to this problem of control follow the same research paradigm: develop a new data set, train a new language model on that data set, and repeat. Drawing inspiration from Application Programming Interfaces (APIs), we recast controllable natural language generation as the problem of learning to interface with a pretrained language model to generate desired linguistic output, thereby removing the overhead of retraining a language model from scratch. In this new training paradigm, a neural network model (which we refer to as a Rudimentary Neural Program Interface, or R-NPI) seeks to control a pretrained language model at the activation level, in real time, to produce desired outputs without making any permanent changes to the original language model. In exploring this paradigm, we experiment with several R-NPI models trained to control the outputs of their host pretrained network(s) and present our preliminary results here.
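The activation-level intervention described in the abstract can be sketched in miniature. The following NumPy illustration is my own hedged reconstruction, not the authors' implementation: a tiny frozen "host" network stands in for the pretrained language model, and a small linear map stands in for an R-NPI. The NPI adds an offset to the host's hidden activations at run time; the host's own weights are never modified. All names (`host_forward`, `W_npi`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "host" network: two fixed linear layers, a stand-in for a
# pretrained language model. These weights are never updated.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def host_forward(x, npi=None):
    """Run the frozen host network; optionally let an NPI perturb the
    hidden activations in real time (the host weights stay untouched)."""
    hidden = np.tanh(W1 @ x)
    if npi is not None:
        hidden = hidden + npi(hidden)  # activation-level intervention
    return W2 @ hidden

# A rudimentary NPI: a small map from hidden activations to an additive
# offset. Here its weights are random for illustration; in the paradigm
# described above they would be trained to steer the host toward
# desired outputs.
W_npi = 0.1 * rng.standard_normal((8, 8))
npi = lambda h: W_npi @ h

x = rng.standard_normal(4)
plain = host_forward(x)            # host output with no intervention
steered = host_forward(x, npi=npi) # host output under NPI control
```

The key property this sketch demonstrates is that control happens entirely through the activations: `W1` and `W2` are identical before and after the steered forward pass, so removing the NPI restores the original model exactly.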