Immersive Gameplay via Improved Natural Language Understanding
Berkeley Andrus
Abstract
Many popular video games feature non-player characters that fight alongside the player. These characters are usually either controlled entirely by artificial intelligence with no player input or, if the player can direct them, receive orders through complicated menus that interrupt gameplay. Recent improvements in automatic speech recognition software and language representation models have set the stage for new, more immersive methods of interfacing with non-player characters through natural spoken commands. We present several promising methods for classifying user utterances and extracting specific executable commands from them, using embedded representations of both structured commands and user utterances. This framework enables a more flexible speech interface than those used in previous speech-controlled games. We show how the extracted commands can be used to improve downstream AI behavior and increase player immersion and engagement. We also show how our model leverages small sets of example data to outperform existing industrial utterance classification systems.