Abstract by Natalie Turley
We aim to create a universal decoder capable of mapping embedding spaces back to accurate text representations. Currently, text may be embedded, or encoded, by many different methods, including language models such as InferSent, fastText, the Universal Sentence Encoder, and BERT. These methods turn text into numeric vectors that are easier for computers to manipulate; the embedded representations can then be turned back into text through a decoding process. We intend to build a decoder that is versatile enough to decode from a range of embedding spaces, and we expect that it will help us better understand the geometry and other structural properties of those spaces.
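The encode-then-decode pipeline described above can be illustrated with a toy sketch (not any of the models named here): a hashed bag-of-words "embedding" and a nearest-neighbor "decoder" that searches a small candidate set by cosine similarity. The `embed` and `decode` functions and the candidate list are hypothetical, chosen only to make the round trip concrete.

```python
import math

def embed(text, dim=16):
    # Toy embedding: hash each word into a fixed-size vector (assumption,
    # stands in for a real encoder such as BERT or InferSent).
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def decode(vec, candidates):
    # Toy decoder: return the candidate text whose embedding has the
    # highest cosine similarity with the query vector.
    return max(candidates, key=lambda c: sum(a * b for a, b in zip(vec, embed(c))))

candidates = ["the cat sat", "dogs run fast", "hello world"]
v = embed("the cat sat")
print(decode(v, candidates))  # recovers "the cat sat"
```

A real universal decoder would have to invert far richer encoders than this hash trick, but the sketch shows the shape of the problem: given only a vector, recover the text that produced it.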