Abstract by Wesley Ackerman
Cross-Domain Unsupervised Image-to-Image Translation
The recently published "Multimodal Unsupervised Image-to-Image Translation" proposes a deep neural network that performs image-to-image translation between two unpaired sets of images. It disentangles each image's content and style and translates a given image between the two domains. However, it requires that the two image domains be closely related and share all content. We propose an augmented cross-domain algorithm that uses a separate encoder network to translate a subset of the content between the two domains. This allows the model to learn mappings from common objects in one domain to their analogues in the other. Our model translates more effectively between domains that have unshared content or are less closely related, such as synthetic-to-real cityscapes or landscape-to-cityscape.