BYU

Abstract by Wesley Ackerman

Personal Information


Presenter's Name

Wesley Ackerman

Degree Level

Master's

Co-Authors

Tony Martinez

Abstract Information


Department

Computer Science

Faculty Advisor

Tony Martinez

Title

Cross-Domain Unsupervised Image-to-Image Translation

Abstract

The recently published "Multimodal Unsupervised Image-to-Image Translation" proposes a deep neural network that performs image-to-image translation between two unpaired sets of images. It disentangles an image's content from its style and translates a given image between the two domains. However, it requires that the two image domains be closely related and share all of their content. We propose an augmented cross-domain algorithm that uses a separate encoder network to translate a subset of the content between the two domains. This allows the model to learn mappings from common objects in one domain to their analogues in the other domain. Our model translates more effectively between domains that contain unshared content or are less closely related, such as synthetic to real cityscapes, or landscapes to cityscapes.
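The sketch below (PyTorch) illustrates the general idea described in the abstract: MUNIT-style content and style encoders, plus an additional encoder that maps part of the content code from one domain toward the other. All module names, layer sizes, and the usage at the end are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Encodes an image into a spatial content code (assumed architecture)."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim * 2, dim * 4, 4, 2, 1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Encodes an image into a low-dimensional style vector (assumed architecture)."""
    def __init__(self, in_ch=3, dim=64, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim * 2, style_dim, 1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)

class CrossDomainContentMapper(nn.Module):
    """Hypothetical separate encoder: maps the unshared portion of a domain-A
    content code toward its analogue in domain B."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, 1, 1),
        )
    def forward(self, content_a):
        # Translate domain-A content features toward domain-B content space.
        return self.net(content_a)

# Usage sketch: encode an image from domain A, map its content across domains;
# a domain-B decoder (omitted here) would combine the result with a domain-B style code.
enc_c_a, enc_s_b = ContentEncoder(), StyleEncoder()
mapper = CrossDomainContentMapper()
x_a = torch.randn(1, 3, 128, 128)
content_b_like = mapper(enc_c_a(x_a))
```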