Abstract by Wesley Ackerman

Personal Information


Presenter's Name

Wesley Ackerman

Degree Level

Master's

Abstract Information


Department

Computer Science

Faculty Advisor

Tony Martinez

Title

Semantic-driven Cross-domain Unsupervised Image-to-Image Translation

Abstract

We propose SEmantic-driven Cross-domain Unsupervised Image-to-image Translation (SECUNIT), a method that trains an image translation model by learning encodings for the semantic segmentation of images. Segmentations are translated between domains so that unshared objects can be translated correctly, and they then serve as the basis for image reconstruction. This ensures that the original structure of the image is preserved, resulting in fewer artifacts. Our model introduces semantic and image feature information at multiple levels within the decoder network, allowing it to recreate both coarse and fine-grained detail in the images and improving its ability to render realistic, error-free detail. The model also uses a separate network to translate a subset of the content between the two domains, allowing it to learn mappings between structurally similar objects and improving translation in cross-domain problems. SECUNIT improves image translation outcomes for domains that have unshared objects, making it useful for dataset augmentation and augmented reality.
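
To illustrate the multi-level feature injection the abstract describes, here is a minimal PyTorch sketch of a decoder that concatenates semantic and image feature maps into each upsampling stage. This is not the authors' implementation; the class name `MultiLevelDecoder`, the channel sizes, the instance normalization, and the nearest-neighbor upsampling are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelDecoder(nn.Module):
    """Decoder sketch: semantic and image features are injected at every
    upsampling level, so both coarse structure and fine detail can be
    reconstructed (hypothetical architecture, not the SECUNIT code)."""

    def __init__(self, base_ch=256, sem_ch=32, img_ch=32, levels=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = base_ch
        for _ in range(levels):
            # Each block consumes the running feature map plus the semantic
            # and image feature maps resized to the current resolution.
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch + sem_ch + img_ch, ch // 2, kernel_size=3, padding=1),
                nn.InstanceNorm2d(ch // 2),
                nn.ReLU(inplace=True),
            ))
            ch //= 2
        self.to_rgb = nn.Conv2d(ch, 3, kernel_size=3, padding=1)

    def forward(self, x, sem_feats, img_feats):
        # sem_feats / img_feats: lists of per-level feature maps, coarse to fine.
        for block, s, f in zip(self.blocks, sem_feats, img_feats):
            x = F.interpolate(x, scale_factor=2, mode="nearest")
            s = F.interpolate(s, size=x.shape[-2:], mode="nearest")
            f = F.interpolate(f, size=x.shape[-2:], mode="nearest")
            x = block(torch.cat([x, s, f], dim=1))  # inject features at this level
        return torch.tanh(self.to_rgb(x))

Under these assumptions, the encodings of the translated segmentation would supply `sem_feats` while the image encoder would supply `img_feats`, so every decoding level sees both the structural layout and the appearance information.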