Abstract by Patrick Hammond
Deep Synthetic Noise Generation for RGB-D Dataset Augmentation
The increasing availability of consumer-grade depth cameras has driven the development of new depth-aware technologies in recent years. Unfortunately, most depth cameras produce unreliable readings that may contain significant noise or dropout in the depth measurements. Efforts to correct these depth maps via deep learning are limited by the need for large datasets of hand-corrected ground truths, which are time-consuming and difficult to acquire. We therefore create two generative machine-learning models, a conditional generative adversarial network (cGAN) and a modified variational autoencoder (VAE), that augment depth-completion datasets by damaging existing ground-truth depth maps in believable, realistic ways. This approach artificially expands existing datasets by generating additional damaged/ground-truth pairs for supervised learning, improving the results of baseline depth-completion methods. We train both models on a novel dataset of our own design, built to capture the latent noise distributions of two consumer-grade depth cameras.