Abstract by Andrew Hale
Multi-view Learning of Object Geometry
This research tackles the problem of learning the articulated structure and range of motion of an animal observed simultaneously from multiple cameras. Leveraging these synchronized video cameras, we gather data and seek to synthesize a model that captures the rigid parts and joints of, for example, a spider or a scorpion. Two approaches have been explored: one in 2D and one in 3D. The 2D approach takes a video of the invertebrate from each camera, identifies salient features on the specimen in each frame, and then determines which of the identified features correspond across all of the cameras at any given instant. Optical flow is then computed to track the movement of the invertebrate. In the 3D approach, the views are first fused into a single 3D point cloud, and features are then identified and tracked between instants.
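Fusing the calibrated views into a 3D point cloud rests on one core operation: triangulating each matched feature from its projections in two (or more) cameras. The sketch below is an illustration of that step, not the project's actual pipeline; it assumes known camera projection matrices and uses the standard direct linear transform (DLT) solved with NumPy's SVD.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated views
    via the direct linear transform (DLT).

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) pixel coordinates of the same feature in each view
    """
    # Each view contributes two linear constraints on the homogeneous
    # 3D point X; stack them into a 4x4 system A @ X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras (hypothetical intrinsics): an identity view and a
# view translated 0.5 units along x, mimicking a stereo pair.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Project a known 3D point into both views, then triangulate it back.
X_true = np.array([0.2, -0.1, 2.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)  # recovers [0.2, -0.1, 2.0] up to numerical precision
```

With more than two synchronized cameras, each additional view simply appends two more rows to `A`, and the same least-squares solve applies.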