BYU

Abstract by Meredith Von Feldt

Personal Information


Presenter's Name

Meredith Von Feldt

Degree Level

Undergraduate

Co-Authors

Michael Jones
Courtni Byun
Travis Graham

Abstract Information


Department

Computer Science

Faculty Advisor

Michael Jones

Title

Understanding the Role of Video and Data in User Interfaces for Annotation of Activities in Sensor Data

Abstract

Machine learning algorithms require large amounts of manually labeled data before a system can learn to accurately recognize and label sensor input. To help users categorize this data efficiently and precisely, our research explores the use of video and sensor data during the annotation process. Other studies have aimed to mitigate the burden of manual labeling, but none have examined the role of video and data together in accurate data collection. In our study, 73 participants labeled noisy and inconsistent sensor data under a time constraint, using a tool that displayed video and sensor data both together and separately. We examined their labels, focusing on whether they marked events at the correct times and identified the correct type of event taking place. The results suggest that novice users can learn to label using sensor data alone after first seeing the data alongside the video, but that they generally struggle to label noisy data regardless of the tool employed. Future annotation efforts should therefore include data for every event type and enough video of repeated events that users can learn how to label them.