Semantic Human Activity Annotation Tool Using Skeletonized Surveillance Videos [Demonstration]

ACM International Joint Conference on Pervasive and Ubiquitous Computing

Bokyung Lee, Michael Lee, Pan Zhang, Alexander Tessier, Azam Khan

September 2019
4 Pages


Human activity data sets are fundamental for intelligent activity recognition in context-aware computing and intelligent video analysis. Surveillance videos contain rich human activity data that are more realistic than data collected in a controlled environment. However, annotating such large data sets poses several challenges: 1) the footage is unsuitable for crowdsourcing because of privacy concerns, and 2) manually selecting people's activities from busy scenes is tedious.

We present Skeletonotator, a web-based annotation tool that creates human activity data sets from anonymous skeletonized poses. The tool generates 2D skeletons from surveillance videos using computer vision techniques, then visualizes and plays back the skeletonized poses. Skeletons are tracked between frames, and a unique ID is automatically assigned to each skeleton. To annotate, users select a target skeleton and apply an activity label to a particular time period, while watching only the skeletonized poses. The tool outputs human activity data sets that include the type of activity, the relevant skeletons, and timestamps. We plan to open-source Skeletonotator together with our data sets for future researchers.
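The abstract does not specify how skeletons are tracked between frames or how IDs are assigned. A minimal sketch of one common approach is greedy nearest-neighbor matching on mean joint distance: each detection in the current frame claims the closest unclaimed track from the previous frame, and unmatched detections start a new track with a fresh ID. All names and the distance threshold below are assumptions for illustration, not Skeletonotator's actual implementation.

```python
# Hypothetical sketch of per-frame skeleton tracking: match detections to the
# previous frame's tracks by mean 2D joint distance, reusing IDs when close
# enough. The threshold value is an assumption, not from the paper.
from itertools import count

MATCH_THRESHOLD = 50.0  # max mean joint distance (pixels) to keep the same ID

_next_id = count(1)  # source of fresh, unique skeleton IDs

def mean_joint_distance(a, b):
    """Mean Euclidean distance between corresponding 2D joints."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def track(prev_tracks, detections):
    """Assign an ID to each detected skeleton in the current frame.

    prev_tracks: {skeleton_id: joints} from the previous frame.
    detections:  list of joints, each a list of (x, y) tuples.
    Returns {skeleton_id: joints} for the current frame.
    """
    tracks = {}
    unclaimed = dict(prev_tracks)
    for joints in detections:
        best_id, best_d = None, MATCH_THRESHOLD
        for tid, prev_joints in unclaimed.items():
            d = mean_joint_distance(prev_joints, joints)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            best_id = next(_next_id)  # no close match: start a new track
        else:
            del unclaimed[best_id]    # each old track is claimed at most once
        tracks[best_id] = joints
    return tracks

# Two frames with one three-joint skeleton moving slightly to the right:
frame1 = track({}, [[(10, 10), (10, 20), (10, 30)]])
frame2 = track(frame1, [[(12, 10), (12, 20), (12, 31)]])
# The skeleton keeps the same ID across both frames.
```

A greedy match like this is the simplest option; a production tracker might instead solve the full assignment problem (e.g. Hungarian matching) to avoid order-dependent mistakes in crowded scenes.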