Understanding and analyzing a large collection of archived swimming videos


In elite sports, nearly all performances are captured on video. Despite the massive amount of video that has been captured in this domain over the last 10-15 years, most of it remains in an "unstructured" or "raw" form, meaning it can only be viewed or manually annotated/tagged with higher-level event labels, which is time-consuming and subjective. As such, depending on the detail or depth of annotation, the value of the collected repositories of archived data is minimal, as it does not lend itself to large-scale analysis and retrieval. One such example is swimming, where each race of a swimmer is captured on a camcorder and, in addition to the split times (i.e., the time it takes for each lap), stroke rates and stroke lengths are manually annotated. In this paper, we propose a vision-based system which effectively "digitizes" a large collection of archived swimming races by estimating the location of the swimmer in each frame, as well as detecting the stroke rate. As the videos are captured from moving hand-held cameras located at different positions and angles, we show that our hierarchical approach to tracking the swimmer and their body parts is robust to these issues and allows us to accurately estimate swimmer locations and stroke rates.
© Copyright 2014 IEEE Workshop on Applications of Computer Vision. Published by IEEE. All rights reserved.
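The abstract does not detail how the stroke rate is detected from the per-frame swimmer location. As a rough illustration only, the sketch below estimates a stroke rate from the periodicity of a per-frame tracking signal (for example, the vertical coordinate of a tracked arm region) using a simple FFT-based dominant-frequency search; the function name, parameters, and approach are assumptions for illustration and not the authors' method.

```python
import numpy as np

def estimate_stroke_rate(signal, fps, min_hz=0.3, max_hz=2.0):
    """Estimate stroke rate (strokes per minute) from a roughly periodic
    per-frame signal, e.g. the vertical position of a tracked arm region.

    The dominant frequency within a plausible stroke-frequency band is
    taken from the magnitude spectrum of the detrended signal.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)

    # Restrict to frequencies physically plausible for swimming strokes.
    band = (freqs >= min_hz) & (freqs <= max_hz)
    if not band.any():
        raise ValueError("Signal too short to resolve stroke frequencies")
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0              # convert Hz to strokes per minute

# Example with a synthetic 1.2 Hz (72 strokes/min) signal sampled at 25 fps.
if __name__ == "__main__":
    fps = 25.0
    t = np.arange(0, 30, 1.0 / fps)        # 30 seconds of "video"
    arm_y = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
    print(f"Estimated stroke rate: {estimate_stroke_rate(arm_y, fps):.1f} strokes/min")
```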

Keywords: swimming, video, database, software, mathematical-logical model, motion capturing
Classification: natural sciences and technology; endurance sports
Tagging: Big Data
DOI: 10.1109/WACV.2014.6836037
Published in: IEEE Workshop on Applications of Computer Vision
Published: Steamboat Springs, IEEE, 2014
Pages: 674-681
Document types: conference proceedings, conference paper
Language: English
Level: high