FIELD: information technology.
SUBSTANCE: the invention relates to the generation and processing of descriptive content data, in particular to the automatic digital collection, annotation and tagging of dynamic video images. For this purpose, the processing system receives GPS data, and in some embodiments inertial data, from a sensor-equipped wearable device worn by an athlete during a sporting activity. The processing system processes the GPS data and, where present, the inertial data to identify at least one event involving the athlete and stores data identifying the event in an event database. Video data from a video camera are stored in a video-image database, the video data including location, time and direction information associated with the video frames. Time-code data in the video-image database are synchronized with time-code data in the event database, and the event-identifying data are used to automatically select, annotate, tag or edit said video data.
EFFECT: the technical result is the ability to match video data and event data from different data sets, even when the video data and the event data were recorded independently and without explicit knowledge of the other activity.
33 cl, 5 dwg
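
The matching described in the abstract can be illustrated with a minimal sketch. The data structures and names used here (Event, VideoFrame, select_frames_for_event, clock_offset_s) are illustrative assumptions and are not taken from the patent; the sketch only shows one plausible way synchronized time codes and location metadata could be used to pick video frames corresponding to a detected event.

```python
# Minimal sketch (not the patented implementation): matching independently
# recorded event data and video metadata by synchronized time codes and
# location. All names below are illustrative assumptions.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Event:                  # derived from GPS / inertial sensor data
    start: float              # event start, seconds in the sensor time base
    end: float                # event end
    lat: float
    lon: float


@dataclass
class VideoFrame:             # one entry in the video-image database
    timestamp: float          # seconds in the camera time base
    lat: float
    lon: float
    heading_deg: float        # camera pointing direction


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))


def select_frames_for_event(event, frames, clock_offset_s, max_range_m=100.0):
    """Return frames whose synchronized time code falls inside the event
    window and whose recorded camera position lies within range of the event."""
    selected = []
    for f in frames:
        t = f.timestamp + clock_offset_s   # align camera clock to sensor clock
        if event.start <= t <= event.end and \
           haversine_m(event.lat, event.lon, f.lat, f.lon) <= max_range_m:
            selected.append(f)
    return selected
```

In a fuller version, the recorded camera heading could additionally be compared with the bearing from the camera position to the event location to confirm the athlete is within the field of view before a frame is selected for annotation or editing.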
Dates
Filed: 2013-01-11
Published: 2017-04-26