Fitness Dance Video Event Analysis Based on Dynamic Programming Multimodal Fusion
Physical Culture Institute, Beifang University of Nationalities, Yinchuan 750021, China
ABSTRACT: To better meet users' needs for browsing and retrieving video, this paper proposes an efficient analysis framework for fitness dance video events that fuses video and text information. The framework can quickly and accurately analyze fitness dance video events and extract detailed information about each event. Text and video are first analyzed independently, so that event information is mined from each modality as fully as possible and performance is not degraded by imposing multiple cross-modal constraints. On the basis of these independent analyses, dynamic programming is used to find the globally optimal alignment between text events and video events. A global probability model is then constructed from the matched text-video event pairs and used to estimate the event content of video clips that have no matching text event. This avoids the missed and false detections of existing methods, which consider only the local timing of video and text. The resulting event content information is detailed and accurate, meeting users' needs for browsing and retrieving event content.
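The globally optimal alignment step described above can be sketched with a classic sequence-alignment recurrence. The sketch below is an illustrative assumption, not the authors' implementation: the event representations, the `similarity` function, and the `gap_penalty` (the cost of a text event or video clip with no counterpart) are all hypothetical placeholders.

```python
def align_events(text_events, video_events, similarity, gap_penalty=-1.0):
    """Global dynamic-programming alignment of two event sequences.

    Returns the optimal alignment score and the list of matched
    (text_event, video_event) pairs; unmatched events absorb the
    gap penalty, mirroring text events with no video counterpart
    (and vice versa).
    """
    m, n = len(text_events), len(video_events)
    # score[i][j] = best score aligning the first i text events
    # with the first j video events
    score = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap_penalty
    for j in range(1, n + 1):
        score[0][j] = j * gap_penalty
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + similarity(text_events[i - 1], video_events[j - 1]),
                score[i - 1][j] + gap_penalty,  # text event left unmatched
                score[i][j - 1] + gap_penalty,  # video clip left unmatched
            )
    # Backtrack to recover the matched pairs.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        if score[i][j] == score[i - 1][j - 1] + similarity(text_events[i - 1], video_events[j - 1]):
            pairs.append((text_events[i - 1], video_events[j - 1]))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap_penalty:
            i -= 1
        else:
            j -= 1
    return score[m][n], list(reversed(pairs))
```

With a toy similarity (2.0 for equal labels, -1.0 otherwise), aligning text events `["warmup", "step", "cooldown"]` against video events `["warmup", "stretch", "step", "cooldown"]` matches the three shared events and leaves the extra `"stretch"` clip unmatched; it is such unmatched clips that the paper's global probability model then estimates.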
Keywords: Event analysis; Fitness dance video; Dynamic programming.