Highlights
- Pose video summarization as a supervised learning problem for subset selection
- Propose the sequential determinantal point process (seqDPP) as the underlying probabilistic model
- Evaluate on three video summarization tasks and obtain state-of-the-art performance

Introduction
Video summarization: a pressing need
- 100 hours of new YouTube video uploaded per minute
- 422,000 CCTV cameras operating in London 24/7

[Figure: summaries of the same video by three users]
Challenges
- Heterogeneous subjects/categories
- Varying rates of temporal change
- Subjective, disparate, and noisy labels

Previous work
- Criteria: representativeness vs. diversity
- Largely unsupervised, based on frame clustering
- Requires sophisticated handcrafting

Our main idea
- Supervised learning from human-supplied annotations
- Summarization as subset selection
- Modeling of temporal cues and diversity

Approach
Sequential DPP (seqDPP)
1. Partition the video into T disjoint segments
2. Introduce a subset-selection (of frames) variable Y_t for each segment
3. Condition Y_t on Y_{t-1} = y_{t-1} via a DPP
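The conditioning in step 3 uses the standard DPP conditioning identity from Kulesza and Taskar [2]: given a kernel L over the ground set y_{t-1} ∪ (segment t), conditioning on the inclusion of y_{t-1} yields the kernel L' = ([(L + I_rest)^{-1}]_rest)^{-1} - I, where "rest" indexes the new segment's frames. A minimal NumPy sketch (the function name and toy kernel are illustrative, not the paper's implementation):

```python
import numpy as np

def conditional_dpp_prob(L, prev, segment, selected):
    """P(Y_t = selected | Y_{t-1} = prev) under a DPP with kernel L.

    Uses the conditional-kernel identity from Kulesza & Taskar [2]:
        L' = ([(L + I_rest)^{-1}]_rest)^{-1} - I,
    where "rest" indexes the unconditioned items (the new segment).
    """
    ground = list(prev) + list(segment)
    Lg = L[np.ix_(ground, ground)]
    n, k = len(ground), len(prev)
    # I_rest: identity only on the segment (unconditioned) coordinates
    I_rest = np.zeros((n, n))
    rest = list(range(k, n))
    I_rest[rest, rest] = 1.0
    inv = np.linalg.inv(Lg + I_rest)
    # Restrict to the segment block, invert, subtract the identity
    Lc = np.linalg.inv(inv[np.ix_(rest, rest)]) - np.eye(len(segment))
    # Probability of selecting exactly `selected` from the segment
    idx = [segment.index(s) for s in selected]
    num = np.linalg.det(Lc[np.ix_(idx, idx)]) if idx else 1.0
    return num / np.linalg.det(Lc + np.eye(len(segment)))
```

Because the conditional model is again a DPP, the probabilities over all subsets of the new segment sum to one, which is what makes the segment-by-segment factorization tractable.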
Parameterization of the DPP kernel
- Linear embedding (L)
- Neural networks (NN)

Inference and learning
- Learning via MLE, through gradient descent
- In contrast, "bag" DPPs: model exchangeable items (no temporal information); often use the quality-diversity kernel (limited expressiveness); have NP-hard MAP inference

Generating target summaries
User study on inter-annotator agreement
- Data: 100 videos from the Open Video Project and YouTube
- Annotation: 5 user summaries per video
- Observation: high inter-annotator agreement
Target (oracle) summaries are generated by greedy search
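The greedy search for a single target (oracle) summary can be sketched as follows: repeatedly add whichever frame most improves the mean F-score against the user summaries, and stop when no single frame helps. This is a hedged pure-Python sketch, with exact frame matching as a simplifying assumption (the benchmarks match frames by visual similarity):

```python
def mean_f_score(candidate, user_summaries):
    """Mean F-score of a candidate frame set against all user summaries.
    Frames are matched by identity here (a simplifying assumption)."""
    scores = []
    for user in user_summaries:
        hits = len(set(candidate) & set(user))
        if hits == 0:
            scores.append(0.0)
            continue
        p, r = hits / len(candidate), hits / len(user)
        scores.append(2 * p * r / (p + r))
    return sum(scores) / len(scores)

def greedy_oracle(user_summaries, all_frames):
    """Greedily add the frame that most improves the mean F-score
    with the user summaries; stop when no single frame helps."""
    target, best = [], 0.0
    while True:
        pick = None
        for f in all_frames:
            if f in target:
                continue
            score = mean_f_score(target + [f], user_summaries)
            if score > best:
                best, pick = score, f
        if pick is None:
            return target
        target.append(pick)
```

The greedy stopping rule naturally balances precision against recall: a frame is only added while it raises the harmonic mean of the two.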
Experiments
Setup
- Data: OVP (50 videos), YouTube (39), Kodak (18)
- Features: Fisher vectors, saliency, context
- Evaluation: precision, recall, F-score
- Comparison: bag DPPs and previous (unsupervised) methods DT, STIMO, VSUMM

[Table: results on YouTube and Kodak]
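The evaluation metrics reduce to the usual set-overlap definitions. A short sketch, again with identity matching of frames as an assumption (the benchmark protocols, e.g. VSUMM's [1], match frames by visual similarity):

```python
def precision_recall_f(auto_summary, user_summary):
    """Precision, recall, and F-score between an automatic summary and
    one user summary. Frames are matched by identity here (an
    assumption; benchmarks match frames by visual similarity)."""
    matched = len(set(auto_summary) & set(user_summary))
    p = matched / len(auto_summary) if auto_summary else 0.0
    r = matched / len(user_summary) if user_summary else 0.0
    f = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f
```

Scores are typically averaged over all user summaries for a video, then over all videos in the dataset.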
[Table: results on OVP]
References
[1] S. Avila, A. Lopes, A. Luz Jr., and A. Araujo. "VSUMM: A mechanism designed to produce static video summaries and a novel evaluation method." Pattern Recognition Letters, 32(1):56–68, 2011.
[2] A. Kulesza and B. Taskar. "Determinantal point processes for machine learning." Foundations and Trends in Machine Learning, 5(2–3):123–286, 2012.