Computer Science Seminar

Title: Two Methods for Easing Video Consumption
Seminar: Computer Science
Speaker: Amanda Stent of Yahoo Labs
Contact: Eugene Agichtein, eugene@mathcs.emory.edu
Date: 2015-11-11 at 1:30 PM
Venue: W302
Abstract:
Content on the World Wide Web increasingly takes the form of video; consequently, it is important both to analyze and to summarize video in order to facilitate search, personalization, browsing, and other tasks. In this talk I will present two projects from Yahoo Labs devoted to different aspects of video processing.

First, I will present a method for automatically creating a well-formatted, readable transcript for a video from closed captions or automatic speech recognition (ASR) output. Readable transcripts are a necessary precursor to indexing, ranking, and content-based summarization of videos. Our approach uses acoustic and lexical features extracted from the video and the raw transcription/caption files. Empirical evaluations show that our approach outperforms baseline methods.

Second, I will present a method for video summarization that uses title-based image search results to find visually important shots. A video title is often carefully chosen to be maximally descriptive of the video's main topic, so images related to the title can serve as a proxy for the important visual concepts of that topic. However, images retrieved using the title can contain noise (images irrelevant to the video content) and variance (images of different topics). Our approach to video summarization is a novel co-archetypal analysis technique that learns canonical visual concepts shared between the video and the images, but not in either alone, by finding a joint-factorial representation of the two data sets. Experimental results show that our approach produces higher-quality summaries than several recently proposed approaches. I will conclude the talk with some ideas for future work on video summarization using multimodal representations.
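To give a flavor of the first project, the toy Python sketch below is a rough illustration only, not the speakers' actual system: it segments time-stamped caption tokens into sentences using one acoustic feature (the pause before the next word) and one lexical feature (a list of common sentence-opening words). PAUSE_THRESHOLD and SENTENCE_STARTERS are hypothetical stand-ins for the learned acoustic and lexical features described in the abstract.

# Toy sketch (illustrative only): segment raw ASR-style caption tokens into
# sentences using an acoustic cue (inter-word pause) and a lexical cue
# (likely sentence-opening words).

SENTENCE_STARTERS = {"i", "we", "so", "now", "first", "second", "the", "this"}
PAUSE_THRESHOLD = 0.5  # seconds; hypothetical tuning parameter

def segment(tokens):
    """tokens: list of (word, start_time, end_time) triples."""
    sentences, current = [], []
    for i, (word, start, end) in enumerate(tokens):
        current.append(word)
        next_tok = tokens[i + 1] if i + 1 < len(tokens) else None
        pause = (next_tok[1] - end) if next_tok else float("inf")
        # Break when a long pause is followed by a likely sentence opener.
        if pause > PAUSE_THRESHOLD and (next_tok is None
                                        or next_tok[0].lower() in SENTENCE_STARTERS):
            sentences.append(" ".join(current).capitalize() + ".")
            current = []
    if current:
        sentences.append(" ".join(current).capitalize() + ".")
    return sentences

caption = [("so", 0.0, 0.2), ("today", 0.3, 0.6), ("we", 0.65, 0.8),
           ("talk", 0.85, 1.1), ("about", 1.15, 1.4), ("video", 1.45, 1.9),
           ("first", 2.6, 2.9), ("the", 2.95, 3.05), ("transcripts", 3.1, 3.6)]
print(segment(caption))
# -> ['So today we talk about video.', 'First the transcripts.']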
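The second project rests on jointly factorizing two data sets. The sketch below is not co-archetypal analysis itself, but a simplified shared-basis factorization in the same spirit: video-shot features V and title-search image features W are both approximated from a single nonnegative basis Z, so Z is pushed toward concepts present in both collections. All matrix names, sizes, and the shot-scoring rule are illustrative assumptions, not the method from the talk.

# Toy sketch (illustrative only): joint nonnegative factorization of video
# shots V and web images W over one shared basis Z of "visual concepts".
import numpy as np

rng = np.random.default_rng(0)
d, nv, nw, k = 64, 40, 30, 5           # feature dim, #shots, #images, #concepts
V = rng.random((d, nv))                # hypothetical shot features
W = rng.random((d, nw))                # hypothetical title-search image features

Z = rng.random((d, k))                 # shared basis (visual concepts)
A = rng.random((k, nv))                # shot coefficients
B = rng.random((k, nw))                # image coefficients

eps = 1e-9
for _ in range(200):                   # multiplicative NMF-style updates
    A *= (Z.T @ V) / (Z.T @ Z @ A + eps)
    B *= (Z.T @ W) / (Z.T @ Z @ B + eps)
    Z *= (V @ A.T + W @ B.T) / (Z @ (A @ A.T + B @ B.T) + eps)

# Score each shot by how strongly it expresses the shared concepts;
# the top-scoring shots would form the summary in this toy setup.
scores = A.sum(axis=0)
summary_shots = np.argsort(scores)[::-1][:5]
print("top shots:", summary_shots)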
