An Optimization Model for Temporal Video Lecture Segmentation using Word2vec and Acoustic Features

  • Eduardo R. Soares UFJF
  • Eduardo Barrére UFJF

Abstract


Video lectures are part of our daily lives, whether to learn something new, to review content for exams, or simply out of curiosity. People increasingly search for video lectures that address exactly what they need. Unfortunately, finding specific content in this type of video is not an easy task. Many video lectures are long and cover several topics, and not all of these topics are relevant to the user who found the video. As a result, the user spends considerable time trying to locate the topic of interest amid content that is irrelevant to him. The temporal segmentation of video lectures into topics can solve this problem, allowing users to navigate in a non-linear way through all topics of a video lecture. However, temporal video lecture segmentation is not an easy task and needs to be automated. For this reason, in this paper we propose an optimization model for the temporal video lecture segmentation problem. This model uses as features the Word2vec representation of the video lecture's audio transcripts and low-level acoustic characteristics. To find the best video partition, a genetic algorithm with local search is used. We have performed experiments on two data sets, and the results showed that our proposal is able to outperform state-of-the-art methods and achieve good results for different kinds of video lectures.
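The abstract describes searching for the best topic partition of a transcript using embedding features and a genetic algorithm with local search. The sketch below is a minimal illustration of that general idea, not the paper's actual model: sentence embeddings are random stand-ins for real Word2vec vectors, the fitness (average within-segment cohesion) and the mutation operators are assumptions chosen for brevity.

```python
# Hypothetical sketch: genetic-algorithm-style search for topic boundaries
# over transcript-sentence embeddings. All details (fitness, operators,
# population sizes) are illustrative assumptions, not the paper's design.
import random
import math

random.seed(0)
DIM, N_SENT = 16, 30
# Stand-in vectors; in practice these would be Word2vec sentence embeddings
# (e.g. averaged word vectors), possibly concatenated with acoustic features.
emb = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_SENT)]

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vs):
    return [sum(col) / len(vs) for col in zip(*vs)]

def fitness(bounds):
    # bounds: sorted boundary indices splitting sentences 0..N_SENT into topics;
    # score each segment by cosine similarity of its sentences to its centroid.
    cuts = [0] + list(bounds) + [N_SENT]
    score = 0.0
    for s, e in zip(cuts, cuts[1:]):
        seg = emb[s:e]
        c = centroid(seg)
        score += sum(cos(v, c) for v in seg)
    return score / N_SENT

def mutate(bounds):
    # Three moves: drop a boundary, add one, or shift one (a local-search step).
    b = set(bounds)
    r = random.random()
    if r < 0.4 and len(b) > 1:
        b.discard(random.choice(list(b)))
    elif r < 0.8:
        b.add(random.randrange(1, N_SENT))
    else:
        x = random.choice(list(b))
        b.discard(x)
        b.add(min(N_SENT - 1, max(1, x + random.choice([-1, 1]))))
    return sorted(b)

# (mu + lambda)-style loop: keep the best half, refill by mutating survivors.
pop = [sorted(random.sample(range(1, N_SENT), 3)) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
print("topic boundaries (sentence indices):", best)
```

The same skeleton extends naturally to the multimodal setting the abstract mentions: acoustic features (e.g. pause or pitch cues near candidate boundaries) could be folded into `fitness` as an additional term.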
Published
29/10/2019
How to Cite

SOARES, Eduardo R.; BARRÉRE, Eduardo. An Optimization Model for Temporal Video Lecture Segmentation using Word2vec and Acoustic Features. In: ANAIS PRINCIPAIS DO SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 25., 2019, Rio de Janeiro. Anais Principais do XXV Simpósio Brasileiro de Multimídia e Web. Porto Alegre: Sociedade Brasileira de Computação, oct. 2019. p. 513-520.
