Generating audiovisual summaries from literary works using emotion analysis
Reading literary works is an essential activity for human communication and learning. However, tasks such as selecting, filtering, or analyzing a large number of such works become complex. To address this need, several strategies have been proposed to rapidly inspect substantial amounts of text, or to retrieve previously read information, exploiting graphical, textual, or auditory resources. In this paper, we propose a methodology to generate audiovisual summaries by combining emotion-based music composition with graph-based animation. We apply natural language processing algorithms to extract the emotions and characters involved in a literary work. We then use the extracted information to compose a musical piece that accompanies the visual narration of the story, aiming to convey the extracted emotions. To that end, we set important musical features such as chord progressions, tempo, scale, and octaves, and we assign a set of suitable instruments. Moreover, we animate a graph to summarize the dialogues between the characters in the literary work. Finally, to assess the quality of our methodology, we conducted two user studies, which reveal that our proposal provides a high level of understanding of the content of the literary work while also offering a pleasant experience to the user.
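The step of assigning musical features to extracted emotions can be illustrated, for intuition only, as a lookup from an emotion label to a set of compositional parameters. All emotion labels and parameter values below are assumptions for the sketch, not the paper's actual mapping:

```python
# Hypothetical sketch: mapping extracted emotions to musical features
# (chord progression, tempo, scale, octave). Values are illustrative
# assumptions, not the parameters used in the paper.

EMOTION_TO_MUSIC = {
    "joy":     {"scale": "C major", "tempo_bpm": 140, "octave": 5,
                "progression": ["I", "IV", "V", "I"]},
    "sadness": {"scale": "A minor", "tempo_bpm": 70,  "octave": 3,
                "progression": ["i", "VI", "III", "VII"]},
    "fear":    {"scale": "D minor", "tempo_bpm": 110, "octave": 4,
                "progression": ["i", "iv", "V", "i"]},
}

def musical_features(emotion: str) -> dict:
    """Return musical parameters for an emotion label.

    Falls back to the 'joy' profile for emotions not in the table.
    """
    return EMOTION_TO_MUSIC.get(emotion, EMOTION_TO_MUSIC["joy"])

# Example: a passage dominated by sadness gets a slow minor-key profile.
features = musical_features("sadness")
print(features["scale"], features["tempo_bpm"])
```

In a full pipeline, the emotion label fed into such a table would come from an emotion-analysis component (e.g., lexicon-based scoring of the text), and the resulting parameters would drive the music generator.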