A prospective report on the research developed at the Laboratory of Audio and Music Technology at USP

  • Regis Rossi A. Faria Universidade de São Paulo
  • Ricardo Thomasi Universidade de São Paulo
  • João Monnazzi Universidade de São Paulo
  • Eduardo Bonachela Universidade de São Paulo
  • André Giolito Universidade de São Paulo
  • Gabriel Lemos Universidade de São Paulo


This paper presents a concise report on the research developed at the Laboratory of Audio and Music Technology at EACH-USP. The laboratory was founded in 2011 to work in the areas of music technology, musical acoustics and bioacoustics, and expanded its scope in 2019 to include sound and music computing and audio engineering. Six projects are presented herein, describing their application areas, goals, achievements and perspectives.

Keywords: Artificial Intelligence, A-Life and Evolutionary Music Systems, Computer Music and Creative Processes, Digital Sound Processing, Music Analysis and Synthesis, Music Information Retrieval, Real-time Interactive Systems, Software Systems and Languages for Sound and Music


A. Di Scipio. ‘Sound is the interface’: from interactive to ecosystemic signal processing. Organised Sound, 8(3), 269–277. Cambridge University Press, 2003.

S. Waters. Performance ecosystems: ecological approaches to musical interaction. Electroacoustic Music Studies Network. De Montfort/Leicester: EMS, 2007.

R. Thomasi and R. R. A. Faria. Moving along sound spectra: an experiment with feedback loop topologies and audible ecosystems. In: Proceedings of the International Computer Music Conference (ICMC), Santiago, 2021.

R. Thomasi and F. Kozu. Study for Ecosystemic Guitars: The electroacoustic improvisation in the sound emergence minefield. In: The 21st Century Guitar: Unconventional Approaches to Performance, Composition and Research. Lisbon: NOVA, 2021.

R. Meric and M. Solomos. Agostino Di Scipio’s music: emergent sound structures and audible ecosystems. Journal of Interdisciplinary Music Studies, 3(1-2), 57–76, 2009.

M. Puckette. Pd Documentation. Available at: http://crca.ucsd.edu/~msp/Pd_documentation/. Accessed: July 23, 2021.

R. R. A. Faria, R. B. Cunha Junior and E. S. Afonso. Reactive music: designing interfaces and sound processors for real-time music processing. In: Proceedings of the 11th International Symposium on Computer Music Multidisciplinary Research (CMMR 2015), Plymouth, 2015, pp. 626-633.

P. Guillot. Camomile: Creating audio plugins with Pure Data. In: Linux Audio Conference, Berlin, June 2018. Available at: https://hal.archives-ouvertes.fr/hal-01816603. Accessed: July 2021.

P. Tagg and B. Clarida. Music’s Meaning: a modern musicology for non-musos. New York: MMMSP, 2013.

P. Tagg and B. Clarida. Ten Little Title Tunes: Towards a musicology of the mass media. New York and Montreal: The Mass Media Music Scholar’s Press, 2003.

C. Vogler. The Writer's Journey: Mythic Structure for Writers. Rio de Janeiro: Nova Fronteira, 1998.

M. Cuthbert and C. Ariza. Music21: A toolkit for Computer-Aided Musicology and Symbolic Music Data. In: Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR), 2010, pp. 637-642.

C. McKay and J. Cumming. jSymbolic 2.2: Extracting features from symbolic music for use in musicological and MIR research. In: Proceedings of the 19th ISMIR Conference, Paris, 2018, pp. 348-354.

C. D. A. Gordillo. Continuous speech recognition by combining MFCC and PNCC attributes with SS, WD, MAP and FRN methods of robustness. Master's thesis (Electrical Engineering), Pontifícia Universidade Católica, Rio de Janeiro, 2013.

W. Fu, X. Yang and Y. Wang. Heart sound diagnosis based on DTW and MFCC. In: 2010 3rd International Congress on Image and Signal Processing, 2010, pp. 2920-2923. doi: 10.1109/CISP.2010.5646678.

Littmann Library. Lung Sounds, 2020. Available at: http://www.3m.com/healthcare/littmann/lung.html. Accessed: July 5, 2021.

Thinklabs. Digital Stethoscope, 2020. Available at: https://www.thinklabs.com/. Accessed: July 5, 2021.

R. R. A. Faria et al. AUDIENCE - Audio Immersion Experiences in the CAVERNA Digital. In: Proceedings of the 10th Brazilian Symposium on Computer Music (SBCM), Belo Horizonte, 2005, pp. 106-117.

R. R. A. Faria et al. Improving spatial perception through sound field simulation in VR. In: IEEE Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems (VECIMS), 2005, pp. 103-108. doi: 10.1109/VECIMS.2005.1567573.

R. R. A. Faria. AUDIENCE for Pd, a scene-oriented library for spatial audio. In: Proceedings of the 4th International Pure Data Convention, Weimar and Berlin, 2011.

OpenAUDIENCE library for sound immersion and auralization, v. 1.0.3, 2012. Available at: http://lsi.usp.br/neac/en/openaudience.

Forum SBTVD. Call for Proposals: TV 3.0 Project, July 17, 2020. Available at: https://forumsbtvd.org.br/wp-content/uploads/2020/07/SBTVDTV-3-0-CfP.pdf.

F. van Veen. The Neural Network Zoo. The Asimov Institute, 2016. Available at: https://www.asimovinstitute.org/neural-network-zoo/. Accessed: July 25, 2021.

Center for Artificial Intelligence, 2020. Available at: http://c4ai.inova.usp.br/pt/home-2/. Accessed: July 23, 2021.

GAIA - Grupo de Arte e Inteligência Artificial, 2020. Available at: https://sites.usp.br/gaia/. Accessed: July 23, 2021.

FARIA, Regis Rossi A.; THOMASI, Ricardo; MONNAZZI, João; BONACHELA, Eduardo; GIOLITO, André; LEMOS, Gabriel. A prospective report on the research developed at the Laboratory of Audio and Music Technology at USP. In: SIMPÓSIO BRASILEIRO DE COMPUTAÇÃO MUSICAL (SBCM), 18., 2021, Recife. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 259-265. DOI: https://doi.org/10.5753/sbcm.2021.19461.