LinguoMusa – music, speech and gesture analysis laboratory
AMU Faculty of Art Studies
About LinguoMusa
- a laboratory devoted to comparative analysis of two types of human communication: natural languages (speech and gesture) and music (singing and instrumental music)
- a database of music reflecting the contemporary music environment in Poland
Possible applications
- experimental verification of contemporary predictive models for speech and music perception, as well as related modes of communication, such as whistled languages or drum talk
- testing whether listeners' predictions are linked with somatic reactions while they listen to music or speech, or watch people gesturing as they speak
- researching other types of human sound expression, such as crying or laughter, for their links with speech and music
- creating advanced distribution models for pitch and rhythm in selected music cultures
- creating analogous distribution models for past musical cultures, based on archival and other historical sources such as scores and recordings, as well as hypothetical models based on archaeological finds, such as prehistoric musical instruments whose possible pitch classes can be determined (e.g. bone flutes)
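The distribution models mentioned above can be illustrated with a minimal sketch: given a melody encoded as MIDI note numbers, estimate the relative frequency of each pitch class. This is a simplified, hypothetical example, not the laboratory's actual software; the melody and function name are illustrative only.

```python
from collections import Counter

def pitch_class_distribution(midi_notes):
    """Normalized frequency of each pitch class (0 = C ... 11 = B)
    in a sequence of MIDI note numbers."""
    classes = [note % 12 for note in midi_notes]
    counts = Counter(classes)
    total = len(classes)
    return {pc: counts.get(pc, 0) / total for pc in range(12)}

# Toy melody fragment in C major (MIDI 60 = middle C)
melody = [60, 62, 64, 65, 67, 64, 60, 67]
dist = pitch_class_distribution(melody)
```

The same counting scheme extends to rhythm (e.g. distributions over inter-onset intervals), and for archaeological instruments the input would be the restricted set of producible pitch classes rather than an attested melody.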
Equipment
- software for automatic extraction of structural musical data from recordings to MIDI format, and a database of pitch and rhythm distributions
- software for calculating the information content of discrete passages of speech, gesture and music, a measure of how strongly each event matches our expectations about which sound or gesture will follow
- an EEG system and tools for measuring somatic markers of autonomic nervous system activity
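The notion of information content used above can be sketched as surprisal: the negative log-probability of each event under a predictive model. The following toy example estimates a bigram model from a note sequence itself and reports per-event surprisal in bits; this is an illustrative simplification under assumed add-one smoothing, not the laboratory's actual software.

```python
import math
from collections import Counter

def bigram_surprisal(sequence):
    """Per-event information content (surprisal, in bits) under a
    bigram model estimated from the sequence itself, with add-one
    smoothing over the observed alphabet."""
    alphabet = set(sequence)
    bigrams = Counter(zip(sequence, sequence[1:]))
    context = Counter(sequence[:-1])
    v = len(alphabet)
    surprisals = []
    for prev, curr in zip(sequence, sequence[1:]):
        p = (bigrams[(prev, curr)] + 1) / (context[prev] + v)
        surprisals.append(-math.log2(p))
    return surprisals

# A repetitive figure ending with an unexpected leap (MIDI numbers)
notes = [60, 62, 64, 62, 60, 62, 64, 62, 60, 71]
s = bigram_surprisal(notes)
```

In this sketch the final leap to 71 yields higher surprisal than the well-rehearsed 60-to-62 steps, matching the intuition that unexpected events carry more information.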
Who is it for?
- researchers from AMU and other universities
- businesses involved in the distribution of pop music (to determine which sound sequences listeners find appealing, which in turn can help predict how popular a piece of music may become and thus increase profits)
- students of musicology, linguistics, cognitive studies and similar majors who want to expand their knowledge of communication and participate in experimental research on speech and music
- composers of generative music who use computers to create music (they can use the data on information content to achieve the desired artistic and aesthetic effects)
Name and contact details of the module coordinator
Associate Professor Piotr Podlipniak (piotr.podlipniak@amu.edu.pl)