How we distinguish between language and music

A team from the Max Planck Institute for Empirical Aesthetics in Frankfurt, the Max Planck NYU Center for Language, Music, and Emotion (CLaME), and Arizona State University has investigated how we distinguish between language and music.

Symbolic image: Jaee Kim / unsplash.com

There have been many comparisons of how people perceive language and music. The differences, however, are difficult to pin down, especially where the two overlap, as in rhymes or rap. To better define the boundaries, the international research team conducted an online study with more than one hundred participants from 15 different native-language backgrounds.

The study focused on the Dùndún drum, which is played as a musical instrument in south-western Nigeria but also serves as a means of communication. The drum imitates the tonal language of the Yorùbá, creating a so-called speech surrogate. The team compared the acoustic properties of linguistic and musical Dùndún recordings, then asked participants to listen to the same recordings and indicate whether they heard speech or music.
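To make "acoustic properties" concrete: cues of this kind can be computed directly from the recordings. The following is a minimal sketch, not the authors' actual analysis pipeline, of how the four cue families discussed below (intensity, pitch, timbre, timing) might be extracted in Python with the librosa library; the function name and file paths are hypothetical.

```python
import numpy as np
import librosa

def acoustic_cues(path):
    # Load the recording at its native sampling rate.
    y, sr = librosa.load(path, sr=None)

    # Intensity: frame-wise RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    # Pitch: fundamental-frequency track via the YIN estimator.
    f0 = librosa.yin(y, fmin=60, fmax=600, sr=sr)

    # Timbre: spectral centroid as a simple "brightness" proxy.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

    # Timing: coefficient of variation of inter-onset intervals
    # (lower = more regular rhythm).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    ioi = np.diff(onsets)
    regularity = float(np.std(ioi) / np.mean(ioi)) if len(ioi) > 1 else float("nan")

    return {
        "mean_intensity": float(np.mean(rms)),
        "pitch_variability": float(np.std(np.diff(f0))),
        "timbre_variability": float(np.std(np.diff(centroid))),
        "rhythm_irregularity": regularity,
    }

# Hypothetical usage: compare a speech-like and a music-like excerpt.
# print(acoustic_cues("dundun_speech.wav"))
# print(acoustic_cues("dundun_music.wav"))
```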

Using the data collected, the researchers developed a statistical model that predicts whether a sound sample will be perceived as music-like or speech-like. Intensity, pitch, timbre, and timing proved decisive: a regular rhythm and frequent changes in timbre make a sequence sound more music-like, while lower intensity and fewer changes in pitch make it sound more speech-like.
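As a hedged illustration of the general idea, not the study's actual model or data, a logistic regression could map such cues to the probability that a listener labels an excerpt "music". All feature values and labels below are invented placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: mean_intensity, pitch_variability, timbre_variability,
# rhythm_irregularity -- placeholder values, not data from the study.
X = np.array([
    [0.12, 4.0, 180.0, 0.15],  # loud, timbrally varied, regular -> music-like
    [0.05, 9.5,  60.0, 0.55],  # soft, pitch-variable, irregular -> speech-like
    [0.10, 3.5, 200.0, 0.20],
    [0.04, 8.0,  70.0, 0.60],
])
y = np.array([1, 0, 1, 0])     # 1 = judged "music", 0 = judged "speech"

# Standardize features, then fit a logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Probability that a new excerpt is heard as music.
new_excerpt = np.array([[0.08, 5.0, 150.0, 0.25]])
print(model.predict_proba(new_excerpt)[0, 1])
```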

Original article:
Durojaye, C., Fink, L., Roeske, T., Wald-Fuhrmann, M. and Larrouy-Maestri, P. (2021). Perception of Nigerian Dùndún Talking Drum Performances as Speech-Like vs. Music-Like: The Role of Familiarity and Acoustic Cues. Frontiers in Psychology 12:652673.
https://doi.org/10.3389/fpsyg.2021.652673
