12th Speech in Noise Workshop, 9-10 January 2020, Toulouse, FR

Using fNIRS to explore emotional prosody perception

Ryssa Moffat(a)
International Doctorate of Experimental Approaches to Language and Brain (IDEALAB), University of Potsdam, Germany; University of Groningen, Netherlands; Newcastle University, UK; and Macquarie University, Australia | Department of Cognitive Science, The Australian Hearing Hub, Macquarie University, Sydney, Australia

David McAlpine
Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia

Deniz Başkent(b)
University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

Robert Luke, Lindsey van Yper
Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia

(a) Presenting
(b) Attending

Recognising emotional prosody in speech is a key element of verbal communication. Cochlear implant (CI) recipients with good speech recognition nevertheless perform below their normal-hearing (NH) peers on emotional prosody recognition tasks. Evidence from behavioural studies indicates that CI recipients rely on temporal and intensity cues to compensate for the device’s poor transmission of spectral cues. Little is known, however, about the brain mechanisms underlying emotional prosody recognition in CI hearing. We employed functional near-infrared spectroscopy (fNIRS), a brain-imaging technique, to examine cortical processing of emotional prosody in NH listeners and to establish an appropriate paradigm for CI recipients.

Forty NH adults participated in a behavioural forced-choice listening task and an fNIRS passive listening task. Both tasks used six-syllable sentences with pseudo content words and real function words (e.g., "the larfle is himber"). Stimuli were recorded with prosodic features judged to be neutral, happy, sad, fearful, or angry. To examine listeners' reliance on individual acoustic cues, stimuli were manipulated in four separate ways: 1) pitch cues equalised, 2) pitch and intensity cues equalised, 3) pitch and rate cues equalised, and 4) rate and intensity cues equalised. In the forced-choice listening task, participants identified the emotion conveyed in each stimulus (N=100). During the passive fNIRS listening task, participants heard 10 blocks of each emotion-condition pair (N=20). Comparisons will be made between prosodies, as well as within and between acoustic conditions. Correlations between metabolic and behavioural responses will be reported for each prosody-condition pair. This method offers insight into the relative importance of pitch in emotional prosody recognition and lays the groundwork for understanding emotional prosody processing in CI hearing.
