Using fNIRS to explore emotional prosody perception
Recognising emotional prosody in speech is a key element of verbal communication. Cochlear implant (CI) recipients with good speech recognition perform below their normal-hearing (NH) peers on emotional prosody recognition tasks. Evidence from behavioural studies indicates that CI recipients rely on temporal and intensity cues to compensate for the device’s poor transmission of spectral cues. However, little is known about the brain mechanisms underlying emotional prosody recognition in CI hearing. We employed functional near-infrared spectroscopy (fNIRS), a brain-imaging technique, to examine cortical processing of emotional prosody in NH listeners and to develop a paradigm suitable for CI recipients.
Forty NH adults participated in a behavioural forced-choice listening task and an fNIRS passive listening task. Six-syllable sentences with pseudo content words and real function words were used in both tasks (e.g., "the larfle is himber"). Stimuli were recorded with prosodic features judged to convey neutral, happy, sad, fearful and angry emotion. To examine listeners' reliance on acoustic cues, stimuli were manipulated in four separate ways: 1) pitch cues equalised, 2) pitch and intensity cues equalised, 3) pitch and rate cues equalised, and 4) rate and intensity cues equalised. In the forced-choice listening task, participants identified the emotion conveyed in each stimulus (N=100). During the passive listening fNIRS task, participants heard 10 blocks of each emotion-condition pair (N=20). Comparisons will be made between prosodies, as well as within and between acoustic conditions. Correlations between metabolic and behavioural responses for each prosody-condition pair will be reported. This method offers insight into the relative importance of pitch in emotional prosody recognition and provides the groundwork for understanding emotional prosody processing in CI hearing.