12th Speech in Noise Workshop, 9-10 January 2020, Toulouse, FR

Identifying overlapping vowel-consonants following hearing loss: Machine learning of neural representations

Samuel S. Smith(a), Mark N. Wallace
University of Nottingham, UK

Joel I. Berger
University of Iowa, USA

Michael A. Akeroyd(b)
University of Nottingham, UK

Christian J. Sumner
Nottingham Trent University, UK

(a) Presenting
(b) Attending

A moderate hearing loss can pose major challenges for speech identification, principally in noisy environments, even though most features of speech remain audible. It is not yet clear how the neural representation of speech changes following hearing loss. To quantify this, we present a framework that combines a probabilistic classifier with neural data recorded from animals with either normal hearing or a noise-induced hearing loss.

Neural responses to a target vowel-consonant (VC) overlapped by a distractor VC, with distractor lags between -262.5 ms and +262.5 ms, were recorded from the inferior colliculus of anaesthetised guinea pigs. Animals either had normal hearing (NH) or had been exposed to high-intensity sound (8-10 kHz at 115 dB SPL for 1 hour), inducing a moderate hearing loss (HL) at frequencies above 4 kHz. A machine learning classifier (naïve Bayes) was implemented to predict auditory perception; it was both trained and tested on neural data recorded from either NH or HL animals.
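
As an illustration of this decoding approach, the following is a minimal sketch of a naïve Bayes decoder applied to trial-by-trial neural responses. The data shapes, spike-count features, and scikit-learn implementation are assumptions for illustration only, not the recording or analysis pipeline used in the study.

    # Minimal sketch: decode VC identity from trial-by-trial neural responses
    # with a naive Bayes classifier. All data here are synthetic placeholders.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_units, n_vcs = 200, 32, 9              # illustrative sizes only
    X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)  # spike counts per unit
    y = rng.integers(0, n_vcs, size=n_trials)          # VC label for each trial

    # Train and test within one hearing group (e.g. NH or HL), scoring
    # identification accuracy with cross-validation.
    clf = GaussianNB()
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean VC identification accuracy: {scores.mean():.2f}")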

The classifier, trained on NH responses to VCs in quiet, was configured to identify VCs in quiet with an accuracy of 95%. In line with human behaviour, its identification of VCs in quiet was only slightly lower for the HL data (93%). Crucially, and again in line with human behaviour, identification of overlapping VCs was significantly worse (by up to 20%) for neural representations from HL animals than from NH animals. These findings were not solely attributable to a simple loss of auditory information in the frequencies most affected by hearing loss.

However, it is likely that people listening to speech in a noisy background can exploit prior knowledge of interfering sounds. When the classifier employed knowledge of the distracting VCs (i.e., it was trained on representations of overlapping VCs), not only did performance improve in all cases, but the difference between NH and HL was greatly diminished. One possible interpretation is that prior information about interfering sounds can reduce the impact of poorer neural coding following HL; a sketch of this training condition follows.
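
The sketch below pictures this condition: the same decoder, but its training set is drawn from responses to overlapping VCs, labelled by target identity, so the distractor is effectively "known" at training time. The variable names and synthetic data are hypothetical; only the train/test logic mirrors the analysis described above.

    # Sketch of the distractor-aware condition: train the naive Bayes decoder
    # on responses to overlapping VCs (labelled by the target VC) instead of
    # responses to VCs in quiet. Data are synthetic placeholders.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_trials, n_units, n_vcs = 200, 32, 9
    X_overlap = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
    y_target = rng.integers(0, n_vcs, size=n_trials)   # target VC label per trial

    X_tr, X_te, y_tr, y_te = train_test_split(X_overlap, y_target,
                                              test_size=0.25, random_state=0)
    clf = GaussianNB().fit(X_tr, y_tr)   # training data already contain the distractor
    print(f"Accuracy with distractor 'known' at training: {clf.score(X_te, y_te):.2f}")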

Overall, this work offers evidence for a degraded representation of speech in complex acoustic backgrounds, at the midbrain level, following hearing loss. Applying a machine learning classifier to the neural representation of speech sounds appears to be a promising method for understanding real-world problems associated with hearing loss.
