12th Speech in Noise Workshop, 9-10 January 2020, Toulouse, FR

Development and testing of a simulated gaze-directed beamformer

John Culling(a)
Cardiff University

Patrick Naylor, Emilie D'Olne
Imperial College London

(a) Presenting

Head-mounted multi-microphone beamforming systems offer opportunities to improve signal-to-noise ratio in complex listening environments for hearing-impaired listeners. However, conventional microphone arrays are bulky and must be directed by unnaturally large head movements. The advent of MEMS (micro-electro-mechanical systems) microphones and small, discreet eye trackers has led to renewed interest in beamforming systems. MEMS microphones are small enough to be mounted discreetly in spectacle frames, and similarly miniaturised eye-tracking systems could be used to steer the beam, reducing the required head movement. We created a prototype 8-microphone beamforming array on the frames of a pair of glasses and mounted it on an acoustic manikin. Head-related impulse responses were measured for each microphone for 48 source directions on the horizontal plane. These impulse responses were used to calculate the MVDR (minimum-variance distortionless response) beams that could be created for different source directions. The beam specifications can be used to predict the benefit of beamforming and to simulate beamforming in experiments. Speech-importance-weighted beams were used to predict the effective beam depth and width when listening to speech, showing that off-beam sound sources are effectively attenuated by about 6 dB at 30 degrees and by 8-10 dB beyond 60 degrees. Digital filters were designed that represent the frequency response of the beamforming system for each direction. These filters will be used in two ways to evaluate the potential benefits of such a system. First, they will be used to simulate specific fixed listening situations, corroborating the predicted improvements in speech reception and comparing them with binaural listening. Second, a Simulink model has been developed that can dynamically select filters from a look-up table and filter multiple sources in real time.
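The per-frequency MVDR weight computation from the measured impulse responses can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the variable names, the isotropic-noise covariance built from all 48 measured directions, and the diagonal loading are all assumptions.

```python
import numpy as np

def mvdr_weights(hrtf_target, hrtf_all):
    """Per-frequency MVDR weights for an M-microphone array.

    hrtf_target: (M,) complex array response toward the look direction
                 at one frequency bin (from the measured HRIRs).
    hrtf_all:    (D, M) complex responses for all D measured directions,
                 used here to build an isotropic-noise covariance
                 (an assumption; other noise models are possible).
    """
    d = hrtf_target
    # Noise covariance averaged over all measured directions,
    # with diagonal loading for numerical stability.
    R = hrtf_all.conj().T @ hrtf_all / hrtf_all.shape[0]
    R += 1e-3 * np.trace(R).real / len(d) * np.eye(len(d))
    Rinv_d = np.linalg.solve(R, d)
    # Distortionless constraint: w^H d = 1 in the look direction,
    # while the w^H R w noise power is minimised.
    return Rinv_d / (d.conj() @ Rinv_d)
```

Repeating this over frequency bins and look directions yields the per-direction frequency responses from which the digital filters described above could be designed.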
This model can be driven by input from an eye tracker in order to simulate a complete gaze-directed system in use and evaluate its usability in realistic listening scenarios.
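The look-up step of such a gaze-driven model can be sketched as follows, assuming the 48 horizontal-plane directions are spaced 7.5 degrees apart; the function names and table layout are hypothetical, not taken from the Simulink model.

```python
import numpy as np

N_DIRECTIONS = 48
STEP_DEG = 360 / N_DIRECTIONS  # 7.5 degrees between measured directions

def select_filter_index(gaze_azimuth_deg):
    """Quantise a gaze azimuth (degrees, any range) to the nearest
    measured direction and return its filter-table index."""
    return int(round(gaze_azimuth_deg / STEP_DEG)) % N_DIRECTIONS

def steer(signal, gaze_azimuth_deg, filter_table):
    """Filter one source with the beam for the current gaze direction.

    filter_table: (48, L) array of impulse responses, one per direction
                  (a hypothetical layout for this sketch).
    """
    h = filter_table[select_filter_index(gaze_azimuth_deg)]
    return np.convolve(signal, h)
```

In a real-time implementation the index would be updated from the eye-tracker stream on each block, with the filter switched (ideally cross-faded) between blocks.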

Last modified 2020-01-06 19:23:55