Reichenbach T (2019). Decoding the neural processing of selective attention to speech
Publication Type: Conference contribution
Publication year: 2019
Publisher: International Commission for Acoustics (ICA)
Book Volume: 2019-September
Pages Range: 7659-7662
Conference Proceedings Title: Proceedings of the International Congress on Acoustics
ISBN: 9783939296157
DOI: 10.18154/RWTH-CONV-238850
Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we attend selectively to a particular speech signal remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained through applying machine learning to neuroimaging data from humans listening to speech in background noise. In particular, we develop statistical models to relate two characteristic features of speech, pitch and amplitude fluctuations, to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to pitch, and at the cortical level, related to amplitude fluctuations. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, and may also inform future speech-recognition algorithms.
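The abstract does not specify the statistical models used; a common approach for relating a speech feature such as the amplitude envelope to a neural measurement is a linear time-lagged encoding model (a temporal response function) fitted with ridge regression. The sketch below is a minimal illustration of that general technique, not the paper's actual implementation; the toy data, lag count, and regularization strength are all assumptions.

```python
import numpy as np

def lagged_matrix(stimulus, n_lags):
    """Design matrix whose columns are time-lagged copies of the stimulus."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, response, n_lags=32, alpha=1.0):
    """Ridge regression from a stimulus feature to a neural response.

    Closed-form ridge solution: w = (X'X + alpha*I)^{-1} X'y.
    """
    X = lagged_matrix(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# Toy demonstration (synthetic data, not real neuroimaging recordings):
# generate a response by filtering a random envelope with a known
# impulse response, add noise, then recover the filter.
rng = np.random.default_rng(0)
env = rng.standard_normal(5000)
true_w = np.exp(-np.arange(32) / 8.0)   # hypothetical impulse response
resp = lagged_matrix(env, 32) @ true_w + 0.1 * rng.standard_normal(5000)
w_hat = fit_trf(env, resp)
```

The recovered weights `w_hat` approximate the generating filter; in an attention-decoding setting, such models fitted separately to each competing speaker can be compared to infer which speaker a listener attends to.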
APA:
Reichenbach, T. (2019). Decoding the neural processing of selective attention to speech. In Martin Ochmann, Michael Vorländer, Janina Fels (Eds.), Proceedings of the International Congress on Acoustics (pp. 7659-7662). Aachen, DE: International Commission for Acoustics (ICA).
MLA:
Reichenbach, Tobias. "Decoding the neural processing of selective attention to speech." Proceedings of the 23rd International Congress on Acoustics: Integrating 4th EAA Euroregio, ICA 2019, Aachen. Ed. Martin Ochmann, Michael Vorländer, Janina Fels. International Commission for Acoustics (ICA), 2019. 7659-7662.