Guest Lecture: JProf. Dr. Andreas Weilinghoff (Koblenz) — ‘How Machines Learned to Hear and Understand: A Journey of Speech Processing’
Speaker: JProf. Dr. Andreas Weilinghoff
Date & Time: Wednesday, 21st January 2026, 16:15–17:45
Title: How Machines Learned to Hear and Understand: A Journey of Speech Processing
Place: B 211
Abstract: Since the introduction of transformer architectures built on attention mechanisms (Vaswani et al. 2017), substantial advances have been made across numerous areas of Natural Language Processing (NLP). While public discourse has largely focused on chatbots (e.g. ChatGPT, Gemini, DeepSeek), speech recognition has likewise seen remarkable progress in recent years (Jurafsky and Martin 2025).
This talk provides an overview of the historical development of speech technology, tracing its evolution from Edison’s phonograph to contemporary systems such as Amazon Alexa and beyond. Particular emphasis is placed on recent AI-driven innovations that have opened up new possibilities for corpus-based research and related fields, in particular the automatic processing and transcription of spoken data. In addition, the talk explores other rapidly developing domains, including spoken machine translation and human–robot interaction.
A key component of the talk is a study comparing the speed and accuracy of human transcription with state-of-the-art end-to-end automatic speech recognition (ASR) models. I will examine the extent to which AI systems can support – or potentially replace – human contributions to corpus preparation, as well as the limitations that currently remain. Based on a reference study of several varieties of English, represented by the ICE Nigeria (Wunder et al. 2008) and ICE Scotland (Schützler et al. 2017) corpora, I will show that a hybrid approach combining human expertise with artificial intelligence yields the most efficient and accurate outcomes for transcription and corpus compilation.
