Deep neural network frontend for continuous EMG-based speech recognition

Michael Wand, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

24 Scopus citations


We report on a Deep Neural Network frontend for a continuous speech recognizer based on Surface Electromyography (EMG). Speech data is obtained by facial electrodes capturing the electric activity generated by the articulatory muscles, thus allowing speech processing without making use of the acoustic signal. The electromyographic signal is preprocessed and fed into the neural network, which is trained on framewise targets; the output layer activations are further processed by a Hidden Markov sequence classifier. We show that such a neural network frontend can be trained on EMG data and yields substantial improvements over previous systems, even though the amount of available data is very small, amounting to just a few tens of sentences: on the EMG-UKA corpus, we obtain average evaluation set Word Error Rate improvements of more than 32% relative on context-independent phone models and 13% relative on versatile Bundled Phonetic Feature (BDPF) models, compared to a conventional system using Gaussian Mixture Models. In particular, on simple context-independent phone models, the new system yields results which are almost as good as with BDPF models, which were specifically designed to cope with small amounts of training data.
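The hybrid setup described in the abstract (a framewise-trained neural network whose output activations feed a Hidden Markov sequence classifier) can be sketched in miniature as follows. This is an illustrative NumPy sketch, not the authors' implementation: the feature dimensions, network size, number of phone classes, and the random (untrained) weights are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical dimensions: 100 EMG feature frames, 32 features per frame,
# one hidden layer of 64 units, 45 phone classes.
rng = np.random.default_rng(0)
n_frames, n_features, n_hidden, n_phones = 100, 32, 64, 45

# Stand-in for preprocessed EMG feature frames (one row per frame).
X = rng.standard_normal((n_frames, n_features))

# Randomly initialized weights; a real frontend would train these on
# framewise phone targets (e.g. obtained by forced alignment).
W1 = rng.standard_normal((n_features, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_phones)) * 0.1
b2 = np.zeros(n_phones)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Forward pass: tanh hidden layer, softmax output over phone classes,
# giving per-frame phone posterior estimates.
posteriors = softmax(np.tanh(X @ W1 + b1) @ W2 + b2)

# In hybrid DNN/HMM systems, posteriors are commonly converted to scaled
# likelihoods by dividing out the class priors before HMM decoding
# (uniform priors here, purely for illustration).
priors = np.full(n_phones, 1.0 / n_phones)
scaled_likelihoods = posteriors / priors
```

The HMM then decodes the sequence of scaled likelihoods into a phone (and ultimately word) sequence; that decoding step is omitted here.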
Original language: English (US)
Title of host publication: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publisher: International Speech Communication Association, 4 Rue des Fauvettes, Lous Tourils, 66390 Baixas
Number of pages: 5
State: Published - Jan 1 2016
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
