Recent technological advances are enabling the development of computational tools that could significantly improve the quality of life of individuals with disabilities or sensory impairments. These include so-called electromyography-to-speech (ETS) conversion models, designed to convert electrical signals produced by skeletal muscles into speech.
Researchers at the University of Bremen and SUPSI recently introduced Diff-ETS, a model for ETS conversion that could produce more natural synthesized speech. This model, introduced in a paper posted to the preprint server arXiv, could be used to develop new systems that allow people who are unable to speak, such as patients who underwent a laryngectomy (a surgery to remove all or part of the voice box), to communicate with others.
Most previously introduced techniques for ETS conversion have two key components: an electromyography (EMG) encoder and a vocoder. The EMG encoder converts EMG signals into acoustic speech features, while the vocoder uses these features to synthesize the speech signal.
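To make this two-stage pipeline concrete, the sketch below shows how an EMG encoder and a vocoder could be chained together. It is a minimal illustration only: the layer choices, channel counts, and feature dimensions are assumptions for demonstration, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

# Illustrative two-stage ETS pipeline (not the authors' exact architecture):
# an EMG encoder maps multi-channel EMG frames to acoustic features
# (e.g. mel-spectrogram-like frames), and a vocoder turns those features into audio.

class EMGEncoder(nn.Module):
    def __init__(self, n_emg_channels=8, n_acoustic_features=80, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_emg_channels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_acoustic_features)

    def forward(self, emg):                 # emg: (batch, frames, channels)
        h, _ = self.rnn(emg)
        return self.proj(h)                 # (batch, frames, acoustic features)

class Vocoder(nn.Module):
    """Stand-in for a neural vocoder that upsamples acoustic feature
    frames into waveform samples."""
    def __init__(self, n_acoustic_features=80, hop_length=256):
        super().__init__()
        self.net = nn.Linear(n_acoustic_features, hop_length)

    def forward(self, feats):               # feats: (batch, frames, features)
        samples = self.net(feats)           # (batch, frames, hop_length)
        return samples.flatten(1)           # (batch, samples) waveform

emg = torch.randn(1, 100, 8)                # 100 EMG frames, 8 channels (dummy data)
features = EMGEncoder()(emg)
waveform = Vocoder()(features)
print(features.shape, waveform.shape)
```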
"Due to an inadequate amount of available data and noisy signals, the synthesized speech often exhibits a low level of naturalness," Zhao Ren, Kevin Scheck and their colleagues wrote in their paper. "In this work, we propose Diff-ETS, an ETS model which uses a score-based diffusion probabilistic model to enhance the naturalness of synthesized speech. The diffusion model is applied to improve the quality of the acoustic features predicted by an EMG encoder."
In contrast to most earlier ETS conversion models, which consist only of an encoder and a vocoder, the researchers' model has three components: an EMG encoder, a diffusion probabilistic model and a vocoder. The diffusion probabilistic model, placed between the other two, is the new addition, and it is what the team expects to yield more natural synthesized speech.
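The sketch below illustrates how such a diffusion stage could slot between the encoder and the vocoder: starting from noise, it iteratively denoises acoustic features while conditioning on the encoder's coarse prediction. The denoiser network, noise schedule, and step count are illustrative placeholders, not the settings used in Diff-ETS.

```python
import torch
import torch.nn as nn

class FeatureDenoiser(nn.Module):
    """Toy network that predicts the noise in a noisy feature frame,
    conditioned on the EMG encoder's coarse prediction for that frame."""
    def __init__(self, n_features=80, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_features + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, noisy, condition, t_frac):
        t = torch.full_like(noisy[..., :1], t_frac)     # scalar timestep embedding
        return self.net(torch.cat([noisy, condition, t], dim=-1))

def refine_features(coarse, denoiser, n_steps=50):
    """Simplified DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise, conditioned on the encoder's coarse features."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(coarse)
    for step in reversed(range(n_steps)):
        eps = denoiser(x, coarse, step / n_steps)
        coef = betas[step] / torch.sqrt(1.0 - alpha_bars[step])
        mean = (x - coef * eps) / torch.sqrt(alphas[step])
        noise = torch.randn_like(x) if step > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[step]) * noise
    return x                                            # refined acoustic features

coarse = torch.randn(1, 100, 80)           # coarse features from the EMG encoder
refined = refine_features(coarse, FeatureDenoiser())
print(refined.shape)                        # same shape, ready for the vocoder
```

In this arrangement, the vocoder is fed the refined features rather than the encoder's raw output, which is how the extra stage aims to improve the naturalness of the synthesized speech.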