RVAE-EM: Generative Speech Dereverberation Based on Recurrent Variational Auto-Encoder and Convolutive Transfer Function


Abstract

[Code], [PDF]

In indoor scenes, reverberation is a crucial factor that degrades the perceived quality and intelligibility of speech. In this work, we propose a generative dereverberation method. Our approach is based on a probabilistic model utilizing a recurrent variational auto-encoder (RVAE) network and the convolutive transfer function (CTF) approximation. Unlike most previous approaches, the output of our RVAE serves as the prior of the clean speech, and our target is the maximum a posteriori (MAP) estimate of the clean speech, obtained iteratively through the expectation-maximization (EM) algorithm. The proposed method integrates the capabilities of network-based speech prior modelling and CTF-based observation modelling. Experiments on single-channel speech dereverberation show that the proposed generative method noticeably outperforms advanced discriminative networks.
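
To make the alternation concrete, below is a minimal, self-contained Python/NumPy sketch of the kind of EM loop the abstract describes: an E-step that computes a MAP-style estimate of the clean-speech STFT under a Gaussian prior (whose variance, in RVAE-EM, would come from the RVAE decoder) and a CTF observation model, followed by an M-step that refits the CTF filter and the noise variance. The rvae_prior_variance placeholder, the toy sizes, and the simplified update rules are illustrative assumptions, not the paper's exact derivation or the released code.

import numpy as np

rng = np.random.default_rng(0)
F, T, L = 64, 100, 8  # frequency bins, time frames, CTF filter length (toy sizes)

def rvae_prior_variance(s_est):
    # Placeholder for the RVAE prior: in RVAE-EM the decoder predicts the
    # clean-speech variance; here we simply reuse the current magnitude estimate.
    return np.maximum(np.abs(s_est) ** 2, 1e-6)

def ctf_matrix(h_f, T):
    # Lower-triangular Toeplitz matrix implementing the per-frequency
    # convolution of the CTF approximation.
    A = np.zeros((T, T), dtype=complex)
    for l, coeff in enumerate(h_f):
        A += coeff * np.eye(T, k=-l)
    return A

# Toy reverberant observation: per frequency bin, x = CTF(s) + noise
s_true = rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))
h_true = (0.7 ** np.arange(L))[None, :] * np.ones((F, 1))
x = np.stack([ctf_matrix(h_true[f], T) @ s_true[f] for f in range(F)])
x = x + 0.05 * (rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T)))

s_est = x.copy()                        # initialize clean speech with the observation
h_est = np.zeros((F, L), dtype=complex)
h_est[:, 0] = 1.0                       # initial CTF filter: identity
noise_var = 0.1
for it in range(10):
    prior_var = rvae_prior_variance(s_est)
    for f in range(F):
        A = ctf_matrix(h_est[f], T)
        # E-step (simplified): MAP estimate of the clean STFT frames under a
        # zero-mean Gaussian prior with variance prior_var[f]
        lhs = A.conj().T @ A / noise_var + np.diag(1.0 / prior_var[f])
        s_est[f] = np.linalg.solve(lhs, A.conj().T @ x[f] / noise_var)
        # M-step, part 1 (simplified): least-squares refit of the CTF filter
        S = np.stack([np.concatenate([np.zeros(l), s_est[f, :T - l]]) for l in range(L)], axis=1)
        h_est[f] = np.linalg.lstsq(S, x[f], rcond=None)[0]
    # M-step, part 2: residual noise variance
    resid = np.stack([x[f] - ctf_matrix(h_est[f], T) @ s_est[f] for f in range(F)])
    noise_var = float(np.mean(np.abs(resid) ** 2))

In RVAE-EM itself, the prior is produced by the RVAE (trained unsupervised for RVAE-EM-U or supervised for RVAE-EM-S), and the exact E- and M-step updates follow the derivation in the paper.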


Examples with WSJ0 dataset and simulated RIRs

Our approach has two versions: RVAE-EM-U (trained in an unsupervised manner) and RVAE-EM-S (trained in a supervised manner). In the comparison below, unsupervised approaches are marked with *.

Audio examples are provided for utterances #008, #027, #148, #309, #550, and #681. For each utterance, the page embeds players for: Unprocessed, Clean, VAE-NMF*, TCN-SA, FullSubNet, SGMSE+, RVAE-EM-U* (prop.), and RVAE-EM-S (prop.).

Source Code

This work is open-sourced on GitHub; see [Code]. If you find this work useful and would like to cite it, please use:

@misc{wang2023rvaeem,
      title={RVAE-EM: Generative speech dereverberation based on recurrent variational auto-encoder and convolutive transfer function},
      author={Pengyu Wang and Xiaofei Li},
      year={2023},
      eprint={2309.08157},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}