Reverberant speech, i.e., the speech signal degraded by reverberation, carries crucial information about both the anechoic source speech and the room impulse response (RIR). This work proposes a variational Bayesian inference (VBI) framework with a neural speech prior (VINP) for joint speech dereverberation and blind RIR identification. In VINP, a probabilistic signal model is constructed in the time-frequency (T-F) domain based on the convolutive transfer function (CTF) approximation. For the first time, we propose using an arbitrary discriminative dereverberation deep neural network (DNN) to estimate the prior distribution of anechoic speech within a probabilistic model. By integrating both the reverberant speech and the anechoic speech prior, VINP yields maximum a posteriori (MAP) and maximum likelihood (ML) estimates of the anechoic speech spectrum and the CTF filter, respectively. After simple transformations, the waveforms of the anechoic speech and the RIR are recovered. Unlike most deep learning (DL)-based single-channel dereverberation approaches, VINP is also effective for automatic speech recognition (ASR) systems. Experiments on single-channel speech dereverberation demonstrate that VINP attains state-of-the-art (SOTA) performance in mean opinion score (MOS) and word error rate (WER). For blind RIR identification, experiments demonstrate that VINP achieves SOTA performance in estimating the reverberation time at 60 dB (RT60) and advanced performance in direct-to-reverberation ratio (DRR) estimation. Code and audio samples are available online.
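To make the CTF approximation underlying VINP concrete, the sketch below applies a CTF filter to an anechoic STFT: reverberation is modeled as a short per-frequency convolution along the time-frame axis, X(t, f) ≈ Σ_l H(l, f) · S(t − l, f). This is a minimal illustration of the signal model only, not the paper's inference code; the function name, array shapes, and tap count are assumptions for the example.

```python
import numpy as np

def apply_ctf(S, H):
    """Apply the convolutive transfer function (CTF) approximation.

    Reverberant STFT: X(t, f) ≈ sum_l H(l, f) * S(t - l, f),
    i.e. a convolution over time frames, independently per frequency bin.

    S: (T, F) complex STFT of anechoic speech
    H: (L, F) complex CTF filter (L taps per frequency bin)
    Returns X: (T, F) complex STFT of reverberant speech.
    """
    T, F = S.shape
    X = np.zeros((T, F), dtype=complex)
    for f in range(F):
        # Per-frequency convolution, truncated to the original T frames.
        X[:, f] = np.convolve(S[:, f], H[:, f])[:T]
    return X
```

Dereverberation inverts this relation: given only X, VINP jointly infers S (with a DNN-estimated prior) and H (via ML estimation), from which the anechoic waveform and RIR are obtained.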
Method | Examples
---|---
Unprocessed |
Clean |
CMGAN |
StoRM |
TCN+SA+S |
oSpatialNet* |
VINP-TCN+SA+S |
VINP-oSpatialNet* |
This work is open-sourced on GitHub; see [Code]. If you find this work useful and would like to cite it, please use:
@misc{wang2025vinp,
      title={VINP: Variational Bayesian Inference with Neural Speech Prior for Joint ASR-Effective Speech Dereverberation and Blind RIR Identification},
      author={Pengyu Wang and Ying Fang and Xiaofei Li},
      year={2025},
      eprint={2502.07205},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2502.07205},
}