VINP: Variational Bayesian Inference with Neural Speech Prior for Joint ASR-Effective Speech Dereverberation and Blind RIR Identification


Abstract

[Code], [PDF]

Reverberant speech, i.e., speech degraded by the process of reverberation, carries crucial information about both the anechoic source speech and the room impulse response (RIR). This work proposes a variational Bayesian inference (VBI) framework with a neural speech prior (VINP) for joint speech dereverberation and blind RIR identification. In VINP, a probabilistic signal model is constructed in the time-frequency (T-F) domain based on the convolutive transfer function (CTF) approximation. For the first time, we propose using an arbitrary discriminative dereverberation deep neural network (DNN) to predict the prior distribution of the anechoic speech within a probabilistic model. By integrating both the reverberant speech and the anechoic speech prior, VINP yields the maximum a posteriori (MAP) estimate of the anechoic speech spectrum and the maximum likelihood (ML) estimate of the CTF filter. Simple transformations then recover the waveforms of the anechoic speech and the RIR. Moreover, VINP is effective for automatic speech recognition (ASR) systems, which sets it apart from most deep learning (DL)-based single-channel dereverberation approaches. Experiments on single-channel speech dereverberation demonstrate that VINP achieves advanced performance on most metrics related to human perception and state-of-the-art (SOTA) performance on ASR-related metrics. For blind RIR identification, experiments indicate that VINP attains SOTA performance in blind estimation of the reverberation time at 60 dB (RT60) and the direct-to-reverberation ratio (DRR). Code and audio samples are available online.
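To make the CTF approximation mentioned above concrete, below is a minimal NumPy sketch of the forward signal model: per frequency bin, the reverberant STFT is modeled as a convolution of the anechoic STFT with a short CTF filter along the frame axis. All array names and sizes here are illustrative and are not taken from the paper's code; the actual VINP model additionally includes a noise term and the neural speech prior.

import numpy as np

rng = np.random.default_rng(0)
T, F, L = 100, 257, 8  # STFT frames, frequency bins, CTF filter length (illustrative)

# Synthetic complex spectrograms standing in for real STFT data.
S = rng.standard_normal((T, F)) + 1j * rng.standard_normal((T, F))  # anechoic speech STFT
H = rng.standard_normal((L, F)) + 1j * rng.standard_normal((L, F))  # CTF filter per bin

# CTF approximation: X[t, f] = sum_l H[l, f] * S[t - l, f]
# (frame-wise convolution within each frequency bin, truncated to T frames).
X = np.zeros((T, F), dtype=complex)
for f in range(F):
    X[:, f] = np.convolve(S[:, f], H[:, f])[:T]

In VINP, this generative model is inverted: given X, variational Bayesian inference estimates S (MAP, regularized by the DNN speech prior) and H (ML), from which the anechoic waveform and the RIR are reconstructed.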


Examples on REVERB dataset

Please open this page in Edge or Chrome; audio playback is unreliable in Firefox.
[Audio demo table: each method below has SimData and RealData samples, playable on this page.]
Unprocessed
Oracle
GWPE
SkipConvNet
CMGAN
StoRM
TCN+SA+S
oSpatialNet*
VINP-TCN+SA+S (prop.)
VINP-oSpatialNet (prop.)

Source Code

This work is open-sourced on GitHub; see [Code]. If you find this work useful and would like to cite it, please use:

@misc{wang2025vinp,
  title={VINP: Variational Bayesian Inference with Neural Speech Prior for Joint ASR-Effective Speech Dereverberation and Blind RIR Identification},
  author={Pengyu Wang and Ying Fang and Xiaofei Li},
  year={2025},
  eprint={2502.07205},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2502.07205},
}