Reverberant speech separation based on audio-visual dictionary learning and binaural cues

Liu, Qingju, Wang, Wenwu, Jackson, Philip and Barnard, Mark (2012) Reverberant speech separation based on audio-visual dictionary learning and binaural cues. In: IEEE Statistical Signal Processing Workshop (SSP); 05-08 Aug 2012, Michigan, U.S. (2012 IEEE Statistical Signal Processing Workshop (SSP)) ISSN (print) 2373-0803

Abstract

Probabilistic models of binaural cues, such as the interaural phase difference (IPD) and the interaural level difference (ILD), can be used to obtain an audio mask in the time-frequency (TF) domain for source separation of binaural mixtures. These models are, however, often degraded by acoustic noise. In contrast, the video stream contains relevant information about the synchronous audio stream that is not affected by acoustic noise. In this paper, we present a novel method for modeling the audio-visual (AV) coherence based on dictionary learning. A visual mask is constructed from the video signal using the learnt AV dictionary, and combined with the audio mask to obtain a noise-robust audio-visual mask, which is then applied to the binaural signal for source separation. We tested our algorithm on the XM2VTS database, and observed considerable performance improvement for noise-corrupted signals.
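The audio side of the pipeline described above can be illustrated with a minimal sketch. This is not the paper's probabilistic model: instead of learned distributions over IPD/ILD, it uses simple hard thresholds (the `0.3` rad IPD and `3` dB ILD limits are arbitrary illustrative values) to build a TF mask that keeps bins consistent with a frontal target, then applies the mask to one channel of the binaural mixture.

```python
import numpy as np
from scipy.signal import stft, istft

def binaural_tf_mask(left, right, fs=16000, nperseg=512):
    """Illustrative TF masking from binaural cues (IPD/ILD).

    A simplified stand-in for a probabilistic cue model: a hard mask
    keeps TF bins whose IPD and ILD are near zero, i.e. bins likely
    dominated by a source directly in front of the listener.
    """
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    ipd = np.angle(L * np.conj(R))  # interaural phase difference (rad)
    eps = 1e-12
    ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))  # level diff (dB)
    # Hard binary mask: bins consistent with a frontal source
    # (thresholds are illustrative, not from the paper)
    mask = ((np.abs(ipd) < 0.3) & (np.abs(ild) < 3.0)).astype(float)
    _, est = istft(mask * L, fs=fs, nperseg=nperseg)
    return est, mask

# Toy usage: a "frontal" tone identical in both ears, plus a
# delayed interferer that produces a nonzero IPD.
fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)
interferer = 0.5 * np.sin(2 * np.pi * 1000 * t)
left = target + interferer
right = target + np.roll(interferer, 8)  # delay -> nonzero IPD
est, mask = binaural_tf_mask(left, right, fs=fs)
```

In the paper, the binary decision per TF bin is replaced by probabilistic cue models, and the resulting audio mask is further combined with a visual mask derived from the learnt AV dictionary before resynthesis.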
