LiMuSE: Lightweight Multi-modal Speaker Extraction

Multi-modal cues such as spatial information, facial expressions, and voiceprints have been introduced to speech separation and speaker extraction tasks as complementary information for better performance. However, these cues increase the parameter count and model complexity, making such models harder to deploy on resource-constrained devices. In this paper, we alleviate this problem by proposing LiMuSE, a Lightweight Multi-modal framework for Speaker Extraction. LiMuSE employs a GC-equipped TCN, which incorporates Group Communication (GC) into the Temporal Convolutional Network (TCN), in the Context Codec module, the audio block, and the fusion block. Experiments on the MC_GRID dataset demonstrate that LiMuSE achieves performance on par with or better than baselines with far fewer parameters and lower model complexity. We further investigate the impact of quantizing LiMuSE. Our code and dataset are provided.
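The core parameter-saving idea, Group Communication, splits a wide feature vector into small groups and processes them with a single shared module, so parameters scale with the group size rather than the full channel dimension. The following is a minimal NumPy sketch of that idea only; the function name, shapes, and the simple mean-based inter-group context are illustrative assumptions, not the paper's exact implementation (which uses learned modules inside a TCN):

```python
import numpy as np

def group_communication(x, num_groups, w, b):
    """Illustrative sketch of Group Communication (GC).

    Channels are split into `num_groups` groups; one small transform
    (w, b) is SHARED across all groups, so its size depends on the
    group width g = channels // num_groups, not on the full channel
    dimension. Inter-group averaging stands in for the learned
    inter-group module used in the actual model (an assumption here).

    x: (channels, time), channels divisible by num_groups
    w: (g, g) shared weight, b: (g,) shared bias
    """
    C, T = x.shape
    g = C // num_groups
    groups = x.reshape(num_groups, g, T)          # (G, g, T)
    context = groups.mean(axis=0, keepdims=True)  # inter-group context
    mixed = groups + context                      # groups "communicate"
    out = np.einsum('ij,gjt->git', w, mixed) + b[None, :, None]
    return out.reshape(C, T)

rng = np.random.default_rng(0)
C, T, G = 16, 10, 4
g = C // G
x = rng.standard_normal((C, T))
w = rng.standard_normal((g, g)) * 0.1
b = np.zeros(g)
y = group_communication(x, G, w, b)
print(y.shape)  # (16, 10)
```

Note that the shared transform has g*g + g parameters instead of C*C + C; with C=16 and G=4 that is 20 parameters rather than 272, which is the source of LiMuSE's compactness.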
