Learning How to Listen: A Temporal-Frequential Attention Model for Sound Event Detection

29 Oct 2018 · Shen Yu-Han, He Ke-Xin, Zhang Wei-Qiang

In this paper, we propose a temporal-frequential attention model for sound event detection (SED). Our network learns how to listen with two attention models: a temporal attention model and a frequential attention model. The proposed system learns when to listen using the temporal attention model and where to listen on the frequency axis using the frequential attention model. With these two models, we encourage the system to pay more attention to important frames or segments, and to important frequency components, for sound event detection. Our method is evaluated on Task 2 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 Challenge and achieves competitive performance.
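The abstract does not spell out the attention mechanism, so the following is only a minimal sketch of one plausible reading: softmax-normalized weights computed separately along the time axis and the frequency axis of a spectrogram, then used to re-weight it before classification. The input shapes and the projections `w_t` and `w_f` are hypothetical, not the paper's architecture.

```python
# Minimal sketch (not the authors' code) of temporal and frequential
# attention over a log-mel spectrogram. Shapes, projections, and the
# softmax normalization are assumptions for illustration only.
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, F = 240, 64                      # frames x mel bands (assumed sizes)
spec = rng.standard_normal((T, F))  # stand-in for a log-mel spectrogram

# Temporal attention: one weight per frame ("when to listen").
w_t = rng.standard_normal((F, 1))        # hypothetical learned projection
att_t = softmax(spec @ w_t, axis=0)      # (T, 1), sums to 1 over time

# Frequential attention: one weight per bin ("where to listen").
w_f = rng.standard_normal((T, 1))        # hypothetical learned projection
att_f = softmax(spec.T @ w_f, axis=0).T  # (1, F), sums to 1 over frequency

# Re-weight the spectrogram so salient frames and bins dominate.
attended = spec * att_t * att_f          # broadcasts to (T, F)
print(attended.shape)                    # (240, 64)
```

In a trained system the projections would be learned jointly with the SED classifier; random weights are used here only so the sketch runs standalone.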


Categories


Sound · Audio and Speech Processing
