VOICe: A Sound Event Detection Dataset For Generalizable Domain Adaptation

25 Nov 2019  ·  Shayan Gharib, Konstantinos Drossos, Eemi Fagerlund, Tuomas Virtanen

The performance of sound event detection methods can degrade significantly when they are used in unseen conditions (e.g., different recording devices or ambient noise). Domain adaptation is a promising way to tackle this problem. In this paper, we present VOICe, the first dataset for the development and evaluation of domain adaptation methods for sound event detection. VOICe consists of mixtures of three sound events ("baby crying", "glass breaking", and "gunshot") superimposed on three categories of acoustic scenes: vehicle, outdoors, and indoors. The mixtures are also offered without any background noise. VOICe is freely available online (https://doi.org/10.5281/zenodo.3514950). In addition, we evaluate the performance of an adversarial-based domain adaptation method on VOICe.
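
The paper evaluates an adversarial-based domain adaptation method on VOICe. As a rough illustration of what such a setup can look like (not the authors' exact model), the sketch below uses a gradient-reversal layer so that frame-level features trained for event detection also become indistinguishable across acoustic-scene domains. The feature dimensions, layer sizes, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of gradient-reversal adversarial domain adaptation for SED.
# Shapes, layer sizes, and optimizer settings are illustrative assumptions,
# not the configuration used in the VOICe paper.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # 64 mel bands per frame (assumed)
event_classifier  = nn.Linear(128, 3)   # 3 events: baby crying, glass breaking, gunshot
domain_classifier = nn.Linear(128, 1)   # source vs. target acoustic scene

params = (list(feature_extractor.parameters())
          + list(event_classifier.parameters())
          + list(domain_classifier.parameters()))
optim = torch.optim.Adam(params, lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(src_x, src_y, tgt_x, lambd=0.1):
    """One adversarial step: event loss on source + domain-confusion loss on both domains."""
    f_src, f_tgt = feature_extractor(src_x), feature_extractor(tgt_x)
    event_loss = bce(event_classifier(f_src), src_y)

    feats = torch.cat([f_src, f_tgt])
    dom_labels = torch.cat([torch.zeros(len(f_src), 1), torch.ones(len(f_tgt), 1)])
    dom_loss = bce(domain_classifier(GradReverse.apply(feats, lambd)), dom_labels)

    optim.zero_grad()
    (event_loss + dom_loss).backward()
    optim.step()
    return event_loss.item(), dom_loss.item()

# Toy usage with random frame-level features and multi-label event activities.
src_x, src_y = torch.randn(32, 64), torch.randint(0, 2, (32, 3)).float()
tgt_x = torch.randn(32, 64)
print(train_step(src_x, src_y, tgt_x))
```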

No code implementations yet.

Categories


Sound · Audio and Speech Processing

Datasets


Introduced in the Paper:

VOICe

Used in the Paper:

TAU Urban Acoustic Scenes 2019