Affective Burst Detection from Speech using Kernel-fusion Dilated Convolutional Neural Networks

8 Oct 2021  ·  Berkay Kopru, Engin Erzin

As speech interfaces become richer and more widespread, speech emotion recognition promises increasingly attractive applications. In the continuous emotion recognition (CER) problem, tracking changes across affective states is an important and desired capability. Although CER studies widely use correlation metrics in their evaluations, these metrics do not always capture the high-intensity changes in the affective domain. In this paper, we define a novel affective burst detection problem to accurately capture high-intensity changes of the affective attributes. For this problem, we formulate a two-class classification approach to isolate affective burst regions over the affective state contour. The proposed classifier is a kernel-fusion dilated convolutional neural network (KFDCNN) architecture driven by speech spectral features that segments the affective attribute contour into idle and burst sections. Experimental evaluations are performed on the RECOLA and CreativeIT datasets. The proposed KFDCNN is observed to outperform baseline feedforward neural networks on both datasets.
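The abstract does not detail the KFDCNN internals, but the name suggests parallel dilated 1-D convolutions whose outputs are fused across kernel sizes. The sketch below is a minimal, hypothetical illustration of that idea in NumPy: each branch applies a dilated convolution with a different kernel size to a feature sequence, and the branch outputs are cropped to a common length and stacked channel-wise. All function names, kernel choices, and the fusion-by-stacking step are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-padding dilated 1-D convolution over a feature sequence x."""
    k = len(kernel)
    span = (k - 1) * dilation          # receptive-field extent of the kernel
    out_len = len(x) - span
    return np.array([
        sum(kernel[j] * x[t + j * dilation] for j in range(k))
        for t in range(out_len)
    ])

def kernel_fusion_block(x, kernels, dilation):
    """Fuse parallel dilated convolutions with different kernel sizes
    by cropping each branch to the shortest output and stacking them
    as channels (shape: n_kernels x T')."""
    outs = [dilated_conv1d(x, k, dilation) for k in kernels]
    min_len = min(len(o) for o in outs)
    return np.stack([o[:min_len] for o in outs])
```

For example, fusing size-2 and size-3 averaging-style kernels at dilation 2 over a 10-frame sequence yields a 2-channel output trimmed to the shorter branch; a real network would follow such a block with nonlinearities and a two-class output layer for the idle/burst decision.

```python
x = np.arange(10, dtype=float)          # toy spectral-feature sequence
feat = kernel_fusion_block(x, [np.ones(2), np.ones(3)], dilation=2)
print(feat.shape)                        # (2, 6)
```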


Categories

Sound · Human-Computer Interaction · Audio and Speech Processing

Datasets

RECOLA · CreativeIT