Search Results for author: Hyeonuk Nam

Found 10 papers, 7 papers with code

Auditory Neural Response Inspired Sound Event Detection Based on Spectro-temporal Receptive Field

no code implementations • 20 Jun 2023 • Deokki Min, Hyeonuk Nam, Yong-Hwa Park

In this work, we used the STRF as the kernel of the first convolutional layer in an SED model, extracting neural responses from the input sound so that the SED model behaves more like the human auditory system (see the sketch below).

Event Detection · Sound Event Detection
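
A minimal sketch of this idea, not the paper's exact STRF formulation: initialize the first `Conv2d` of an SED model with a fixed bank of Gabor-like spectro-temporal filters and keep them frozen, so the layer acts as a neural-response front end. The filter count, kernel size, and Gabor parameterization here are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

def gabor_strf_bank(n_filters=16, kernel_size=11):
    """Build an (n_filters, 1, k, k) bank of Gabor-like STRF approximations."""
    ks = kernel_size
    coords = torch.arange(ks, dtype=torch.float32) - ks // 2
    t, f = torch.meshgrid(coords, coords, indexing="ij")  # time / frequency axes
    kernels = []
    for i in range(n_filters):
        theta = math.pi * i / n_filters                   # modulation direction
        rot = t * math.cos(theta) + f * math.sin(theta)
        envelope = torch.exp(-(t ** 2 + f ** 2) / (2 * (ks / 4) ** 2))
        kernels.append(envelope * torch.cos(2 * math.pi * rot / ks))
    return torch.stack(kernels).unsqueeze(1)

# first layer of the SED model: fixed STRF-like kernels, not trained
strf_conv = nn.Conv2d(1, 16, kernel_size=11, padding=5, bias=False)
with torch.no_grad():
    strf_conv.weight.copy_(gabor_strf_bank())
strf_conv.weight.requires_grad_(False)

# input spectrogram batch: (batch, 1, time, frequency)
neural_response = strf_conv(torch.randn(4, 1, 128, 64))
```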

VIFS: An End-to-End Variational Inference for Foley Sound Synthesis

1 code implementation • 8 Jun 2023 • Junhyeok Lee, Hyeonuk Nam, Yong-Hwa Park

Unlike TTS models, which generate short pronunciations from phonemes and a speaker identity, the category-to-sound problem requires generating diverse sounds from a category index alone (see the sketch below).

Speech Synthesis · Variational Inference
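
A hypothetical, minimal sketch of the category-to-sound setup described above, not VIFS's actual architecture: the sound class is pinned by a learned embedding of the category index, while diversity comes from sampling a latent variable. All module names and sizes here are illustrative.

```python
import torch
import torch.nn as nn

class CategoryToSound(nn.Module):
    def __init__(self, n_categories=7, z_dim=32, n_frames=100, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_categories, 64)   # fixes the sound class
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + 64, 256), nn.ReLU(),
            nn.Linear(256, n_frames * n_mels))
        self.z_dim, self.n_frames, self.n_mels = z_dim, n_frames, n_mels

    def forward(self, category, n_samples=1):
        c = self.embed(category).expand(n_samples, -1)  # same class for all samples
        z = torch.randn(n_samples, self.z_dim)          # source of diversity
        mel = self.decoder(torch.cat([z, c], dim=-1))
        return mel.view(n_samples, self.n_mels, self.n_frames)

# three different spectrograms generated from one category index
mels = CategoryToSound()(torch.tensor([2]), n_samples=3)
```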

Data Augmentation and Squeeze-and-Excitation Network on Multiple Dimension for Sound Event Localization and Detection in Real Scenes

no code implementations • 24 Jun 2022 • Byeong-Yun Ko, Hyeonuk Nam, Seong-Hu Kim, Deokki Min, Seung-Deok Choi, Yong-Hwa Park

Performance of sound event localization and detection (SELD) in real scenes is limited by the small size of SELD datasets, owing to the difficulty of obtaining a sufficient amount of realistic multi-channel audio recordings with accurate labels.

Data Augmentation · Sound Event Localization and Detection

Frequency Dependent Sound Event Detection for DCASE 2022 Challenge Task 4

1 code implementation • 23 Jun 2022 • Hyeonuk Nam, Seong-Hu Kim, Deokki Min, Byeong-Yun Ko, Seung-Deok Choi, Yong-Hwa Park

While many deep learning methods from other domains have been applied to sound event detection (SED), the differences between the methods' original domains and SED have not been appropriately considered so far.

Event Detection · Sound Event Detection

Decomposed Temporal Dynamic CNN: Efficient Time-Adaptive Network for Text-Independent Speaker Verification Explained with Speaker Activation Map

1 code implementation • 29 Mar 2022 • Seong-Hu Kim, Hyeonuk Nam, Yong-Hwa Park

To extract accurate speaker information for text-independent speaker verification, temporal dynamic CNNs (TDY-CNNs), which adapt their kernels to each time bin, were proposed (see the sketch below).

Data Augmentation · Text-Independent Speaker Verification
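
A minimal sketch of the temporal dynamic convolution idea, simplified from the paper: K basis kernels are mixed with per-time-bin attention weights, so the effective kernel adapts to each time frame. The real TDY-CNN aggregates the basis kernels before convolving for efficiency; the layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalDynamicConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, n_basis=4):
        super().__init__()
        self.k, self.n_basis = k, n_basis
        # K basis kernels, each an ordinary 2D conv kernel over (freq, time)
        self.basis = nn.Parameter(torch.randn(n_basis, out_ch, in_ch, k, k) * 0.01)
        # attention: pool over frequency, predict per-time weights over the bases
        self.attn = nn.Conv1d(in_ch, n_basis, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, F, T)
        B, _, _, T = x.shape
        w = F.softmax(self.attn(x.mean(dim=2)), dim=1)    # (B, K, T)
        # run all basis convolutions, then mix them per time bin
        outs = [F.conv2d(x, self.basis[i], padding=self.k // 2)
                for i in range(self.n_basis)]             # each (B, O, F, T)
        outs = torch.stack(outs, dim=1)                   # (B, K, O, F, T)
        w = w.view(B, self.n_basis, 1, 1, T)
        return (outs * w).sum(dim=1)                      # (B, O, F, T)

# e.g. a batch of 2 clips, 40 mel bins, 100 frames
y = TemporalDynamicConv(1, 8)(torch.randn(2, 1, 40, 100))
```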

FilterAugment: An Acoustic Environmental Data Augmentation Method

1 code implementation • 7 Oct 2021 • Hyeonuk Nam, Seong-Hu Kim, Yong-Hwa Park

Thus, training acoustic models for audio and speech tasks requires regularization over various acoustic environments in order to achieve robust performance in real-life applications (see the sketch below).

Data Augmentation · Event Detection +2
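
A minimal sketch of FilterAugment's core idea, simplified from the authors' released code: split the frequency axis of a log-mel spectrogram into a few random bands and apply a random gain to each, simulating varied acoustic environments and frequency responses. The band counts, gain range, and additive log-domain application here are assumptions of this sketch.

```python
import torch

def filter_augment(log_mel, db_range=(-6.0, 6.0), n_bands=(3, 6)):
    """log_mel: (batch, n_mels, time); returns a randomly filtered copy."""
    B, n_mels, _ = log_mel.shape
    n = int(torch.randint(n_bands[0], n_bands[1], (1,)))
    # random band boundaries along the frequency axis
    edges = torch.cat([torch.tensor([0]),
                       torch.sort(torch.randint(1, n_mels, (n - 1,))).values,
                       torch.tensor([n_mels])])
    out = log_mel.clone()
    for lo, hi in zip(edges[:-1], edges[1:]):
        gain_db = torch.empty(B, 1, 1).uniform_(*db_range)  # one gain per clip
        out[:, lo:hi, :] += gain_db                         # additive in log domain
    return out

augmented = filter_augment(torch.randn(4, 128, 500))
```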

Temporal Dynamic Convolutional Neural Network for Text-Independent Speaker Verification and Phonemic Analysis

1 code implementation • 7 Oct 2021 • Seong-Hu Kim, Hyeonuk Nam, Yong-Hwa Park

The temporal dynamic model adapts itself to phonemes without phoneme information being explicitly given during training, and the results show the necessity of considering phoneme variation within utterances for more accurate and robust text-independent speaker verification.

Speaker Recognition · Text-Independent Speaker Recognition +1

Deep learning based cough detection camera using enhanced features

no code implementations • 28 Jul 2021 • Gyeong-Tae Lee, Hyeonuk Nam, Seong-Hu Kim, Sang-Min Choi, Youngkey Kim, Yong-Hwa Park

Finally, a test F1 score of 91.9% (test accuracy of 97.2%) was achieved by G-net with the MFCC-V-A feature (named Spectroflow), an acoustic feature effective for cough detection (see the sketch below).

Data Augmentation
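
A hedged sketch of extracting an MFCC-V-A-style feature, assuming "V" and "A" denote the velocity (first-order delta) and acceleration (second-order delta) of the MFCCs, which is a common construction; the paper's exact Spectroflow recipe may differ, and the bundled example clip stands in for real cough audio.

```python
import librosa
import numpy as np

y, sr = librosa.load(librosa.ex("trumpet"))        # stand-in for a cough clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
velocity = librosa.feature.delta(mfcc, order=1)    # frame-to-frame change
acceleration = librosa.feature.delta(mfcc, order=2)
mfcc_v_a = np.concatenate([mfcc, velocity, acceleration], axis=0)  # (60, T)
```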
