no code implementations • 28 Feb 2021 • Gautam Krishna, Mason Carnahan, Shilpa Shamapant, Yashitha Surendranath, Saumya Jain, Arundhati Ghosh, Co Tran, Jose del R Millan, Ahmed H Tewfik
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously with the speech.
Automatic Speech Recognition (ASR) +2
no code implementations • 13 Aug 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Morgan M Hagood, Ahmed H. Tewfik
In this paper, we demonstrate speech recognition with a deep learning model on a limited English vocabulary, consisting of three vowels and one word, from electroencephalography (EEG) signals obtained with dry electrodes.
no code implementations • 1 Jun 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
In this paper we introduce a recurrent neural network (RNN) based variational autoencoder (VAE) model with a new constrained loss function that can generate more meaningful electroencephalography (EEG) features from raw EEG features to improve the performance of EEG based speech recognition systems.
Automatic Speech Recognition (ASR) +3
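The entry above describes an RNN-based VAE trained with a new constrained loss; the exact constraint is not given in this snippet, so the following is only a minimal sketch of a VAE objective with a hypothetical latent-norm constraint term standing in for the paper's constraint (all names and the penalty form are assumptions):

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var, z, lam=0.1):
    """Standard VAE objective (reconstruction + KL divergence) plus a
    hypothetical constraint term penalizing latent codes outside the
    unit ball -- a stand-in for the paper's unspecified constrained loss."""
    recon = np.mean((x - x_hat) ** 2)                              # reconstruction error
    kl = -0.5 * np.mean(1 + log_var - mu ** 2 - np.exp(log_var))   # KL(q(z|x) || N(0, I))
    constraint = np.mean(np.maximum(np.linalg.norm(z, axis=-1) - 1.0, 0.0))
    return recon + kl + lam * constraint
```

With a perfect reconstruction, a standard-normal posterior, and latents inside the unit ball, every term vanishes, which makes the three components easy to inspect in isolation.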
no code implementations • 29 May 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
The electroencephalography (EEG) signals recorded in parallel with speech are used to perform isolated and continuous speech recognition.
no code implementations • 29 May 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
In this paper we demonstrate that it is possible to generate more meaningful electroencephalography (EEG) features from raw EEG features using generative adversarial networks (GANs) to improve the performance of EEG-based continuous speech recognition systems.
no code implementations • 29 May 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
In [1, 2], the authors provided preliminary results for synthesizing speech from electroencephalography (EEG) features: they first predict acoustic features from EEG features, and the speech is then reconstructed from the predicted acoustic features using the Griffin-Lim reconstruction algorithm.
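The Griffin-Lim step mentioned above recovers a waveform from magnitude-only acoustic features by alternating magnitude projection and phase re-estimation. The papers use the STFT variant; the sketch below uses a full-signal FFT instead to stay short, so it is an illustration of the iteration, not the cited implementation:

```python
import numpy as np

def griffin_lim(target_mag, n_iter=50, seed=0):
    """Simplified Griffin-Lim: find a real signal whose FFT magnitude
    matches `target_mag`, iterating magnitude replacement and phase
    re-estimation from a random initial signal."""
    rng = np.random.default_rng(seed)
    n = 2 * (len(target_mag) - 1)            # time length implied by rfft size
    x = rng.standard_normal(n)               # random initial signal
    for _ in range(n_iter):
        spec = np.fft.rfft(x)
        phase = np.exp(1j * np.angle(spec))  # keep estimated phase ...
        x = np.fft.irfft(target_mag * phase, n=n)  # ... impose target magnitude
    return x
```

In the real STFT setting the overlap between frames makes the projection inexact, which is why the iteration is needed; in this degenerate full-FFT sketch it converges immediately.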
no code implementations • 16 May 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
In this paper we explore predicting facial or lip video features from electroencephalography (EEG) features and predicting EEG features from recorded facial or lip video frames using deep learning models.
no code implementations • 9 Apr 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
In this paper we introduce an attention-regression model for predicting acoustic features from electroencephalography (EEG) features recorded in parallel with spoken sentences.
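The paper's exact architecture is not given in this snippet; one plausible reading of "attention-regression" is a scaled dot-product attention layer over the EEG sequence followed by a linear regression head, sketched below with placeholder weight matrices standing in for learned parameters:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention_regression(eeg, w_q, w_k, w_v, w_out):
    """Map an EEG feature sequence (T, d_eeg) to an acoustic feature
    sequence (T, d_out): one scaled dot-product self-attention layer,
    then a linear regression head. Weights are hypothetical stand-ins
    for learned parameters."""
    q, k, v = eeg @ w_q, eeg @ w_k, eeg @ w_v
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (T, T) attention map
    context = scores @ v                              # attended EEG summary per frame
    return context @ w_out                            # regressed acoustic features
```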
no code implementations • 7 Mar 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
In this paper we explore speaker identification using electroencephalography (EEG) signals.
no code implementations • 29 Feb 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Yan Han, Ahmed H. Tewfik
In this paper we demonstrate predicting electroencephalography (EEG) features from acoustic features using a recurrent neural network (RNN) based regression model and a generative adversarial network (GAN).
no code implementations • 22 Feb 2020 • Gautam Krishna, Co Tran, Yan Han, Mason Carnahan
In this paper we demonstrate speech synthesis using different electroencephalography (EEG) feature sets recently introduced in [1].
no code implementations • 6 Feb 2020 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed Tewfik
Our results demonstrate the feasibility of using EEG signals for performing continuous silent speech recognition.
Automatic Speech Recognition (ASR) +3
no code implementations • 31 Dec 2019 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed H. Tewfik
In this paper we investigate continuous speech recognition from electroencephalography (EEG) features using a recently introduced end-to-end transformer-based automatic speech recognition (ASR) model.
Automatic Speech Recognition (ASR) +2
no code implementations • 16 Dec 2019 • Gautam Krishna, Mason Carnahan, Co Tran, Ahmed H. Tewfik
In this paper we investigate whether electroencephalography (EEG) features can be used to improve the performance of continuous visual speech recognition systems.
Automatic Speech Recognition (ASR) +4
no code implementations • 24 Nov 2019 • Gautam Krishna, Co Tran, Mason Carnahan, Yan Han, Ahmed H. Tewfik
In this paper we introduce various techniques to improve the performance of continuous speech recognition (CSR) systems based on electroencephalography (EEG) features.
Automatic Speech Recognition (ASR) +3
no code implementations • 8 Nov 2019 • Gautam Krishna, Co Tran, Yan Han, Mason Carnahan, Ahmed H. Tewfik
In this paper we demonstrate that the performance of a voice activity detection (VAD) system operating in the presence of background noise can be improved by concatenating acoustic input features with electroencephalography (EEG) features.
Sound, Audio and Speech Processing, Signal Processing
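The feature concatenation described in this entry is a simple frame-wise fusion; assuming both streams are already synchronized to the same frame rate, a minimal sketch (function and dimension names are illustrative) is:

```python
import numpy as np

def fuse_features(acoustic, eeg):
    """Frame-wise concatenation of acoustic features (T, d_a) with
    time-aligned EEG features (T, d_e), producing a (T, d_a + d_e)
    input for a downstream VAD classifier. Assumes both streams are
    already synchronized to the same frame rate."""
    assert acoustic.shape[0] == eeg.shape[0], "streams must be time-aligned"
    return np.concatenate([acoustic, eeg], axis=1)
```

The fused matrix can then be fed to any classifier in place of the acoustic-only features; the EEG columns carry speech-related information that is unaffected by acoustic background noise.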
no code implementations • 13 Sep 2019 • Gautam Krishna, Co Tran, Yan Han, Mason Carnahan, Ahmed H. Tewfik
In this paper we demonstrate spoken speech enhancement from electroencephalography (EEG) signals using a generative adversarial network (GAN) based model, a gated recurrent unit (GRU) regression based model, a temporal convolutional network (TCN) regression model, and finally a mixed TCN-GRU regression model.
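The TCN models in this entry are built from causal dilated 1-D convolutions, so each output frame depends only on current and past inputs. A minimal single-channel sketch of that core operation (not the paper's full model) is:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """Core TCN building block: causal dilated 1-D convolution.
    y[t] = sum_k w[k] * x[t - k*dilation], with x treated as zero
    before t = 0, so no output frame looks into the future."""
    y = np.zeros(len(x), dtype=float)
    for t in range(len(x)):
        for k, wk in enumerate(w):
            idx = t - k * dilation          # dilated, strictly backward-looking tap
            if idx >= 0:
                y[t] += wk * x[idx]
    return y
```

Stacking such layers with exponentially growing dilations (1, 2, 4, ...) is what gives a TCN its long receptive field over the EEG/speech sequence.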
no code implementations • 14 Aug 2019 • Gautam Krishna, Yan Han, Co Tran, Mason Carnahan, Ahmed H. Tewfik
In this paper we first demonstrate continuous noisy speech recognition using electroencephalography (EEG) signals on an English vocabulary with different types of state-of-the-art end-to-end automatic speech recognition (ASR) models; we further provide results obtained using EEG data recorded under different experimental conditions.
Audio and Speech Processing, Sound
no code implementations • 17 Jun 2019 • Yan Han, Gautam Krishna, Co Tran, Mason Carnahan, Ahmed H. Tewfik
In this paper we demonstrate that the performance of a speaker verification system can be improved by concatenating electroencephalography (EEG) signal features with speech signal features, or by using EEG signal features alone.
no code implementations • 17 Jun 2019 • Gautam Krishna, Co Tran, Mason Carnahan, Ahmed H. Tewfik
In this paper we demonstrate end-to-end continuous speech recognition (CSR) using electroencephalography (EEG) signals with no speech signal as input.
Automatic Speech Recognition (ASR) +3
no code implementations • 17 Jun 2019 • Gautam Krishna, Co Tran, Yan Han, Mason Carnahan, Ahmed H. Tewfik
In this paper we demonstrate continuous noisy speech recognition on a limited Chinese vocabulary using a connectionist temporal classification (CTC) model with electroencephalography (EEG) features and no speech signal as input. We further demonstrate continuous noisy speech recognition on a limited joint English and Chinese vocabulary with a single CTC model, again using EEG features with no speech signal as input.
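The CTC models in these entries emit one symbol (or a blank) per EEG frame; the decoding rule that turns a frame-level path into a label sequence is standard CTC collapsing, sketched here for greedy decoding:

```python
def ctc_collapse(path, blank=0):
    """Collapse a frame-level CTC path into a label sequence:
    merge consecutive repeated symbols, then drop blanks. This is
    the standard decoding rule behind CTC-based recognizers."""
    out, prev = [], None
    for s in path:
        if s != prev and s != blank:  # new non-blank symbol: emit it
            out.append(s)
        prev = s                      # remember last frame's symbol
    return out
```

Note that a blank between two identical symbols is what allows doubled letters to survive collapsing, e.g. the path `[1, 0, 1]` decodes to `[1, 1]` while `[1, 1]` decodes to `[1]`.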