Federated Acoustic Modeling For Automatic Speech Recognition

8 Feb 2021  ·  Xiaodong Cui, Songtao Lu, Brian Kingsbury ·

Data privacy and protection are crucial issues for any automatic speech recognition (ASR) service provider when dealing with clients. In this paper, we investigate federated acoustic modeling using data from multiple clients. Each client's data is stored on a local server, and the clients communicate only model parameters, never their data, with a central server. The communication happens infrequently to reduce the communication cost. To mitigate the issue of non-IID data across clients, client adaptive federated training (CAFT) is proposed to canonicalize data across clients. The experiments are carried out on 1,150 hours of speech data from multiple domains. Hybrid LSTM acoustic models are trained via federated learning, and their performance is compared to that of traditional centralized acoustic model training. The experimental results demonstrate the effectiveness of the proposed federated acoustic modeling strategy. We also show that CAFT can further improve the performance of the federated acoustic model.
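The parameter exchange described above is commonly realized as FedAvg-style weighted averaging: the central server combines per-client model parameters without ever seeing the underlying speech data. The following is a minimal sketch of that averaging step; the weighting by hours of local speech, the client values, and the function name are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of federated parameter averaging (FedAvg-style).
# Weighting clients by hours of local speech data is an assumption
# for illustration; the paper's exact aggregation rule may differ.

def federated_average(client_params, client_hours):
    """Weighted average of per-client parameter vectors.

    client_params: list of parameter vectors, one per client
    client_hours: hours of local speech per client, used as weights
    """
    total = sum(client_hours)
    dim = len(client_params[0])
    avg = [0.0] * dim
    for params, hours in zip(client_params, client_hours):
        weight = hours / total
        for i, p in enumerate(params):
            avg[i] += weight * p
    return avg

# Example: three hypothetical clients with different amounts of data.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
hours = [100.0, 400.0, 500.0]
global_params = federated_average(clients, hours)
```

Because only `client_params` crosses the network, each client's raw audio and transcripts never leave its local server, which is the privacy property the federated setup is built around.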


Categories

- Sound
- Distributed, Parallel, and Cluster Computing
- Audio and Speech Processing
