Search Results for author: Heejae Kim

Found 2 papers, 0 papers with code

Breaking MLPerf Training: A Case Study on Optimizing BERT

no code implementations • 4 Feb 2024 • YongDeok Kim, Jaehyung Ahn, Myeongwoo Kim, Changin Choi, Heejae Kim, Narankhuu Tuvshinjargal, Seungwon Lee, Yanzi Zhang, Yuan Pei, Xiongzhan Linghu, Jingkun Ma, Lin Chen, Yuehua Dai, Sungjoo Yoo

Speeding up large-scale distributed training is challenging because it requires improving multiple components of the training pipeline, including load balancing, communication, and optimizers.
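Of the components the abstract lists, the communication step is the easiest to illustrate in isolation. Below is a minimal single-process NumPy simulation of ring all-reduce, a standard primitive for summing gradients across workers in data-parallel training; this is a generic sketch of the technique, not code from the paper, and the function name and toy data are illustrative.

```python
import numpy as np

def ring_allreduce(grads):
    """Single-process simulation of ring all-reduce over `grads`, a list of
    equal-length gradient vectors (one per worker). Each vector is split into
    one chunk per worker; chunks travel around the ring, first accumulating
    partial sums (reduce-scatter), then distributing the results (all-gather).
    """
    n = len(grads)
    chunks = [np.array_split(np.asarray(g, dtype=float), n) for g in grads]

    # Reduce-scatter: after n-1 steps, worker i holds the full sum of chunk (i+1) % n.
    for step in range(n - 1):
        sends = [(i, (i - step) % n, chunks[i][(i - step) % n].copy()) for i in range(n)]
        for i, c, payload in sends:
            chunks[(i + 1) % n][c] += payload

    # All-gather: circulate each fully reduced chunk to every worker.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n].copy()) for i in range(n)]
        for i, c, payload in sends:
            chunks[(i + 1) % n][c] = payload

    # Every worker now holds the same summed gradient.
    return [np.concatenate(c) for c in chunks]

# Toy usage: four workers, each contributing an 8-element gradient of ones.
reduced = ring_allreduce([np.ones(8) for _ in range(4)])
print(reduced[0])  # [4. 4. 4. 4. 4. 4. 4. 4.] on every worker
```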

Hyperparameter Optimization

On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods

no code implementations • 25 Sep 2019 • Heejae Kim, Taewoo Kim, Chan-Hyun Youn

Federated learning, where a global model is trained by iteratively averaging locally computed parameter updates, is a promising approach for distributed training of deep networks; it provides high communication efficiency and privacy preservation, which allow it to fit well into decentralized data environments, e.g., mobile-cloud ecosystems.
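The iterative parameter-averaging step the abstract describes is straightforward to make concrete. Here is a minimal single-process sketch of one FedAvg-style round, assuming each client returns its locally updated parameters together with its local dataset size; `fedavg_round` and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def fedavg_round(local_params, client_sizes):
    """One round of iterative parameter averaging: the new global model is the
    data-size-weighted mean of the clients' locally updated parameters."""
    total = sum(client_sizes)
    num_layers = len(local_params[0])
    return [
        sum((n / total) * local_params[k][layer]
            for k, n in enumerate(client_sizes))
        for layer in range(num_layers)
    ]

# Toy usage: three clients, one 4-dimensional parameter tensor each.
rng = np.random.default_rng(0)
global_w = [np.zeros(4)]
# Each client takes a local step away from the global model; under non-IID
# data these local updates diverge, which is the phenomenon the paper studies.
local_w = [[global_w[0] + rng.normal(size=4)] for _ in range(3)]
global_w = fedavg_round(local_w, client_sizes=[100, 30, 10])
print(global_w[0])  # weighted average of the three local updates
```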

Federated Learning • Hyperparameter Optimization
