1 code implementation • 21 Mar 2022 • Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art method for graph-based learning tasks.
Ranked #1 on Node Classification on Reddit
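For readers new to the architecture, a single GCN layer propagates node features through a normalized adjacency matrix. Below is a minimal sketch of the standard propagation rule H' = ReLU(Â H W) from Kipf & Welling; it is illustrative code, not code from this paper:

```python
import torch

def gcn_layer(adj_norm, features, weight):
    """One GCN propagation step: H' = ReLU(A_hat @ H @ W).

    adj_norm: (N, N) symmetrically normalized adjacency with self-loops,
              i.e. A_hat = D^-1/2 (A + I) D^-1/2
    features: (N, F_in) node feature matrix H
    weight:   (F_in, F_out) learnable weight matrix W
    """
    return torch.relu(adj_norm @ features @ weight)

# Tiny usage example on a 3-node graph (identity stands in for A_hat)
adj_norm = torch.eye(3)
features = torch.randn(3, 4)
weight = torch.randn(4, 2)
out = gcn_layer(adj_norm, features, weight)  # shape (3, 2)
```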
1 code implementation • ICLR 2022 • Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, Yingyan Lin
Notably, little is known regarding the convergence rate of GCN training with both stale features and stale feature gradients.
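To make the staleness concrete: in pipelined distributed GCN training, a partition can compute with boundary-node features cached from the previous iteration rather than waiting for fresh ones, overlapping communication with computation. A hypothetical sketch of that pattern (function and variable names are illustrative, not PipeGCN's API):

```python
import torch

def pipelined_gcn_layer(adj_norm, local_feats, stale_boundary_feats,
                        weight, send_boundary_async):
    """One GCN layer step that tolerates stale boundary features.

    adj_norm:             (N, N) normalized adjacency over local+boundary nodes
    local_feats:          (n_local, F) up-to-date features owned locally
    stale_boundary_feats: (n_boundary, F) boundary features cached from the
                          previous iteration, used instead of blocking on
                          fresh communication
    send_boundary_async:  callback that ships fresh local features to peer
                          partitions for use in their *next* iteration
    """
    send_boundary_async(local_feats)           # overlap comm with compute
    h = torch.cat([local_feats, stale_boundary_feats], dim=0)
    return torch.relu(adj_norm @ h @ weight)   # forward pass on stale inputs
```

The backward pass reuses gradients the same way, which is why the convergence analysis must account for staleness in both features and feature gradients.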
1 code implementation • 2 Feb 2022 • Youjie Li, Amar Phanishayee, Derek Murray, Jakub Tarnawski, Nam Sung Kim
Deep neural networks (DNNs) have grown exponentially in size over the past decade, leaving only those with massive datacenter-based resources able to develop and train such models.
no code implementations • 1 Jan 2021 • Cheng Wan, Youjie Li, Nam Sung Kim, Yingyan Lin
While it is natural to leverage graph partitioning and distributed training to tackle this challenge, this direction has been only lightly explored, owing to a unique difficulty posed by GCN structures: the excessive number of boundary nodes in each partitioned subgraph, which can easily blow up the memory and communication required for distributed GCN training.
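One remedy this line of work explores is to randomly sample only a fraction of each partition's boundary nodes per iteration, shrinking the features that must be stored and communicated. A hypothetical sketch of the idea (names are illustrative, not the paper's code):

```python
import torch

def sample_boundary(boundary_ids, keep_prob):
    """Randomly keep a fraction of boundary nodes for this iteration,
    reducing the boundary features that must be communicated and stored.
    """
    mask = torch.rand(boundary_ids.numel()) < keep_prob
    return boundary_ids[mask]

# Example: a partition with 10k boundary nodes keeps ~10% each iteration,
# cutting per-step boundary communication/memory by roughly 10x.
boundary_ids = torch.arange(10_000)
kept = sample_boundary(boundary_ids, keep_prob=0.1)
```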
no code implementations • NeurIPS 2018 • Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr
Data parallelism can boost the training speed of convolutional neural networks (CNNs), but can suffer from significant communication costs caused by gradient aggregation.
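The cost in question arises because every worker must exchange its full gradient on every step. A minimal sketch of the generic aggregation pattern with torch.distributed (assumes a process group has already been initialized; this is the baseline pattern, not GradiVeQ itself):

```python
import torch
import torch.distributed as dist

def aggregate_gradients(model):
    """Average gradients across all data-parallel workers -- the step whose
    communication volume grows with model size. Assumes
    dist.init_process_group(...) has already been called.
    """
    world = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sum over workers
            p.grad /= world                                # average
```

Techniques that compress gradients before aggregation, such as vector quantization, target exactly this traffic.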
no code implementations • NeurIPS 2018 • Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing
Distributed training of deep nets is an important technique for addressing present-day computing challenges such as memory consumption and computational demands.