Search Results for author: William Won

Found 4 papers, 1 paper with code

TACOS: Topology-Aware Collective Algorithm Synthesizer for Distributed Machine Learning

no code implementations • 11 Apr 2023 • William Won, Midhilesh Elavazhagan, Sudarshan Srinivasan, Ajaya Durg, Samvit Kaul, Swati Gupta, Tushar Krishna

This paper introduces TACOS, an automated synthesizer that generates topology-aware collective algorithms for common distributed machine learning collectives across arbitrary input network topologies.
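
The paper itself is the authority on how TACOS synthesizes its schedules; purely as an illustration of what "topology-aware" means for a collective, the hypothetical Python sketch below greedily schedules all-gather chunk transfers over an arbitrary directed link list, so the same routine yields different step counts on a ring than on a fully connected graph. The greedy rule and all names here are assumptions for illustration, not the TACOS algorithm.

```python
def greedy_all_gather_steps(links):
    """Toy topology-aware scheduler (illustrative only, not TACOS).

    links: directed (src, dst) pairs; each link may carry one chunk
    per step. Every node starts holding its own chunk; we greedily
    forward missing chunks until all nodes hold all chunks, then
    return the step count. Assumes a strongly connected graph.
    """
    nodes = {n for link in links for n in link}
    have = {n: {n} for n in nodes}          # node -> chunks it holds
    steps = 0
    while any(len(have[n]) < len(nodes) for n in nodes):
        # Pick one missing chunk per link using the pre-step state,
        # then apply all transfers at once (a synchronous round).
        sends = {}
        for src, dst in links:
            missing = have[src] - have[dst]
            if missing:
                sends[(src, dst)] = min(missing)
        for (src, dst), chunk in sends.items():
            have[dst].add(chunk)
        steps += 1
    return steps

ring = [(i, (i + 1) % 4) for i in range(4)]
mesh = [(i, j) for i in range(4) for j in range(4) if i != j]
print(greedy_all_gather_steps(ring))   # 3 steps on a 4-node ring
print(greedy_all_gather_steps(mesh))   # 1 step on a full mesh
```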

ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale

3 code implementations • 24 Mar 2023 • William Won, Taekyung Heo, Saeed Rashidi, Srinivas Sridharan, Sudarshan Srinivasan, Tushar Krishna

In this paper, we extend the open-source ASTRA-sim infrastructure and endow it with the capabilities to model state-of-the-art and emerging distributed training models and platforms.
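
ASTRA-sim2.0's actual network and system models live in its open-source repository; the sketch below is only a back-of-the-envelope flavor of hierarchical-network modeling, estimating multi-dimensional all-reduce time with a simple latency-bandwidth (alpha-beta) cost per dimension. The function names, the sequential per-dimension schedule, and the example numbers are all assumptions for illustration.

```python
def ring_allreduce_time(size_bytes, n, alpha_s, bw_bytes_per_s):
    """Alpha-beta estimate of a ring all-reduce among n endpoints:
    2*(n-1) hops, each carrying a size/n shard."""
    if n <= 1:
        return 0.0
    return 2 * (n - 1) * (alpha_s + (size_bytes / n) / bw_bytes_per_s)

def hierarchical_allreduce_time(size_bytes, dims):
    """Toy multi-dimensional model: run a ring all-reduce in each
    network dimension in turn; later dimensions operate on the
    shard left scattered by earlier ones. dims is a list of
    (fan_out, latency_s, bandwidth_bytes_per_s) tuples."""
    total, shard = 0.0, float(size_bytes)
    for n, alpha, bw in dims:
        total += ring_allreduce_time(shard, n, alpha, bw)
        shard /= n
    return total

# Hypothetical platform: a 64 MB gradient over an 8-wide 150 GB/s
# scale-up dimension, then a 4-wide 25 GB/s scale-out dimension.
print(hierarchical_allreduce_time(64e6, [(8, 1e-6, 150e9),
                                         (4, 2e-6, 25e9)]))
```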

Themis: A Network Bandwidth-Aware Collective Scheduling Policy for Distributed Training of DL Models

no code implementations • 9 Oct 2021 • Saeed Rashidi, William Won, Sudarshan Srinivasan, Srinivas Sridharan, Tushar Krishna

Distributed training is a solution to reduce DNN training time by splitting the task across multiple NPUs (e.g., GPU/TPU).

Scheduling
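
As a hedged illustration of the setting in the Themis abstract above (not Themis's actual policy), one intuition behind bandwidth-aware collective scheduling is to split a collective's payload across network dimensions in proportion to each dimension's bandwidth, so no dimension sits idle while another is saturated. The function and numbers below are hypothetical.

```python
def bandwidth_proportional_split(size_bytes, dim_bandwidths):
    """Divide a collective's payload across network dimensions in
    proportion to bandwidth, so all dimensions finish at roughly
    the same time (an intuition sketch, not Themis itself)."""
    total_bw = sum(dim_bandwidths)
    return [size_bytes * bw / total_bw for bw in dim_bandwidths]

# 100 MB all-reduce over a 150 GB/s scale-up dimension and a
# 25 GB/s scale-out dimension: ~85.7 MB vs. ~14.3 MB.
print(bandwidth_proportional_split(100e6, [150e9, 25e9]))
```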

LIBRA: Enabling Workload-aware Multi-dimensional Network Topology Optimization for Distributed Training of Large AI Models

no code implementations • 24 Sep 2021 • William Won, Saeed Rashidi, Sudarshan Srinivasan, Tushar Krishna

As model sizes in machine learning continue to scale, distributed training is necessary to fit model weights within each device's memory and to reduce training time.
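
To make the optimization angle concrete, the toy below brute-forces how a fixed bandwidth budget might be divided between two network dimensions to minimize the transfer time of a given traffic profile. This is a hypothetical sketch of the problem shape, not LIBRA's framework; the function, granularity, and traffic numbers are invented for illustration.

```python
def best_bandwidth_split(total_bw, traffic_bytes, granularity=10):
    """Brute-force search over ways to split a bandwidth budget
    between two dimensions, minimizing the summed transfer time of
    the workload's per-dimension traffic (toy, not LIBRA)."""
    best_time, best_split = float("inf"), None
    for k in range(1, granularity):
        bw = (total_bw * k / granularity,
              total_bw * (granularity - k) / granularity)
        t = sum(m / b for m, b in zip(traffic_bytes, bw))
        if t < best_time:
            best_time, best_split = t, bw
    return best_time, best_split

# Workload pushing 80 MB on dimension 0 and 20 MB on dimension 1,
# with a 100 GB/s total budget to allocate between them.
print(best_bandwidth_split(100e9, [80e6, 20e6]))
```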
