Search Results for author: Rhui Dih Lee

Found 2 papers, 0 papers with code

Efficiently Distilling LLMs for Edge Applications

no code implementations • 1 Apr 2024 • Achintya Kundu, Fabian Lim, Aaron Chew, Laura Wynter, Penny Chong, Rhui Dih Lee

Supernet training of LLMs is of great interest in industrial applications as it confers the ability to produce a palette of smaller models at constant cost, regardless of the number of models (of different size / latency) produced.

Decoder
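
To make the supernet idea in the abstract concrete, here is a minimal, hypothetical sketch of weight-sharing supernet training in PyTorch: every sub-width reuses a slice of one shared weight matrix, and each step trains one randomly sampled sub-network, so per-step cost stays constant no matter how many model sizes are later extracted. The `SliceableLinear` layer, the width palette, and the loss are illustrative stand-ins, not the paper's actual architecture or objective.

```python
# Hedged sketch of weight-sharing supernet training (Once-For-All style).
# This is NOT the authors' method; SliceableLinear and the training loop
# are invented for illustration.
import random
import torch
import torch.nn as nn

class SliceableLinear(nn.Module):
    """Linear layer whose output width can be chosen at run time.

    All sub-widths share one underlying weight matrix, so one supernet
    training run covers every smaller model.
    """
    def __init__(self, in_features: int, max_out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out_features))

    def forward(self, x: torch.Tensor, out_features: int) -> torch.Tensor:
        # Use only the first `out_features` rows: a smaller model is a slice.
        return x @ self.weight[:out_features].T + self.bias[:out_features]

layer = SliceableLinear(in_features=64, max_out_features=256)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
widths = [64, 128, 192, 256]  # the "palette" of deployable model sizes

for step in range(100):
    x = torch.randn(32, 64)
    w = random.choice(widths)          # sample one sub-network per step
    y = layer(x, out_features=w)
    loss = y.pow(2).mean()             # stand-in objective, not the paper's
    opt.zero_grad()
    loss.backward()                    # gradients touch only the slice used
    opt.step()
```

The key property this sketch demonstrates is the "constant cost" claim: each step trains exactly one sub-network, so the training budget does not grow with the number of sizes in `widths`.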

Transfer-Once-For-All: AI Model Optimization for Edge

no code implementations • 27 Mar 2023 • Achintya Kundu, Laura Wynter, Rhui Dih Lee, Luis Angel Bathen

Hence, we propose Transfer-Once-For-All (TOFA) for supernet-style training on small data sets with constant computational training cost over any number of edge deployment scenarios.

Model Optimization • Neural Architecture Search
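
As a rough illustration of serving "any number of edge deployment scenarios" from one trained supernet, the snippet below picks, for each device, the largest sub-network that fits its latency budget; no retraining is needed. The latency table, widths, and device budgets are invented for illustration and are not from the paper.

```python
# Hedged sketch: sub-network selection after supernet-style training.
# The measured latencies below are hypothetical placeholder numbers.
measured_latency_ms = {64: 3.1, 128: 5.8, 192: 8.9, 256: 12.4}

def pick_width(budget_ms: float) -> int:
    """Return the widest sub-network whose measured latency fits the budget."""
    feasible = [w for w, ms in measured_latency_ms.items() if ms <= budget_ms]
    if not feasible:
        raise ValueError(f"no sub-network fits a {budget_ms} ms budget")
    return max(feasible)

for device, budget in [("phone", 6.0), ("gateway", 10.0), ("server", 15.0)]:
    print(device, "->", pick_width(budget), "hidden units")
```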
