Search Results for author: John Tan Chong Min

Found 3 papers, 2 papers with code

Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks

no code implementations • 9 Dec 2022 • Shiyu Liu, Rohan Ghosh, John Tan Chong Min, Mehul Motani

(ii) In addition to the strong theoretical motivation, SILO is empirically optimal in the sense of matching an Oracle, which exhaustively searches for the optimal value of max_lr via grid search (see the sketch below).

Network Pruning
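
The Oracle referenced in the excerpt is, as described, an exhaustive grid search over max_lr. Below is a minimal PyTorch sketch of such a search at a single pruning round. It assumes a one-cycle schedule as a stand-in (the paper's SILO schedule itself is not reproduced here), and train_with_max_lr, oracle_max_lr, and the step budget are hypothetical names and values, not from the paper:

```python
import copy
import torch
import torch.nn as nn

def train_with_max_lr(model, loader, max_lr, steps):
    # Fine-tune a (pruned) model for a fixed step budget under a
    # one-cycle schedule peaking at max_lr (stand-in schedule; the
    # paper's SILO schedule is not reproduced here).
    opt = torch.optim.SGD(model.parameters(), lr=max_lr, momentum=0.9)
    sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=max_lr, total_steps=steps)
    loss_fn = nn.CrossEntropyLoss()
    data = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        sched.step()
    return model

@torch.no_grad()
def accuracy(model, loader):
    correct, total = 0, 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def oracle_max_lr(model, train_loader, val_loader, grid, steps=200):
    # Oracle baseline: exhaustively try each candidate max_lr on a copy
    # of the current (pruned) model, keeping the best validation accuracy.
    best_lr, best_acc = None, -1.0
    for lr in grid:
        candidate = train_with_max_lr(copy.deepcopy(model), train_loader, lr, steps)
        acc = accuracy(candidate, val_loader)
        if acc > best_acc:
            best_lr, best_acc = lr, acc
    return best_lr, best_acc
```

In iterative pruning, this search would have to be rerun after every pruning round, which is what makes the Oracle expensive and an analytically derived max_lr attractive.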

DropNet: Reducing Neural Network Complexity via Iterative Pruning

1 code implementation • ICML 2020 • John Tan Chong Min, Mehul Motani

Modern deep neural networks require a significant amount of computing time and power to train and deploy, which limits their usage on edge devices.
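
The excerpt above is motivational, but since the title names iterative pruning, a generic sketch may help orient readers. This minimal weight-magnitude pruning loop built on torch.nn.utils.prune is offered only as an assumed illustration of the iterative setting: DropNet itself prunes whole nodes/filters rather than individual weights, and iterative_prune and train_fn are hypothetical names, not from the paper's code release.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_prune(model, train_fn, rounds=5, amount=0.2):
    # Generic iterative pruning loop: each round removes `amount` of the
    # remaining weights with the smallest L1 magnitude, then retrains.
    # NOTE: illustration only -- DropNet itself prunes whole nodes/filters,
    # not individual weights.
    for _ in range(rounds):
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                prune.l1_unstructured(module, name="weight", amount=amount)
        train_fn(model)  # fine-tune to recover accuracy after each round
    # Bake the accumulated masks into the weight tensors.
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            prune.remove(module, "weight")
    return model
```

Repeated calls to prune.l1_unstructured stack masks across rounds, which is what makes the loop iterative rather than one-shot.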

Brick Tic-Tac-Toe: Exploring the Generalizability of AlphaZero to Novel Test Environments

1 code implementation • 13 Jul 2022 • John Tan Chong Min, Mehul Motani

Hence, current RL methods are largely not generalizable to a test environment that is conceptually similar to, but different from, what the method has been trained on, which we term the novel test environment.

Reinforcement Learning (RL)
