no code implementations • 1 Mar 2024 • Liang Luo, Buyun Zhang, Michael Tsang, Yinbin Ma, Ching-Hsiang Chu, Yuxin Chen, Shen Li, Yuchen Hao, Yanli Zhao, Guna Lakshminarayanan, Ellie Dingqiao Wen, Jongsoo Park, Dheevatsa Mudigere, Maxim Naumov
We study the mismatch between deep learning recommendation models' flat architecture, the common distributed training paradigm, and the hierarchical data center topology.
no code implementations • 11 Mar 2022 • Buyun Zhang, Liang Luo, Xi Liu, Jay Li, Zeliang Chen, Weilin Zhang, Xiaohan Wei, Yuchen Hao, Michael Tsang, Wenjun Wang, Yang Liu, Huayu Li, Yasmine Badr, Jongsoo Park, Jiyan Yang, Dheevatsa Mudigere, Ellie Wen
To overcome the challenge brought by DHEN's deeper and multi-layer structure in training, we propose a novel co-designed training system that can further improve the training efficiency of DHEN.
no code implementations • 1 Mar 2021 • Michael Tsang, James Enouen, Yan Liu
Interpretation of deep learning models is a very challenging problem because of their large number of parameters, complex connections between nodes, and unintelligible feature representations.
no code implementations • 2 Feb 2021 • Mohammad H. Jafari, Christina Luong, Michael Tsang, Ang Nan Gu, Nathan Van Woudenberg, Robert Rohling, Teresa Tsang, Purang Abolmaesumi
We tackle a specifically challenging problem, where training labels are noisy and highly sparse.
no code implementations • 28 Jun 2020 • Loc Trinh, Michael Tsang, Sirisha Rambhatla, Yan Liu
In this paper, we propose a novel human-centered approach for detecting forgery in face images, using dynamic prototypes as a form of visual explanation.
1 code implementation • NeurIPS 2020 • Michael Tsang, Sirisha Rambhatla, Yan Liu
Feature attribution is a way to analyze the impact of features on predictions.
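As a generic illustration of feature attribution (not this paper's specific method), the sketch below computes gradient-times-input attributions for a hand-differentiated linear model; the model, weights, and data are invented for demonstration. For linear models the attributions sum, together with the bias, to the prediction itself:

```python
import numpy as np

def attribute(w, x):
    # Gradient-times-input attribution for a linear model f(x) = w.x + b.
    # Here df/dx_i = w_i, so the attribution of feature i is w_i * x_i,
    # a signed estimate of that feature's contribution to the prediction.
    grad = w  # gradient of f with respect to x
    return grad * x

# Hypothetical weights and input, for illustration only.
w = np.array([0.5, -2.0, 1.0])
b = 0.1
x = np.array([2.0, 1.0, 0.0])

attr = attribute(w, x)
prediction = w @ x + b
print(attr)                         # per-feature signed contributions
print(np.isclose(attr.sum() + b, prediction))  # completeness holds exactly here
```

For nonlinear networks the gradient varies with the input and this completeness property no longer holds exactly, which is one reason attribution methods differ in how they aggregate gradients.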
1 code implementation • ICLR 2020 • Michael Tsang, Dehua Cheng, Hanpeng Liu, Xue Feng, Eric Zhou, Yan Liu
Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable.
no code implementations • 11 Jun 2019 • Conner Chyung, Michael Tsang, Yan Liu
In an attempt to gather a deeper understanding of how convolutional neural networks (CNNs) reason about human-understandable concepts, we present a method to infer labeled concept data from hidden layer activations and interpret the concepts through a shallow decision tree.
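The core idea above, fitting a shallow, human-readable tree on hidden-layer activations to predict a concept label, can be sketched as follows. All data here is synthetic: the activation matrix stands in for a CNN hidden layer, the concept label is derived from one unit purely for demonstration, and the depth-1 stump is a minimal stand-in for a shallow decision tree:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for CNN hidden-layer activations (samples x units).
activations = rng.normal(size=(200, 8))
# Synthetic binary concept label (e.g. "stripes present"), tied to unit 3
# here so the recovery is checkable; real labels come from annotated data.
concept = (activations[:, 3] > 0.0).astype(int)

def fit_stump(X, y):
    """Depth-1 decision tree: pick the (unit, threshold) with best accuracy."""
    best_unit, best_thresh, best_acc = 0, 0.0, 0.0
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            # Try both orientations of the split.
            acc = max(((X[:, j] > t) == y).mean(),
                      ((X[:, j] <= t) == y).mean())
            if acc > best_acc:
                best_unit, best_thresh, best_acc = j, t, acc
    return best_unit, best_thresh, best_acc

unit, thresh, acc = fit_stump(activations, concept)
print(unit, acc)  # the activation unit most aligned with the concept
```

Because the tree is shallow, the learned split doubles as an explanation: it names which hidden units carry the concept.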
no code implementations • ICLR 2019 • Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu
Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models.
no code implementations • NeurIPS 2018 • Michael Tsang, Hanpeng Liu, Sanjay Purushotham, Pavankumar Murali, Yan Liu
Neural networks are known to model statistical interactions, but they entangle the interactions at intermediate hidden layers for shared representation learning.
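One simple weight-inspection heuristic makes the entanglement point concrete (an illustration only, not this paper's disentangling architecture): a hidden unit can only model an interaction between two inputs if it receives strong first-layer weights from both, weighted by its influence on the output. The network weights below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random stand-ins for a one-hidden-layer network's weights.
W1 = rng.normal(size=(16, 4))   # first layer: hidden_units x input_features
w2 = rng.normal(size=16)        # second layer: output weights per hidden unit

def interaction_score(W1, w2, i, j):
    # A hidden unit contributes to an {i, j} interaction only as much as the
    # weaker of its two incoming weights, scaled by its output influence.
    return float(np.sum(np.abs(w2) *
                        np.minimum(np.abs(W1[:, i]), np.abs(W1[:, j]))))

scores = {(i, j): interaction_score(W1, w2, i, j)
          for i in range(4) for j in range(i + 1, 4)}
top_pair = max(scores, key=scores.get)
print(top_pair)  # candidate strongest pairwise interaction
```

Because every hidden unit typically has nonzero weights to many inputs, such scores are spread across all pairs, which is the entanglement the paper addresses.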
no code implementations • ICLR 2018 • Michael Tsang, Dehua Cheng, Yan Liu
Interpreting neural networks is a crucial and challenging task in machine learning.