1 code implementation • NeurIPS 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek, Kai Fong Ernest Chong
Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model cannot simultaneously satisfy the diverse performance requirements of local clients.
no code implementations • 6 Oct 2023 • Zihan Chen, Howard H. Yang, Y. C. Tay, Kai Fong Ernest Chong, Tony Q. S. Quek
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled a range of new generative AI applications.
1 code implementation • 19 Jul 2023 • Xia Huang, Kai Fong Ernest Chong
To tackle the limitation of entropy maximization, we propose $(\alpha, \beta)$-generalized KL divergence, $\mathcal{D}_{\text{KL}}^{\alpha, \beta}(p\|q)$, which can be used to identify significantly more NC instances.
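The $(\alpha, \beta)$-generalized KL divergence proposed in the paper is not defined in this snippet, but the standard KL divergence it generalizes can be sketched in a few lines. The function below is purely illustrative, assuming discrete distributions given as probability vectors:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Standard KL divergence D_KL(p || q) between two discrete distributions.

    `eps` guards against log(0) for zero-probability entries.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# A uniform prediction has zero divergence from the uniform distribution,
# while a confident (low-entropy) prediction diverges from it.
uniform = np.full(4, 0.25)
confident = np.array([0.85, 0.05, 0.05, 0.05])
print(kl_divergence(uniform, uniform))    # → 0.0
print(kl_divergence(confident, uniform))  # positive
```

Generalizations such as the paper's add tunable parameters to this basic form to better separate noisy-corrupted (NC) instances from clean ones.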
1 code implementation • CVPR 2023 • Jingyi Xu, Tushar Vaidya, Yufei Wu, Saket Chandra, Zhangsheng Lai, Kai Fong Ernest Chong
We introduce algebraic machine reasoning, a new reasoning framework that is well-suited for abstract reasoning.
1 code implementation • CVPR 2022 • Jingyi Xu, Zihan Chen, Tony Q. S. Quek, Kai Fong Ernest Chong
Although methods for tackling label noise exist in centralized learning, they do not perform well on heterogeneous label noise in FL settings, due to the typically smaller sizes of client datasets and data privacy requirements in FL.
no code implementations • 12 Aug 2021 • Zihan Chen, Kai Fong Ernest Chong, Tony Q. S. Quek
Federated learning (FL) offers a solution to train a global machine learning model while still maintaining data privacy, without needing access to data stored locally at the clients.
1 code implementation • 27 May 2021 • Jingyi Xu, Tony Q. S. Quek, Kai Fong Ernest Chong
In particular, we shall assume that a small subset of any given noisy dataset is known to have correct labels, which we treat as "positive", while the remaining noisy subset is treated as "unlabeled".
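The data split described above can be sketched minimally: a small trusted subset of indices is treated as "positive" (known-correct labels) and the rest as "unlabeled". The function name and parameters below are illustrative, not from the paper:

```python
import numpy as np

def pu_split(num_samples, clean_fraction, rng):
    """Split the indices of a noisy dataset into a trusted 'positive' subset
    (known-correct labels) and the remaining noisy 'unlabeled' subset."""
    idx = rng.permutation(num_samples)
    n_clean = int(clean_fraction * num_samples)
    return idx[:n_clean], idx[n_clean:]

rng = np.random.default_rng(42)
positive_idx, unlabeled_idx = pu_split(1000, 0.05, rng)
print(len(positive_idx), len(unlabeled_idx))  # → 50 950
```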
Ranked #7 on Image Classification on Clothing1M (using clean data; using extra training data)
no code implementations • 1 Jan 2021 • Xia Huang, Kai Fong Ernest Chong
At the heart of our framework is a discriminator that predicts whether an input dataset has maximum Shannon entropy, which shall be used on multiple new datasets $\hat{\mathcal{D}}$ synthesized from $\mathcal{D}$ via the insertion of additional label noise.
no code implementations • ICLR 2020 • Kai Fong Ernest Chong
(ii) There exists some $\lambda>0$ (depending only on $f$ and $\sigma$), such that the UAP still holds if we restrict all non-bias weights $w$ in the first layer to satisfy $|w|>\lambda$.