no code implementations • 22 Nov 2023 • Zheng Zhang, Cuong Nguyen, Kevin Wells, Thanh-Toan Do, Gustavo Carneiro
The ill-posedness of the learning-with-noisy-labels (LNL) task requires strong assumptions or multiple noisy labels per training image, resulting in accurate models that work well in isolation but fail to optimise human-AI collaborative classification (HAI-CC).
1 code implementation • 15 Jun 2023 • Rahil Mehrizi, Arash Mehrjou, Maryana Alegro, Yi Zhao, Benedetta Carbone, Carl Fishwick, Johanna Vappiani, Jing Bi, Siobhan Sanford, Hakan Keles, Marcus Bantscheff, Cuong Nguyen, Patrick Schwab
High-content cellular imaging, transcriptomics, and proteomics data provide rich and complementary views on the molecular layers of biology that influence cellular states and function.
no code implementations • 31 May 2023 • Arpit Garg, Cuong Nguyen, Rafael Felix, Thanh-Toan Do, Gustavo Carneiro
To address instance-dependent noise (IDN), Label Noise Learning (LNL) incorporates a sample-selection stage to separate clean from noisy-label samples.
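A common instantiation of such a sample-selection stage (a sketch of the general idea, not necessarily this paper's method) is the small-loss criterion: samples the network fits with low loss early in training are treated as clean. The `clean_fraction` hyperparameter below is an assumption standing in for an estimated noise rate.

```python
import torch

def small_loss_selection(losses: torch.Tensor, clean_fraction: float):
    """Select the indices of the samples most likely to be clean.

    Under the small-loss assumption, samples fitted with the lowest
    per-sample loss early in training tend to carry clean labels.
    `clean_fraction` is a hypothetical hyperparameter, e.g. an estimate
    of 1 - noise_rate.
    """
    k = max(1, int(clean_fraction * losses.numel()))
    # Indices of the k smallest per-sample losses -> treated as "clean".
    clean_idx = torch.topk(losses, k, largest=False).indices
    mask = torch.zeros_like(losses, dtype=torch.bool)
    mask[clean_idx] = True
    return clean_idx, mask
```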
no code implementations • 20 Mar 2023 • Arpit Garg, Cuong Nguyen, Rafael Felix, Thanh-Toan Do, Gustavo Carneiro
The prevalence of noisy-label samples poses a significant challenge in deep learning, as models readily overfit them.
no code implementations • 4 Jan 2023 • Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
Developing meta-learning algorithms that are unbiased toward a subset of training tasks often requires hand-designed criteria to weight tasks, potentially resulting in sub-optimal solutions.
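For illustration, the kind of hand-designed task weighting the authors argue against might look like the following sketch, where per-task weights come from a fixed softmax over detached validation losses; the weighting rule and temperature are arbitrary choices, not taken from the paper.

```python
import torch

def weighted_meta_loss(task_val_losses: list) -> torch.Tensor:
    """Combine per-task validation losses with hand-designed weights.

    A hypothetical example of manual task weighting: harder tasks
    (larger loss) are up-weighted via a softmax over detached losses.
    The temperature of 0.5 is an arbitrary, hand-tuned choice.
    """
    losses = torch.stack(task_val_losses)
    weights = torch.softmax(losses.detach() / 0.5, dim=0)
    return (weights * losses).sum()
```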
no code implementations • 4 Jan 2023 • Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
To meet this requirement without relying on $2C - 2$ additional manual annotations per instance, we propose a method that automatically generates additional noisy labels by estimating the noisy-label distribution from nearest neighbours.
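A minimal sketch of the nearest-neighbour idea, under assumed names and using scikit-learn (the paper's exact estimator may differ): each instance's noisy-label distribution is approximated by the empirical label histogram of its k nearest neighbours in feature space, and extra labels are drawn from it.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sample_extra_noisy_labels(features, noisy_labels, n_classes,
                              k=10, n_extra=2, rng=None):
    """Estimate a per-instance label distribution from nearest neighbours
    and sample additional noisy labels from it (an illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)  # idx[:, 0] is the point itself
    extra = np.empty((len(features), n_extra), dtype=int)
    for i, neigh in enumerate(idx[:, 1:]):
        counts = np.bincount(noisy_labels[neigh], minlength=n_classes)
        p = counts / counts.sum()  # empirical neighbour label distribution
        extra[i] = rng.choice(n_classes, size=n_extra, p=p)
    return extra
```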
1 code implementation • 2 Sep 2022 • Arpit Garg, Cuong Nguyen, Rafael Felix, Thanh-Toan Do, Gustavo Carneiro
Noisy labels are unavoidable yet troublesome in the ecosystem of deep learning because models can easily overfit them.
Ranked #1 on Learning with noisy labels on CIFAR-100
no code implementations • 17 Aug 2022 • Dung Anh Hoang, Cuong Nguyen, Vasileios Belagiannis, Gustavo Carneiro
In this paper, we analyse the meta-learning algorithm and propose new criteria to characterise the utility of the validation set, based on: 1) its informativeness; 2) the balance of its class distribution; and 3) the correctness of its labels.
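The sketch below computes rough proxies for these three criteria; the specific formulas (normalised label entropy for balance, agreement with reference labels for correctness, mean predictive entropy for informativeness) are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def validation_set_diagnostics(labels, true_labels, probs, n_classes):
    """Rough proxies (hypothetical, not the paper's exact criteria) for
    class balance, label correctness, and informativeness."""
    counts = np.bincount(labels, minlength=n_classes)
    p = counts / counts.sum()
    # Normalised entropy of the label histogram: 1.0 means perfectly balanced.
    balance = -(p[p > 0] * np.log(p[p > 0])).sum() / np.log(n_classes)
    # Fraction of labels matching clean reference labels.
    correctness = (labels == true_labels).mean()
    # Mean predictive entropy of the model on the set, as an uncertainty proxy.
    informativeness = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()
    return balance, correctness, informativeness
```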
no code implementations • 10 Aug 2021 • Balagopal Unnikrishnan, Cuong Nguyen, Shafa Balaram, Chao Li, Chuan Sheng Foo, Pavitra Krishnaswamy
Specifically, we describe adaptations for scenarios with 2D and 3D inputs, uni- and multi-label classification, and class distribution mismatch between the labeled and unlabeled portions of the training data.
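One standard way to adapt consistency-based semi-supervised learning between the uni- and multi-label settings (an illustrative sketch, not necessarily the adaptation used here) is to swap the softmax consistency target for independent per-class sigmoids:

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_student, logits_teacher, multi_label: bool):
    """Mean-teacher-style consistency loss, adapted for uni- vs multi-label
    outputs (a sketch, not this paper's exact formulation)."""
    if multi_label:
        # Independent per-class probabilities for multi-label targets.
        return F.mse_loss(torch.sigmoid(logits_student),
                          torch.sigmoid(logits_teacher).detach())
    # Mutually exclusive classes for uni-label targets.
    return F.mse_loss(F.softmax(logits_student, dim=1),
                      F.softmax(logits_teacher, dim=1).detach())
```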
1 code implementation • 28 Apr 2021 • Gian Marco Visani, Alexandra Hope Lee, Cuong Nguyen, David M. Kent, John B. Wong, Joshua T. Cohen, Michael C. Hughes
We develop an Approximate Bayesian Computation approach that draws samples from the posterior distribution over the model's transition and duration parameters given aggregate counts from a specific location, thus adapting the model to a region or individual hospital site of interest.
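A minimal rejection-ABC sketch of this adaptation step, with `simulate` and `sample_prior` as hypothetical placeholders for the model's count simulator and parameter prior:

```python
import numpy as np

def abc_posterior(observed_counts, simulate, sample_prior,
                  n_draws=10000, eps=0.1):
    """Rejection-ABC sketch: keep prior draws whose simulated aggregate
    counts land close to the observed ones."""
    obs = np.asarray(observed_counts, dtype=float)
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior()                       # draw parameters from the prior
        sim = np.asarray(simulate(theta), dtype=float)
        # Normalised L1 distance between simulated and observed counts.
        if np.abs(sim - obs).sum() / max(obs.sum(), 1.0) < eps:
            accepted.append(theta)
    return accepted  # approximate posterior samples
```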
1 code implementation • 27 Jan 2021 • Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
Recent advances in meta-learning have led to remarkable performance on several few-shot learning benchmarks.
1 code implementation • 5 Mar 2020 • Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
We introduce a new and rigorously-formulated PAC-Bayes meta-learning algorithm that solves few-shot learning.
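For intuition, a generic PAC-Bayes-style training objective (a sketch of the bound family, not this paper's exact theorem) trades the empirical meta-loss against a complexity term driven by the KL divergence between a diagonal Gaussian posterior and prior over model parameters:

```python
import math
import torch

def pac_bayes_objective(emp_loss, mu_q, logvar_q, mu_p, logvar_p,
                        n_tasks, delta=0.05):
    """Generic McAllester-style PAC-Bayes objective (illustrative only):
    empirical loss + sqrt((KL(q||p) + log(2*sqrt(n)/delta)) / (2(n-1)))."""
    # Closed-form KL between diagonal Gaussians q and p.
    kl = 0.5 * ((logvar_p - logvar_q)
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0).sum()
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(n_tasks) / delta))
                            / (2 * (n_tasks - 1)))
    return emp_loss + complexity
```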
1 code implementation • 27 Jul 2019 • Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
We introduce a new, rigorously-formulated Bayesian meta-learning algorithm that learns a prior distribution over model parameters for few-shot learning.
no code implementations • 17 Jul 2019 • Gabriel Maicas, Cuong Nguyen, Farbod Motlagh, Jacinto C. Nascimento, Gustavo Carneiro
Meta-training has been empirically demonstrated to be the most effective pre-training method for few-shot learning of medical image classifiers (i.e., classifiers modeled with small training sets).