When Optimizing $f$-divergence is Robust with Label Noise

ICLR 2021 · Jiaheng Wei, Yang Liu

We show when maximizing a properly defined $f$-divergence measure with respect to a classifier's predictions and the supervised labels is robust to label noise. Leveraging its variational form, we derive a nice decoupling property for a family of $f$-divergence measures in the presence of label noise: the divergence is shown to be a linear combination of the variational difference defined on the clean distribution and a bias term introduced by the noise. This decoupling lets us analyze the robustness of different $f$-divergence functions. With robustness established, this family of $f$-divergence functions serves as useful training objectives for learning with noisy labels, without requiring the noise rate to be specified. For divergences that are possibly not robust, we propose fixes to make them so. In addition to the analytical results, we present thorough experimental evidence. Our code is available at https://github.com/UCSC-REAL/Robust-f-divergence-measures.
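To make the variational form concrete, below is a minimal PyTorch sketch (not the authors' exact implementation; see the linked repository for that) of an $f$-divergence training loss using the Total Variation divergence, whose variational form is $D_{TV}(P \| Q) = \sup_{|g| \le 1/2} \mathbb{E}_P[g] - \mathbb{E}_Q[g]$. Here $P$ is taken to be the joint distribution of (prediction, observed label) and $Q$ the product of their marginals, approximated by pairing predictions with labels shuffled within the minibatch; the function name and the choice of critic $g$ are illustrative assumptions.

```python
# Hypothetical sketch of a variational TV-divergence loss for noisy-label training.
# Assumptions (not from the paper text): the critic g is the model's predicted
# probability of a label shifted into [-1/2, 1/2], and the product of marginals
# is approximated by shuffling labels within the minibatch.
import torch
import torch.nn.functional as F

def tv_variational_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Negative variational lower bound on D_TV (minimize this to maximize the divergence)."""
    probs = F.softmax(logits, dim=1)              # classifier predictions h(X)
    g = probs - 0.5                               # bounded critic, |g| <= 1/2
    idx = torch.arange(len(labels))
    joint_term = g[idx, labels].mean()            # E_P[g] on (prediction, observed label)
    shuffled = labels[torch.randperm(len(labels))]
    marginal_term = g[idx, shuffled].mean()       # E_Q[g] on the product of marginals
    return -(joint_term - marginal_term)          # negate: maximize the variational difference
```

In a standard training loop, this would replace cross-entropy, e.g. `loss = tv_variational_loss(model(x_batch), noisy_y_batch)`.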

Task | Dataset | Model | Metric | Value | Global Rank
Learning with noisy labels | CIFAR-100N | F-div | Accuracy (mean) | 57.10 | #17
Learning with noisy labels | CIFAR-10N-Aggregate | F-div | Accuracy (mean) | 91.64 | #13
Learning with noisy labels | CIFAR-10N-Random1 | F-div | Accuracy (mean) | 89.70 | #15
Learning with noisy labels | CIFAR-10N-Random2 | F-div | Accuracy (mean) | 89.79 | #13
Learning with noisy labels | CIFAR-10N-Random3 | F-div | Accuracy (mean) | 89.55 | #14
Learning with noisy labels | CIFAR-10N-Worst | F-div | Accuracy (mean) | 82.53 | #17
Image Classification | Clothing1M | Robust f-divergence | Accuracy | 73.09% | #34
