Understanding Data Augmentation from a Robustness Perspective

7 Sep 2023  ·  Zhendong Liu, Jie Zhang, Qiangqiang He, Chongjun Wang

In visual recognition, data augmentation is a key technique for improving model robustness, yet many existing methods are heuristic and their underlying mechanisms remain unclear. This paper studies data augmentation from both theoretical and empirical perspectives. Theoretically, we frame data augmentation in terms of game-theoretic interactions. Empirically, we dissect the mechanisms of representative augmentation strategies and show that they primarily promote mid- and high-order game interactions. Experiments across multiple datasets and diverse augmentation techniques indicate that these findings generalize. Because existing robustness metrics are numerous and intricately correlated, we further propose a streamlined proxy that simplifies robustness assessment and clarifies how a model's game interactions relate to its overall robustness. Together, these results offer a new lens for evaluating model safety and robustness in visual recognition tasks.
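The abstract does not spell out how "order-m game interactions" are measured. As a rough illustration only, the sketch below follows one common formulation of multi-order pairwise interactions, I^(m)(i, j) = E_{S ⊆ N\{i,j}, |S|=m} [ v(S∪{i,j}) − v(S∪{i}) − v(S∪{j}) + v(S) ], estimated by Monte Carlo sampling. Treating 16×16 image patches as players, using a zero-image baseline for masked patches, and scoring coalitions by the top-class logit are all assumptions of this sketch, not details taken from the paper.

```python
import random
import torch
import torch.nn as nn

def order_m_interaction(f, x, baseline, i, j, m, num_samples=64, patch=16):
    """Monte Carlo estimate of the order-m interaction between patches i and j:
        E_{S ⊆ N\{i,j}, |S|=m} [ v(S∪{i,j}) - v(S∪{i}) - v(S∪{j}) + v(S) ]
    Players are non-overlapping image patches; v(S) is the model score when only
    the patches in S are kept and every other patch is replaced by the baseline.
    """
    _, _, H, W = x.shape
    grid = [(r, c) for r in range(0, H, patch) for c in range(0, W, patch)]
    players = [k for k in range(len(grid)) if k not in (i, j)]

    def v(S):
        # Start from the baseline image and paste in only the patches in S.
        masked = baseline.clone()
        for k in S:
            r, c = grid[k]
            masked[..., r:r + patch, c:c + patch] = x[..., r:r + patch, c:c + patch]
        with torch.no_grad():
            return f(masked).max(dim=1).values.item()  # top-class logit as game score

    total = 0.0
    for _ in range(num_samples):
        S = random.sample(players, m)  # random coalition of exactly m other patches
        total += v(S + [i, j]) - v(S + [i]) - v(S + [j]) + v(S)
    return total / num_samples

# Toy usage with a random classifier and input (stand-ins for a trained model and data).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 10)).eval()
img = torch.rand(1, 3, 224, 224)
zero_baseline = torch.zeros_like(img)
print(order_m_interaction(model, img, zero_baseline, i=0, j=1, m=5))
```

Averaging such estimates over many patch pairs and comparing low, mid, and high orders m (relative to the total number of patches) would give one way to probe whether an augmentation shifts the interaction distribution toward higher orders, in the spirit of the analysis described above.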
