Robustness Evaluation of Regression Tasks with Skewed Domain Preferences

15 Dec 2022 · Nuno Costa, Nuno Moniz

In natural phenomena, data distributions often deviate from normality. Cataclysms are a self-explanatory example: events that occur very rarely and, at the same time, lie many standard deviations away from the common outcome. In many scientific contexts, it is precisely these tail events that researchers are most interested in anticipating, so that adequate measures can be taken to prevent or attenuate a major impact on society. Despite such efforts, we have yet to provide definitive answers to crucial issues in evaluating predictive solutions in domains such as weather, pollution, and health. In this paper, we address two intertwined problems simultaneously. First, assessing the performance of regression models when non-uniform preferences apply: not all values are equally relevant concerning the accuracy of their prediction, and there is a particular interest in the most extreme values. Second, assessing the robustness of models when dealing with uncertainty regarding the actual underlying distribution of values relevant to such problems. We show how different levels of relevance associated with target values may impact experimental conclusions, and we demonstrate the practical utility of the proposed methods.
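To make the notion of non-uniform preferences concrete, the sketch below shows one generic way to weight prediction errors by a relevance function over the target values, so that errors on extreme (rare) targets count more than errors on common ones. This is only an illustrative assumption for intuition, not the evaluation methodology proposed in the paper; the function names (relevance, relevance_weighted_mse), the linear relevance mapping, and the sample data are all hypothetical.

```python
import numpy as np

def relevance(y, y_min, y_max):
    # Hypothetical relevance function: maps target values to [0, 1],
    # assigning higher relevance to the largest (most extreme) values.
    # Real relevance functions are typically domain-informed and non-linear.
    return (y - y_min) / (y_max - y_min)

def relevance_weighted_mse(y_true, y_pred, phi):
    # Squared errors weighted by the relevance of each true target value,
    # normalized by the total relevance mass.
    errors = (y_true - y_pred) ** 2
    return np.sum(phi * errors) / np.sum(phi)

# Toy example: the error on the extreme target (50.0) dominates the score,
# while small errors on common values contribute little.
y_true = np.array([1.0, 2.0, 3.0, 50.0])
y_pred = np.array([1.1, 2.1, 2.9, 30.0])
phi = relevance(y_true, y_true.min(), y_true.max())
print(relevance_weighted_mse(y_true, y_pred, phi))
```

Under a uniform relevance function this reduces to ordinary MSE; varying the relevance assignment, as the paper's robustness analysis does at different levels, can change which model appears best in an experimental comparison.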
