1 code implementation • 13 Oct 2022 • Sharat Agarwal, Saket Anand, Chetan Arora
In this work, we propose an ADA strategy that, given a frame, identifies the set of classes hardest for the model to predict accurately, and thereby recommends semantically meaningful regions to annotate in the selected frame.
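A minimal sketch of the idea, not the authors' exact criterion: score each class by the mean predictive entropy over pixels where it is the argmax, take the top-k highest-entropy ("hardest") classes, and recommend the image regions predicted as those classes for annotation. The entropy-based hardness score is an illustrative assumption.

```python
import numpy as np

def hardest_classes(prob_map, k=2):
    """prob_map: (C, H, W) softmax probabilities for one frame.
    Returns the k classes with highest mean pixel entropy (illustrative
    hardness proxy, not the paper's exact selection rule)."""
    C = prob_map.shape[0]
    pred = prob_map.argmax(axis=0)                       # (H, W) predicted class
    ent = -(prob_map * np.log(prob_map + 1e-12)).sum(0)  # (H, W) pixel entropy
    scores = np.full(C, -np.inf)
    for c in range(C):
        mask = pred == c
        if mask.any():
            scores[c] = ent[mask].mean()   # mean uncertainty where c is predicted
    return np.argsort(scores)[::-1][:k]

def regions_to_annotate(prob_map, k=2):
    """Boolean mask of pixels whose predicted class is among the hardest."""
    hard = hardest_classes(prob_map, k)
    return np.isin(prob_map.argmax(axis=0), hard)
```

Here a confidently predicted class scores low, while a class predicted with a flat softmax scores high and its regions are flagged for annotation.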
1 code implementation • 20 Oct 2021 • Sharat Agarwal, Sumanyu Muku, Saket Anand, Chetan Arora
Through a series of experiments, we validate that curating contextually fair data makes model predictions fairer: it balances the true positive rate for the protected class across groups without compromising the model's overall performance.
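The fairness criterion mentioned above can be made concrete with standard definitions: compute the true positive rate separately per group and report the largest pairwise gap, which is zero when the rates are perfectly balanced. This is a generic sketch of the metric, not the paper's evaluation code.

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives predicted positive (TPR / recall)."""
    pos = y_true == 1
    return (y_pred[pos] == 1).mean() if pos.any() else float("nan")

def tpr_gap(y_true, y_pred, groups):
    """Max pairwise TPR difference across groups; 0 means balanced TPR."""
    rates = [true_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)]
    return max(rates) - min(rates)
```

A smaller `tpr_gap` at comparable accuracy is the behavior the experiments aim to demonstrate.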
1 code implementation • ECCV 2020 • Sharat Agarwal, Himanshu Arora, Saket Anand, Chetan Arora
Contextual Diversity (CD) hinges on the crucial observation that the probability vector predicted by a CNN for a region of interest typically aggregates information from a larger receptive field.
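One illustrative way to exploit this observation (an assumption for exposition, not the paper's exact CD measure): summarize each frame by a class-conditioned mean softmax vector per predicted class, then compare two frames via the symmetric KL divergence between the profiles of their shared classes. Frames whose predictions disagree in context score as more diverse.

```python
import numpy as np

def class_profiles(prob_map):
    """Mean softmax vector over pixels predicted as each class.
    prob_map: (C, H, W) softmax probabilities for one frame."""
    pred = prob_map.argmax(axis=0)
    return {c: prob_map[:, pred == c].mean(axis=1)
            for c in range(prob_map.shape[0]) if (pred == c).any()}

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two probability vectors."""
    p, q = p + eps, q + eps
    return float((p * np.log(p / q)).sum() + (q * np.log(q / p)).sum())

def contextual_distance(map_a, map_b):
    """Sum of symmetric KL over classes present in both frames
    (illustrative diversity proxy)."""
    pa, pb = class_profiles(map_a), class_profiles(map_b)
    return sum(sym_kl(pa[c], pb[c]) for c in pa.keys() & pb.keys())
```

Identical prediction maps yield a distance near zero, while frames with differently shaped class-conditional distributions yield a strictly positive distance.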