no code implementations • 24 Nov 2023 • Nathan Blake, Hana Chockler, David A. Kelly, Santiago Calderon Pena, Akchunya Chanchal
Existing tools for explaining the output of image classifiers can be divided into white-box tools, which rely on access to the model internals, and black-box tools, which are agnostic to the model.
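The black-box setting treats the classifier as an opaque function of the input. A common way to build such explanations is occlusion: mask regions of the image and measure the resulting score drop. The sketch below is a minimal, hypothetical illustration of that general idea (the patch size, the toy classifier, and all names are assumptions, not the method of any paper listed here):

```python
def occlusion_importance(image, classify, patch=2):
    """Score each patch by how much masking it lowers the classifier score.

    `classify` is treated as a black box: only its outputs are used.
    """
    h, w = len(image), len(image[0])
    base = classify(image)
    heatmap = {}
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            # Copy the image and zero out one patch.
            masked = [row[:] for row in image]
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    masked[yy][xx] = 0
            heatmap[(y, x)] = base - classify(masked)
    return heatmap

# Hypothetical toy classifier: score = mean brightness of the top-left quadrant.
def toy_classify(img):
    vals = [img[y][x] for y in range(2) for x in range(2)]
    return sum(vals) / len(vals)

img = [[9, 9, 0, 0],
       [9, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
heat = occlusion_importance(img, toy_classify)
# The top-left patch dominates the heatmap, since only it affects the score.
```

Because only input-output queries are needed, the same procedure applies unchanged to any classifier, which is the practical appeal of the black-box setting.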
no code implementations • 23 Nov 2023 • David A. Kelly, Hana Chockler, Daniel Kroening, Nathan Blake, Aditi Ramaswamy, Melane Navaratnarajah, Aaditya Shivakumar
In this paper, we propose a new black-box explainability algorithm and tool, YO-ReX, for efficient explanation of the outputs of object detectors.
no code implementations • 25 Sep 2023 • Hana Chockler, David A. Kelly, Daniel Kroening
Existing explanation tools for image classifiers usually give only a single explanation for an image's classification.