no code implementations • 18 Apr 2024 • Shreya Shankar, J. D. Zamfirescu-Pereira, Björn Hartmann, Aditya G. Parameswaran, Ian Arawjo
In particular, we identify a phenomenon we dub \emph{criteria drift}: users need criteria to grade outputs, but grading outputs helps users define criteria.
no code implementations • 7 Aug 2023 • Aditya G. Parameswaran, Shreya Shankar, Parth Asawa, Naman Jain, Yujie Wang
Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone.
no code implementations • 16 Sep 2022 • Shreya Shankar, Rolando Garcia, Joseph M. Hellerstein, Aditya G. Parameswaran
Organizations rely on machine learning engineers (MLEs) to operationalize ML, i.e., deploy and maintain ML pipelines in production.
no code implementations • 23 May 2022 • Shreya Shankar, Bernease Herman, Aditya G. Parameswaran
While most work on evaluating machine learning (ML) models focuses on computing accuracy on batches of data, tracking accuracy alone in a streaming setting (i.e., unbounded, timestamp-ordered datasets) fails to appropriately identify when models are performing unexpectedly.
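To make the streaming setting concrete, here is a minimal sketch (an illustration of the problem, not the paper's method) of tracking accuracy over a sliding window of a timestamp-ordered stream: a sudden model regression that a single overall-accuracy number averages away shows up immediately in the windowed view.

```python
from collections import deque

def windowed_accuracy(stream, window=100):
    """Track accuracy over a sliding window of recent (label, prediction)
    pairs from a timestamp-ordered stream, so a sudden degradation is
    visible instead of being averaged into one overall number."""
    recent = deque(maxlen=window)  # keeps only the last `window` outcomes
    accuracies = []
    for label, pred in stream:
        recent.append(label == pred)
        accuracies.append(sum(recent) / len(recent))
    return accuracies

# Hypothetical stream: the model is perfect, silently breaks for 50
# examples, then recovers. Overall accuracy is still 0.75, but the
# 20-example window drops to 0.0 during the failure.
stream = [(1, 1)] * 100 + [(1, 0)] * 50 + [(1, 1)] * 50
acc = windowed_accuracy(stream, window=20)
```

The `stream` and window size here are invented for illustration; the point is only that batch-level accuracy can mask exactly the kind of transient failure the streaming setting needs to surface.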
2 code implementations • NeurIPS 2020 • Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli
In this work, we propose a first-order dual SDP algorithm that (1) requires memory only linear in the total number of network activations and (2) requires only a fixed number of forward/backward passes through the network per iteration.
no code implementations • 11 Feb 2020 • Rohit Jammula, Vishnu Rajan Tejus, Shreya Shankar
Deep learning models have the capacity to revolutionize medical imaging analysis, with particularly promising applications in computer-aided diagnosis.
no code implementations • NeurIPS 2018 • Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, Jascha Sohl-Dickstein
Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich.
no code implementations • 22 Nov 2017 • Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, D. Sculley
Further, we analyze classifiers trained on these datasets to assess the impact of these training distributions, and find strong differences in relative performance on images from different locales.