no code implementations • 30 Apr 2024 • Nandhini Swaminathan, David Danks
Group dynamics might require that one agent (e.g., the AI system) compensate for biases and errors in another agent (e.g., the human), but this compensation should be carefully developed.
no code implementations • 22 Mar 2024 • Nandhini Swaminathan, David Danks
This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology.
no code implementations • 30 Jan 2024 • Daniel Trusilo, David Danks
This paper presents a theoretical analysis of, and practical approach to, the moral responsibilities involved in developing AI systems for non-military applications that may nonetheless be used in conflict settings.
no code implementations • 25 Jan 2024 • Jennifer Chien, David Danks
Algorithmic harms are commonly categorized as either allocative or representational.
no code implementations • 5 Sep 2023 • Jennifer Chien, David Danks
The applications of personalized recommender systems are rapidly expanding, encompassing social media, online shopping, search-engine results, and more.
no code implementations • 18 May 2022 • Mohammadsajad Abavisani, David Danks, Vince Calhoun, Sergey Plis
Graphical structures estimated by causal learning algorithms from time series data can provide highly misleading causal information if the causal timescale of the generating process fails to match the measurement timescale of the data.
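The timescale-mismatch problem in this abstract can be illustrated with a toy simulation (this is a hedged sketch of the general phenomenon, not the paper's algorithm): in a chain X → Y → Z with one causal step per lag, X has no lag-1 effect on Z, but if measurements are taken only every other step, X appears to drive Z directly.

```python
# Hedged toy model (not from the paper): a linear chain X(t) -> Y(t+1) -> Z(t+2)
# with Gaussian noise, measured either at the causal timescale or every 2nd step.
import random

random.seed(0)

def simulate(n):
    xs, ys, zs = [random.gauss(0, 1)], [0.0], [0.0]
    for t in range(1, n):
        xs.append(random.gauss(0, 1))
        ys.append(0.9 * xs[t - 1] + 0.1 * random.gauss(0, 1))
        zs.append(0.9 * ys[t - 1] + 0.1 * random.gauss(0, 1))
    return xs, ys, zs

def lag1_corr(a, b):
    # Correlation between a(t) and b(t+1).
    a, b = a[:-1], b[1:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

xs, ys, zs = simulate(20000)

# At the causal timescale, X affects Z only at lag 2 (via Y), so the
# lag-1 association between X and Z is near zero.
direct = lag1_corr(xs, zs)

# Subsample every 2nd step: one measurement lag now spans two causal steps,
# so a strong lag-1 X -> Z association appears out of nowhere.
undersampled = lag1_corr(xs[::2], zs[::2])

print(round(abs(direct), 2), round(abs(undersampled), 2))
```

A causal-discovery algorithm run on the subsampled series would report a direct X → Z edge that does not exist at the causal timescale, which is exactly the misleading structure the abstract warns about.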
no code implementations • 26 Jul 2019 • Mauricio Gonzalez Soto, David Danks, Hugo J. Escalante Balderas, L. Enrique Sucar
Decision-making under uncertainty and causal thinking are fundamental aspects of intelligent reasoning.
no code implementations • 25 Feb 2016 • Antti Hyttinen, Sergey Plis, Matti Järvisalo, Frederick Eberhardt, David Danks
This paper focuses on causal structure estimation from time series data in which measurements are obtained at a coarser timescale than the causal timescale of the underlying system.
no code implementations • NeurIPS 2015 • Sergey Plis, David Danks, Cynthia Freeman, Vince Calhoun
That is, these algorithms all learn causal structure without assuming any particular relation between the measurement and system timescales; they are thus rate-agnostic.
no code implementations • NeurIPS 2013 • Erich Kummerfeld, David Danks
Structure learning algorithms for graphical models have focused almost exclusively on stable environments in which the underlying generative process does not change; that is, they assume that the generating model is globally stationary.
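The cost of assuming global stationarity can be seen in a minimal sketch (a toy illustration, not the paper's method): when the generating model changes partway through the data, statistics pooled over the whole series split the difference between regimes and describe neither one.

```python
# Hedged toy (not the paper's algorithm): a dependence X -> Y that exists in
# the first half of the data and vanishes in the second half.
import random

random.seed(1)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

n = 5000
xs = [random.gauss(0, 1) for _ in range(2 * n)]
# Regime 1: Y = 0.9 X + noise.  Regime 2: the edge disappears entirely.
ys = [0.9 * x + 0.2 * random.gauss(0, 1) for x in xs[:n]] + \
     [random.gauss(0, 1) for _ in range(n)]

pooled = corr(xs, ys)            # averages over both regimes
first  = corr(xs[:n], ys[:n])    # near-deterministic dependence
second = corr(xs[n:], ys[n:])    # no dependence at all

print(round(pooled, 2), round(first, 2), round(second, 2))
```

The pooled estimate is a middling value that holds in neither regime; an algorithm tracking the data over time (e.g., with windowed estimates) would instead detect the change point and report two different structures.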
no code implementations • NeurIPS 2008 • David Danks, Clark Glymour, Robert E. Tillman
In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset.
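The identifiability issue behind this setting can be made concrete with a small sketch (a hedged toy, not the paper's integration algorithm): if one dataset records (X, Y) and another records (Y, Z), distinct joint distributions over (X, Y, Z) can agree perfectly on both marginals while disagreeing about how X and Z relate.

```python
# Hedged toy (not from the paper): two joints over binary X, Y, Z that match
# on the (X, Y) and (Y, Z) marginals but differ on the X-Z relationship.
import random

random.seed(2)

def flip(p):
    return 1 if random.random() < p else 0

n = 20000
rows_a, rows_b = [], []
for _ in range(n):
    y = flip(0.5)
    e1, e2 = flip(0.1), flip(0.1)
    # Joint A: X and Z are noisy copies of Y with independent noise.
    rows_a.append((y ^ e1, y, y ^ e2))
    # Joint B: X and Z share the same noise bit, so X == Z always.
    rows_b.append((y ^ e1, y, y ^ e1))

def agree(rows, i, j):
    # Fraction of samples where variables i and j take the same value.
    return sum(r[i] == r[j] for r in rows) / len(rows)

# Both joints give P(X == Y) ~ 0.9 and P(Y == Z) ~ 0.9, so a dataset over
# (X, Y) and a dataset over (Y, Z) cannot tell them apart...
# ...yet P(X == Z) is ~0.82 under A and exactly 1.0 under B.
print(agree(rows_a, 0, 2), agree(rows_b, 0, 2))
```

Since no dataset records X and Z together, both joints are consistent with everything observed, which is why learning from such distributed datasets generally yields a set of candidate structures rather than a unique answer.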