1 code implementation • 30 Nov 2023 • Phillip Howard, Avinash Madasu, Tiep Le, Gustavo Lujan Moreno, Anahita Bhiwandiwalla, Vasudev Lal
Our approach utilizes Stable Diffusion with cross-attention control to produce sets of counterfactual image-text pairs that are highly similar in their depiction of a subject (e.g., a given occupation) while differing only in their depiction of intersectional social attributes (e.g., race & gender).
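The prompt-side of this pair construction can be sketched as follows; the template string and attribute lists below are hypothetical illustrations, and the image-generation step (Stable Diffusion with cross-attention control) is only noted in a comment, not implemented.

```python
from itertools import product

def counterfactual_prompts(subject, attributes):
    """Build prompts that differ only in intersectional social attributes.

    Each prompt in the returned set shares the same subject (e.g. an
    occupation), so that when rendered by a diffusion model with
    cross-attention control and a shared seed, the resulting images
    are highly similar apart from the attribute tokens.
    """
    # Cartesian product of attribute values, e.g. race x gender.
    combos = product(*attributes.values())
    return [f"a photo of a {' '.join(combo)} {subject}" for combo in combos]

# Hypothetical attribute lists, for illustration only.
attrs = {"race": ["Black", "White"], "gender": ["woman", "man"]}
prompts = counterfactual_prompts("doctor", attrs)
# In the paper's pipeline, each such prompt set would be passed to
# Stable Diffusion with cross-attention control so that non-attribute
# image content is preserved across the counterfactual pair.
```

This only sketches the counterfactual prompt sets; the paper's contribution lies in controlling cross-attention during generation so the paired images differ solely in the attribute depiction.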
no code implementations • 14 Nov 2023 • Xin Su, Tiep Le, Steven Bethard, Phillip Howard
An important open question in the use of large language models for knowledge-intensive tasks is how to effectively integrate knowledge from three sources: the model's parametric memory, external structured knowledge, and external unstructured knowledge.
no code implementations • 4 Oct 2023 • Phillip Howard, Avinash Madasu, Tiep Le, Gustavo Lujan Moreno, Vasudev Lal
While vision-language models (VLMs) have achieved remarkable performance improvements recently, there is growing evidence that these models also possess harmful biases with respect to social attributes such as gender and race.
no code implementations • 10 May 2017 • Tiep Le, Tran Cao Son, Enrico Pontelli, William Yeoh
Under consideration in Theory and Practice of Logic Programming (TPLP).
no code implementations • 7 May 2014 • Tiep Le, Enrico Pontelli, Tran Cao Son, William Yeoh
The field of Distributed Constraint Optimization Problems (DCOPs) has gained momentum, thanks to its suitability in capturing complex problems (e.g., multi-agent coordination and resource allocation problems) that are naturally distributed and cannot be realistically addressed in a centralized manner.