no code implementations • 3 May 2024 • Catherine Chen, Jack Merullo, Carsten Eickhoff
Neural models have demonstrated remarkable performance across diverse ranking tasks.
1 code implementation • 26 Oct 2023 • William Rudman, Catherine Chen, Carsten Eickhoff
Representations from large language models (LLMs) are known to be dominated by a small subset of dimensions with exceedingly high variance.
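As a quick illustration of what "dominated by a small subset of high-variance dimensions" means, the sketch below builds synthetic hidden states (not real LLM activations) in which three dimensions carry most of the total variance, then recovers them by ranking per-dimension variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "hidden states": 1000 tokens x 768 dims, mostly low variance.
hidden = rng.normal(0, 0.1, size=(1000, 768))
# Inject three dominant (rogue) dimensions with much larger variance.
hidden[:, [42, 137, 500]] += rng.normal(0, 5.0, size=(1000, 3))

var = hidden.var(axis=0)                 # per-dimension variance
top = np.argsort(var)[::-1][:3]          # indices of the top-3 dimensions
share = var[top].sum() / var.sum()       # fraction of total variance they hold
print(sorted(top.tolist()), round(float(share), 3))
```

Ranking dimensions by variance like this is a generic diagnostic; the paper's actual analysis of LLM representations is not reproduced here.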
1 code implementation • 16 Jun 2023 • Catherine Chen, Carsten Eickhoff
Explainable Information Retrieval (XIR) is a growing research area focused on enhancing the transparency and trustworthiness of the complex decision-making processes in modern information retrieval systems.
1 code implementation • 1 Jun 2023 • Catherine Chen, Zejiang Shen, Dan Klein, Gabriel Stanovsky, Doug Downey, Kyle Lo
Recent work has shown that infusing layout features into language models (LMs) improves processing of visually-rich documents such as scientific papers.
no code implementations • 20 Dec 2022 • Boyi Li, Rodolfo Corona, Karttikeya Mangalam, Catherine Chen, Daniel Flaherty, Serge Belongie, Kilian Q. Weinberger, Jitendra Malik, Trevor Darrell, Dan Klein
Are multimodal inputs necessary for grammar induction?
1 code implementation • 17 Oct 2022 • Catherine Chen, Carsten Eickhoff
In this paper, we use psychometrics and crowdsourcing to identify human-centered factors of explainability in Web search systems and introduce SSE (Search System Explainability), an evaluation metric for explainable IR (XIR) search systems.
no code implementations • NAACL 2021 • Catherine Chen, Kevin Lin, Dan Klein
The tree reconciliation module treats the task as a graph optimization problem and outputs the maximum spanning tree of this graph.
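A minimal sketch of the graph-optimization step described above: Kruskal's algorithm, run on edges sorted by descending weight, extracts a maximum spanning tree. The toy graph is illustrative; the paper's graph construction is not shown here.

```python
def max_spanning_tree(n, edges):
    """edges: iterable of (weight, u, v); returns list of (u, v, weight)."""
    parent = list(range(n))  # union-find forest over n nodes

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    # Descending weight order turns Kruskal's MST into a maximum spanning tree.
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:          # adding this edge creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Toy weighted graph on 4 nodes.
edges = [(5, 0, 1), (3, 0, 2), (4, 1, 2), (2, 2, 3), (1, 0, 3)]
tree = max_spanning_tree(4, edges)
print(tree)  # three edges spanning all four nodes
```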
1 code implementation • 24 Feb 2019 • Catherine Chen, Qihong Lu, Andre Beukers, Christopher Baldassano, Kenneth A. Norman
We can perform these bindings for arbitrary fillers -- we understand this sentence even if we have never heard the names "Alice," "tea," or "Bob" before.