Search Results for author: Virginia K. Felkner

Found 3 papers, 1 paper with code

GPT is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction

no code implementations · 24 May 2024 · Virginia K. Felkner, Jennifer A. Thompson, Jonathan May

We also extend the previous work to a new community and set of biases: the Jewish community and antisemitism.

WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models

1 code implementation · 26 Jun 2023 · Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May

We present WinoQueer: a benchmark specifically designed to measure whether large language models (LLMs) encode biases that are harmful to the LGBTQ+ community.

Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models

no code implementations · 23 Jun 2022 · Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May

This paper presents exploratory work on whether and to what extent biases against queer and trans people are encoded in large language models (LLMs) such as BERT.

Bias Detection
