Search Results for author: Angela Schöpke-Gonzalez

Found 1 papers, 0 papers with code

How We Define Harm Impacts Data Annotations: Explaining How Annotators Distinguish Hateful, Offensive, and Toxic Comments

no code implementations · 12 Sep 2023 · Angela Schöpke-Gonzalez, Siqi Wu, Sagar Kumar, Paul J. Resnick, Libby Hemphill

In designing instructions for annotation tasks to generate training data for these algorithms, researchers often treat the harm concepts that we train algorithms to detect - 'hateful', 'offensive', 'toxic', 'racist', 'sexist', etc.
