Search Results for author: Aaron J. Li

Found 1 paper, 1 paper with code

More RLHF, More Trust? On The Impact of Human Preference Alignment On Language Model Trustworthiness

1 code implementation · 29 Apr 2024 · Aaron J. Li, Satyapriya Krishna, Himabindu Lakkaraju

The surge in Large Language Model (LLM) development has led to improved performance on cognitive tasks, as well as an urgent need to align these models with human values in order to safely exploit their power.

Tasks: Ethics, Language Modelling
