Search Results for author: Łukasz Bartoszcze

Found 2 papers, 0 papers with code

Representation noising effectively prevents harmful fine-tuning on LLMs

no code implementations • 23 May 2024 • Domenic Rosati, Jan Wehner, Kai Williams, Łukasz Bartoszcze, David Atanasov, Robie Gonzales, Subhabrata Majumdar, Carsten Maple, Hassan Sajjad, Frank Rudzicz

We provide empirical evidence that the effectiveness of our defence lies in its "depth": the degree to which information about harmful representations is removed across all layers of the LLM.

Immunization against harmful fine-tuning attacks

no code implementations • 26 Feb 2024 • Domenic Rosati, Jan Wehner, Kai Williams, Łukasz Bartoszcze, Jan Batzner, Hassan Sajjad, Frank Rudzicz

Approaches to aligning large language models (LLMs) with human values have focused on correcting misalignment that emerges from pretraining.