no code implementations • 4 Dec 2022 • Ayrton San Joaquin, Filip Skubacz
We study the performance of monolingual and multilingual language models on question answering (QA) across three diverse languages: English, Finnish, and Japanese.
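As a rough illustration of this kind of cross-lingual QA comparison (not the paper's actual setup), the sketch below runs a multilingual extractive-QA model on toy examples in the three languages via the Hugging Face pipeline API; the model name and the example question/context pairs are assumptions.

```python
# Minimal sketch, assuming a Hugging Face multilingual QA checkpoint; the model
# name and examples are illustrative, not the configuration used in the paper.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

examples = {
    "en": ("Where is the Eiffel Tower?", "The Eiffel Tower is located in Paris."),
    "fi": ("Missä Eiffel-torni sijaitsee?", "Eiffel-torni sijaitsee Pariisissa."),
    "ja": ("エッフェル塔はどこにありますか？", "エッフェル塔はパリにあります。"),
}

for lang, (question, context) in examples.items():
    pred = qa(question=question, context=context)
    # Each prediction contains the extracted answer span and a confidence score.
    print(f"{lang}: answer={pred['answer']!r}, score={pred['score']:.3f}")
```

The same loop can be pointed at a monolingual checkpoint per language to compare against the multilingual model.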
no code implementations • 4 Dec 2022 • Ayrton San Joaquin, Ardy Haroen
Large language models are affected by two phenomena with respect to their training data: memorization and forgetting.
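To make the notion of memorization concrete, here is a minimal sketch of a verbatim-memorization probe (an assumption for illustration, not the paper's method): feed a model a prefix from a candidate training sequence and check whether greedy decoding reproduces the true continuation. The model name and the prefix/suffix lengths are arbitrary choices.

```python
# Minimal sketch of a verbatim-memorization check; model and lengths are
# illustrative assumptions, not the paper's experimental setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM works here
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def is_memorized(text: str, prefix_tokens: int = 32, suffix_tokens: int = 32) -> bool:
    ids = tok(text, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_tokens]
    true_suffix = ids[prefix_tokens:prefix_tokens + suffix_tokens]
    with torch.no_grad():
        out = model.generate(prefix.unsqueeze(0),
                             max_new_tokens=len(true_suffix),
                             do_sample=False)  # greedy decoding
    gen_suffix = out[0, len(prefix):]
    # The sequence counts as memorized if the model reproduces the suffix exactly.
    return torch.equal(gen_suffix, true_suffix)
```

Forgetting can be tracked with the same probe by re-running it across training checkpoints and watching previously memorized sequences stop matching.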
no code implementations • 31 Mar 2022 • Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties.
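The toy sketch below conveys the general idea on a small scale: it poisons a training set with mislabeled copies of a targeted point and measures how sharply the target's loss separates models that did versus did not train on it, which is what a membership-inference adversary exploits. The dataset, classifier, poison count, and loss-gap criterion are all illustrative assumptions, not the attack from the paper.

```python
# Minimal sketch, assuming a synthetic dataset and logistic regression; this is
# a toy stand-in for the paper's attack, kept deliberately simple.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
x_t, y_t = X[0], y[0]              # the targeted training point
X_rest, y_rest = X[1:], y[1:]

# Poison: inject copies of the target with the *wrong* label.
n_poison = 8
X_poison = np.tile(x_t, (n_poison, 1))
y_poison = np.full(n_poison, 1 - y_t)

def target_loss(include_target: bool, seed: int) -> float:
    # Subsample the rest of the data so repeated trials differ.
    idx = np.random.default_rng(seed).choice(len(X_rest), size=300, replace=False)
    Xs, ys = [X_rest[idx], X_poison], [y_rest[idx], y_poison]
    if include_target:
        Xs.append(x_t[None]); ys.append([y_t])
    clf = LogisticRegression(max_iter=1000).fit(np.vstack(Xs), np.concatenate(ys))
    p = clf.predict_proba(x_t[None])[0, y_t]
    return float(-np.log(p + 1e-12))

in_losses = [target_loss(True, s) for s in range(5)]
out_losses = [target_loss(False, s) for s in range(5)]
# A larger IN/OUT gap means membership of the target is easier to infer.
print("mean target loss (IN): ", np.mean(in_losses))
print("mean target loss (OUT):", np.mean(out_losses))
```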