no code implementations • 3 Feb 2024 • Tianshi Li, Sauvik Das, Hao-Ping Lee, Dakuo Wang, Bingsheng Yao, Zhiping Zhang
The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns.
no code implementations • 16 Nov 2023 • Yao Dou, Isadora Krsek, Tarek Naous, Anubha Kabra, Sauvik Das, Alan Ritter, Wei Xu
Motivated by user feedback, we introduce the task of self-disclosure abstraction, which paraphrases disclosures into less specific terms while preserving their utility, e.g., "Im 16F" to "I'm a teenage girl".
no code implementations • 3 Oct 2023 • Yang Chen, Ethan Mendes, Sauvik Das, Wei Xu, Alan Ritter
While data leaks should be prevented, it is also crucial to examine the trade-off between privacy protection and model utility in proposed approaches.
no code implementations • 20 Sep 2023 • Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li
To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users.