no code implementations • 1 Jan 2024 • Jinglong Luo, Yehong Zhang, JiaQi Zhang, Xin Mu, Hui Wang, Yue Yu, Zenglin Xu
However, applying SMPC to Privacy-Preserving Inference (PPI) for large language models, particularly those based on the Transformer architecture, often incurs substantial inference slowdowns or accuracy degradation.
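The cost asymmetry behind these slowdowns can be illustrated with additive secret sharing, the basic SMPC primitive: linear operations on shared values are local to each party, while nonlinear Transformer operations (softmax, GeLU) need interactive protocols. The sketch below is a generic illustration, not the paper's protocol; the names `share`, `reconstruct`, and the modulus `P` are illustrative choices.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus, not taken from the paper

def share(x):
    """Split integer x into two additive shares mod P."""
    r = secrets.randbelow(P)
    return [r, (x - r) % P]

def reconstruct(shares):
    """Recombine additive shares into the secret value."""
    return sum(shares) % P

# Linear op (addition) on secret-shared values: each party adds its
# own shares locally, with no communication between parties.
a, b = share(10), share(32)
c = [(a[i] + b[i]) % P for i in range(2)]
assert reconstruct(c) == 42
# Nonlinear ops such as softmax or GeLU admit no such share-wise rule
# and require interactive protocols, which dominate PPI cost.
```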
no code implementations • 16 Oct 2023 • Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song
The advancement of large language models (LLMs) has significantly enhanced the ability to effectively tackle various downstream NLP tasks and unify these tasks into generative pipelines.
no code implementations • 26 Jun 2023 • Jinglong Luo, Yehong Zhang, JiaQi Zhang, Shuang Qin, Hui Wang, Yue Yu, Zenglin Xu
In contrast to existing studies that protect the data privacy of GPR via homomorphic encryption, differential privacy, or federated learning, our proposed method is more practical: it preserves the privacy of both the model inputs and outputs across various data-sharing scenarios (e.g., horizontally or vertically partitioned data).
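The two data-sharing scenarios mentioned above differ in how a dataset matrix is split between parties; a minimal sketch of the distinction (illustrative only, with an assumed toy matrix `X`):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))  # toy dataset: 6 samples, 4 features

# Horizontally partitioned: parties hold disjoint rows (different
# samples, same feature set).
party_a_rows, party_b_rows = X[:3], X[3:]

# Vertically partitioned: parties hold disjoint columns (same
# samples, different feature sets).
party_a_cols, party_b_cols = X[:, :2], X[:, 2:]

# Either split reconstructs the full dataset when combined.
assert np.allclose(np.vstack([party_a_rows, party_b_rows]), X)
assert np.allclose(np.hstack([party_a_cols, party_b_cols]), X)
```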
no code implementations • 21 Feb 2023 • Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
Trustworthy artificial intelligence (AI) technology has revolutionized daily life and greatly benefited human society.