no code implementations • 1 Jan 2024 • Jinglong Luo, Yehong Zhang, JiaQi Zhang, Xin Mu, Hui Wang, Yue Yu, Zenglin Xu
However, the application of SMPC in Privacy-Preserving Inference (PPI) for large language models, particularly those based on the Transformer architecture, often leads to considerable slowdowns or declines in performance.
no code implementations • 19 Dec 2023 • Xin Mu, Yu Wang, Zhengan Huang, Junzuo Lai, Yehong Zhang, Hui Wang, Yue Yu
In the rapidly growing digital economy, protecting intellectual property (IP) associated with digital products has become increasingly important.
no code implementations • 13 Oct 2023 • Dan-Xuan Liu, Yu-Ran Gu, Chao Qian, Xin Mu, Ke Tang
In this paper, we propose MR-EMO, a new framework based on Evolutionary Multi-objective Optimization. It reformulates Migrant Resettlement as a bi-objective optimization problem that simultaneously maximizes the expected number of employed migrants and minimizes the number of dispatched migrants, and employs a Multi-Objective Evolutionary Algorithm (MOEA) to solve it.
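The bi-objective formulation above can be sketched in a few lines. This is not the paper's implementation; it is a minimal illustration with hypothetical names, assuming each migrant is either left undispatched (`-1`) or assigned a location, and that per-location employment probabilities are given.

```python
from typing import List, Tuple

def evaluate(assignment: List[int],
             employ_prob: List[List[float]]) -> Tuple[float, int]:
    """Bi-objective evaluation of a candidate resettlement.

    assignment[i] = location index for migrant i, or -1 if not dispatched.
    employ_prob[i][j] = probability migrant i finds employment at location j.
    Returns (expected number employed, number dispatched); an MOEA would
    maximize the first objective while minimizing the second.
    """
    expected_employed = sum(
        employ_prob[i][j] for i, j in enumerate(assignment) if j >= 0
    )
    dispatched = sum(1 for j in assignment if j >= 0)
    return expected_employed, dispatched

def dominates(a: Tuple[float, int], b: Tuple[float, int]) -> bool:
    """Pareto dominance for (employed, dispatched) pairs:
    a dominates b if it is no worse in both objectives and
    strictly better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])
```

An MOEA such as NSGA-II would evolve a population of assignments and keep the non-dominated front under this `dominates` relation.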
no code implementations • 4 Aug 2023 • Xin Mu, Yu Wang, Yehong Zhang, JiaQi Zhang, Hui Wang, Yang Xiang, Yue Yu
Understanding the life cycle of the machine learning (ML) model is an intriguing area of research (e.g., understanding where the model comes from, how it is trained, and how it is used).
1 code implementation • 7 Dec 2022 • Fangqi Zhu, Jun Gao, Changlong Yu, Wei Wang, Chen Xu, Xin Mu, Min Yang, Ruifeng Xu
First, the pretrained language models adopted by current works ignore event-level knowledge, making it difficult to capture the correlations between events well.
no code implementations • 4 Sep 2022 • Xin Mu, Ming Pang, Feida Zhu
In this paper, we introduce Data Provenance via Differential Auditing (DPDA), a practical framework for auditing data provenance based on statistically significant differentials: after a carefully designed transformation, perturbed inputs drawn from the target model's training set produce far more drastic changes in the output than inputs from outside the training set.
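The core differential test described above can be sketched as follows. This is a minimal illustration, not the DPDA implementation: the perturbation, the distance measure, and the decision threshold are all hypothetical stand-ins for the paper's carefully designed transformation.

```python
import numpy as np

def differential_audit(model, x, perturbation, threshold):
    """Sketch of a differential data-provenance audit.

    Applies a designed perturbation to a sample and measures how much the
    model's output moves; a drastic change is taken as evidence that the
    sample was in the model's training set. `model` is any callable
    mapping an input array to an output array.
    """
    delta = np.abs(np.asarray(model(x)) - np.asarray(model(x + perturbation))).max()
    return bool(delta > threshold)
```

Calibrating `threshold` on known non-member samples is what makes the differential "statistically significant" in practice.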
no code implementations • ACL 2019 • Liqun Liu, Funan Mu, Pengyu Li, Xin Mu, Jing Tang, Xingsheng Ai, Ran Fu, LiFeng Wang, Xing Zhou
In this paper, we introduce NeuralClassifier, a toolkit for neural hierarchical multi-label text classification.
no code implementations • 30 May 2016 • Xin Mu, Kai Ming Ting, Zhi-Hua Zhou
To the best of our knowledge, this is the first time that completely random trees are used as a single common core to solve all three sub-problems: unsupervised learning, supervised learning, and model update in data streams.
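A completely random tree, as the name suggests, is grown without consulting any labels: each internal node picks a random feature and a random split point. The sketch below is a generic illustration of that construction (not the paper's algorithm); leaf depth can then serve, for example, as an isolation-style anomaly signal, since unusual points tend to be separated after few splits.

```python
import random

def build_tree(data, depth=0, max_depth=8):
    """Grow a completely random tree: at each node choose a random
    feature and a uniformly random split point, independent of labels."""
    if depth >= max_depth or len(data) <= 1:
        return {"size": len(data)}                     # leaf node
    f = random.randrange(len(data[0]))                 # random feature
    lo = min(row[f] for row in data)
    hi = max(row[f] for row in data)
    if lo == hi:                                       # feature is constant
        return {"size": len(data)}
    s = random.uniform(lo, hi)                         # random split point
    left = [row for row in data if row[f] < s]
    right = [row for row in data if row[f] >= s]
    return {"feature": f, "split": s,
            "left": build_tree(left, depth + 1, max_depth),
            "right": build_tree(right, depth + 1, max_depth)}

def path_length(tree, x, depth=0):
    """Depth at which x reaches a leaf; short paths suggest anomalies."""
    if "feature" not in tree:
        return depth
    child = tree["left"] if x[tree["feature"]] < tree["split"] else tree["right"]
    return path_length(child, x, depth + 1)
```

In a streaming setting, an ensemble of such trees can be updated incrementally by refreshing leaf statistics as new data arrives, which is what makes the structure reusable across the unsupervised, supervised, and model-update settings.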