no code implementations • 16 Apr 2024 • Zexin Li, Yiyang Lin, Zijie Fang, Shuyan Li, Xiu Li
In this paper, we propose the Attention-Based Varifocal Generative Adversarial Network (AV-GAN), which addresses multiple problems in pathology image translation tasks, such as uneven translation difficulty across regions, mutual interference among multi-resolution information, and nuclear deformation.
no code implementations • 12 Sep 2023 • Yufei Li, Zexin Li, Wei Yang, Cong Liu
Recent advancements in language models (LMs) have attracted substantial attention for their capability to generate human-like responses.
no code implementations • 29 Aug 2023 • Zexin Li, Tao Ren, Xiaoxi He, Cong Liu
This process ensures the scheduling framework's compatibility with MIMONet and maximizes efficiency.
no code implementations • 29 Aug 2023 • Zexin Li, Aritra Samanta, Yufei Li, Andrea Soltoggio, Hyoseung Kim, Cong Liu
These components collaboratively tackle the trade-offs in on-device DRL training, improving timing and algorithm performance while minimizing the risk of out-of-memory (OOM) errors.
no code implementations • 29 Jul 2023 • Shahab Nikkhoo, Zexin Li, Aritra Samanta, Yufei Li, Cong Liu
Our work introduces a new angle of manipulation in recent multi-agent RL social dilemmas that use a unique reward function for incentivization.
no code implementations • 22 Jul 2023 • Zexin Li, Xiaoxi He, Yufei Li, Shahab Nikkhoo, Wei Yang, Lothar Thiele, Cong Liu
In this paper, we propose MIMONet, a novel on-device multi-input multi-output (MIMO) DNN framework that achieves high accuracy and on-device efficiency in terms of critical performance metrics such as latency, energy, and memory usage.
1 code implementation • 20 May 2023 • Yiming Chen, Simin Chen, Zexin Li, Wei Yang, Cong Liu, Robby T. Tan, Haizhou Li
Despite much success in natural language processing (NLP), pre-trained language models typically lead to a high computational cost during inference.
1 code implementation • 5 May 2023 • Yufei Li, Zexin Li, Yingfan Gao, Cong Liu
Such language models are, however, vulnerable to various adversarial samples, as studied in traditional tasks such as text classification, which motivates our investigation of their robustness in DG systems.
1 code implementation • CVPR 2023 • Zexin Li, Bangjie Yin, Taiping Yao, Juefeng Guo, Shouhong Ding, Simin Chen, Cong Liu
A key challenge in developing practical face recognition (FR) attacks is the black-box nature of the target FR model, i.e., its gradient and parameter information is inaccessible to attackers.