1 code implementation • 16 May 2024 • Ruizhe Chen, Tianxiang Hu, Yang Feng, Zuozhu Liu
To bridge this gap, we introduce a pioneering method for pinpointing PII-sensitive neurons (privacy neurons) within LLMs.
no code implementations • 15 May 2024 • Ruizhe Chen, Yichen Li, Zikai Xiao, Zuozhu Liu
Existing debiasing methods inevitably make unreasonable or undesired predictions, as they are designed and evaluated to achieve parity across different social groups while leaving individual facts aside, resulting in modified existing knowledge.
no code implementations • 18 Apr 2024 • Xiaotang Gai, Chenyi Zhou, Jiaxiang Liu, Yang Feng, Jian Wu, Zuozhu Liu
Moreover, we design a novel framework which finetunes lightweight pretrained generative models by incorporating medical decision-making rationales into the training process.
1 code implementation • 16 Apr 2024 • Songtao Jiang, Tuo Zheng, Yan Zhang, Yeying Jin, Zuozhu Liu
Mixture of Experts Tuning (MoE-Tuning) has effectively enhanced the performance of general MLLMs with fewer parameters, yet its application in resource-limited medical settings has not been fully explored.
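As background on the mechanism the snippet refers to: MoE layers keep the parameter count active per token small by routing each token to only its top-k experts through a learned gate. A minimal NumPy sketch of sparse top-k gating (all names and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x, gate_w, experts, top_k=2):
    """Sparse mixture-of-experts: route each token to its top-k experts.

    x:       (tokens, d) input features
    gate_w:  (d, n_experts) gating weights
    experts: list of (d, d) expert weight matrices
    """
    probs = softmax(x @ gate_w)                      # (tokens, n_experts)
    topk = np.argsort(probs, axis=-1)[:, -top_k:]    # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = topk[t]
        w = probs[t, sel] / probs[t, sel].sum()      # renormalize selected gates
        for j, wj in zip(sel, w):
            out[t] += wj * (x[t] @ experts[j])       # weighted sum of expert outputs
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=(3, d))
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_layer(x, gate_w, experts)
print(y.shape)  # (3, 8)
```

Only the selected experts are evaluated per token, which is why MoE models can grow total parameters without a matching growth in per-token compute.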
2 code implementations • 6 Apr 2024 • Songtao Jiang, Yan Zhang, Chenyi Zhou, Yeying Jin, Yang Feng, Jian Wu, Zuozhu Liu
In this paper, we present a novel approach, Joint Visual and Text Prompting (VTPrompt), that employs fine-grained visual information to enhance the capability of MLLMs in VQA, especially for object-oriented perception.
1 code implementation • 26 Feb 2024 • Zhaopeng Feng, Yan Zhang, Hao Li, Wenqiang Liu, Jun Lang, Yang Feng, Jian Wu, Zuozhu Liu
Large Language Models (LLMs) have achieved impressive results in Machine Translation (MT).
no code implementations • 20 Feb 2024 • Jianhong Bai, Tianyu He, Yuchi Wang, Junliang Guo, Haoji Hu, Zuozhu Liu, Jiang Bian
Recent advances in text-guided video editing have showcased promising results in appearance editing (e.g., stylization).
1 code implementation • 17 Jan 2024 • Zikai Xiao, Zihan Chen, Liyinglan Liu, Yang Feng, Jian Wu, Wanlu Liu, Joey Tianyi Zhou, Howard Hao Yang, Zuozhu Liu
Federated Long-Tailed Learning (Fed-LT), a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution, has recently garnered considerable attention.
2 code implementations • 10 Jan 2024 • Zijie Meng, Yan Zhang, Zhaopeng Feng, Zuozhu Liu
Subsequently, we propose Filter Choices based Reasoning (FCR) to improve model performance on MCQs with low $\mathcal{CS}$.
1 code implementation • 22 Nov 2023 • Huimin Xiong, Kunle Li, Kaiyuan Tan, Yang Feng, Joey Tianyi Zhou, Jin Hao, Haochao Ying, Jian Wu, Zuozhu Liu
Optical Intraoral Scanners (IOS) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
1 code implementation • 15 Nov 2023 • Tingyu Xie, Qi Li, Yan Zhang, Zuozhu Liu, Hongwei Wang
Exploring the application of powerful large language models (LLMs) on the named entity recognition (NER) task has drawn much attention recently.
1 code implementation • 14 Nov 2023 • Yan Zhang, Zhaopeng Feng, Zhiyang Teng, Zuozhu Liu, Haizhou Li
Text embedding models have contributed significantly to advances in natural language processing by adeptly capturing the semantic properties of textual data.
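For context on how such embeddings capture semantics in practice: semantic relatedness between two texts is typically scored as the cosine similarity of their embedding vectors. A tiny sketch (the vectors below are random stand-ins for real model outputs):

```python
import numpy as np

def cosine_similarity(u, v):
    """Standard similarity score between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Random stand-ins for embeddings a real model would produce.
rng = np.random.default_rng(0)
a = rng.normal(size=384)
b = rng.normal(size=384)

print(round(cosine_similarity(a, a), 6))  # identical texts -> 1.0
print(-1.0 <= cosine_similarity(a, b) <= 1.0)  # True
```

A good embedding model places paraphrases near each other under this score and unrelated texts far apart, which is exactly what benchmarks such as STS evaluate.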
1 code implementation • 16 Oct 2023 • Tingyu Xie, Qi Li, Jian Zhang, Yan Zhang, Zuozhu Liu, Hongwei Wang
Large language models (LLMs) have exhibited powerful capabilities in various natural language processing tasks.
1 code implementation • NeurIPS 2023 • Zikai Xiao, Zihan Chen, Songshang Liu, Hualiang Wang, Yang Feng, Jin Hao, Joey Tianyi Zhou, Jian Wu, Howard Hao Yang, Zuozhu Liu
Data privacy and long-tailed distribution are the norms rather than the exception in many real-world tasks.
no code implementations • 5 Oct 2023 • Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuozhu Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, Haoji Hu
Quantization has emerged as a promising direction for model compression.
1 code implementation • NeurIPS 2023 • Jianhong Bai, Zuozhu Liu, Hualiang Wang, Ruizhe Chen, Lianrui Mu, Xiaomeng Li, Joey Tianyi Zhou, Yang Feng, Jian Wu, Haoji Hu
In this paper, we formally define a more realistic task as distribution-agnostic generalized category discovery (DA-GCD): generating fine-grained predictions for both close- and open-set classes in a long-tailed open-world setting.
no code implementations • 20 Sep 2023 • Yifu Zhang, Zuozhu Liu, Yang Feng, Renjing Xu
Accurate representation of tooth position is extremely important for treatment.
no code implementations • 5 Jul 2023 • Jiaxiang Liu, Tianxiang Hu, Yang Feng, Wanghui Ding, Zuozhu Liu
In computer-assisted orthodontics, three-dimensional tooth models are required for many medical treatments.
no code implementations • 5 Jul 2023 • Jiaxiang Liu, Tianxiang Hu, Yan Zhang, Xiaotang Gai, Yang Feng, Zuozhu Liu
Recent advances in pretrained vision-language models (VLMs) such as CLIP have shown great performance for zero-shot natural image recognition and exhibit benefits in medical applications.
2 code implementations • 8 Jun 2023 • Jianhong Bai, Zuozhu Liu, Hualiang Wang, Jin Hao, Yang Feng, Huanpeng Chu, Haoji Hu
Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect.
2 code implementations • 15 Feb 2023 • Shenghao Hao, Peiyuan Liu, Yibing Zhan, Kaixun Jin, Zuozhu Liu, Mingli Song, Jenq-Neng Hwang, Gaoang Wang
Although cross-view multi-object tracking has received increased attention in recent years, existing datasets still have several issues, including 1) missing real-world scenarios, 2) lacking diverse scenes, 3) containing a limited number of tracks, 4) comprising only static cameras, and 5) lacking standard benchmarks, which hinder the investigation and comparison of cross-view tracking methods.
1 code implementation • 30 Oct 2022 • Yiming Chen, Yan Zhang, Bin Wang, Zuozhu Liu, Haizhou Li
Most sentence embedding techniques heavily rely on expensive human-annotated sentence pairs as the supervised signals.
no code implementations • 29 Oct 2022 • Huimin Xiong, Kunle Li, Kaiyuan Tan, Yang Feng, Joey Tianyi Zhou, Jin Hao, Zuozhu Liu
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing high-resolution 3-Dimensional (3D) geometrical information of dental crowns and the gingiva.
1 code implementation • 22 Aug 2022 • Hualiang Wang, Siming Fu, Xiaoxuan He, Hangxiang Fang, Zuozhu Liu, Haoji Hu
To our knowledge, this is the first work to measure representation quality of classifiers and features from the perspective of distribution overlap coefficient.
no code implementations • 30 Jun 2022 • Zihan Chen, Songshang Liu, Hualiang Wang, Howard H. Yang, Tony Q. S. Quek, Zuozhu Liu
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
no code implementations • 11 Mar 2022 • Jin Hao, Jiaxiang Liu, Jin Li, Wei Pan, Ruizhe Chen, Huimin Xiong, Kaiwei Sun, Hangzheng Lin, Wanlu Liu, Wanghui Ding, Jianfei Yang, Haoji Hu, Yueling Zhang, Yang Feng, Zeyu Zhao, Huikai Wu, Youyi Zheng, Bing Fang, Zuozhu Liu, Zhihe Zhao
Here, we present a Deep Dental Multimodal Analysis (DDMA) framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model (the most accurate digital dental model), and a fusion model to generate 3D fused crown-root-bone structures with high fidelity and accurate occlusal and dentition information.
no code implementations • 17 Feb 2022 • Howard H. Yang, Zuozhu Liu, Yaru Fu, Tony Q. S. Quek, H. Vincent Poor
Federated learning (FL) is an emerging machine learning method for mobile edge systems, in which a server and a host of clients collaboratively train a statistical model using the clients' data and computation resources, without directly exposing their privacy-sensitive data.
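The collaborative loop described above can be sketched generically: the server broadcasts the current model, clients train locally on private data, and the server averages the returned models weighted by dataset size. This is a minimal FedAvg-style illustration, not the paper's specific algorithm; all names and the least-squares objective are assumptions for the sketch:

```python
import numpy as np

def local_sgd(w, data, lr=0.1, steps=5):
    """One client's local update on a least-squares objective (illustrative)."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data):
    """Server broadcasts w_global; clients train locally on private data;
    the server averages returned models weighted by client dataset size."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    local_models = [local_sgd(w_global.copy(), d) for d in client_data]
    weights = sizes / sizes.sum()
    return sum(wi * li for wi, li in zip(weights, local_models))

rng = np.random.default_rng(1)
d = 4
w_true = rng.normal(size=d)
clients = []
for n in (20, 50, 30):           # three clients with different data volumes
    X = rng.normal(size=(n, d))
    clients.append((X, X @ w_true))

w = np.zeros(d)
for _ in range(50):
    w = fedavg_round(w, clients)
print(np.linalg.norm(w - w_true))  # error shrinks toward 0 over rounds
```

Note that only model parameters cross the network; the raw `(X, y)` data never leaves each client, which is the privacy property the snippet highlights.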
1 code implementation • ICCV 2021 • Gaoang Wang, Renshu Gu, Zuozhu Liu, Weijie Hu, Mingli Song, Jenq-Neng Hwang
In this paper, we explore the significance of motion patterns for vehicle tracking without appearance information.
1 code implementation • ACL 2021 • Yan Zhang, Ruidan He, Zuozhu Liu, Lidong Bing, Haizhou Li
As high-quality labeled data is scarce, unsupervised sentence representation learning has attracted much attention.
1 code implementation • EMNLP 2020 • Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, Lidong Bing
With the help of these strategies, we are able to train a model with fewer parameters while maintaining the model capacity.
1 code implementation • EMNLP 2020 • Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing
However, SBERT is trained on a corpus with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.
Ranked #20 on Semantic Textual Similarity on STS16
no code implementations • 25 Nov 2019 • Zuozhu Liu, Thiparat Chotibut, Christopher Hillar, Shaowei Lin
Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks.
no code implementations • 17 Aug 2019 • Howard H. Yang, Zuozhu Liu, Tony Q. S. Quek, H. Vincent Poor
Due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration.
Information Theory • Signal Processing
no code implementations • 4 Dec 2017 • Mohammad Emtiyaz Khan, Zuozhu Liu, Voot Tangkaratt, Yarin Gal
Overall, this paper presents Vprop as a principled, computationally-efficient, and easy-to-implement method for Bayesian deep learning.
no code implementations • 21 Nov 2017 • Zuozhu Liu, Tony Q. S. Quek, Shaowei Lin
The quest for biologically plausible deep learning is driven, not just by the desire to explain experimentally-observed properties of biological neural networks, but also by the hope of discovering more efficient methods for training artificial networks.
no code implementations • 15 Nov 2017 • Mohammad Emtiyaz Khan, Wu Lin, Voot Tangkaratt, Zuozhu Liu, Didrik Nielsen
We present the Variational Adaptive Newton (VAN) method, a black-box optimization method especially suited to explorative-learning tasks such as active learning and reinforcement learning.