no code implementations • 24 May 2024 • Xudong Han, Nobuyuki Oishi, Yueying Tian, Elif Ucurum, Rupert Young, Chris Chatwin, Philip Birch
Many Multi-Object Tracking (MOT) approaches exploit motion information to associate all the detected objects across frames.
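The motion-based association step that such trackers build on can be illustrated with a minimal IoU-matching sketch. This is a generic illustration of frame-to-frame association, not the paper's actual method; the function names and the greedy strategy are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedily match existing tracks to new-frame detections by IoU;
    real trackers typically use motion prediction plus Hungarian matching."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, threshold
        for di, d in enumerate(detections):
            if di in used:
                continue
            score = iou(t, d)
            if score > best_iou:
                best, best_iou = di, score
        if best is not None:
            used.add(best)
            matches.append((ti, best))
    return matches
```

A track whose box overlaps a detection above the threshold is kept alive with the detection's identity; unmatched detections would spawn new tracks.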
no code implementations • 31 Mar 2024 • Lizhi Lin, Honglin Mu, Zenan Zhai, Minghan Wang, Yuxia Wang, Renxi Wang, Junjie Gao, Yixuan Zhang, Wanxiang Che, Timothy Baldwin, Xudong Han, Haonan Li
Generative models are rapidly gaining popularity and being integrated into everyday applications, raising safety concerns as various vulnerabilities are exposed.

1 code implementation • 19 Feb 2024 • Yuxia Wang, Zenan Zhai, Haonan Li, Xudong Han, Lizhi Lin, Zhenxuan Zhang, Jingru Zhao, Preslav Nakov, Timothy Baldwin
Previous studies have proposed comprehensive taxonomies of the risks posed by LLMs, as well as corresponding prompts that can be used to examine the safety mechanisms of LLMs.
1 code implementation • 18 Feb 2024 • Renxi Wang, Haonan Li, Xudong Han, Yixuan Zhang, Timothy Baldwin
However, LLMs are optimized for language generation rather than tool use during training and alignment, which limits their effectiveness as agents.
1 code implementation • 17 Dec 2023 • Renxi Wang, Haonan Li, Minghao Wu, Yuxia Wang, Xudong Han, Chiyu Zhang, Timothy Baldwin
Instruction tuning significantly enhances the performance of large language models (LLMs) across various tasks.
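At its core, instruction tuning fine-tunes an LLM on (instruction, response) pairs serialized into a single training string. A minimal sketch of that serialization step is below; the Alpaca-style template and the function name are illustrative assumptions, not the format used in the paper.

```python
def format_example(instruction, response, input_text=""):
    """Serialize one (instruction, optional input, response) triple into
    a single training string. The template here is a hypothetical,
    Alpaca-style choice; real pipelines vary."""
    if input_text:
        prompt = (f"### Instruction:\n{instruction}\n\n"
                  f"### Input:\n{input_text}\n\n"
                  f"### Response:\n")
    else:
        prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    return prompt + response

sample = format_example("Translate to French.", "Bonjour", input_text="Hello")
```

During fine-tuning, the loss is typically computed only on the response tokens, so the model learns to complete the prompt rather than echo it.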
no code implementations • 30 Aug 2023 • Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren, Preslav Nakov, Timothy Baldwin, Eric Xing
We release two open versions of the model -- the foundation Jais model, and an instruction-tuned Jais-chat variant -- with the aim of promoting research on Arabic LLMs.
1 code implementation • 25 Aug 2023 • Yuxia Wang, Haonan Li, Xudong Han, Preslav Nakov, Timothy Baldwin
With the rapid evolution of large language models (LLMs), new and hard-to-predict harmful capabilities are emerging.
1 code implementation • 16 Aug 2023 • Ning Guo, Xudong Han, Xiaobo Liu, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Jiansheng Dai, Fang Wan, Chaoyang Song
Robots play a critical role as the physical agents of human operators in ocean exploration.
no code implementations • 16 Aug 2023 • Xiaobo Liu, Xudong Han, Wei Hong, Fang Wan, Chaoyang Song
Proprioception is the "sixth sense" by which motor neurons detect limb postures.
1 code implementation • 11 Feb 2023 • Xudong Han, Timothy Baldwin, Trevor Cohn
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
1 code implementation • 17 Oct 2022 • Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann
Mitigating bias in training on biased datasets is an important open problem.
1 code implementation • NAACL 2022 • Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
Real-world datasets often encode stereotypes and societal biases.
2 code implementations • 4 May 2022 • Xudong Han, Aili Shen, Yitong Li, Lea Frermann, Timothy Baldwin, Trevor Cohn
This paper presents fairlib, an open-source framework for assessing and improving classification fairness.
1 code implementation • 12 Mar 2022 • Xudong Han, Timothy Baldwin, Trevor Cohn
Adversarial training is a common approach for bias mitigation in natural language processing.
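The adversarial bias-mitigation setup can be sketched in a few lines: an adversary tries to predict a protected attribute from the encoder's representation, and the encoder's weights are updated with the adversary's gradient reversed so that the representation becomes less predictive of that attribute. This is a generic gradient-reversal toy with manually derived gradients, assuming a linear encoder and logistic adversary; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy setup (sizes are illustrative): encoder h = W @ x,
# adversary predicts protected attribute z from h.
W = rng.normal(size=(4, 8))   # encoder weights
v = rng.normal(size=4)        # adversary weights
lr = 0.1

def adversarial_step(x, z):
    """One step: the adversary descends its BCE loss while the
    encoder ascends it (gradient reversal)."""
    global W, v
    h = W @ x
    p = sigmoid(v @ h)
    d_logit = p - z               # dL/d(logit) for BCE with sigmoid
    grad_v = d_logit * h          # adversary's gradient
    grad_h = d_logit * v          # gradient flowing back into the encoder
    grad_W = np.outer(grad_h, x)
    v -= lr * grad_v              # adversary minimizes its loss
    W -= lr * (-grad_W)           # reversed sign: encoder maximizes it
    return float(p)
```

The single sign flip on `grad_W` is the whole trick: everything else is ordinary backpropagation.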
no code implementations • 22 Sep 2021 • Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
Trained classification models can unintentionally lead to biased representations and predictions, which can reinforce societal preconceptions and stereotypes.
no code implementations • EMNLP 2021 • Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann
Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques.
no code implementations • 16 Sep 2021 • Xudong Han, Timothy Baldwin, Trevor Cohn
Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups.
no code implementations • 29 Jan 2021 • Linhan Yang, Xudong Han, Weijie Guo, Fang Wan, Jia Pan, Chaoyang Song
This paper presents a novel design of a soft tactile finger with omni-directional adaptation using multi-channel optical fibers for rigid-soft interactive grasping.
1 code implementation • EACL 2021 • Xudong Han, Timothy Baldwin, Trevor Cohn
Adversarial learning can produce fairer and less biased models of language than standard methods.
1 code implementation • IJCNLP 2019 • Xudong Han, Philip Schulz, Trevor Cohn
We present a model that operates in the HSV color space.
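As background on the HSV representation this entry refers to, RGB and HSV are related by a fixed conversion that Python's standard-library `colorsys` module implements (this is generic background, not the paper's code):

```python
import colorsys

# Convert a pure-red RGB pixel (channel values in [0, 1]) to HSV:
# hue 0.0 (red), full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# The conversion round-trips back to RGB.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
```

Working in HSV separates chromatic content (hue, saturation) from brightness (value), which is often more convenient for color modeling than raw RGB.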