no code implementations • ACL 2022 • Moxin Li, Fuli Feng, Hanwang Zhang, Xiangnan He, Fengbin Zhu, Tat-Seng Chua
Neural discrete reasoning (NDR) has shown remarkable progress in combining deep models with discrete reasoning.
no code implementations • 15 Mar 2024 • Moxin Li, Wenjie Wang, Fuli Feng, Fengbin Zhu, Qifan Wang, Tat-Seng Chua
Confidence estimation, which aims to evaluate output trustworthiness, is crucial for the application of large language models (LLMs), especially black-box ones.
no code implementations • 23 Feb 2024 • Yang Deng, Yong Zhao, Moxin Li, See-Kiong Ng, Tat-Seng Chua
Despite the remarkable abilities of Large Language Models (LLMs) to answer questions, they often display a considerable level of overconfidence even when the question does not have a definitive answer.
no code implementations • 24 Jan 2024 • Fengbin Zhu, Ziyang Liu, Fuli Feng, Chao Wang, Moxin Li, Tat-Seng Chua
In this work, we address question answering (QA) over a hybrid of tabular and textual data that is very common on the Web (e.g., SEC filings), where discrete reasoning capabilities are often required.
no code implementations • 23 May 2023 • Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua
In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires that a prompt optimized over a labeled source group simultaneously generalize to an unlabeled target group.
1 code implementation • 3 May 2023 • Fengbin Zhu, Chao Wang, Fuli Feng, Zifeng Ren, Moxin Li, Tat-Seng Chua
Discrete reasoning over table-text documents (e.g., financial reports) has gained increasing attention in the past two years.