no code implementations • 3 Feb 2024 • Yanbin Wei, Shuai Fu, Weisen Jiang, James T. Kwok, Yu Zhang
In this paper, we take the first step toward incorporating visual information into graph reasoning tasks and propose GITQA, a new benchmark in which each sample is a tuple (graph, image, textual description).
1 code implementation • 6 Jan 2024 • Shuhao Chen, Yulong Zhang, Weisen Jiang, Jiangang Lu, Yu Zhang
Recent advances achieved by deep learning models rely on the independent and identically distributed (i.i.d.) assumption, which hinders their application in real-world scenarios with domain shifts.
no code implementations • 3 Oct 2023 • Weisen Jiang, Baijiong Lin, Han Shi, Yu Zhang, Zhenguo Li, James T. Kwok
Recently, various merging methods have been proposed to build a multi-task model from task-specific finetuned models without retraining.
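One of the simplest such merging strategies is averaging the task-specific finetuned weights in parameter space. The sketch below is illustrative only (the paper studies merging methods more broadly); the function and parameter names are hypothetical.

```python
# Toy sketch of weight-space model merging via (weighted) parameter
# averaging -- a baseline merging strategy, not the paper's method.
import numpy as np

def merge_models(task_weights, coeffs=None):
    """Merge task-specific finetuned weights into one multi-task model.

    task_weights: list of dicts mapping parameter name -> np.ndarray
    coeffs: optional per-task merging coefficients (default: uniform)
    """
    n = len(task_weights)
    if coeffs is None:
        coeffs = [1.0 / n] * n
    merged = {}
    for name in task_weights[0]:
        merged[name] = sum(c * w[name] for c, w in zip(coeffs, task_weights))
    return merged

# Two "finetuned" models, each with a single parameter tensor.
model_a = {"linear.weight": np.array([1.0, 2.0])}
model_b = {"linear.weight": np.array([3.0, 4.0])}
merged = merge_models([model_a, model_b])
print(merged["linear.weight"])  # uniform average: [2. 3.]
```

No retraining is involved: the merged model is built purely from the existing checkpoints, which is what makes this family of methods attractive when the finetuning data is unavailable.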
no code implementations • 23 Sep 2023 • Yulong Zhang, Shuhao Chen, Weisen Jiang, Yu Zhang, Jiangang Lu, James T. Kwok
However, the performance of existing unsupervised domain adaptation (UDA) methods is constrained by large domain shifts and limited target-domain data.
1 code implementation • 21 Sep 2023 • Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%.
Ranked #54 on Arithmetic Reasoning on GSM8K (using extra training data)
1 code implementation • 23 Aug 2023 • Baijiong Lin, Weisen Jiang, Feiyang Ye, Yu Zhang, Pengguang Chen, Ying-Cong Chen, Shu Liu, James T. Kwok
Multi-task learning (MTL), a learning paradigm to learn multiple related tasks simultaneously, has achieved great success in various fields.
no code implementations • 15 Aug 2023 • Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu, Yu Zhang, Zhenguo Li, James T. Kwok
Instead of using forward or backward reasoning alone, we propose FOBAR to combine FOrward and BAckward Reasoning for verification.
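The actual FOBAR method prompts an LLM in both directions; the toy sketch below only illustrates the underlying idea on a problem where backward verification is exact: forward reasoning proposes candidate answers, and each candidate is checked by substituting it back into the question. All names here are hypothetical.

```python
# Toy illustration of combining forward and backward reasoning for
# answer verification (the spirit of FOBAR, not the method itself).
from collections import Counter

def forward_candidates(samples):
    """Forward reasoning: tally candidate answers from sampled solutions."""
    return Counter(samples)

def backward_verify(candidate, check):
    """Backward reasoning: substitute the candidate back into the
    question and test whether the original condition holds."""
    return check(candidate)

def select_answer(samples, check):
    """Keep only candidates that pass backward verification, then
    pick the one with the most forward votes."""
    votes = forward_candidates(samples)
    verified = {a: v for a, v in votes.items() if backward_verify(a, check)}
    return max(verified, key=verified.get) if verified else None

# Question: x + 3 = 7. Sampled forward answers are noisy.
samples = [4, 5, 5, 4, 6]
answer = select_answer(samples, check=lambda x: x + 3 == 7)
print(answer)  # 4: the only candidate surviving backward verification
```

With an LLM, the backward check is itself a reasoning step (e.g., masking a number in the question and asking the model to recover it given the candidate answer), so verification is probabilistic rather than exact as above.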
1 code implementation • 1 Jun 2023 • Weisen Jiang, Yu Zhang, James T. Kwok
By combining meta-learning of the prompt pool with RepVerb, we propose MetaPrompter for effective structured prompting.
no code implementations • 28 Apr 2023 • Weisen Jiang, Hansi Yang, Yu Zhang, James Kwok
Sharpness-aware minimization (SAM), which searches for flat minima by min-max optimization, has been shown to be useful in improving model generalization.
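The standard SAM update is a two-step procedure: first perturb the weights toward higher loss within a small ball (the inner maximization), then descend using the gradient at the perturbed point. A minimal sketch on a toy quadratic loss, with illustrative hyperparameters:

```python
# Minimal sketch of one Sharpness-Aware Minimization (SAM) step.
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    g = grad_fn(w)
    # Ascent step: approximate worst-case perturbation in a rho-ball.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent step: use the gradient evaluated at the perturbed weights.
    g_adv = grad_fn(w + eps)
    return w - lr * g_adv

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([2.0, -1.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w) < 1e-2)  # True: converges toward the minimum at 0
```

Because the descent direction comes from the neighborhood's worst point rather than the current point, SAM biases optimization toward regions where the loss stays low under small weight perturbations, i.e., flat minima.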
no code implementations • NeurIPS 2021 • Weisen Jiang, James Kwok, Yu Zhang
We study meta-learning, which has proven effective in accelerating the learning of new tasks from only a few samples.
no code implementations • 29 Sep 2021 • Weisen Jiang, James Kwok, Yu Zhang
We propose a MUlti-Subspace structured Meta-Learning (MUSML) algorithm to learn the subspace bases.