no code implementations • 5 Feb 2024 • Zihan Wang, Yunxuan Li, Yuexin Wu, Liangchen Luo, Le Hou, Hongkun Yu, Jingbo Shang
Process supervision, using a trained verifier to evaluate the intermediate steps generated by a reasoner, has demonstrated significant improvements in multi-step problem solving.
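The core mechanism can be sketched as reranking candidate solution traces by step-level verifier scores. This is a minimal illustration only; the per-step scores here are hand-written stand-ins for a trained verifier's outputs, not the paper's model:

```python
import math

def aggregate_step_scores(step_scores):
    """Aggregate per-step verifier probabilities into a solution-level
    score. The product penalizes any single weak intermediate step."""
    return math.prod(step_scores)

def rerank(candidates):
    """Pick the candidate whose intermediate steps the verifier trusts
    most. Each candidate is (answer, [per-step scores])."""
    return max(candidates, key=lambda c: aggregate_step_scores(c[1]))

candidates = [
    ("answer A", [0.9, 0.2, 0.95]),  # one weak step drags the product down
    ("answer B", [0.8, 0.85, 0.8]),  # uniformly plausible steps win
]
best = rerank(candidates)
```

The product aggregation is one common choice; taking the minimum step score is another, with the same effect of punishing a single bad step.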
no code implementations • 15 Nov 2023 • Lei Shu, Nevan Wichers, Liangchen Luo, Yun Zhu, Yinxiao Liu, Jindong Chen, Lei Meng
Evaluating natural language systems poses significant challenges, particularly in the realms of natural language understanding and high-level reasoning.
no code implementations • 15 Nov 2023 • Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng
Parameter-efficient tuning has been a prominent approach to adapting large language models to downstream tasks.
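To make the idea concrete, here is a sketch of one widely used parameter-efficient scheme, a LoRA-style low-rank adapter. This is an illustration of the general approach, not necessarily the method this paper proposes; the dimensions and initialization are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                        # model dim and adapter rank (r << d)
W = rng.normal(size=(d, d))        # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01 # small trainable down-projection
B = np.zeros((d, r))               # zero-init so tuning starts as a no-op

def adapted_forward(x):
    """Forward pass with the trainable low-rank update B @ A added to
    the frozen weight W; only A and B receive gradients."""
    return x @ (W + B @ A).T

x = rng.normal(size=(1, d))
base = x @ W.T  # with B = 0 the adapter exactly matches the frozen model
```

The trainable parameter count is 2·r·d instead of d², which is where the "parameter efficient" label comes from.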
no code implementations • 7 Oct 2023 • Liangchen Luo, Zi Lin, Yinxiao Liu, Lei Shu, Yun Zhu, Jingbo Shang, Lei Meng
In the era of large language models (LLMs), this study explores the ability of LLMs to deliver accurate critiques across various tasks.
no code implementations • 22 Aug 2023 • Yun Zhu, Yinxiao Liu, Felix Stahlberg, Shankar Kumar, Yu-Hui Chen, Liangchen Luo, Lei Shu, Renjie Liu, Jindong Chen, Lei Meng
Large Language Models (LLMs) have demonstrated impressive capabilities for text rewriting.
1 code implementation • 25 May 2023 • Lei Shu, Liangchen Luo, Jayakumar Hoskere, Yun Zhu, Yinxiao Liu, Simon Tong, Jindong Chen, Lei Meng
In this work, we develop new strategies for instruction tuning and reinforcement learning to better align LLMs for cross-sentence rewriting tasks using diverse wording and structures expressed through natural language, including 1) generating rewriting instruction data from Wiki edits and public corpora through instruction generation and chain-of-thought prompting, and 2) collecting comparison data for reward-model training through a new ranking function.
no code implementations • 18 Jun 2021 • Marco Fornoni, Chaochao Yan, Liangchen Luo, Kimberly Wilber, Alex Stark, Yin Cui, Boqing Gong, Andrew Howard
When interacting with objects through cameras or pictures, users often have a specific intent.
no code implementations • 10 Dec 2020 • Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard
Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.
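The classic distillation objective matches a student's temperature-softened output distribution to the teacher's. A minimal sketch of that core loss term, with illustrative logits (not this paper's specific method or data):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's,
    the central term of the classic distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student matching the teacher exactly incurs zero loss.
loss = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
```

In practice this term is usually mixed with a standard cross-entropy loss on the ground-truth labels, weighted by a hyperparameter.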
2 code implementations • 17 Nov 2019 • Guangxiang Zhao, Xu Sun, Jingjing Xu, Zhiyuan Zhang, Liangchen Luo
In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures.
Ranked #8 on Machine Translation on WMT2014 English-French
5 code implementations • ICLR 2019 • Liangchen Luo, Yuanhao Xiong, Yan Liu, Xu Sun
Recent work has put forward algorithms such as AMSGrad to tackle this issue, but they have failed to achieve considerable improvement over existing methods.
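The remedy this ICLR 2019 entry pursues can be sketched as clipping the per-parameter adaptive step between dynamic bounds that tighten toward a constant, SGD-like rate as training proceeds (the AdaBound idea). The bound schedules and constants below are illustrative, not the paper's exact hyperparameters:

```python
def clipped_step_size(base_lr, v, t, final_lr=0.1, gamma=1e-3, eps=1e-8):
    """Adam-style adaptive step size clipped into dynamic bounds.
    v is the second-moment estimate for one parameter, t the step count;
    both bounds converge to final_lr, so late training behaves like SGD."""
    lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))  # rises toward final_lr
    upper = final_lr * (1.0 + 1.0 / (gamma * t))        # falls toward final_lr
    adaptive = base_lr / (v ** 0.5 + eps)               # raw adaptive step
    return min(max(adaptive, lower), upper)
```

Early in training the bounds are loose and the optimizer behaves adaptively; as t grows, extreme per-parameter learning rates in either direction are squeezed out.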
no code implementations • 13 Nov 2018 • Qi Zeng, Liangchen Luo, Wenhao Huang, Yang Tang
Extracting valuable facts or informative summaries from multi-dimensional tables, i.e., insight mining, is an important task in data analysis and business intelligence.
no code implementations • 12 Nov 2018 • Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, Xu Sun
Most existing work on dialog systems considers only conversation content while neglecting the personality of the user the bot is interacting with, which gives rise to several unresolved issues.
1 code implementation • EMNLP 2018 • Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, Xu Sun
Unlike conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, demanding an understanding of utterance-level semantic dependency: the relation between the overall meanings of inputs and outputs.
Ranked #1 on Text Generation on DailyDialog