no code implementations • EACL (AdaptNLP) 2021 • Tianyu Chen, Shaohan Huang, Furu Wei, JianXin Li
In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples.
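A minimal sketch of this generic unsupervised domain adaptation setup (not the paper's specific method): the labeled source batch drives a supervised loss, while the unlabeled target batch contributes an unsupervised regularizer, here entropy minimization as an illustrative stand-in. Names such as `uda_step` and the weighting `lam` are hypothetical.

```python
# Illustrative UDA training step: supervised loss on labeled source data plus
# an unsupervised objective on unlabeled target data (entropy minimization is
# only a placeholder for whatever adaptation objective a given method uses).
import torch
import torch.nn.functional as F

def uda_step(model, optimizer, src_x, src_y, tgt_x, lam=0.1):
    optimizer.zero_grad()

    # Supervised cross-entropy on the labeled source domain.
    src_logits = model(src_x)
    sup_loss = F.cross_entropy(src_logits, src_y)

    # Unsupervised objective on the unlabeled target domain.
    tgt_probs = F.softmax(model(tgt_x), dim=-1)
    ent_loss = -(tgt_probs * tgt_probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

    loss = sup_loss + lam * ent_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```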
no code implementations • 8 Apr 2024 • Tianyu Chen, Yiming Zhang, Guoxin Yu, Dapeng Zhang, Li Zeng, Qing He, Xiang Ao
In this paper, we extend financial sentiment analysis (FSA) to the event level, since events usually serve as the subject of sentiment in financial text.
no code implementations • 25 Feb 2024 • Tianyu Chen, Haoyi Zhou, Ying Li, Hao Wang, Chonghan Gao, Shanghang Zhang, JianXin Li
Foundation models have revolutionized knowledge acquisition across domains, and our study introduces OmniArch, a paradigm-shifting approach designed for building foundation models in multi-physics scientific computing.
no code implementations • 19 Jan 2024 • Ziqi Yuan, Haoyi Zhou, Tianyu Chen, JianXin Li
The analysis of persistent homology demonstrates its effectiveness in capturing the topological structure formed by normal edge features.
1 code implementation • NeurIPS 2023 • Tianyu Chen, Kevin Bello, Bryon Aragam, Pradeep Ravikumar
Structural causal models (SCMs) are widely used in various disciplines to represent causal relationships among variables in complex systems.
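As a generic illustration of the SCM formalism (not the identifiability setting studied in the paper), the sketch below builds a two-variable SCM in which each variable is a function of its parents plus independent exogenous noise, and contrasts observation with an intervention.

```python
# Minimal structural causal model: X := N_X,  Y := 2*X + N_Y  (X causes Y).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Independent exogenous noise terms.
n_x = rng.normal(size=n)
n_y = rng.normal(size=n)

# Structural equations define the observational distribution.
x = n_x
y = 2.0 * x + n_y

# An intervention do(X := 1) replaces X's structural equation with a constant,
# changing the distribution of Y while leaving the noise terms untouched.
x_do = np.ones(n)
y_do = 2.0 * x_do + n_y

print(y.mean(), y_do.mean())  # observational mean ~0, interventional mean ~2
```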
no code implementations • 31 May 2023 • Tianyu Chen, Yuan Xie, Shuai Zhang, Shaohan Huang, Haoyi Zhou, JianXin Li
Music representation learning is notoriously difficult because of the complex human-related concepts contained in its sequences of numerical signals.
no code implementations • 16 May 2023 • Tianyu Chen
To solve this problem, the Character Image Feature Encoder model proposed in this paper lets the user simply provide a picture of a character, so that the character appearing in the generated image matches the expected appearance.
no code implementations • 24 Apr 2023 • Yuan Xie, Tianyu Chen, Ji Xu
Underwater acoustic recognition of ship-radiated signals has high practical value because it can recognize non-line-of-sight targets.
no code implementations • 19 Jul 2022 • Yuan Xie, Shaohan Huang, Tianyu Chen, Furu Wei
Sparse Mixture-of-Experts (MoE) models have received great interest due to their promising scaling capability with affordable computational overhead.
no code implementations • 1 Jun 2022 • Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity.
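For illustration only, a minimal top-1 sparse MoE layer: a router assigns each token to one expert, so parameter count grows with the number of experts while per-token compute stays roughly constant. The class name, hyperparameters, and the dense dispatch loop below are assumptions for the sketch, not the configuration used in the paper.

```python
# Minimal top-1 sparse Mixture-of-Experts layer (illustrative, not optimized).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoE(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, num_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)
        weight, idx = gate.max(dim=-1)         # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                    # tokens routed to expert e
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out

moe = Top1MoE()
tokens = torch.randn(32, 256)
print(moe(tokens).shape)                       # torch.Size([32, 256])
```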
no code implementations • Findings (ACL) 2022 • Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
As more and more pre-trained language models are deployed on the cloud, privacy concerns grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts).
no code implementations • 29 Sep 2021 • Tianyu Chen, Haoyi Zhou, He Mingrui, JianXin Li
Pre-trained language models (e.g., BERT, GPT-3) have revolutionized NLP research, and fine-tuning has become an indispensable step in downstream adaptation.