2 code implementations • 25 Apr 2024 • Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Hao Wang
In this survey, we provide a comprehensive overview of the current research progress on large language models (LLMs) within the context of continual learning (CL).
no code implementations • 28 Jan 2024 • Yun Zhu, Yaoke Wang, Haizhou Shi, Siliang Tang
In this paper, we propose ENGINE, a parameter- and memory-efficient fine-tuning method for textual graphs with an LLM encoder.
no code implementations • 11 Oct 2023 • Yun Zhu, Yaoke Wang, Haizhou Shi, Zhenshuo Zhang, Dian Jiao, Siliang Tang
These pre-trained models can be applied to various downstream Web applications, saving training time and improving performance on downstream (target) tasks.
1 code implementation • 24 Jul 2023 • Yun Zhu, Haizhou Shi, Zhenshuo Zhang, Siliang Tang
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
no code implementations • 9 Mar 2023 • Zhenshuo Zhang, Yun Zhu, Haizhou Shi, Siliang Tang
Despite significant recent progress, large-scale graph representation learning remains expensive to train and deploy, for two main reasons: (i) the repeated computation of multi-hop message passing and non-linearities in graph neural networks (GNNs); and (ii) the computational cost of complex pairwise contrastive learning losses.
no code implementations • 2 Dec 2021 • Wenqiao Zhang, Xin Eric Wang, Siliang Tang, Haizhou Shi, Haocheng Shi, Jun Xiao, Yueting Zhuang, William Yang Wang
Such a setting can help explain the decisions of captioning models and prevent the model from hallucinating object words in its descriptions.
no code implementations • 29 Sep 2021 • Haizhou Shi, Youcai Zhang, Zijin Shen, Siliang Tang, Yaqian Li, Yandong Guo, Yueting Zhuang
This paper investigates the feasibility of federated representation learning under the constraints of communication cost and privacy protection.
no code implementations • 30 Jul 2021 • Haizhou Shi, Youcai Zhang, Siliang Tang, Wenjie Zhu, Yaqian Li, Yandong Guo, Yueting Zhuang
It is widely agreed that small models perform poorly under the paradigm of self-supervised contrastive learning.
no code implementations • 26 Jul 2021 • Zixuan Ni, Haizhou Shi, Siliang Tang, Longhui Wei, Qi Tian, Yueting Zhuang
After investigating existing strategies, we observe that little work has studied how to prevent inter-phase confusion.
1 code implementation • ACL 2021 • Tao Chen, Haizhou Shi, Siliang Tang, Zhigang Chen, Fei Wu, Yueting Zhuang
Efforts to reduce noise in training data generated by distant supervision (DS) began as soon as DS was first introduced to the relation extraction (RE) task.
no code implementations • 1 Jan 2021 • Haizhou Shi, Dongliang Luo, Siliang Tang, Jian Wang, Yueting Zhuang
The recently proposed self-supervised framework Bootstrap Your Own Latent (BYOL) seriously challenges the necessity of negative samples in contrastive learning frameworks.
no code implementations • 22 Nov 2020 • Haizhou Shi, Dongliang Luo, Siliang Tang, Jian Wang, Yueting Zhuang
The recently proposed self-supervised framework Bootstrap Your Own Latent (BYOL) seriously challenges the necessity of negative samples in contrastive learning frameworks.
no code implementations • CVPR 2020 • Juncheng Li, Xin Wang, Siliang Tang, Haizhou Shi, Fei Wu, Yueting Zhuang, William Yang Wang
Visual navigation is the task of training an embodied agent to navigate intelligently to a target object (e.g., a television) using only visual observations.
1 code implementation • 7 Jul 2019 • Jiacheng Li, Haizhou Shi, Siliang Tang, Fei Wu, Yueting Zhuang
To solve this problem, we propose a method to mine cross-modal rules that help the model infer these informative concepts given certain visual inputs.
Ranked #11 on Visual Storytelling on VIST