no code implementations • ICML 2020 • Long-Kai Huang, Jialin Pan
In this paper, we study the leading eigenvector problem in a statistically distributed setting and propose a communication-efficient algorithm based on Riemannian optimization, which trades local computation for global communication.
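The single-machine core of this problem can be sketched as Riemannian gradient ascent of the Rayleigh quotient on the unit sphere. The sketch below is illustrative only (function name, step size, and iteration count are assumptions, and the paper's distributed, communication-efficient scheme is more involved); it recovers the leading eigenvector of a symmetric matrix:

```python
import numpy as np

def leading_eigenvector(A, n_iter=500, lr=0.1, seed=0):
    """Riemannian gradient ascent of the Rayleigh quotient x^T A x on the
    unit sphere; converges to the leading eigenvector of a symmetric A.
    Hypothetical single-machine sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        g = A @ x                    # Euclidean gradient (up to a factor of 2)
        g_riem = g - (x @ g) * x     # project onto the sphere's tangent space
        x = x + lr * g_riem          # ascent step
        x /= np.linalg.norm(x)       # retraction back onto the sphere
    return x

# usage: the Rayleigh quotient of the result approaches the top eigenvalue
A = np.array([[4.0, 1.0], [1.0, 3.0]])
v = leading_eigenvector(A)
```

The tangent-space projection followed by renormalization is the standard sphere retraction; a distributed variant would average such local steps across machines between communication rounds.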
no code implementations • 23 Apr 2024 • Yikun Zhang, Geyan Ye, Chaohao Yuan, Bo Han, Long-Kai Huang, Jianhua Yao, Wei Liu, Yu Rong
We design a Hierarchical Adaptive Alignment model to concurrently learn the fine-grained fragment correspondence between the two modalities and align these fragment representations at three levels.
no code implementations • 18 Apr 2024 • Chaohao Yuan, Songyou Li, Geyan Ye, Yikun Zhang, Long-Kai Huang, Wenbing Huang, Wei Liu, Jianhua Yao, Yu Rong
The core challenge of de novo protein design lies in creating proteins with specific functions or properties, guided by certain conditions.
1 code implementation • 1 Mar 2024 • Huan Ma, Yan Zhu, Changqing Zhang, Peilin Zhao, Baoyuan Wu, Long-Kai Huang, Qinghua Hu, Bingzhe Wu
Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired datasets.
no code implementations • ICCV 2023 • Yunqiao Yang, Long-Kai Huang, Ying Wei
A multitude of prevalent pre-trained models marks a major milestone in the development of artificial intelligence, and fine-tuning has become the common practice that enables pre-trained models to perform well on a wide array of target datasets.
1 code implementation • 18 Jun 2023 • Shuang Zhou, Xiao Huang, Ninghao Liu, Huachi Zhou, Fu-Lai Chung, Long-Kai Huang
In this paper, building on this phenomenon, we propose a general and novel research problem, generalized graph anomaly detection, which aims to effectively identify anomalies on both the training-domain graph and unseen testing graphs to eliminate potential dangers.
no code implementations • 21 Aug 2022 • Ziqiao Zhang, Yatao Bian, Ailin Xie, Pengju Han, Long-Kai Huang, Shuigeng Zhou
Self-supervised pre-training is gaining increasing popularity in AI-aided drug discovery, leading to a growing number of pre-trained models that promise to extract better feature representations for molecules.
no code implementations • 9 Jun 2022 • Yichen Wu, Long-Kai Huang, Ying Wei
The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers meta-testing tasks.
1 code implementation • 20 Mar 2022 • Jiying Zhang, Xi Xiao, Long-Kai Huang, Yu Rong, Yatao Bian
In this paper, we present GTOT-Tuning (Graph Topology induced Optimal Transport fine-Tuning), a novel optimal transport-based fine-tuning framework for GNN-style backbones.
Ranked #1 on Graph Classification on BBBP
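Entropic-regularized optimal transport, the generic building block behind such OT-based fine-tuning, can be computed with Sinkhorn iterations. The sketch below is a plain Sinkhorn solver between two feature sets with uniform marginals (an assumed simplification), not GTOT-Tuning's topology-masked variant:

```python
import numpy as np

def sinkhorn_ot(X, Y, eps=0.1, n_iter=200):
    """Entropic-regularized OT between point clouds X (n, d) and Y (m, d)
    with uniform marginals. A generic sketch; very small eps can underflow."""
    n, m = X.shape[0], Y.shape[0]
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
    K = np.exp(-C / eps)                                # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m               # uniform marginals
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)            # scale rows toward marginal a
        v = b / (K.T @ u)          # scale columns toward marginal b
    P = u[:, None] * K * v[None, :]                     # transport plan
    return (P * C).sum(), P
```

Matching node embeddings of a fine-tuned model against the pre-trained ones with such a plan is the rough intuition; the paper additionally constrains the plan with graph topology.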
1 code implementation • 24 Jan 2022 • Yuanfeng Ji, Lu Zhang, Jiaxiang Wu, Bingzhe Wu, Long-Kai Huang, Tingyang Xu, Yu Rong, Lanqing Li, Jie Ren, Ding Xue, Houtim Lai, Shaoyong Xu, Jing Feng, Wei Liu, Ping Luo, Shuigeng Zhou, Junzhou Huang, Peilin Zhao, Yatao Bian
AI-aided drug discovery (AIDD) is gaining increasing popularity due to its promise of making the search for new pharmaceuticals quicker, cheaper and more efficient.
no code implementations • NeurIPS 2021 • Huaxiu Yao, Ying Wei, Long-Kai Huang, Ding Xue, Junzhou Huang, Zhenhui (Jessie) Li
More recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles.
no code implementations • 17 Jun 2021 • Long-Kai Huang, Ying Wei, Yu Rong, Qiang Yang, Junzhou Huang
Transferability estimation has been an essential tool for selecting which pre-trained model, and which of its layers, to transfer, so as to maximize performance on a target task and prevent negative transfer.
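One simple family of transferability heuristics scores pre-trained features by how compressible they are overall versus within each class. The sketch below uses a coding-rate gap; the function names, the eps value, and the exact form are illustrative assumptions, not necessarily the paper's estimator:

```python
import numpy as np

def coding_rate(Z, eps=1e-2):
    """Rate-distortion style coding rate of (centered) features Z (n, d)."""
    Z = Z - Z.mean(0)
    n, d = Z.shape
    cov = Z.T @ Z / n
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / eps**2) * cov)[1]

def transferability_score(Z, y, eps=1e-2):
    """Gap between the coding rate of all features and the label-weighted
    per-class rates: larger values suggest the classes are easier to
    separate with the pre-trained features. Hedged sketch only."""
    score = coding_rate(Z, eps)
    for c in np.unique(y):
        Zc = Z[y == c]
        score -= (len(Zc) / len(Z)) * coding_rate(Zc, eps)
    return score
```

Because it needs only a forward pass to extract features, a score like this can rank candidate models or layers without fine-tuning each one.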
1 code implementation • 26 Jul 2020 • Huaxiu Yao, Long-Kai Huang, Linjun Zhang, Ying Wei, Li Tian, James Zou, Junzhou Huang, Zhenhui Li
Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.
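MetaMix builds on mixup-style interpolation between labeled examples. A minimal generic mixup step is sketched below; the exact way MetaMix mixes support and query sets within a meta-learning episode differs:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.5, rng=None):
    """Mixup: a convex combination of two labeled examples with mixing
    weight lam ~ Beta(alpha, alpha). Generic sketch, not MetaMix itself."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2   # interpolate inputs
    y_mix = lam * y1 + (1 - lam) * y2   # interpolate labels the same way
    return x_mix, y_mix
```

Interpolating inputs and labels with the same weight keeps the mixed pair label-consistent, which is what lets such augmentation regularize a meta-learner.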
no code implementations • ICCV 2019 • Long-Kai Huang, Jianda Chen, Sinno Jialin Pan
Recent years have witnessed the success of learning to hash in fast large-scale image retrieval.
no code implementations • 6 Apr 2017 • Long-Kai Huang, Qiang Yang, Wei-Shi Zheng
Specifically, a new loss function is proposed to measure the similarity loss between a pair of data samples in Hamming space.
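For binary codes in {-1, +1}^K, Hamming distance reduces to an inner product, d_H = (K - b1·b2) / 2, which makes pairwise similarity losses easy to write down. The sketch below is a generic contrastive pairwise loss of this kind, not the paper's exact formulation:

```python
import numpy as np

def hamming_distance(b1, b2):
    """Hamming distance for codes in {-1, +1}^K via the identity
    d_H = (K - <b1, b2>) / 2."""
    return (len(b1) - int(b1 @ b2)) // 2

def pairwise_hash_loss(b1, b2, similar, margin=2):
    """Generic contrastive pairwise loss in Hamming space (illustrative):
    similar pairs are penalized by their Hamming distance; dissimilar
    pairs are penalized only when closer than the margin."""
    d = hamming_distance(b1, b2)
    return d if similar else max(0, margin - d)
```

In training, the hard sign function is typically relaxed (e.g. to tanh) so the loss stays differentiable; the discrete form above shows the objective being approximated.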