no code implementations • 15 Mar 2024 • Hengxing Cai, Xiaochen Cai, Shuwen Yang, Jiankun Wang, Lin Yao, Zhifeng Gao, Junhan Chang, Sihang Li, Mingjun Xu, Changxin Wang, Hongshuai Wang, Yongge Li, Mujie Lin, Yaqi Li, Yuqi Yin, Linfeng Zhang, Guolin Ke
Scientific literature often includes a wide range of multimodal elements, such as molecular structures, tables, and charts, which are hard for text-focused LLMs to understand and analyze.
no code implementations • 4 Mar 2024 • Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Hongshuai Wang, Yongge Li, Mujie Lin, Shuwen Yang, Jiankun Wang, Yuqi Yin, Yaqi Li, Linfeng Zhang, Guolin Ke
Recent breakthroughs in Large Language Models (LLMs) have revolutionized natural language understanding and generation, igniting a surge of interest in leveraging these technologies in the field of scientific literature analysis.
no code implementations • 8 Jan 2024 • Qingsi Lai, Lin Yao, Zhifeng Gao, Siyuan Liu, Hongshuai Wang, Shuqi Lu, Di He, LiWei Wang, Cheng Wang, Guolin Ke
XtalNet represents a significant advance in CSP, enabling the prediction of complex structures from PXRD data without the need for external databases or manual intervention.
no code implementations • 24 Apr 2023 • Zhifeng Gao, Xiaohong Ji, Guojiang Zhao, Hongshuai Wang, Hang Zheng, Guolin Ke, Linfeng Zhang
Recently, deep learning based quantitative structure-activity relationship (QSAR) models have shown performance surpassing that of traditional methods on property prediction tasks in drug discovery.
1 code implementation • 16 Mar 2023 • Shuqi Lu, Zhifeng Gao, Di He, Linfeng Zhang, Guolin Ke
Uni-Mol+ first generates a raw 3D molecule conformation from inexpensive methods such as RDKit.
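The inexpensive initial-conformation step can be sketched with standard RDKit calls; the pipeline below (ETKDG embedding plus a quick force-field relaxation) is a common way to obtain such a raw 3D geometry, not necessarily the exact settings used by Uni-Mol+.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def raw_conformer(smiles: str):
    """Generate a cheap initial 3D conformation with ETKDG + MMFF."""
    mol = Chem.MolFromSmiles(smiles)
    mol = Chem.AddHs(mol)                           # hydrogens matter for 3D geometry
    AllChem.EmbedMolecule(mol, AllChem.ETKDGv3())   # distance-geometry embedding
    AllChem.MMFFOptimizeMolecule(mol)               # quick force-field relaxation
    return mol.GetConformer().GetPositions()        # (num_atoms, 3) coordinates

coords = raw_conformer("CCO")  # ethanol, 9 atoms with hydrogens
print(coords.shape)
```

A model such as Uni-Mol+ would then iteratively refine these raw coordinates toward a higher-accuracy geometry.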
no code implementations • 14 Feb 2023 • Yuejiang Yu, Shuqi Lu, Zhifeng Gao, Hang Zheng, Guolin Ke
Moreover, they claim to perform better than traditional molecular docking, but the comparison is not fair, since traditional methods are not designed to dock against a whole protein without a given pocket.
no code implementations • 14 Feb 2023 • Gengmo Zhou, Zhifeng Gao, Zhewei Wei, Hang Zheng, Guolin Ke
However, to our surprise, we design a simple, cheap, parameter-free algorithm based on traditional methods and find that it is comparable to, or even outperforms, deep learning based MCG methods on the widely used GEOM-QM9 and GEOM-Drugs benchmarks.
no code implementations • 13 Feb 2023 • Lin Yao, Ruihan Xu, Zhifeng Gao, Guolin Ke, Yuhang Wang
The central problem in cryo-electron microscopy (cryo-EM) is to recover the 3D structure from noisy 2D projection images, which requires estimating the missing projection angles (poses).
1 code implementation • ChemRxiv 2022 • Gengmo Zhou, Zhifeng Gao, Qiankun Ding, Hang Zheng, Hongteng Xu, Zhewei Wei, Linfeng Zhang, Guolin Ke
Uni-Mol is composed of two models with the same SE(3)-equivariant transformer architecture: a molecular pretraining model trained on 209M molecular conformations, and a pocket pretraining model trained on 3M candidate protein pockets.
Ranked #1 on Molecular Property Prediction on MUV
no code implementations • 9 Apr 2021 • Disheng Tang, Wei Cao, Jiang Bian, Tie-Yan Liu, Zhifeng Gao, Shun Zheng, Jue Liu
We used a stochastic metapopulation model with a hierarchical structure and fitted the model to the positive cases in the US from the start of the outbreak to the end of 2020.
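The core mechanic of a stochastic metapopulation model can be illustrated in a few lines: subpopulations evolve under local SIR-style dynamics with binomially sampled transitions, coupled by a mixing matrix. The parameters and two-region mixing matrix below are illustrative placeholders, not the hierarchical model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(S, I, R, beta, gamma, M):
    """One day of stochastic SIR transitions; M couples the regions."""
    N = S + I + R
    force = beta * (M @ (I / N))                   # cross-region infection pressure
    new_inf = rng.binomial(S, 1 - np.exp(-force))  # sampled new infections
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))  # sampled recoveries
    return S - new_inf, I + new_inf - new_rec, R + new_rec

S = np.array([9900, 9990]); I = np.array([100, 10]); R = np.zeros(2, dtype=int)
M = np.array([[0.95, 0.05],
              [0.05, 0.95]])                       # mostly within-region mixing
for _ in range(30):
    S, I, R = step(S, I, R, beta=0.3, gamma=0.1, M=M)
print(S + I + R)  # each region's population is conserved
```

Fitting such a model then amounts to choosing `beta`, `gamma`, and the mixing structure so that simulated case counts match the observed positives.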
3 code implementations • 17 Mar 2021 • Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
This paper models these structures by presenting PredRNN, a new recurrent network, in which a pair of memory cells are explicitly decoupled, operate in nearly independent transition manners, and finally form unified representations of the complex environment.
Ranked #1 on Video Prediction on KTH (Cond metric)
1 code implementation • 1 Jan 2021 • Yihan He, Wei Cao, Shun Zheng, Zhifeng Gao, Jiang Bian
In this work, we present a new method named Fourier Temporal State Embedding (FTSE) to address the temporal information in dynamic graph representation learning.
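The general idea of a Fourier-based temporal embedding can be sketched as follows: encode a node's event-time history by the magnitudes of its low-frequency Fourier components, yielding a fixed-size vector regardless of how many interactions occurred. This is a generic illustration of the idea, not the paper's exact FTSE construction.

```python
import numpy as np

def fourier_embed(event_times, horizon, n_bins=64, n_freq=8):
    """Bin events over [0, horizon), FFT the counts, keep low frequencies."""
    hist, _ = np.histogram(event_times, bins=n_bins, range=(0.0, horizon))
    spectrum = np.fft.rfft(hist)          # frequency-domain view of activity
    return np.abs(spectrum[:n_freq])      # fixed-size, order-invariant embedding

emb = fourier_embed([0.5, 1.0, 1.5, 7.2, 7.3], horizon=10.0)
print(emb.shape)  # always (n_freq,), however many events the node has
```

A fixed-size spectral summary like this can then be concatenated with structural node features before message passing.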
no code implementations • 1 Jan 2021 • Yihan He, Wei Cao, Shun Zheng, Zhifeng Gao, Jiang Bian
In recent years, research communities have been developing stochastic sampling methods to handle large graphs when it is infeasible to put the whole graph into a single batch.
1 code implementation • 25 Nov 2020 • Chao Du, Zhifeng Gao, Shuo Yuan, Lining Gao, Ziyan Li, Yifan Zeng, Xiaoqiang Zhu, Jian Xu, Kun Gai, Kuang-Chih Lee
In this paper, we propose a novel Deep Uncertainty-Aware Learning (DUAL) method to learn CTR models based on Gaussian processes, which can provide predictive uncertainty estimations while maintaining the flexibility of deep neural networks.
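The key property being exploited here is that a Gaussian process returns a predictive standard deviation alongside the mean. The sketch below shows that mechanic with scikit-learn's GP regressor on toy CTR data; it is a generic GP illustration, not the DUAL architecture, which combines this with deep networks.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.array([[0.1], [0.4], [0.7]])       # toy features of previously shown ads
y = np.array([0.02, 0.05, 0.11])          # observed click-through rates
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

# Predictive mean AND standard deviation: the std grows far from the
# training data, flagging ads whose CTR estimate is unreliable.
mean, std = gp.predict(np.array([[0.5], [0.95]]), return_std=True)
print(std[1] > std[0])  # the out-of-distribution ad is more uncertain
```

In an ads system, such uncertainty estimates can drive exploration, e.g. deliberately showing high-uncertainty ads to gather feedback.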
1 code implementation • 8 Sep 2019 • Zhining Liu, Wei Cao, Zhifeng Gao, Jiang Bian, Hechang Chen, Yi Chang, Tie-Yan Liu
To tackle this problem, we conduct a deep investigation into the nature of class imbalance, which reveals that not only the disproportion between classes, but also other difficulties embedded in the nature of the data, especially noise and class overlap, prevent us from learning effective classifiers.
no code implementations • ICLR 2019 • Bowen Wu, Nan Jiang, Zhifeng Gao, Mengyuan Li, Zongsheng Wang, Suke Li, Qihang Feng, Wenge Rong, Baoxun Wang
Recent advances in sequence-to-sequence learning reveal a purely data-driven approach to the response generation task.
11 code implementations • ICML 2018 • Yunbo Wang, Zhifeng Gao, Mingsheng Long, Jian-Min Wang, Philip S. Yu
We present PredRNN++, an improved recurrent network for video predictive learning.
Ranked #1 on Video Prediction on KTH (Cond metric)
no code implementations • NeurIPS 2017 • Yunbo Wang, Mingsheng Long, Jian-Min Wang, Zhifeng Gao, Philip S. Yu
The core of this network is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously.
Ranked #6 on Video Prediction on Human3.6M
no code implementations • 10 Jan 2017 • Kele Xu, Xi Liu, Hengxing Cai, Zhifeng Gao
Over the last few decades, the number of new full-reference image quality assessment algorithms has grown drastically.