1 code implementation • 6 Mar 2024 • Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu
Pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings.
1 code implementation • 4 Feb 2024 • Huanran Chen, Yinpeng Dong, Shitong Shao, Zhongkai Hao, Xiao Yang, Hang Su, Jun Zhu
Diffusion models have recently been employed as generative classifiers for robust classification.
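A minimal sketch of the generative-classifier idea (a standard diffusion-classifier recipe; the paper's exact estimator and robustness machinery are not reproduced, and the `eps_model` signature and `alphas_bar` schedule below are assumptions):

```python
import torch

def diffusion_classify(eps_model, x, num_classes, timesteps, alphas_bar, n=32):
    """Score each class by its class-conditional denoising error; the class
    whose conditional diffusion model reconstructs the noise best wins."""
    losses = []
    for y in range(num_classes):
        t = torch.randint(0, timesteps, (n,))
        noise = torch.randn(n, *x.shape)
        a = alphas_bar[t].view(-1, *([1] * x.dim()))
        x_t = a.sqrt() * x + (1 - a).sqrt() * noise      # forward diffusion
        pred = eps_model(x_t, t, torch.full((n,), y))    # hypothetical API
        losses.append(((pred - noise) ** 2).mean())
    return int(torch.stack(losses).argmin())
```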
no code implementations • 1 Feb 2024 • Songming Liu, Chang Su, Jiachen Yao, Zhongkai Hao, Hang Su, Youjia Wu, Jun Zhu
Physics-informed neural networks (PINNs) have shown promise in solving various partial differential equations (PDEs).
1 code implementation • 17 Jan 2024 • Hong Wang, Zhongkai Hao, Jie Wang, Zijie Geng, Zhen Wang, Bin Li, Feng Wu
To the best of our knowledge, SKR is the first attempt to address the time-consuming nature of data generation for learning neural operators.
1 code implementation • 19 Oct 2023 • Zipeng Xiao, Zhongkai Hao, Bokai Lin, Zhijie Deng, Hang Su
Neural operators, as an efficient surrogate model for learning the solutions of PDEs, have received extensive attention in the field of scientific machine learning.
1 code implementation • NeurIPS 2023 • Zaixi Zhang, Zepu Lu, Zhongkai Hao, Marinka Zitnik, Qi Liu
In the initial stage, the residue types and backbone coordinates are refined using a hierarchical context encoder, complemented by two structure refinement modules that capture both inter-residue and pocket-ligand interactions.
1 code implementation • 15 Jun 2023 • Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu
In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry.
no code implementations • 5 Jun 2023 • Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu
Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs) in various fields by minimizing a weighted sum of PDE loss and boundary loss.
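The weighted-sum objective can be made concrete with a short sketch, here for the 1D Poisson problem u''(x) = -π² sin(πx) on [0, 1] with u(0) = u(1) = 0; the boundary weight `w_bc` and network size are illustrative choices, not the paper's:

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
w_bc = 10.0  # boundary-loss weight (illustrative)

for step in range(1000):
    x = torch.rand(128, 1, requires_grad=True)           # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -torch.pi**2 * torch.sin(torch.pi * x)
    pde_loss = ((d2u - f) ** 2).mean()                   # PDE residual loss

    xb = torch.tensor([[0.0], [1.0]])                    # boundary points
    bc_loss = (net(xb) ** 2).mean()                      # enforce u(0) = u(1) = 0

    loss = pde_loss + w_bc * bc_loss                     # the weighted sum
    opt.zero_grad(); loss.backward(); opt.step()
```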
1 code implementation • 30 May 2023 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Ze Cheng, Jun Zhu
The neural operator has emerged as a powerful tool in learning mappings between function spaces in PDEs.
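What a "mapping between function spaces" looks like in code can be sketched with the standard DeepONet design (a common neural-operator architecture, not necessarily this paper's): a branch net encodes the input function sampled at m sensors, a trunk net encodes a query point, and their inner product approximates the output function at that point.

```python
import torch

m, p = 64, 32  # sensor count and latent width (illustrative)
branch = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.Tanh(), torch.nn.Linear(64, p))
trunk = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, p))

def operator(u_samples, y):
    """u_samples: (batch, m) input-function values; y: (batch, 1) query points."""
    return (branch(u_samples) * trunk(y)).sum(dim=-1, keepdim=True)

u_samples = torch.randn(8, m)        # 8 input functions sampled at m sensors
y = torch.rand(8, 1)                 # one query point per function
print(operator(u_samples, y).shape)  # torch.Size([8, 1])
```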
no code implementations • 9 Mar 2023 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Songming Liu, Dong Yan, Jun Zhu
Extensive experiments in both image-based and state-based tasks show that TAD can significantly improve the performance of handling different tasks simultaneously, especially for those with high TDR, and display a strong generalization ability to unseen tasks.
2 code implementations • 28 Feb 2023 • Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, Jun Zhu
However, there are several challenges in learning operators for practical applications, such as irregular meshes, multiple input functions, and the complexity of PDE solutions.
1 code implementation • 15 Nov 2022 • Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, Jun Zhu
Recent work shows that incorporating physical priors together with collected data can benefit machine learning models, making the intersection of machine learning and physics a prevailing paradigm.
1 code implementation • 6 Oct 2022 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Ze Cheng
We present a unified hard-constraint framework for solving geometrically complex PDEs with neural networks, where the most commonly used Dirichlet, Neumann, and Robin boundary conditions (BCs) are considered.
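For the Dirichlet case, the hard-constraint idea reduces to a few lines (the classical ansatz on a simple domain, not the paper's unified framework, which also covers Neumann and Robin BCs and complex geometry): compose the network with functions that satisfy the boundary values exactly, so no boundary loss is needed.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
a, b = 0.0, 1.0  # illustrative Dirichlet data: u(0) = a, u(1) = b

def u(x):
    g = a + (b - a) * x    # any smooth function matching the boundary data
    l = x * (1.0 - x)      # vanishes at x = 0 and x = 1
    return g + l * net(x)  # satisfies the BC exactly for any network

x = torch.tensor([[0.0], [0.5], [1.0]])
print(u(x))  # first and last entries equal a and b exactly
```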
2 code implementations • 30 Sep 2022 • Fan Bao, Min Zhao, Zhongkai Hao, Peiyao Li, Chongxuan Li, Jun Zhu
Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties.
1 code implementation • 15 Sep 2022 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Dong Yan, Jun Zhu
In this paper, we reveal that the instability is also related to a new notion, the Reuse Bias of importance sampling (IS): the bias in off-policy evaluation caused by reusing the replay buffer for both evaluation and optimization.
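A toy numerical illustration of the reuse effect (not the paper's estimator or RL setting; the Gaussian candidates are purely illustrative): IS estimates computed on the same samples used to select among candidate policies are optimistically biased, while the same estimates on held-out samples are not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 2000
reused, held_out = [], []
for _ in range(trials):
    x = rng.normal(size=n)            # behavior samples, q = N(0, 1)
    fresh = rng.normal(size=n)        # held-out samples from q
    best = None
    for mu in (0.1, 0.2, 0.3):        # candidate target policies N(mu, 1)
        w = np.exp(mu * x - 0.5 * mu**2)   # density ratio p_mu / q
        err = np.mean(w * x) - mu          # IS estimate of E_{p_mu}[x] minus truth
        if best is None or err > best[0]:
            best = (err, mu)
    err, mu = best
    reused.append(err)                # selection and evaluation share samples
    w = np.exp(mu * fresh - 0.5 * mu**2)
    held_out.append(np.mean(w * fresh) - mu)  # evaluation on fresh samples
print(np.mean(reused), np.mean(held_out))     # positive bias vs. roughly zero
```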
no code implementations • 15 Sep 2022 • Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Jian Song, Ze Cheng
In this paper, we present a novel bi-level optimization framework to resolve the challenge by decoupling the optimization of the targets and constraints.
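A generic unrolled bi-level sketch (illustrative only; the paper's framework, inner solver, and gradient computation may differ): the inner loop satisfies a stand-in constraint for the current design variable, and the outer loop updates that variable on the target objective by differentiating through the unrolled inner steps.

```python
import torch

theta = torch.zeros(1, requires_grad=True)      # outer (design) variable
outer_opt = torch.optim.Adam([theta], lr=5e-2)

for outer_step in range(200):
    w = torch.zeros(1)                           # inner variable
    for _ in range(10):
        w = w - 0.4 * 2.0 * (w - theta)          # gradient step on (w - theta)^2
    target = ((w - 1.0) ** 2).sum()              # outer objective at w*(theta)
    outer_opt.zero_grad(); target.backward(); outer_opt.step()

print(theta.item())  # approaches 1.0, since the inner loop drives w to theta
```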
no code implementations • 9 Jun 2022 • Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
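For context, classical randomized smoothing, the additive-noise special case that GSmooth generalizes, fits in a few lines (the surrogate image-to-image network and the certification bound are not reproduced; the base classifier `f` and `sigma` are hypothetical):

```python
import torch

def smoothed_predict(f, x, sigma=0.25, n=100):
    """Majority vote of the base classifier f over n Gaussian perturbations of x."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    votes = f(noisy).argmax(dim=1)          # f maps (n, ...) to (n, classes)
    return torch.bincount(votes).argmax().item()
```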
1 code implementation • ICML Workshop AML 2021 • Zhengyi Wang, Zhongkai Hao, Ziqiao Wang, Hang Su, Jun Zhu
In this work, we propose Cluster Attack, a Graph Injection Attack (GIA) on node classification that injects fake nodes into the original graph to degrade the performance of graph neural networks (GNNs) on certain victim nodes while affecting other nodes as little as possible.
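Structurally, a graph injection attack extends the graph rather than editing existing edges. A minimal bookkeeping sketch (how the fake node's feature vector is chosen is the attack's optimization problem and is not shown here):

```python
import numpy as np

def inject_node(adj, feats, victims, fake_feat):
    """Append one fake node wired to the victim nodes, with given features."""
    n = adj.shape[0]
    new_adj = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    for v in victims:
        new_adj[n, v] = new_adj[v, n] = 1   # undirected edge to each victim
    new_feats = np.vstack([feats, fake_feat])
    return new_adj, new_feats
```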
1 code implementation • 7 Jul 2020 • Zhongkai Hao, Chengqiang Lu, Zheyuan Hu, Hao Wang, Zhenya Huang, Qi Liu, Enhong Chen, Cheekong Lee
Here we propose a novel framework called Active Semi-supervised Graph Neural Network (ASGN) by incorporating both labeled and unlabeled molecules.