1 code implementation • 13 Mar 2024 • Teng Xiao, Chao Cui, Huaisheng Zhu, Vasant G. Honavar
Recent advancements in biology and chemistry have leveraged multi-modal learning, integrating molecules and their natural language descriptions to enhance drug discovery.
1 code implementation • 11 Mar 2024 • Huaisheng Zhu, Teng Xiao, Vasant G. Honavar
However, practical applications call for methods that generate diverse and, ideally, novel molecules with the desired properties.
1 code implementation • NeurIPS 2023 • Teng Xiao, Huaisheng Zhu, Zhengyu Chen, Suhang Wang
Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.
no code implementations • 2 Oct 2023 • Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu
A natural question is: "Could alignment really prevent those open-sourced large language models from being misused to generate undesired content?"
no code implementations • 19 Jun 2023 • Huaisheng Zhu, Guoji Fu, Zhimeng Guo, Zhiwei Zhang, Teng Xiao, Suhang Wang
Graph Neural Networks (GNNs) have shown great power in various domains.
no code implementations • 21 May 2023 • Huaisheng Zhu, Dongsheng Luo, Xianfeng Tang, Junjie Xu, Hui Liu, Suhang Wang
Directly adopting existing post-hoc explainers for link prediction is sub-optimal because: (i) post-hoc explainers usually adopt a separate strategy or model to explain a target model, which can misinterpret the target model; and (ii) GNN explainers for node classification identify a crucial subgraph around each node, whereas for link prediction one needs to explain the prediction for each pair of nodes based on both graph structure and node attributes.
no code implementations • 18 Apr 2022 • Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang
Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society.
no code implementations • 30 Mar 2022 • Huaisheng Zhu, Suhang Wang
The lack of sensitive attributes poses a challenge to many existing fairness-aware methods.
1 code implementation • 5 Jun 2021 • Liang Qu, Huaisheng Zhu, Ruiqi Zheng, Yuhui Shi, Hongzhi Yin
Imbalanced classification on graphs is ubiquitous yet challenging in many real-world applications, such as fraudulent node detection.