Search Results for author: Tianle Zhang

Found 10 papers, 5 papers with code

Continuous Geometry-Aware Graph Diffusion via Hyperbolic Neural PDE

no code implementations • 3 Jun 2024 • Jiaxu Liu, Xinping Yi, Sihao Wu, Xiangyu Yin, Tianle Zhang, Xiaowei Huang, Jin Shi

While Hyperbolic Graph Neural Networks (HGNNs) have recently emerged as a powerful tool for handling hierarchical graph data, limitations in scalability and efficiency hinder them from generalizing to deep models.

Preferred-Action-Optimized Diffusion Policies for Offline Reinforcement Learning

no code implementations • 29 May 2024 • Tianle Zhang, Jiayi Guan, Lin Zhao, Yihang Li, Dongjiang Li, Zecui Zeng, Lei Sun, Yue Chen, Xuelong Wei, Lusong Li, Xiaodong He

Meanwhile, preferred actions within the same behavior distribution are generated automatically from the diffusion model through the critic function.

Tasks: Offline RL, reinforcement-learning, +1
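
As a rough illustration of the selection idea in this snippet, the toy sketch below samples candidate actions from a stand-in behavior distribution and keeps the one the critic scores highest. The Gaussian sampler, the critic, and all shapes are hypothetical placeholders, not the paper's trained diffusion model or Q-network.

```python
# Pick the highest-Q action among candidates from a behavior distribution.
import numpy as np

rng = np.random.default_rng(0)

def behavior_sampler(state, n_candidates=16):
    # Stand-in for diffusion sampling: the paper draws candidates from a
    # trained diffusion model; a state-conditioned Gaussian is used here.
    mean = np.tanh(state.sum())
    return mean + 0.3 * rng.standard_normal((n_candidates, 2))

def critic(state, actions):
    # Hypothetical Q-function that happens to prefer actions near a target.
    target = np.array([0.5, -0.2])
    return -np.linalg.norm(actions - target, axis=1)

state = np.array([0.1, 0.4, -0.3])
candidates = behavior_sampler(state)
preferred = candidates[np.argmax(critic(state, candidates))]
print("preferred action:", preferred)
```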

Towards Fairness-Aware Adversarial Learning

1 code implementation • 27 Feb 2024 • Yanghao Zhang, Tianle Zhang, Ronghui Mu, Xiaowei Huang, Wenjie Ruan

As a generalization of conventional AT, we re-define the problem of adversarial training as a min-max-max framework to ensure both the robustness and the fairness of the trained model.

Tasks: Fairness
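
The min-max-max structure can be sketched roughly as follows: an inner maximization crafts adversarial examples (one-step FGSM here for brevity), a middle maximization re-weights classes toward those with the worst robust loss, and an outer minimization updates the model. The toy data, linear model, and multiplicative-weight update are assumptions for illustration, not the authors' implementation.

```python
# Toy min-max-max adversarial training loop (illustrative assumptions only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 3)                 # hypothetical classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
class_w = torch.ones(3) / 3                    # middle-max variable over classes
eps, eta = 0.1, 0.5

for _ in range(50):
    # Inner max: one-step FGSM perturbation per example.
    x_adv = x.clone().requires_grad_(True)
    adv_loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(adv_loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()

    # Per-class robust loss on the perturbed batch.
    per_ex = F.cross_entropy(model(x_adv), y, reduction="none")
    per_class = torch.stack([per_ex[y == c].mean() for c in range(3)])

    # Middle max: shift weight toward the worst-off classes.
    class_w = class_w * torch.exp(eta * per_class.detach())
    class_w = class_w / class_w.sum()

    # Outer min: update the model on the class-weighted robust loss.
    opt.zero_grad()
    (class_w * per_class).sum().backward()
    opt.step()
```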

Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching

1 code implementation • 7 Feb 2024 • Yuchen Zhang, Tianle Zhang, Kai Wang, Ziyao Guo, Yuxuan Liang, Xavier Bresson, Wei Jin, Yang You

Specifically, we employ a curriculum learning strategy to train expert trajectories with more diverse supervision signals from the original graph, and then effectively transfer the information into the condensed graph with expanding window matching.
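
A minimal sketch of the expanding-window idea, under the assumption that matching is done between points on a pre-computed expert parameter trajectory: the range of admissible segment starts grows as condensation proceeds, so later updates also see late-stage expert dynamics. The trajectory below is synthetic noise, the linear schedule is an assumption, and the student update on the condensed graph is elided.

```python
# Expanding-window segment sampling over a (synthetic) expert trajectory.
import numpy as np

rng = np.random.default_rng(0)
expert_traj = np.cumsum(rng.standard_normal((100, 8)), axis=0)  # fake parameters

def sample_segment(step, total_steps, seg_len=10):
    # The admissible start window expands from 20% to 100% of the trajectory
    # as condensation proceeds (linear schedule; an assumption here).
    frac = 0.2 + 0.8 * step / total_steps
    max_start = int(frac * (len(expert_traj) - seg_len - 1))
    start = rng.integers(0, max_start + 1)
    return expert_traj[start], expert_traj[start + seg_len]

for step in range(5):
    theta_start, theta_target = sample_segment(step, total_steps=5)
    # ...train a student from theta_start on the condensed graph and minimize
    # its distance to theta_target (student update omitted in this sketch)...
    print(step, round(float(np.linalg.norm(theta_target - theta_start)), 2))
```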

Two Trades is not Baffled: Condensing Graph via Crafting Rational Gradient Matching

1 code implementation • 7 Feb 2024 • Tianle Zhang, Yuchen Zhang, Kun Wang, Kai Wang, Beining Yang, Kaipeng Zhang, Wenqi Shao, Ping Liu, Joey Tianyi Zhou, Yang You

Training on large-scale graphs has achieved remarkable results in graph representation learning, but its computation and storage costs have raised growing concerns.

Tasks: Graph Representation Learning

Reward Certification for Policy Smoothed Reinforcement Learning

no code implementations • 11 Dec 2023 • Ronghui Mu, Leandro Soriano Marcolino, Tianle Zhang, Yanghao Zhang, Xiaowei Huang, Wenjie Ruan

Reinforcement Learning (RL) has achieved remarkable success in safety-critical areas, but it can be weakened by adversarial attacks.

Tasks: reinforcement-learning, Reinforcement Learning (RL)

Can pre-trained models assist in dataset distillation?

1 code implementation • 5 Oct 2023 • Yao Lu, Xuguang Chen, Yuchen Zhang, Jianyang Gu, Tianle Zhang, Yifan Zhang, Xiaoniu Yang, Qi Xuan, Kai Wang, Yang You

Dataset Distillation (DD) is a prominent technique that encapsulates knowledge from a large-scale original dataset into a small synthetic dataset for efficient training.
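
For context, a common DD mechanic is gradient matching: learn a small synthetic set whose training gradients mimic those of the real data. The toy sketch below shows only that generic inner loop, with random data and a single fixed network snapshot, not this paper's use of pre-trained models.

```python
# Toy dataset distillation inner loop via gradient matching.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
real_x, real_y = torch.randn(256, 10), torch.randint(0, 2, (256,))
syn_x = torch.randn(8, 10, requires_grad=True)   # learnable synthetic inputs
syn_y = torch.arange(8) % 2                      # fixed balanced labels
net = torch.nn.Linear(10, 2)                     # single network snapshot
opt = torch.optim.Adam([syn_x], lr=0.05)

def param_grads(x, y, create_graph=False):
    loss = F.cross_entropy(net(x), y)
    return torch.autograd.grad(loss, net.parameters(), create_graph=create_graph)

for _ in range(100):
    g_real = [g.detach() for g in param_grads(real_x, real_y)]
    g_syn = param_grads(syn_x, syn_y, create_graph=True)
    # Update the synthetic data so its gradients match the real ones.
    match = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
    opt.zero_grad(); match.backward(); opt.step()
```

In practice DD methods resample or re-initialize networks across iterations; the single fixed snapshot here keeps the sketch short.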

Symplectic Structure-Aware Hamiltonian (Graph) Embeddings

no code implementations • 9 Sep 2023 • Jiaxu Liu, Xinping Yi, Tianle Zhang, Xiaowei Huang

In traditional Graph Neural Networks (GNNs), the assumption of a fixed embedding manifold often limits their adaptability to diverse graph geometries.

Tasks: Node Classification, Riemannian optimization

PTDE: Personalized Training with Distilled Execution for Multi-Agent Reinforcement Learning

no code implementations • 17 Oct 2022 • Yiqun Chen, Hangyu Mao, Jiaxin Mao, Shiguang Wu, Tianle Zhang, Bin Zhang, Wei Yang, Hongxing Chang

Furthermore, we introduce a novel paradigm named Personalized Training with Distilled Execution (PTDE), wherein agent-personalized global information is distilled into the agent's local information.

Tasks: Learning-To-Rank, reinforcement-learning, +2
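
A minimal sketch of that distillation step, with hypothetical dimensions and networks: a teacher conditioned on both the local observation and (personalized) global information produces features, and a local-only student is regressed onto them so that execution no longer needs global information.

```python
# Teacher with global info -> student with local obs only (toy distillation).
import torch

torch.manual_seed(0)
teacher = torch.nn.Linear(16 + 32, 8)   # sees local obs + global information
student = torch.nn.Linear(16, 8)        # sees local obs only (used at execution)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

for _ in range(200):
    obs = torch.randn(32, 16)           # local observations (toy batch)
    glob = torch.randn(32, 32)          # agent-personalized global information
    with torch.no_grad():
        target = teacher(torch.cat([obs, glob], dim=-1))
    loss = torch.nn.functional.mse_loss(student(obs), target)
    opt.zero_grad(); loss.backward(); opt.step()
```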

PRoA: A Probabilistic Robustness Assessment against Functional Perturbations

1 code implementation • 5 Jul 2022 • Tianle Zhang, Wenjie Ruan, Jonathan E. Fieldsend

Our experiments demonstrate the effectiveness and flexibility of PRoA in evaluating probabilistic robustness against a broad range of functional perturbations, and show that PRoA scales well to various large-scale deep neural networks compared with existing state-of-the-art baselines.
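
As a rough illustration of what a probabilistic robustness assessment computes, the sketch below draws random functional perturbations, estimates the failure probability, and attaches a fixed-sample Hoeffding confidence bound; PRoA itself uses adaptive concentration inequalities, and the model check and perturbation here are hypothetical stand-ins.

```python
# Monte Carlo failure-rate estimate with a Hoeffding bound (fixed sample size).
import numpy as np

rng = np.random.default_rng(0)

def model_is_correct(x):
    # Hypothetical stand-in for "classifier prediction is unchanged".
    return x.sum() > -1.0

def perturb(x):
    # Stand-in functional perturbation (e.g. a random brightness-like shift).
    return x + rng.uniform(-0.5, 0.5)

x0, n, delta = np.zeros(4), 2000, 1e-3
failures = sum(not model_is_correct(perturb(x0)) for _ in range(n))
p_hat = failures / n
half_width = np.sqrt(np.log(2 / delta) / (2 * n))   # Hoeffding half-width
print(f"estimated failure prob <= {p_hat + half_width:.4f} "
      f"with confidence {1 - delta}")
```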
