1 code implementation • 8 Feb 2024 • Gangda Deng, Hongkuan Zhou, Hanqing Zeng, Yinglong Xia, Christopher Leung, Jianbo Li, Rajgopal Kannan, Viktor Prasanna
Recently, Temporal Graph Neural Networks (TGNNs) have demonstrated state-of-the-art performance in various high-impact applications, including fraud detection and content recommendation.
no code implementations • 14 Sep 2023 • Hongkuan Zhou, Aifen Sui, Wei Cao, Zhenshan Bing
End-to-end autonomous driving technologies, in which the entire driving pipeline is replaced with a single neural network, have recently attracted more research attention because of their simpler structure and faster inference time.
no code implementations • 14 Jul 2023 • Hongkuan Zhou, Da Zheng, Xiang Song, George Karypis, Viktor Prasanna
Even worse, the tremendous overhead of synchronizing the node memory makes it impractical to deploy on distributed GPU clusters.
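The node memory mentioned above can be sketched with a toy example. This is an illustrative sketch, not the paper's code: the class name, update rule, and decay constant are all hypothetical stand-ins for the GRU-based memory cell that memory-based TGNNs typically use. The point it illustrates is that every interaction event mutates the state of both endpoint nodes, which is why replicated memory must be kept synchronized across GPUs.

```python
import numpy as np

class NodeMemory:
    """Hypothetical per-node memory, standing in for a TGNN's GRU memory cell."""

    def __init__(self, num_nodes: int, dim: int):
        self.state = np.zeros((num_nodes, dim))

    def update(self, src: int, dst: int, msg: np.ndarray, decay: float = 0.9):
        # Toy exponential-moving-average update (the real model uses a GRU):
        # every event touches the memory of BOTH endpoints, so distributed
        # replicas of `state` must be synchronized after each batch of events.
        for node in (src, dst):
            self.state[node] = decay * self.state[node] + (1 - decay) * msg

mem = NodeMemory(num_nodes=4, dim=3)
mem.update(0, 1, np.ones(3))
print(mem.state[0])  # memory of node 0 after one event: [0.1 0.1 0.1]
```

Untouched nodes (here, nodes 2 and 3) keep their old state, so only the endpoints of each event need to be communicated, but at high event rates that traffic still dominates.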
no code implementations • 30 May 2023 • Hongkuan Zhou, Zhenshan Bing, Xiangtong Yao, Xiaojie Su, Chenguang Yang, Kai Huang, Alois Knoll
In this evaluation, we set up ten tasks, and our approach achieved an average 30% improvement over the current state-of-the-art approach, demonstrating high generalization capability in both simulated environments and the real world.
no code implementations • 21 Mar 2023 • Hongkuan Zhou, Aifen Sui, Letian Shi, Yinxian Li
In recent times, there has been a growing focus on end-to-end autonomous driving technologies.
no code implementations • 21 Sep 2022 • Xiangtong Yao, Zhenshan Bing, Genghang Zhuang, KeJia Chen, Hongkuan Zhou, Kai Huang, Alois Knoll
We propose a dual-MDP meta-reinforcement learning method that enables learning new tasks efficiently with symmetrical behaviors and language instructions.
2 code implementations • 28 Mar 2022 • Hongkuan Zhou, Da Zheng, Israt Nisa, Vasileios Ioannidis, Xiang Song, George Karypis
Our temporal parallel sampler achieves an average of 173x speedup on a multi-core CPU compared with the baselines.
1 code implementation • 10 Mar 2022 • Hongkuan Zhou, Bingyi Zhang, Rajgopal Kannan, Viktor Prasanna, Carl Busart
Taking advantage of the model optimizations, we propose a principled hardware architecture using batching, pipelining, and prefetching techniques to further improve the performance.
1 code implementation • 9 Sep 2021 • Hongkuan Zhou, James Orme-Rogers, Rajgopal Kannan, Viktor Prasanna
SeDyT consists of two components: a Temporal Graph Neural Network that generates dynamic entity embeddings in the past and a sequence model that predicts the entity embeddings in the future.
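The two-component design described above can be sketched as follows. This is a minimal illustration under stated assumptions, not SeDyT's actual implementation: the TGNN is replaced by a toy generator of past entity embeddings, and the sequence model by a simple least-squares extrapolator. All function names are hypothetical.

```python
import numpy as np

def temporal_gnn_embeddings(num_steps: int, dim: int, seed: int = 0) -> np.ndarray:
    """Stand-in for component 1 (the TGNN): one entity embedding per past step.

    Here the embedding drifts linearly over time, mimicking smoothly
    evolving dynamic entity state.
    """
    rng = np.random.default_rng(seed)
    base = rng.normal(size=dim)
    drift = rng.normal(scale=0.1, size=dim)
    return np.stack([base + t * drift for t in range(num_steps)])

def forecast_next_embedding(history: np.ndarray) -> np.ndarray:
    """Stand-in for component 2 (the sequence model).

    Fits embedding[t] ~ a*t + b by least squares over the past embeddings
    and extrapolates one step into the future.
    """
    t = np.arange(len(history))
    A = np.vstack([t, np.ones_like(t)]).T               # design matrix [t, 1]
    coef, *_ = np.linalg.lstsq(A, history, rcond=None)  # shape (2, dim)
    t_next = len(history)
    return coef[0] * t_next + coef[1]

past = temporal_gnn_embeddings(num_steps=8, dim=4)   # embeddings "in the past"
pred = forecast_next_embedding(past)                 # embedding "in the future"
print(pred.shape)  # (4,)
```

The predicted future embedding would then be scored against candidate entities to perform the event forecast; that decoding step is omitted here.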
1 code implementation • 10 May 2021 • Hongkuan Zhou, Ajitesh Srivastava, Hanqing Zeng, Rajgopal Kannan, Viktor Prasanna
In this paper, we propose to accelerate GNN inference by pruning the dimensions in each layer with negligible accuracy loss.
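Per-layer dimension pruning of the kind described can be sketched in a few lines. This is a hedged illustration, not the paper's method: it scores each output dimension of a layer's weight matrix by column L2 norm and keeps the top fraction, which is one common magnitude-based criterion; the function name and ratio are hypothetical.

```python
import numpy as np

def prune_layer_dims(W: np.ndarray, keep_ratio: float):
    """Keep only the output dimensions (columns) of W with the largest L2 norm.

    Returns the pruned weight matrix and the indices of the kept dimensions,
    so downstream layers can be sliced consistently.
    """
    norms = np.linalg.norm(W, axis=0)               # importance score per dim
    k = max(1, int(round(keep_ratio * W.shape[1])))
    keep = np.sort(np.argsort(norms)[-k:])          # top-k dims, original order
    return W[:, keep], keep

rng = np.random.default_rng(42)
W = rng.normal(size=(16, 8))                # a layer mapping 16 -> 8 features
W_pruned, kept = prune_layer_dims(W, keep_ratio=0.5)
print(W_pruned.shape)  # (16, 4): half the output dimensions removed
```

Shrinking each layer's width this way reduces both the dense matrix multiplications and the feature tensors passed between layers, which is where the inference speedup comes from.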
2 code implementations • 5 Oct 2020 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
For feature propagation within subgraphs, we improve cache utilization and reduce DRAM traffic by data partitioning.
7 code implementations • ICLR 2020 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs.
Ranked #1 on Link Property Prediction on ogbl-citation2
2 code implementations • 28 Oct 2018 • Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
However, a major challenge is to reduce the complexity of layered GCNs and make them parallelizable and scalable on very large graphs -- state-of-the-art techniques are unable to achieve scalability without losing accuracy and efficiency.