no code implementations • Findings (EMNLP) 2021 • Kaiyu Huang, Hao Yu, Junpeng Liu, Wei Liu, Jingxiang Cao, Degen Huang
Experimental results on five benchmarks and four cross-domain datasets show that the lexicon-based graph convolutional network successfully captures the information of candidate words and helps improve performance on the benchmarks (Bakeoff-2005 and CTB6) and the cross-domain datasets (SIGHAN-2010).
no code implementations • 4 May 2024 • Xin Gao, Xin Yang, Hao Yu, Yan Kang, Tianrui Li
Federated Class-Incremental Learning (FCIL) focuses on continually transferring the previous knowledge to learn new classes in dynamic Federated Learning (FL).
1 code implementation • 4 Apr 2024 • Hanyu Lai, Xiao Liu, Iat Long Iong, Shuntian Yao, Yuxuan Chen, Pengbo Shen, Hao Yu, Hanchen Zhang, Xiaohan Zhang, Yuxiao Dong, Jie Tang
Large language models (LLMs) have fueled many intelligent agent tasks, such as web navigation -- but most existing agents fall far short of satisfactory performance on real-world webpages due to three factors: (1) the versatility of actions on webpages, (2) HTML text exceeding model processing capacity, and (3) the complexity of decision-making arising from the open-domain nature of the web.
no code implementations • 20 Mar 2024 • Hao Yu
Furthermore, with synthetic data generated from a cluster expansion model at near-DFT accuracy, we obtained an enlarged dataset to assess the data demands of training accurate prediction models using graph neural networks for systems featuring interacting defects.
no code implementations • 12 Mar 2024 • Masoud Shokrnezhad, Hao Yu, Tarik Taleb, Richard Li, Kyunghan Lee, Jaeseung Song, Cedric Westphal
Hence, this paper presents the concept of Adaptable CNC (ACNC) as an autonomous Machine Learning (ML)-aided mechanism crafted for the joint orchestration of computing and network resources, catering to dynamic and voluminous user requests with stringent requirements.
no code implementations • 11 Mar 2024 • Linyi Li, Shijie Geng, Zhenwen Li, Yibo He, Hao Yu, Ziyue Hua, Guanghan Ning, Siwei Wang, Tao Xie, Hongxia Yang
Large Language Models for understanding and generating code (code LLMs) have witnessed tremendous progress in recent years.
no code implementations • 3 Mar 2024 • Junwen Huang, Hao Yu, Kuan-Ting Yu, Nassir Navab, Slobodan Ilic, Benjamin Busam
MatchU is a generic approach that fuses 2D texture and 3D geometric cues for 6D pose prediction of unseen objects.
no code implementations • 22 Feb 2024 • Yu Gu, Yiheng Shu, Hao Yu, Xiao Liu, Yuxiao Dong, Jie Tang, Jayanth Srinivasa, Hugo Latapie, Yu Su
The applications of large language models (LLMs) have expanded well beyond the confines of text processing, signaling a new era where LLMs are envisioned as generalist language agents capable of operating within complex real-world environments.
no code implementations • 21 Feb 2024 • Ziyi Guan, Hantao Huang, Yupeng Su, Hong Huang, Ngai Wong, Hao Yu
Large Language Models (LLMs) have greatly advanced the natural language processing paradigm.
no code implementations • 5 Feb 2024 • Hao Yu, Zebin Huang, Qingbo Liu, Ignacio Carlucho, Mustafa Suphi Erden
We compared the elbow movement controlled by an RL agent with the motion of an actual human elbow in terms of the impedance identified in torque-perturbation experiments.
1 code implementation • 30 Jan 2024 • Hao Yu, Yingxiao Du, Jianxin Wu
In this paper, we aim to enhance the accuracy of the worst-performing categories and utilize the harmonic mean and geometric mean to assess the model's performance.
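The harmonic and geometric means named in the abstract both penalize imbalanced per-class accuracy far more than the usual arithmetic mean, which is why they surface the worst-performing categories. A minimal sketch (the per-class accuracies below are hypothetical, not from the paper):

```python
import math

def harmonic_mean(per_class_acc):
    # The harmonic mean is dominated by the smallest values, so a
    # single badly-classified category drags the score down sharply.
    return len(per_class_acc) / sum(1.0 / a for a in per_class_acc)

def geometric_mean(per_class_acc):
    # The geometric mean also rewards balance across categories,
    # though less aggressively than the harmonic mean.
    return math.exp(sum(math.log(a) for a in per_class_acc) / len(per_class_acc))

# Hypothetical per-class accuracies with one weak category:
accs = [0.95, 0.90, 0.40]
print(round(harmonic_mean(accs), 4), round(geometric_mean(accs), 4))
# Both fall well below the arithmetic mean (0.75), exposing the weak class.
```

By the AM-GM-HM inequality the harmonic mean is always the lowest of the three, so optimizing it directly targets the weakest categories.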
no code implementations • 27 Dec 2023 • Xin Yang, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang, Tianrui Li
The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain knowledge of previous tasks while learning on new ones.
no code implementations • 28 Nov 2023 • Dayu Hu, Ke Liang, Hao Yu, Xinwang Liu
This model leverages exogenous gene network information to facilitate the clustering process, generating discriminative representations.
no code implementations • 26 Nov 2023 • Jieyu Yao, Hao Yu, Paul Judge, Jiabin Jia, Sasa Djokic, Verner Püvi, Matti Lehtonen, Jan Meyer
The results by feature importance analysis show the detailed relationships between each order of harmonic voltage and current in the distribution system.
no code implementations • 20 Nov 2023 • Man Chen, Wenquan Dong, Hao Yu, Iain Woodhouse, Casey M. Ryan, Haoyu Liu, Selena Georgiou, Edward T. A. Mitchard
Consequently, we propose a novel deep learning framework termed the multi-modal attention remote sensing network (MARSNet) to estimate forest dominant height by extrapolating dominant height derived from GEDI, using Sentinel-1 data, ALOS-2 PALSAR-2 data, Sentinel-2 optical data, and ancillary data.
no code implementations • 6 Nov 2023 • Wenquan Dong, Edward T. A. Mitchard, Hao Yu, Steven Hancock, Casey M. Ryan
AU-FC achieved an intermediate R2 of 0.64, RMSE of 44.92 Mg ha-1, and bias of -0.56 Mg ha-1, outperforming RF but underperforming the AU model that uses spatial information.
1 code implementation • 8 Oct 2023 • Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji
When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.
no code implementations • 26 Sep 2023 • Xinhang Wan, Jiyuan Liu, Hao Yu, Ao Li, Xinwang Liu, Ke Liang, Zhibin Dong, En Zhu
Precisely, considering that data correlations play a vital role in clustering and prior knowledge ought to guide the clustering process of a new view, we develop a data buffer with fixed size to store filtered structural information and utilize it to guide the generation of a robust partition matrix via contrastive learning.
1 code implementation • 21 Sep 2023 • Meng Liu, Ke Liang, Dayu Hu, Hao Yu, Yue Liu, Lingyuan Meng, Wenxuan Tu, Sihang Zhou, Xinwang Liu
We observe that these audiovisual data naturally have temporal attributes, such as the time information for each frame in the video.
no code implementations • ICCV 2023 • Hao Yu, Xu Cheng, Wei Peng, Weihao Liu, Guoying Zhao
Visible-infrared person re-identification (VI-ReID) is a challenging task due to large cross-modality discrepancies and intra-class variations.
no code implementations • ICCV 2023 • Zhiying Leng, Shun-Cheng Wu, Mahdi Saleh, Antonio Montanaro, Hao Yu, Yin Wang, Nassir Navab, Xiaohui Liang, Federico Tombari
In this work, we propose the first precise hand-object reconstruction method in hyperbolic space, namely Dynamic Hyperbolic Attention Network (DHANet), which leverages intrinsic properties of hyperbolic space to learn representative features.
no code implementations • 19 Aug 2023 • Hao Yu, Zachary Yang, Kellin Pelrine, Jean Francois Godbout, Reihaneh Rabbany
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks.
1 code implementation • 7 Aug 2023 • Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting.
1 code implementation • 25 Jul 2023 • Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, Slobodan Ilic, Dewen Hu, Kai Xu
They seek correspondences over downsampled superpoints, which are then propagated to dense points.
Ranked #5 on Point Cloud Registration on FP-O-H
2 code implementations • 13 Jun 2023 • Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM).
no code implementations • 8 Jun 2023 • Hao Yu, Chuan Ma, Meng Liu, Tianyu Du, Ming Ding, Tao Xiang, Shouling Ji, Xinwang Liu
Through empirical evaluation, comparing G$^2$uardFL with cutting-edge defenses, such as FLAME (USENIX Security 2022) [28] and DeepSight (NDSS 2022) [36], against various backdoor attacks including 3DFed (SP 2023) [20], our results demonstrate its significant effectiveness in mitigating backdoor attacks while having a negligible impact on the aggregated model's performance on benign samples (i.e., the primary task performance).
no code implementations • 30 Mar 2023 • Binbin Li, Xinyu Du, Yao Hu, Hao Yu, Wende Zhang
Online camera-to-ground calibration generates a non-rigid body transformation between the camera and the road surface in real time.
1 code implementation • CVPR 2023 • Zheng Qin, Hao Yu, Changjian Wang, Yuxing Peng, Kai Xu
We first design a local spatial consistency measure over the deformation graph of the point cloud, which evaluates the spatial compatibility only between the correspondences in the vicinity of a graph node.
1 code implementation • CVPR 2023 • Hao Yu, Zheng Qin, Ji Hou, Mahdi Saleh, Dongsheng Li, Benjamin Busam, Slobodan Ilic
To this end, we introduce RoITr, a Rotation-Invariant Transformer to cope with the pose variations in the point cloud matching task.
no code implementations • 2 Mar 2023 • Jiayuan Zhuang, Zheng Qin, Hao Yu, Xucan Chen
Classification and localization are two main sub-tasks in object detection.
no code implementations • CVPR 2023 • Hao Yu, Xu Cheng, Wei Peng
Visible-infrared recognition (VI recognition) is a challenging task due to the enormous visual difference across heterogeneous images.
1 code implementation • 27 Sep 2022 • Hao Yu, Ji Hou, Zheng Qin, Mahdi Saleh, Ivan Shugurov, Kai Wang, Benjamin Busam, Slobodan Ilic
More specifically, 3D structures of the whole frame are first represented by our global PPF signatures, from which structural descriptors are learned to help geometric descriptors sense the 3D world beyond local regions.
1 code implementation • 22 Jul 2022 • Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, Zhongqi Li, Qi Zhang, Meng Xiao, Bo Shen, Lin Li, Hao Yu, Li Yan, Pingyi Zhou, Xin Wang, Yuchi Ma, Ignacio Iacobacci, Yasheng Wang, Guangtai Liang, Jiansheng Wei, Xin Jiang, Qianxiang Wang, Qun Liu
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language solutions given a natural language problem description.
1 code implementation • 16 Mar 2022 • ZiFan Chen, Jie Zhao, Hao Yu, Yue Zhang, Li Zhang
Accurate and efficient lumbar spine disease identification is crucial for clinical diagnosis.
no code implementations • 9 Mar 2022 • Fu Li, Hao Yu, Ivan Shugurov, Benjamin Busam, Shaowu Yang, Slobodan Ilic
Pose estimation of 3D objects in monocular images is a fundamental and long-standing problem in computer vision.
2 code implementations • CVPR 2022 • Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, Kai Xu
Such sparse and loose matching requires contextual features capturing the geometric structure of the point clouds.
2 code implementations • 26 Jan 2022 • Yun-Hao Cao, Hao Yu, Jianxin Wu
Vision Transformers (ViTs) are emerging as an alternative to convolutional neural networks (CNNs) for visual recognition.
1 code implementation • 30 Nov 2021 • Hao Yu, Jianxin Wu
Recently, the vision transformer (ViT) and its variants have achieved promising performance in various computer vision tasks.
1 code implementation • NeurIPS 2021 • Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, Slobodan Ilic
We study the problem of extracting correspondences between a pair of point clouds for registration.
no code implementations • 20 Apr 2021 • Zhenning Li, Hao Yu, Guohui Zhang, Shangjia Dong, Cheng-Zhong Xu
Inefficient traffic control may cause numerous problems such as traffic congestion and energy waste.
1 code implementation • 12 Jan 2021 • Hao Yu, Huanyu Wang, Jianxin Wu
In this paper, we find that mixup constantly explores the representation space, and inspired by the exploration-exploitation dilemma in reinforcement learning, we propose mixup Without hesitation (mWh), a concise, effective, and easy-to-use training algorithm.
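The base operation that mWh schedules on and off is standard mixup. A minimal sketch of both pieces; the schedule below is illustrative of the exploration-exploitation idea, not the authors' exact algorithm:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    # Standard mixup: convexly blend each example (and its one-hot
    # label) with a randomly paired example from the same batch.
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix, lam

def apply_mixup(epoch, total_epochs, explore_frac=0.9):
    # mWh-style idea (hypothetical schedule): mix aggressively early in
    # training ("exploration"), then train mostly on clean examples
    # near the end ("exploitation").
    return epoch < explore_frac * total_epochs
```

Because the schedule only gates an existing augmentation, it adds no extra hyperparameter tuning beyond the switch-off point.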
no code implementations • 27 Nov 2020 • Meng Shen, Hao Yu, Liehuang Zhu, Ke Xu, Qi Li, Xiaojiang Du
Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems.
no code implementations • 4 Nov 2020 • Yuan Cheng, Yuchao Yang, Hai-Bao Chen, Ngai Wong, Hao Yu
Real-time understanding in video is crucial in various AI applications such as autonomous driving.
no code implementations • 2 Oct 2020 • Yuan Hui, Zheng Yang, Hao Yu
The magnetization evolution of the free layer in an orthogonal spin-torque device is studied based on a macrospin model.
no code implementations • 28 Feb 2020 • Rui Lin, Ching-Yun Ko, Zhuolun He, Cong Chen, Yuan Cheng, Hao Yu, Graziano Chesi, Ngai Wong
The emerging edge computing has promoted immense interests in compacting a neural network without sacrificing much accuracy.
no code implementations • 12 Feb 2020 • Nataniel Ruiz, Hao Yu, Danielle A. Allessio, Mona Jalal, Ajjen Joshi, Thomas Murray, John J. Magee, Jacob R. Whitehill, Vitaly Ablavsky, Ivon Arroyo, Beverly P. Woolf, Stan Sclaroff, Margrit Betke
In this work, we propose a video-based transfer learning approach for predicting problem outcomes of students working with an intelligent tutoring system (ITS).
no code implementations • 19 Jan 2020 • Yun Bai, Xixi Li, Hao Yu, Suling Jia
Sparse and short news headlines can be arbitrary, noisy, and ambiguous, making it difficult for LDA (latent Dirichlet allocation), a classic topic model designed for long text, to discover knowledge from them.
no code implementations • NeurIPS 2019 • Hao Yu
In this paper, we propose a new parallel multi-block stochastic ADMM for distributed stochastic optimization, where each node is only required to perform simple stochastic gradient descent updates.
no code implementations • 10 May 2019 • Hao Yu, Rong Jin
We show that for stochastic non-convex optimization under the P-L condition, the classical data-parallel SGD with exponentially increasing batch sizes can achieve the fastest known $O(1/(NT))$ convergence with linear speedup using only $\log(T)$ communication rounds.
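The communication saving comes from doubling the batch size each round: covering $T$ samples then takes only about $\log_2(T)$ rounds instead of $T$. A sketch of this schedule on a hypothetical noisy quadratic objective (the function names and objective are illustrative, not from the paper):

```python
import numpy as np

def sgd_exponential_batches(grad_fn, w0, total_samples, lr=0.1, seed=0):
    # Plain SGD, but the mini-batch size doubles every round, so
    # consuming `total_samples` samples takes only ~log2(T) rounds
    # (i.e., communication rounds in the data-parallel setting).
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    used, batch, rounds = 0, 1, 0
    while used < total_samples:
        b = min(batch, total_samples - used)
        w = w - lr * grad_fn(w, rng, b)  # averaged gradient over b samples
        used += b
        batch *= 2
        rounds += 1
    return w, rounds

# Hypothetical objective f(w) = ||w||^2 / 2 with additive gradient noise;
# larger batches average the noise down in the later, larger rounds.
def noisy_grad(w, rng, b):
    return w + rng.normal(0.0, 1.0, size=(b, w.size)).mean(axis=0)

w, rounds = sgd_exponential_batches(noisy_grad, [1.0, -1.0], total_samples=1 << 14)
print(rounds)  # prints 15 (about log2(16384) + 1), not 16384
```

Intuitively, early rounds are cheap and noisy while late rounds use large, low-variance batches, which is what lets the schedule retain the linear-speedup rate with so few synchronizations.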
no code implementations • 9 May 2019 • Hao Yu, Rong Jin, Sen Yang
Recent developments in large-scale distributed machine learning applications, e.g., deep neural networks, benefit enormously from advances in distributed non-convex optimization techniques, e.g., distributed Stochastic Gradient Descent (SGD).
no code implementations • NeurIPS 2018 • Xiaohan Wei, Hao Yu, Qing Ling, Michael Neely
In this paper, we show that by leveraging a local error bound condition on the dual function, the proposed algorithm can achieve a better primal convergence time of $\mathcal{O}\left(\varepsilon^{-2/(2+\beta)}\log_2(\varepsilon^{-1})\right)$, where $\beta\in(0, 1]$ is a local error bound parameter.
2 code implementations • 6 Nov 2018 • Krishna Kumar Singh, Hao Yu, Aron Sarmasi, Gautam Pradeep, Yong Jae Lee
Our approach only needs to modify the input image and can work with any network to improve its performance.
no code implementations • 1 Nov 2018 • Hao Yu, Vivek Kulkarni, William Wang
First, we introduce methods that learn network representations of entities in the knowledge graph capturing these varied aspects of similarity.
no code implementations • 17 Jul 2018 • Hao Yu, Sen Yang, Shenghuo Zhu
Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker.
no code implementations • 27 May 2018 • Juyong Zhang, Yuxin Yao, Yue Peng, Hao Yu, Bailin Deng
We propose a novel method to accelerate Lloyd's algorithm for K-Means clustering.
no code implementations • 21 May 2018 • Yuan Cheng, Guangya Li, Hai-Bao Chen, Sheldon X. -D. Tan, Hao Yu
As it requires a huge number of parameters when exposed to high-dimensional inputs in video detection and classification, there is a grand challenge in developing compact yet accurate video comprehension models for terminal devices.
no code implementations • 10 Apr 2018 • Hao Yu, Zhaoning Zhang, Zheng Qin, Hao Wu, Dongsheng Li, Jun Zhao, Xicheng Lu
LRM is a general method for real-time detectors, as it utilizes the final feature map which exists in all real-time detectors to mine hard examples.
2 code implementations • 24 Mar 2018 • Zheng Qin, Zhaoning Zhang, Shiqing Zhang, Hao Yu, Yuxing Peng
Compact neural networks are inclined to exploit "sparsely-connected" convolutions such as depthwise convolution and group convolution for employment in mobile applications.
no code implementations • NeurIPS 2017 • Hao Yu, Michael J. Neely, Xiaohan Wei
This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d.
no code implementations • 5 May 2017 • Minne Li, Zhaoning Zhang, Hao Yu, Xinyuan Chen, Dongsheng Li
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling technique, to choose the training examples according to this influence during hard example mining, and thus enhance the performance of object detectors.
no code implementations • 20 Feb 2017 • Yixing Li, Zichuan Liu, Kai Xu, Hao Yu, Fengbo Ren
For processing static data in large batch sizes, the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5x higher energy efficiency.
no code implementations • 12 Dec 2016 • Zichuan Liu, Yixing Li, Fengbo Ren, Hao Yu
In this paper, we develop a binary convolutional encoder-decoder network (B-CEDNet) for natural scene text processing (NSTP).
no code implementations • 8 Apr 2016 • Hao Yu, Michael J. Neely
That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints.