no code implementations • 4 Mar 2024 • Youfang Han, Wei Zhao, Xiangjin Chen, Xin Meng
The accuracy of the proposed SNN is 83.85% and 93.52% on the two datasets, respectively.
no code implementations • 16 May 2023 • Di Xu, Yang Zhao, Xiang Hao, Xin Meng
We introduce a novel dataset consisting of images depicting pink eggs that have been identified as Pomacea canaliculata eggs, accompanied by corresponding bounding box annotations.
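As a rough illustration of what such annotations might look like, here is a minimal record in the common [x, y, width, height] pixel convention; the file name, label string, and field layout are assumptions for illustration, not the dataset's documented schema.

```python
# Hypothetical annotation for one image; one bounding box per egg cluster.
annotation = {
    "image": "pond_0001.jpg",
    "labels": ["pomacea_canaliculata_egg"],
    "boxes": [[412, 153, 38, 27]],  # [x, y, width, height] in pixels
}

def box_to_corners(box):
    """Convert [x, y, w, h] to corner coordinates (x1, y1, x2, y2)."""
    x, y, w, h = box
    return x, y, x + w, y + h

print(box_to_corners(annotation["boxes"][0]))  # (412, 153, 450, 180)
```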
1 code implementation • 19 Nov 2022 • Zhenglun Kong, Haoyu Ma, Geng Yuan, Mengshu Sun, Yanyue Xie, Peiyan Dong, Xin Meng, Xuan Shen, Hao Tang, Minghai Qin, Tianlong Chen, Xiaolong Ma, Xiaohui Xie, Zhangyang Wang, Yanzhi Wang
Vision transformers (ViTs) have recently achieved success in many applications, but their intensive computation and heavy memory usage at both training and inference time limit their generalization.
no code implementations • 15 Nov 2022 • Peiyan Dong, Mengshu Sun, Alec Lu, Yanyue Xie, Kenneth Liu, Zhenglun Kong, Xin Meng, Zhengang Li, Xue Lin, Zhenman Fang, Yanzhi Wang
While vision transformers (ViTs) have continuously achieved new milestones in the field of computer vision, their sophisticated network architectures with high computation and memory costs have impeded their deployment on resource-limited edge devices.
1 code implementation • 2 Nov 2022 • Xuan Shen, Zhenglun Kong, Minghai Qin, Peiyan Dong, Geng Yuan, Xin Meng, Hao Tang, Xiaolong Ma, Yanzhi Wang
That is, there exists a subset of input image patches such that a ViT can be trained from scratch using only this subset of patches and achieve accuracy similar to ViTs trained on all image patches.
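The quoted finding is easier to picture with a toy sketch: a learned scorer ranks the patch embeddings and only the top-k are kept for training. The PatchSelector module and keep_ratio parameter below are illustrative assumptions, not the paper's actual selection method.

```python
# Minimal sketch of training a ViT on a subset of patches, assuming patch
# embeddings of shape (batch, num_patches, dim); all names are hypothetical.
import torch
import torch.nn as nn

class PatchSelector(nn.Module):
    """Scores patches with a linear head and keeps the top-k of them."""
    def __init__(self, patch_dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(patch_dim, 1)  # learned per-patch importance
        self.keep_ratio = keep_ratio

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        scores = self.scorer(patches).squeeze(-1)             # (B, N)
        k = max(1, int(self.keep_ratio * patches.size(1)))
        top = scores.topk(k, dim=1).indices                   # kept patch ids
        idx = top.unsqueeze(-1).expand(-1, -1, patches.size(-1))
        return patches.gather(1, idx)                         # (B, k, dim)

# Only the selected patches would then be fed to the transformer encoder.
x = torch.randn(8, 196, 768)          # 14x14 patches from a 224x224 image
print(PatchSelector(768)(x).shape)    # torch.Size([8, 98, 768])
```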
no code implementations • 22 Feb 2022 • Hongtao Wu, Jikai Ye, Xin Meng, Chris Paxton, Gregory Chirikjian
We propose a visual foresight model for pick-and-place rearrangement manipulation that learns efficiently.
1 code implementation • 27 Dec 2021 • Zhenglun Kong, Peiyan Dong, Xiaolong Ma, Xin Meng, Mengshu Sun, Wei Niu, Xuan Shen, Geng Yuan, Bin Ren, Minghai Qin, Hao Tang, Yanzhi Wang
Moreover, our framework can guarantee that the identified model meets the resource specifications of mobile devices and FPGAs, and can even achieve real-time execution of DeiT-T on mobile platforms.
Ranked #4 on Efficient ViTs on ImageNet-1K (with DeiT-S)
no code implementations • 29 Sep 2021 • Zhenglun Kong, Peiyan Dong, Xiaolong Ma, Xin Meng, Mengshu Sun, Wei Niu, Bin Ren, Minghai Qin, Hao Tang, Yanzhi Wang
Recently, the Vision Transformer (ViT) has continuously established new milestones in the field of computer vision, but its high computation and memory costs make its adoption in industrial production difficult.
no code implementations • 12 Aug 2021 • Hongtao Wu, Xin Meng, Sipu Ruan, Gregory Chirikjian
Results show that our method enables the robot to autonomously seat the teddy bear on the 12 previously unseen chairs with a very high success rate.
no code implementations • 14 Jan 2019 • Yi Zhen, Hang Chen, Xu Zhang, Meng Liu, Xin Meng, Jian Zhang, Jiantao Pu
To investigate whether and to what extent central serous chorioretinopathy (CSC) depicted on color fundus photographs can be assessed using deep learning technology.