1 code implementation • 7 Nov 2021 • Ruiyang Liu, Yinghui Li, Linmi Tao, Dun Liang, Hai-Tao Zheng
In the GPU era, locally and globally weighted summations are the mainstream operations, represented by convolution and the self-attention mechanism respectively, as well as the MLP.
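The local/global distinction in that abstract can be illustrated concretely. The sketch below (my own minimal NumPy illustration, not code from the paper) contrasts a 1-D convolution, where each output mixes only a small neighborhood with fixed weights, against single-head self-attention on the same 1-D sequence, where every output mixes all inputs with weights computed from the data itself:

```python
import numpy as np

def local_weighted_sum(x, w):
    """Local weighted summation (a 1-D convolution with zero padding):
    each output position mixes only a len(w)-sized neighborhood."""
    k = len(w)
    pad = k // 2
    xp = np.pad(x, (pad, pad))
    return np.array([np.dot(xp[i:i + k], w) for i in range(len(x))])

def global_weighted_sum(x):
    """Global weighted summation (toy single-head self-attention on a
    scalar sequence): every output mixes all inputs, with softmax
    weights derived from pairwise similarities of the data."""
    scores = np.outer(x, x)                        # query-key similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x

x = np.array([1.0, 2.0, 3.0, 4.0])
print(local_weighted_sum(x, np.array([0.25, 0.5, 0.25])))
print(global_weighted_sum(x))
```

In both cases the output is a weighted sum of the inputs; the difference is whether the receptive field is a fixed local window (convolution) or the whole sequence with data-dependent weights (attention).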
no code implementations • 31 May 2021 • Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, Dun Liang, Ralph R. Martin, Shi-Min Hu
In the first week of May 2021, researchers from four different institutions (Google, Tsinghua University, Oxford University, and Facebook) shared their latest work [16, 7, 12, 17] on arXiv.org almost simultaneously, each proposing a new learning architecture consisting mainly of linear layers and claiming it to be comparable, or even superior, to convolution-based models.
1 code implementation • 19 May 2021 • Guo-Wei Yang, Wen-Yang Zhou, Hao-Yang Peng, Dun Liang, Tai-Jiang Mu, Shi-Min Hu
Only query coordinates with high uncertainty are forwarded to the next level, where a larger neural network with greater representational capacity processes them.
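The gating idea described above can be sketched in a few lines. The following is a hypothetical illustration (the predictors, uncertainty measure, and threshold are my own stand-ins, not the paper's models): a cheap network answers all queries and reports a per-query uncertainty, and only the uncertain queries are re-evaluated by a more expensive network:

```python
import numpy as np

def small_net(queries):
    # Hypothetical cheap predictor: returns a value and an
    # uncertainty score for each query coordinate.
    preds = np.sin(queries)
    uncertainty = np.abs(np.cos(queries))  # stand-in uncertainty measure
    return preds, uncertainty

def big_net(queries):
    # Hypothetical expensive predictor with more capacity,
    # invoked only on the hard cases.
    return np.sin(queries)

def cascaded_query(queries, threshold=0.5):
    preds, unc = small_net(queries)
    hard = unc > threshold                  # uncertain queries only
    preds[hard] = big_net(queries[hard])    # forward them to the big net
    return preds, int(hard.sum())

rng = np.random.default_rng(0)
queries = rng.uniform(0.0, np.pi, size=1000)
preds, forwarded = cascaded_query(queries)
print(f"forwarded {forwarded}/{len(queries)} queries to the big net")
```

The design point is that the expensive network sees only a fraction of the queries, so most of the compute is spent where the cheap predictor is unreliable.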
no code implementations • 24 Nov 2018 • Song-Hai Zhang, Zhengping Zhou, Bin Liu, Xin Dong, Dun Liang, Peter Hall, Shi-Min Hu
In this work, we propose a novel topic consisting of two dual tasks: 1) given a scene, recommend objects to insert; 2) given an object category, retrieve suitable background scenes.
no code implementations • 16 Jul 2018 • Dun Liang, Yuanchen Guo, Shaokui Zhang, Song-Hai Zhang, Peter Hall, Min Zhang, Shi-Min Hu
Combining LineNet and TTLane, we propose a pipeline to model HD maps with crowdsourced data for the first time.
no code implementations • CVPR 2016 • Zhe Zhu, Dun Liang, SongHai Zhang, Xiaolei Huang, Baoli Li, Shimin Hu
We call this benchmark Tsinghua-Tencent 100K.