1 code implementation • Findings (NAACL) 2022 • Shusen Wang, Bin Duan, Yanan Wu, Yajing Xu
In this paper, we propose a novel method based on Instance Ranking and Label Calibration strategies (IRLC) to learn discriminative representations for open relation extraction.
1 code implementation • COLING 2022 • Bin Duan, Shusen Wang, Xingxian Liu, Yajing Xu
To mitigate the catastrophic forgetting issue, we design the consistency regularization loss to make better use of the pseudo-labels and jointly train the model with both unsupervised and supervised data.
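Consistency regularization with pseudo-labels is a standard semi-supervised recipe; a minimal, framework-free sketch of the general idea (illustrative only, not this paper's implementation — the confidence threshold and hard pseudo-labels are assumptions in the FixMatch style):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def consistency_loss(weak_logits, strong_logits, threshold=0.9):
    """Pseudo-label consistency loss (generic sketch).

    The prediction on a weakly augmented view supplies a hard
    pseudo-label; if its confidence exceeds `threshold`, the strongly
    augmented view is pushed toward that label via cross-entropy.
    Low-confidence samples contribute zero loss.
    """
    probs = softmax(weak_logits)
    conf = max(probs)
    pseudo = probs.index(conf)
    if conf < threshold:
        return 0.0
    strong_probs = softmax(strong_logits)
    return -math.log(strong_probs[pseudo])
```

In joint training, this unsupervised term is simply added to the supervised cross-entropy on labeled data.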
no code implementations • 21 Mar 2024 • Bin Xie, Hao Tang, Bin Duan, Dawen Cai, Yan Yan
Each pair of auxiliary mask and box prompts, which removes the need for extra user-supplied prompts, is associated with a class label prediction obtained by summing the auxiliary classifier token and the learnable global classifier tokens in SAM's mask decoder, enabling semantic label prediction.
no code implementations • 21 Mar 2024 • Junyi Wu, Bin Duan, Weitai Kang, Hao Tang, Yan Yan
To incorporate the influence of token transformation into interpretation, we propose TokenTM, a novel post-hoc explanation method that utilizes our introduced measurement of token transformation effects.
no code implementations • 10 Mar 2024 • Bin Duan, Yuzhang Shang, Dawen Cai, Yan Yan
In this paper, we propose an online multi-spectral neuron tracing method with uniquely designed modules, requiring no offline training.
1 code implementation • ICCV 2023 • Bin Duan, Ming Zhong, Yan Yan
Moreover, we derive a set of theoretical guarantees for our sanity-checked image registration method, with experimental results supporting our theoretical findings and their effectiveness in increasing the sanity of models without sacrificing any performance.
1 code implementation • 8 Mar 2023 • Xingxian Liu, Bin Duan, Bo Xiao, Yajing Xu
Previous works typically concatenate the query with meeting transcripts and implicitly model query relevance only at the token level with an attention mechanism.
no code implementations • 27 Jan 2023 • Bin Duan, Keshav Bhandari, Gaowen Liu, Yan Yan
Moreover, we present a novel Siamese representation Learning framework for Omnidirectional Flow (SLOF) estimation, which is trained in a contrastive manner via a hybrid loss that combines siamese contrastive and optical flow losses.
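A hybrid loss of this kind is typically a weighted sum of a task loss and a siamese contrastive term that pulls the representations of two views together. A generic, framework-free sketch (the cosine-based contrastive term and the weighting are illustrative assumptions, not SLOF's actual formulation):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def hybrid_loss(feat_a, feat_b, flow_loss, weight=0.1):
    """Hybrid objective: a precomputed optical-flow loss (e.g. a
    photometric error) plus a siamese contrastive term that rewards
    agreement between the features of two views of the same input."""
    contrastive = 1.0 - cosine_similarity(feat_a, feat_b)
    return flow_loss + weight * contrastive
```

When the two views' features agree perfectly, the contrastive term vanishes and only the flow loss remains.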
no code implementations • 7 Aug 2022 • Keshav Bhandari, Bin Duan, Gaowen Liu, Hugo Latapie, Ziliang Zong, Yan Yan
Optical flow estimation in omnidirectional videos faces two significant issues: the lack of benchmark datasets and the challenge of adapting perspective video-based methods to accommodate the omnidirectional nature.
no code implementations • 17 Jul 2022 • Bin Xie, Hao Tang, Bin Duan, Dawen Cai, Yan Yan
Brain vessel image segmentation can be used as a promising biomarker for better prevention and treatment of different diseases.
1 code implementation • 13 Jul 2022 • Yuzhang Shang, Dan Xu, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan
Existing network binarization works rely on the premise that a binary neural network's performance can be largely restored by eliminating the quantization error between full-precision weight vectors and their corresponding binary vectors, and frequently adopt the idea of model robustness to reach this objective.
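The quantization error between a full-precision weight vector and its binary counterpart has a well-known closed-form minimizer in the binarization literature (as in XNOR-Net): for b = sign(w) with scale α, the error ‖w − αb‖² is minimized by α = mean(|w|). A small illustrative sketch (not this paper's code):

```python
def binarize(weights):
    """Binarize a full-precision weight vector as alpha * sign(w).

    Minimizing ||w - alpha * b||^2 with b = sign(w) gives the
    closed-form optimum alpha = mean(|w|).
    """
    alpha = sum(abs(w) for w in weights) / len(weights)
    b = [1.0 if w >= 0 else -1.0 for w in weights]
    return alpha, b

def quantization_error(weights):
    """Squared error between w and its optimal binary approximation."""
    alpha, b = binarize(weights)
    return sum((w - alpha * bi) ** 2 for w, bi in zip(weights, b))
```

The error is zero exactly when all weights share the same magnitude, which is why reducing it is central to restoring a binarized network's accuracy.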
no code implementations • 30 Jan 2022 • Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan
Extensive experiments on CIFAR-10 and CIFAR-100 demonstrate the superiority of our novel Fourier analysis based MBP compared to other traditional MBP algorithms.
no code implementations • ICCV 2021 • Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan
Knowledge distillation has become one of the most important model compression techniques by distilling knowledge from larger teacher networks to smaller student ones.
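The core of classic knowledge distillation is a KL-divergence loss between temperature-softened teacher and student distributions. A minimal sketch of that standard soft-target loss (generic, not this paper's spot-adaptive variant):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields softer targets."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

The loss is zero when student and teacher agree and grows as their softened distributions diverge.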
no code implementations • 14 Aug 2020 • Bin Duan, Hao Tang, Wei Wang, Ziliang Zong, Guowei Yang, Yan Yan
Recent works have shown that attention mechanisms are beneficial to the fusion process.
1 code implementation • 3 Jul 2019 • Bin Duan, Wei Wang, Hao Tang, Hugo Latapie, Yan Yan
However, in machine learning, this cross-modal learning is a nontrivial task because different modalities do not share homogeneous properties.