no code implementations • 25 Mar 2024 • Guangqian Yang, Kangrui Du, Zhihan Yang, Ye Du, Yongping Zheng, Shujun Wang
Our proposed framework is built on a masked Vim autoencoder to learn a unified multi-modal representation and the long-range dependencies contained in 3D medical images.
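As an illustrative sketch of the masked-autoencoder idea this entry builds on, the core masking step hides a random subset of patch tokens and feeds only the visible ones to the encoder. This is a generic sketch, not the paper's implementation; the function name and mask ratio are our own.

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.75, rng=None):
    """Randomly mask a fraction of patch tokens, as in a masked autoencoder.

    patches: (N, D) array of N patch embeddings.
    Returns the visible patches and a boolean mask (True = masked).
    """
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    n_mask = int(round(n * mask_ratio))
    idx = rng.permutation(n)
    mask = np.zeros(n, dtype=bool)
    mask[idx[:n_mask]] = True  # hidden tokens are reconstructed by the decoder
    return patches[~mask], mask
```

The decoder then reconstructs the masked tokens from the visible ones, which forces the encoder to capture long-range structure.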
no code implementations • 14 Mar 2024 • Qingqiu Li, Xiaohan Yan, Jilan Xu, Runtian Yuan, Yuejie Zhang, Rui Feng, Quanli Shen, Xiaobo Zhang, Shujun Wang
For finding and existence, we regard them as image tags, applying an image-tag recognition decoder to associate image features with their respective tags within each sample and constructing soft labels for contrastive learning to improve the semantic association of different image-report pairs.
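One way to turn shared image tags into soft contrastive targets, as described above, is to score pairs of samples by the overlap of their tag sets and normalise each row into a distribution. A hypothetical sketch using Jaccard overlap; the paper's exact soft-label construction may differ.

```python
import numpy as np

def tag_soft_labels(tags):
    """Build soft contrastive targets from per-sample tag sets.

    tags: list of Python sets of tags (one set per sample).
    Samples sharing more tags receive higher target similarity;
    each row is normalised to sum to 1.
    """
    n = len(tags)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            inter = len(tags[i] & tags[j])
            union = len(tags[i] | tags[j]) or 1
            sim[i, j] = inter / union  # Jaccard overlap of tag sets
    return sim / sim.sum(axis=1, keepdims=True)
```

Pairs with overlapping findings then act as partial positives rather than hard negatives, improving the semantic association of different image-report pairs.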
no code implementations • 20 Feb 2024 • Guoqi Yu, Jing Zou, Xiaowei Hu, Angelica I. Aviles-Rivero, Jing Qin, Shujun Wang
Predicting multivariate time series is crucial, demanding precise modeling of intricate patterns, including inter-series dependencies and intra-series variations.
no code implementations • 30 Nov 2023 • Lihao Liu, Yanqi Cheng, Zhongying Deng, Shujun Wang, Dongdong Chen, Xiaowei Hu, Pietro Liò, Carola-Bibiane Schönlieb, Angelica Aviles-Rivero
Multi-object tracking in traffic videos is a crucial research area with immense potential to improve traffic monitoring accuracy and promote road safety through advanced machine learning algorithms.
no code implementations • 22 Nov 2023 • Yanqi Cheng, Lipei Zhang, Zhenda Shen, Shujun Wang, Lequan Yu, Raymond H. Chan, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero
In this work, we introduce Single-Shot PnP methods (SS-PnP), shifting the focus to solving inverse problems with minimal data.
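For context, a single plug-and-play (PnP) iteration alternates a gradient step on the data-fidelity term with a plugged-in denoiser acting as the implicit prior. This is a generic PnP sketch, not SS-PnP itself; the step size and interface are assumptions.

```python
import numpy as np

def pnp_step(x, A, y, denoise, step=0.1):
    """One plug-and-play iteration for the inverse problem y = A x + noise.

    Gradient step on 0.5 * ||A x - y||^2, then a denoiser as the prior.
    """
    grad = A.T @ (A @ x - y)          # gradient of the data-fidelity term
    return denoise(x - step * grad)   # denoiser plays the role of a proximal prior
```

With the identity as a trivial "denoiser", repeated iterations simply solve the least-squares problem; swapping in a learned denoiser injects the image prior.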
1 code implementation • 14 Sep 2023 • Ziyu Guo, Weiqin Zhao, Shujun Wang, Lequan Yu
Considering that the information from different resolutions is complementary and can benefit each other during the learning process, we further design a novel Bidirectional Interaction block to establish communication between different levels within the WSI pyramids.
1 code implementation • ICCV 2023 • Yanyan Huang, Weiqin Zhao, Shujun Wang, Yu Fu, Yuming Jiang, Lequan Yu
In this paper, we propose the first continual learning framework for WSI analysis, named ConSlide, to tackle the challenges of enormous image size, utilization of hierarchical structure, and catastrophic forgetting through progressive model updating on multiple sequential datasets.
no code implementations • 4 Aug 2023 • Juncheng Wang, Jindong Wang, Xixu Hu, Shujun Wang, Xing Xie
Empirical risk minimization (ERM) is a fundamental machine learning paradigm.
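In its simplest form, ERM fits a model by minimising the average loss over the training sample. A minimal numpy sketch for a linear model under squared loss (parameter names and hyperparameters are our own):

```python
import numpy as np

def erm_linear(X, y, lr=0.1, steps=500):
    """Minimise the empirical risk (mean squared error) of a linear model
    by gradient descent -- the ERM principle in its simplest form."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean loss
        w -= lr * grad
    return w
```

The empirical risk is a surrogate for the true (population) risk, and the gap between the two is exactly what robustness and generalization methods aim to control.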
no code implementations • 2 Aug 2023 • Yijun Yang, Shujun Wang, Lihao Liu, Sarah Hickman, Fiona J Gilbert, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
This work devises MammoDG, a novel deep-learning framework for generalisable and reliable analysis of cross-domain multi-center mammography data.
no code implementations • 18 Mar 2023 • Shujun Wang, Angelica I Aviles-Rivero, Zoe Kourtzi, Carola-Bibiane Schönlieb
We demonstrate, through extensive experiments on ADNI, that our proposed HGIB framework outperforms existing state-of-the-art hypergraph neural networks for Alzheimer's disease prognosis.
no code implementations • 14 Mar 2023 • Zhening Huang, Xiaoyang Wu, Hengshuang Zhao, Lei Zhu, Shujun Wang, Georgios Hadjidemetriou, Ioannis Brilakis
For feature aggregation, it improves feature modeling by allowing the network to learn from both local points and neighboring geometry partitions, resulting in an enlarged data-tailored receptive field.
no code implementations • 21 Feb 2023 • Weiqin Zhao, Shujun Wang, Maximus Yeung, Tianye Niu, Lequan Yu
Whole slide images (WSIs) have been widely used to assist automated diagnosis in the era of deep learning.
no code implementations • 17 Nov 2022 • Zhongying Deng, Yanqi Chen, Lihao Liu, Shujun Wang, Rihuan Ke, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero
Firstly, TrafficCAM provides both pixel-level and instance-level semantic labelling, covering a wide range of vehicle and pedestrian types.
2 code implementations • 12 Oct 2022 • Fuying Wang, Yuyin Zhou, Shujun Wang, Varut Vardhanabhuti, Lequan Yu
In this paper, we present a novel Multi-Granularity Cross-modal Alignment (MGCA) framework for generalized medical visual representation learning by harnessing the naturally exhibited semantic correspondences between medical images and radiology reports at three different levels, i.e., pathological region-level, instance-level, and disease-level.
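The instance-level term of such image-report alignment is typically an InfoNCE-style contrastive loss, in which matched pairs sit on the diagonal of the similarity matrix. A generic sketch of that one level only; the full MGCA framework also adds region- and disease-level alignment.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Instance-level image-report contrastive loss (InfoNCE-style).

    img_emb, txt_emb: (B, D) paired embeddings; row i of each is a match.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # matched pairs are on the diagonal
```

Minimising this pulls each image toward its own report and pushes it away from the other reports in the batch.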
no code implementations • 18 Sep 2022 • Yanqi Cheng, Lihao Liu, Shujun Wang, Yueming Jin, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
This is the question that we address in this work.
1 code implementation • 13 Sep 2021 • Yijun Yang, Shujun Wang, Lei Zhu, Lequan Yu
Particularly, for the Extrinsic Consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency.
1 code implementation • 7 Jan 2021 • Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
In this way, the dual teacher models would transfer acquired inter- and intra-domain knowledge to the student model for further integration and exploitation.
1 code implementation • 13 Oct 2020 • Shujun Wang, Lequan Yu, Kang Li, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng
Our DoFE framework dynamically enriches image features with additional domain prior knowledge learned from multiple source domains, making the semantic features more discriminative.
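One way to realise such dynamic enrichment is to attend over a pool of per-source-domain prototype vectors and fuse the similarity-weighted aggregate back into the image feature. A hypothetical sketch of the idea; the names and the fusion rule are our own, not DoFE's exact design.

```python
import numpy as np

def enrich_with_domain_prior(feat, domain_pool):
    """Enrich an image feature with domain prior knowledge.

    feat: (D,) image feature; domain_pool: (K, D) per-domain prototypes.
    """
    sims = domain_pool @ feat            # similarity to each domain prototype
    w = np.exp(sims - sims.max())
    w /= w.sum()                         # softmax attention over domains
    return feat + w @ domain_pool        # residual fusion of the aggregate
```

Features from an unseen target domain thus borrow most from the source domains they resemble.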
no code implementations • 13 Oct 2020 • Shujun Wang, Yaxi Zhu, Lequan Yu, Hao Chen, Huangjing Lin, Xiangbo Wan, Xinjuan Fan, Pheng-Ann Heng
Multi-instance learning based on the most discriminative instances can greatly benefit whole-slide gastric image diagnosis.
no code implementations • 4 Oct 2020 • Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng
Considering that multi-modality data with the same anatomical structures are widely available in clinical routine, in this paper we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (the assistant modality) to improve segmentation performance on another modality (the target modality), thereby compensating for the scarcity of annotations.
no code implementations • ECCV 2020 • Shujun Wang, Lequan Yu, Caizi Li, Chi-Wing Fu, Pheng-Ann Heng
To this end, we present a new domain generalization framework that learns how to generalize across domains simultaneously from extrinsic relationship supervision and intrinsic self-supervision for images from multi-source domains.
no code implementations • 13 Jul 2020 • Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
Medical image annotations are prohibitively time-consuming and expensive to obtain.
1 code implementation • 10 Oct 2019 • Haoran Dou, Xin Yang, Jikuan Qian, Wufeng Xue, Hao Qin, Xu Wang, Lequan Yu, Shujun Wang, Yi Xiong, Pheng-Ann Heng, Dong Ni
In this study, we propose a novel reinforcement learning (RL) framework to automatically localize fetal brain standard planes in 3D US.
no code implementations • 8 Oct 2019 • José Ignacio Orlando, Huazhu Fu, João Barbosa Breda, Karel van Keer, Deepti R. Bathula, Andrés Diaz-Pinto, Ruogu Fang, Pheng-Ann Heng, Jeyoung Kim, Joonho Lee, Joonseok Lee, Xiaoxiao Li, Peng Liu, Shuai Lu, Balamurali Murugesan, Valery Naranjo, Sai Samarth R. Phaye, Sharath M. Shankaranarayana, Apoorva Sikka, Jaemin Son, Anton van den Hengel, Shujun Wang, Junyan Wu, Zifeng Wu, Guanghui Xu, Yongli Xu, Pengshuai Yin, Fei Li, Yanwu Xu, Xiulan Zhang, Hrvoje Bogunović
As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one.
8 code implementations • 16 Jul 2019 • Lequan Yu, Shujun Wang, Xiaomeng Li, Chi-Wing Fu, Pheng-Ann Heng
We design a novel uncertainty-aware scheme to enable the student model to gradually learn from the meaningful and reliable targets by exploiting the uncertainty information.
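A common way to implement such an uncertainty-aware scheme is to run several stochastic (dropout) forward passes of the teacher and keep only targets whose predictive entropy is low. A simplified sketch of the filtering step; the paper's exact uncertainty estimate and thresholding schedule may differ.

```python
import numpy as np

def uncertainty_mask(mc_probs, threshold=0.5):
    """Keep only teacher targets whose predictive entropy is low.

    mc_probs: (T, N, C) softmax outputs from T stochastic forward passes
    over N voxels/pixels with C classes. Returns the mean prediction and
    a boolean reliability mask (True = reliable target).
    """
    mean_p = mc_probs.mean(axis=0)                         # (N, C) mean prediction
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)
    return mean_p, entropy < threshold
```

The student's consistency loss is then computed only on the reliable voxels, so noisy teacher predictions do not mislead training.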
1 code implementation • 26 Jun 2019 • Shujun Wang, Lequan Yu, Kang Li, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng
The cross-domain discrepancy (domain shift) hinders the generalization of deep neural networks across datasets from different domains. In this work, we present an unsupervised domain adaptation framework, called Boundary and Entropy-driven Adversarial Learning (BEAL), to improve OD and OC segmentation performance, especially on ambiguous boundary regions.
no code implementations • 20 Feb 2019 • Shujun Wang, Lequan Yu, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng
In this paper, we present a novel patch-based Output Space Adversarial Learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets.
Ranked #2 on Optic Disc Segmentation on REFUGE