no code implementations • 18 Jan 2024 • Zhongliang Guo, Junhao Dong, Yifei Qian, Kaixuan Wang, Weiye Li, Ziheng Guo, Yuheng Wang, Yanli Li, Ognjen Arandjelović, Lei Fang
Neural style transfer (NST) generates new images by combining the style of one image with the content of another.
no code implementations • 13 Jan 2024 • Junxi Chen, Junhao Dong, Xiaohua Xie
Recently, many studies have utilized adversarial examples (AEs) to raise the cost of malicious image editing and copyright violation powered by latent diffusion models (LDMs).
no code implementations • 16 Nov 2023 • Zhu Meng, Junhao Dong, Limei Guo, Fei Su, Guangxi Wang, Zhicheng Zhao
Signet ring cells (SRCs) are associated with a high rate of peripheral metastasis and dismal survival, making them important in determining surgical approach and prognosis; however, they are easily missed even by experienced pathologists.
no code implementations • 12 Oct 2023 • Qiang Li, Dan Zhang, Shengzhao Lei, Xun Zhao, Porawit Kamnoedboon, Weiwei Li, Junhao Dong, Shuyan Li
Despite the promising performance of existing visual models on public benchmarks, the critical assessment of their robustness for real-world applications remains an ongoing challenge.
no code implementations • 19 Jul 2023 • Junhao Dong, Zhu Meng, Delong Liu, Zhicheng Zhao, Fei Su
Prototype-based classification is a classical method in machine learning, and recently it has achieved remarkable success in semi-supervised semantic segmentation.
no code implementations • 16 May 2023 • Junxi Chen, Junhao Dong, Xiaohua Xie
However, recent work has shown inequality phenomena in $l_{\infty}$-adversarial training and revealed that the $l_{\infty}$-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d.
no code implementations • 24 Mar 2023 • Junhao Dong, Junxi Chen, Xiaohua Xie, JianHuang Lai, Hao Chen
In this exposition, we present a comprehensive survey of recent advances in adversarial attack and defense for medical image analysis, organized by a novel taxonomy of application scenarios.
no code implementations • CVPR 2023 • Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, JianHuang Lai, Xiaohua Xie
To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar outputs for an adversarial example and its "inverse adversarial" counterpart.
no code implementations • 26 Apr 2022 • Junhao Dong, YuAn Wang, JianHuang Lai, Xiaohua Xie
DeepFake face swapping, which can replace the source face in an arbitrary photo/video with the target face of an entirely different person, presents a significant threat to online security and social media.
1 code implementation • 24 Feb 2022 • Yunhao Du, Junfeng Wan, Yanyun Zhao, Binyu Zhang, Zhihang Tong, Junhao Dong
In recent years, algorithms for multiple object tracking have benefited from great progress in deep models and video quality.
no code implementations • CVPR 2022 • Junhao Dong, YuAn Wang, Jian-Huang Lai, Xiaohua Xie
Extensive experiments show that our method can significantly outperform state-of-the-art adversarially robust few-shot image classification (FSIC) methods on two standard benchmarks.