Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation

24 Aug 2022 · Jianlong Yuan, Jinchao Ge, Zhibin Wang, Yifan Liu

Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and promising performance has been achieved. In this work, we propose a new consistency regularization framework, termed mutual knowledge distillation (MKD), combined with data and feature augmentation. We introduce two auxiliary mean-teacher models based on consistency regularization. More specifically, we use the pseudo-labels generated by one mean teacher to supervise the student network of the other branch, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentation, we also explore feature augmentation, which provides additional sources of knowledge for distilling the student networks and thereby significantly increases the diversity of the training samples. Experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings. Code is available at semi-mmseg.
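The cross-branch scheme described above can be sketched in toy form: each branch keeps a student and a mean teacher (an exponential moving average of the student's weights), the teacher of one branch produces hard pseudo-labels from a weakly augmented view, and those labels supervise the *other* branch's student on a strongly augmented view. The linear per-pixel models, augmentation noise levels, learning rate, and momentum below are all illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

# Toy sketch of mutual knowledge distillation (MKD) between two
# mean-teacher branches. "Segmentation" is reduced to per-pixel linear
# classification over random features; all choices are assumptions.

def ema_update(teacher_w, student_w, momentum=0.99):
    """Mean-teacher update: teacher weights track the student by EMA."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def predict(weights, x):
    """Toy per-pixel class logits: (pixels, feat) @ (feat, classes)."""
    return x @ weights

def pseudo_labels(weights, x_weak):
    """Teacher's hard pseudo-labels from the weakly augmented view."""
    return predict(weights, x_weak).argmax(axis=1)

def cross_entropy_grad(weights, x_strong, labels):
    """Gradient of softmax cross-entropy w.r.t. the linear weights."""
    logits = predict(weights, x_strong)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(labels)), labels] -= 1.0
    return x_strong.T @ probs / len(labels)

rng = np.random.default_rng(0)
n_pix, n_feat, n_cls = 64, 8, 4
student_a = rng.normal(size=(n_feat, n_cls))
student_b = rng.normal(size=(n_feat, n_cls))
teacher_a, teacher_b = student_a.copy(), student_b.copy()

for step in range(50):
    x = rng.normal(size=(n_pix, n_feat))            # unlabeled "image"
    x_weak = x + 0.01 * rng.normal(size=x.shape)    # weak augmentation
    x_strong = x + 0.30 * rng.normal(size=x.shape)  # strong augmentation

    # Mutual distillation: teacher A's pseudo-labels train student B,
    # and teacher B's pseudo-labels train student A.
    y_from_a = pseudo_labels(teacher_a, x_weak)
    y_from_b = pseudo_labels(teacher_b, x_weak)
    student_b -= 0.1 * cross_entropy_grad(student_b, x_strong, y_from_a)
    student_a -= 0.1 * cross_entropy_grad(student_a, x_strong, y_from_b)

    # Each teacher follows its own student by EMA.
    teacher_a = ema_update(teacher_a, student_a)
    teacher_b = ema_update(teacher_b, student_b)

# Fraction of pixels on which the two branches' teachers agree.
x_test = rng.normal(size=(n_pix, n_feat))
agree = (pseudo_labels(teacher_a, x_test)
         == pseudo_labels(teacher_b, x_test)).mean()
```

In the full method each branch would be a segmentation network trained with strong/weak image augmentation plus feature augmentation; the sketch only captures the cross-supervision and EMA-teacher structure.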
