Semi-supervised object detection based on single-stage detector for thighbone fracture localization

20 Oct 2022  ·  Jinman Wei, Jinkun Yao, Guoshan Zhang, Bin Guan, Yueming Zhang, Shaoquan Wang

The thighbone is the largest bone supporting the lower body. If a thighbone fracture is not treated in time, it can lead to a lifelong inability to walk, so correct diagnosis of thighbone disease is very important in orthopedic medicine. Deep learning is promoting the development of fracture detection technology. However, existing computer-aided diagnosis (CAD) methods based on deep learning rely on large amounts of manually labeled data, and labeling these data is time-consuming and labor-intensive. We therefore develop an object detection method that requires only a limited number of labeled images and apply it to thighbone fracture localization. In this work, we build a semi-supervised object detection (SSOD) framework based on a single-stage detector, which includes three modules: an adaptive difficult sample oriented (ADSO) module, Fusion Box and a deformable expand encoder (Dex encoder). The ADSO module weights pseudo labels by their classification scores, using the score as the criterion for label reliability; Fusion Box merges similar pseudo boxes into a single reliable box for box regression; and the Dex encoder enhances the model's adaptability to image augmentation. Experiments are conducted on a thighbone fracture dataset containing 3484 training images and 358 test images. The results show that the proposed method achieves state-of-the-art AP for thighbone fracture detection at different labeled-data rates, i.e. 1%, 5% and 10%. In addition, when knowledge distillation is performed with the full labeled data, our method achieves 86.2% AP50 and 52.6% AP75.
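To make the Fusion Box idea concrete, the sketch below shows one plausible way to merge overlapping teacher pseudo boxes into a single box by score-weighted averaging of their coordinates, with the classification score also acting as the reliability weight in the spirit of ADSO. The abstract does not specify the exact fusion rule, so the grouping-by-IoU strategy, the threshold value and the function names (`fuse_pseudo_boxes`, `iou`) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_pseudo_boxes(boxes, scores, iou_thr=0.5):
    """Group pseudo boxes whose IoU exceeds iou_thr and fuse each group
    into one box by classification-score-weighted coordinate averaging.

    boxes:  (N, 4) array of [x1, y1, x2, y2] pseudo boxes from the teacher
    scores: (N,)   classification scores used as fusion weights
    """
    order = np.argsort(scores)[::-1]            # start from high-score boxes
    used = np.zeros(len(boxes), dtype=bool)
    fused_boxes, fused_scores = [], []
    for i in order:
        if used[i]:
            continue
        group = [i]
        used[i] = True
        for j in order:
            if not used[j] and iou(boxes[i], boxes[j]) >= iou_thr:
                group.append(j)
                used[j] = True
        w = scores[group] / scores[group].sum()  # score-based weights
        fused_boxes.append((w[:, None] * boxes[group]).sum(axis=0))
        fused_scores.append(scores[group].mean())
    return np.asarray(fused_boxes), np.asarray(fused_scores)
```

This is essentially a weighted-boxes-fusion-style alternative to hard NMS: instead of discarding lower-scoring duplicates, their coordinates contribute to the fused box in proportion to their confidence, which tends to give more stable regression targets for the student detector.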
