Segmentation of Surgical Instruments for Minimally-Invasive Robot-Assisted Procedures Using Generative Deep Neural Networks

5 Jun 2020 · Iñigo Azqueta-Gavaldon, Florian Fröhlich, Klaus Strobl, Rudolph Triebel

This work demonstrates that semantic segmentation of minimally invasive surgical instruments can be improved by using training data augmented through domain adaptation. The benefit of this method is twofold. First, it removes the need to manually label thousands of images by transforming synthetic data into realistic-looking data. To achieve this, a CycleGAN model is used to transform a source dataset so that it approximates the domain distribution of a target dataset. Second, this newly generated data, which comes with perfect labels, is used to train a semantic segmentation neural network, U-Net. The method generalizes to data that varies in rotation, position, and lighting conditions. Nevertheless, one caveat of this approach is that the model fails to generalize to surgical instruments whose shape differs from the one used for training, which is driven by the low geometric variance of the training data. Future work will focus on making the model more scale-invariant and able to adapt to types of surgical instruments unseen during training.
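
The abstract describes a two-stage pipeline: a CycleGAN translates synthetic renders into realistic-looking frames, and a U-Net is then trained on those frames with the renders' pixel-perfect masks. The paper does not ship code here, so below is a minimal PyTorch sketch of the second stage only; the toy `MiniUNet` architecture, the training hyperparameters, and the random tensors standing in for the CycleGAN-translated images and their masks are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Toy two-level U-Net: encoder, bottleneck, decoder with one skip connection."""
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)  # per-pixel class logits

# Placeholder batch standing in for CycleGAN-translated synthetic frames.
# In the paper's pipeline these would be synthetic renders passed through a
# trained synthetic-to-real generator, paired with their perfect render masks.
images = torch.rand(4, 3, 64, 64)          # stand-in "realistic" frames
masks = torch.randint(0, 2, (4, 64, 64))    # 0 = background, 1 = instrument

model = MiniUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(5):                       # tiny demo loop
    opt.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    opt.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```

Because the masks come from the synthetic renders rather than human annotators, the segmentation labels are exact by construction; the CycleGAN stage exists solely to close the appearance gap between the rendered inputs and real endoscopic footage.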
