Fakultät Informatik und Mathematik
Regensburg Center for Artificial Intelligence
Regensburg Center of Biomedical Engineering
Regensburg Center of Health Sciences and Technology

Prof. Dr. rer. nat. Christoph Palm

Scene segmentation for surgical robots

Project description

Motivation

To interpret an image, it is essential to know which objects are located where in it. To achieve this, every pixel of the image must be assigned an object class, a task known as semantic segmentation. Once this information is available, it can be used for more advanced problems such as the detection of risk structures, automatic camera tracking, quality measurements during an operation, or the recognition of operation steps.
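
As a small illustration of what such a pixel-wise labelling looks like, the following sketch (assuming Python with NumPy; the class names and IDs are hypothetical, not the project's actual label set) builds a toy mask image and reads simple information out of it:

import numpy as np

# Hypothetical class IDs for illustration only (not the project's label set).
CLASSES = {0: "background", 1: "instrument", 2: "kidney", 3: "needle"}

# A tiny 4 x 6 "mask image": one object class per pixel.
mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [2, 2, 0, 0, 3, 0],
    [2, 2, 2, 0, 3, 0],
])

# Once the mask is known, derived questions become simple array operations,
# e.g. where the kidney lies and how much of the image it covers.
kidney_pixels = np.argwhere(mask == 2)   # pixel coordinates labelled "kidney"
kidney_share = (mask == 2).mean()        # fraction of the image covered
print(len(kidney_pixels), round(kidney_share, 2))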

Goals and procedure

The project aims at the automatic analysis of endoscopic videos. For this purpose, medically relevant objects are detected in the video images with the help of artificial intelligence. These objects can be, for example, the instruments of a surgical robot (manipulator, joint, shaft), anatomical objects (kidney, intestine, ...) and medical material (needles, sutures, clamps, ...). The objects are recognized and differentiated by means of so-called "semantic segmentation". The method used for this is based on deep neural networks with an encoder-decoder architecture: the encoder first extracts distinctive features from the input image that are essential for the task, and the decoder then converts this encoded representation into a mask image that specifies the object category of each pixel. After applying the method, it is therefore known which objects are located where in the image. The project requires expertise in the analysis of endoscopic videos, which has already been demonstrated in a competition [1].
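
The sketch below (assuming Python with PyTorch) illustrates this encoder-decoder idea on a toy scale; the layer sizes, number of classes, and input resolution are placeholders and do not reflect the network actually used in the project.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder: the encoder downsamples the image and extracts
    features, the decoder upsamples them back to a per-pixel class map."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Encoder: two convolution + downsampling stages.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # H/2 x W/2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # H/4 x W/4
        )
        # Decoder: two upsampling stages back to full resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.encoder(x)    # compact feature representation
        logits = self.decoder(features)  # one score map per object class
        return logits

# Usage: an RGB endoscopic frame goes in, a per-pixel class map comes out.
frame = torch.randn(1, 3, 256, 320)        # batch x channels x H x W
logits = TinySegNet(num_classes=4)(frame)  # 1 x 4 x 256 x 320
mask = logits.argmax(dim=1)                # 1 x 256 x 320, class ID per pixel

Taking the argmax over the class dimension turns the network output into exactly the kind of mask image described above, with one object category per pixel.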

[1] EndoVis 2018 Robotic Scene Segmentation Sub-Challenge, https://endovissub2018-roboticscenesegmentation.grand-challenge.org/