Fakultät Informatik und Mathematik
Regensburg Center for Artificial Intelligence
Regensburg Center of Biomedical Engineering
Regensburg Center of Health Sciences and Technology

Prof. Dr. rer. nat. Christoph Palm

Publikationen

ReMIC (Prof. Palm)

2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013 | 2011 | 2010 | 2009 | 2008 | 2007 | 2006 | 2005 | 2004 | 2003 | 2002 | 2001 | 2000

2020

Bildverarbeitung für die Medizin 2020
Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm Optically tracked and 3D printed haptic phantom hand for surgical training system
Alanna Ebigbo, Robert Mendel, Andreas Probst, Johannes Manzeneder, Friederike Prinz, Luis Antonio de Souza, João P. Papa, Christoph Palm, Helmut Messmann Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus
Maximilian Weiherer, Martin Zorn, Thomas Wittenberg, Christoph Palm Retrospective Color Shading Correction for Endoscopic Images
Ching-Sheng Chang, Jin-Fa Lin, Ming-Ching Lee, Christoph Palm Semantic Lung Segmentation Using Convolutional Neural Networks

2019

Alanna Ebigbo, Christoph Palm, Andreas Probst, Robert Mendel, Johannes Manzeneder, Friederike Prinz, Luis Antonio de Souza, João P. Papa, Peter Siersema, Helmut Messmann A technical review of artificial intelligence as applied to gastrointestinal endoscopy: clarifying the terminology
Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm Abstract: Imitating Human Soft Tissue with Dual-Material 3D Printing
Alanna Ebigbo, Robert Mendel, Andreas Probst, Johannes Manzeneder, Luis Antonio de Souza, João P. Papa, Christoph Palm, Helmut Messmann Artificial Intelligence in Early Barrett's Cancer: The Segmentation Task
Leandro A. Passos, Luis Antonio de Souza, Robert Mendel, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Christoph Palm, João P. Papa Barrett’s esophagus analysis using infinity Restricted Boltzmann Machines
Bildverarbeitung für die Medizin 2019
Alanna Ebigbo, Robert Mendel, Andreas Probst, Johannes Manzeneder, Luis Antonio de Souza, João P. Papa, Christoph Palm, Helmut Messmann Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma
Johannes Maier, Jerome Perret, Martina Simon, Stephanie Schmitt-Rüth, Thomas Wittenberg, Christoph Palm Force-feedback assisted and virtual fixtures based K-wire drilling simulation
Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm Imitating human soft tissue on basis of a dual-material 3D print using a support-filled metamaterial to provide bimanual haptic for a hand surgery training system
Peter Brown, RELISH Consortium, Yaoqi Zhou Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Luis Antonio de Souza, Luis Claudio Sugi Afonso, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Robert Mendel, Christian Hook, Christoph Palm, João P. Papa Learning visual representations with optimum-path forest and its applications to Barrett's esophagus and adenocarcinoma diagnosis
Luise Middel, Christoph Palm, Marius Erdt Synthesis of Medical Images Using GANs

2018

Rebecca Wöhl, Johannes Maier, Sebastian Gehmert, Christoph Palm, Birgit Riebschläger, Michael Nerlich, Michaela Huber 3D Analysis of Osteosyntheses Material using semi-automated CT Segmentation
Felix Graßmann, Judith Mengelkamp, Caroline Brandl, Sebastian Harsch, Martina E. Zimmermann, Birgit Linkohr, Annette Peters, Iris M. Heid, Christoph Palm, Bernhard H. F. Weber A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography
Thomas Eixelberger, Thomas Wittenberg, Jerome Perret, Uwe Katzky, Martina Simon, Stephanie Schmitt-Rüth, Mathias Hofer, M. Sorge, R. Jacob, Felix B. Engel, A. Gostian, Christoph Palm, Daniela Franz A haptic model for virtual petrosal bone milling
Luis Antonio de Souza, Christoph Palm, Robert Mendel, Christian Hook, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Silke Weber, João P. Papa A survey on Barrett's esophagus analysis using machine learning
Luis Antonio de Souza, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Joao P. Papa, Robert Mendel, Christoph Palm Barrett's Esophagus Identification Using Color Co-occurrence Matrices
Bildverarbeitung für die Medizin 2018
Daniela Franz, Maria Dreher, Martin Prinzen, Matthias Teßmann, Christoph Palm, Uwe Katzky, Jerome Perret, Mathias Hofer, Thomas Wittenberg CT-basiertes virtuelles Fräsen am Felsenbein
Johannes Maier, Michaela Huber, Uwe Katzky, Jerome Perret, Thomas Wittenberg, Christoph Palm Force-Feedback-assisted Bone Drilling Simulation Based on CT Data

2017

Luis Antonio de Souza, Christian Hook, João P. Papa, Christoph Palm Barrett's Esophagus Analysis Using SURF Features
Luis Antonio de Souza, Luis Claudio Sugi Afonso, Christoph Palm, João P. Papa Barrett's Esophagus Identification Using Optimum-Path Forest
Robert Mendel, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Christoph Palm Barrett’s Esophagus Analysis Using Convolutional Neural Networks
Josef A. Schröder, Matthias Semmelmann, Heiko Siegmund, Claudia Grafe, Matthias Evert, Christoph Palm Improved interactive computer-assisted approach for evaluation of ultrastructural cilia abnormalities
Rebecca Wöhl, Michaela Huber, Markus Loibl, Birgit Riebschläger, Michael Nerlich, Christoph Palm The Impact of Semi-Automated Segmentation and 3D Analysis on Testing New Osteosynthesis Material

2016

Daniela Franz, Uwe Katzky, S. Neumann, Jerome Perret, Mathias Hofer, Michaela Huber, Stephanie Schmitt-Rüth, S. Haug, K. Weber, Martin Prinzen, Christoph Palm, Thomas Wittenberg Haptisches Lernen für Cochlea Implantationen
Christoph Palm, Heiko Siegmund, Matthias Semmelmann, Claudia Grafe, Matthias Evert, Josef A. Schröder Interactive Computer-assisted Approach for Evaluation of Ultrastructural Cilia Abnormalities

2015

Markus Hutterer, Elke Hattingen, Christoph Palm, Martin Andreas Proescholdt, Peter Hau Current standards and new concepts in MRI and PET response assessment of antiangiogenic therapies in high-grade glioma patients
Joachim Weber, Christian Doenitz, Alexander Brawanski, Christoph Palm Data-Parallel MRI Brain Segmentation in Clinical Use
Alexander Zehner, Alexander Eduard Szalo, Christoph Palm GraphMIC: Easy Prototyping of Medical Image Computing Applications
Alexander Eduard Szalo, Alexander Zehner, Christoph Palm GraphMIC: Medizinische Bildverarbeitung in der Lehre

2014

Christoph Palm Fusion of Serial 2D Section Images and MRI Reference

2013

Christoph Palm, T. Schanze Biomedical Image and Signal Computing (BISC 2013)
Joachim Weber, Alexander Brawanski, Christoph Palm Parallelization of FSL-Fast segmentation of MRI brain data
Thomas M. Deserno, Heinz Handels, Klaus-Hermann Maier-Hein, Sven Mersmann, Christoph Palm, Thomas Tolxdorff, Gudrun Wagenknecht, Thomas Wittenberg Viewpoints on Medical Image Processing

2011

Tobias Osterholt, Dagmar Salber, Andreas Matusch, Johanna Sabine Becker, Christoph Palm IMAGENA: Image Generation and Analysis
Johanna Sabine Becker, Andreas Matusch, Julia Susanne Becker, Bei Wu, Christoph Palm, Albert Johann Becker, Dagmar Salber Mass spectrometric imaging (MSI) of metals using advanced BrainMet techniques for biomedical research
Markus Axer, Katrin Amunts, David Gräßel, Christoph Palm, Jürgen Dammers, Hubertus Axer, Uwe Pietrzyk, Karl Zilles Novel Approach to the Human Connectome

2010

Johanna Sabine Becker, Miroslav Zoriy, Andreas Matusch, Bei Wu, Dagmar Salber, Christoph Palm, Julia Susanne Becker Bioimaging of Metals by Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS)
Johanna Sabine Becker, Andreas Matusch, Christoph Palm, Dagmar Salber, Kathryn A. Morton, Julia Susanne Becker Bioimaging of metals in brain tissue by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and metallomics
Andreas Matusch, Candan Depboylu, Christoph Palm, Bei Wu, Günter U. Höglinger, Martin K.-H. Schäfer, Johanna Sabine Becker Cerebral bio-imaging of Cu, Fe, Zn and Mn in the MPTP mouse model of Parkinsons disease using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS)
Björn Eiben, Christoph Palm, Uwe Pietrzyk, Christos Davatzikos, Katrin Amunts Error Correction using Registration for Blockface Volume Reconstruction of Serial Histological Sections of the Human Brain
Jürgen Dammers, Markus Axer, David Gräßel, Christoph Palm, Karl Zilles, Katrin Amunts, Uwe Pietrzyk Signal enhancement in polarized light imaging by means of independent component analysis
Christoph Palm, Markus Axer, David Gräßel, Jürgen Dammers, Johannes Lindemeyer, Karl Zilles, Uwe Pietrzyk, Katrin Amunts Towards ultra-high resolution fibre tract mapping of the human brain

2009

Christoph Palm, Andreas Vieten, Dagmar Salber, Uwe Pietrzyk Evaluation of Registration Strategies for Multi-modality Images of Rat Brain Slices
Björn Eiben, Dietmar Kunz, Uwe Pietrzyk, Christoph Palm Level-Set-Segmentierung von Rattenhirn MRTs
Nicole Schubert, Uwe Pietrzyk, Martin Reißel, Christoph Palm Reduktion von Rissartefakten durch nicht-lineare Registrierung in histologischen Schnittbildern
David Gräßel, Markus Axer, Christoph Palm, Jürgen Dammers, Katrin Amunts, Uwe Pietrzyk, Karl Zilles Visualization of Fiber Tracts in the Postmortem Human Brain by Means of Polarized Light

2008

Andreas Mang, Julia A. Schnabel, William R. Crum, Marc Modat, Oscar Camara-Rey, Christoph Palm, Gisele Brasil Caseiras, H. Rolf Jäger, Sébastien Ourselin, Thorsten M. Buzug, David J. Hawkes Consistency of parametric registration in serial MRI studies of brain tumor progression
Christoph Palm, Graeme P. Penney, William R. Crum, Julia A. Schnabel, Uwe Pietrzyk, David J. Hawkes Fusion of Rat Brain Histology and MRI using Weighted Multi-Image Mutual Information
Thomas Beyer, Markus Weigert, Harald H. Quick, Uwe Pietrzyk, Florian Vogt, Christoph Palm, Gerald Antoch, Stefan P. Müller, Andreas Bockisch MR-based attenuation correction for torso-PET/MR imaging
Christoph Palm, Uwe Pietrzyk Time-Dependent Joint Probability Speed Function for Level-Set Segmentation of Rat-Brain Slices
Markus Weigert, Uwe Pietrzyk, Stefan P. Müller, Christoph Palm, Thomas Beyer Whole-body PET/CT imaging

2007

Christoph Palm, William R. Crum, Uwe Pietrzyk, David J. Hawkes Application of Fluid and Elastic Registration Methods to Histological Rat Brain Sections
Markus Weigert, Thomas Beyer, Harald H. Quick, Uwe Pietrzyk, Christoph Palm, Stefan P. Müller Generation of a MRI reference data set for the validation of automatic, non-rigid image co-registration algorithms
Markus Dehnhardt, Christoph Palm, Andreas Vieten, Andreas Bauer, Uwe Pietrzyk Quantifying the A1AR distribution in peritumoral zones around experimental F98 and C6 rat brain tumours
Markus Weigert, Christoph Palm, Harald H. Quick, Stefan P. Müller, Uwe Pietrzyk, Thomas Beyer Template for MR-based attenuation correction for whole-body PET/MR imaging
Markus Axer, Hubertus Axer, Christoph Palm, David Gräßel, Karl Zilles, Uwe Pietrzyk Visualization of Nerve Fibre Orientation in the Visual Cortex of the Human Brain by Means of Polarized Light

2006

Christoph Palm, Andreas Vieten, Dagmar Bauer, Uwe Pietrzyk Evaluierung von Registrierungsstrategien zur multimodalen 3D-Rekonstruktion von Rattenhirnschnitten
Thomas Beyer, Markus Weigert, Christoph Palm, Harald H. Quick, Stefan P. Müller, Uwe Pietrzyk, Florian Vogt, M.J. Martinez, Andreas Bockisch Towards MR-based attenuation correction for whole-body PET/MR imaging
Dagmar Bauer, Gabriele Stoffels, Dirk Pauleit, Christoph Palm, Kurt Hamacher, Heinz H. Coenen, Karl Langen Uptake of F-18-fluoroethyl-L-tyrosine and H-3-L-methionine in focal cortical ischemia

2005

Christoph Palm, Markus Dehnhardt, Andreas Vieten, Uwe Pietrzyk 3D rat brain tumor reconstruction
Christoph Palm, Markus Dehnhardt, Andreas Vieten, Uwe Pietrzyk, Andreas Bauer, Karl Zilles 3D rat brain tumors
Uwe Pietrzyk, Christoph Palm, Thomas Beyer Fusion strategies in multi-modality imaging
Dagmar Bauer, Kurt Hamacher, Stefan Bröer, Dirk Pauleit, Christoph Palm, Karl Zilles, Heinz H. Coenen, Karl-Josef Langen Preferred stereoselective brain uptake of D-serine

2004

Christoph Palm Color Texture Classification by Integrative Co-Occurrence Matrices
Uwe Pietrzyk, Dagmar Bauer, Andreas Vieten, Andreas Bauer, Karl-Josef Langen, Karl Zilles, Christoph Palm Creating consistent 3D multi-modality data sets from autoradiographic and histological images of the rat brain
Uwe Pietrzyk, Christoph Palm, Thomas Beyer Investigation of fusion strategies of multi-modality images

2003

Christoph Palm, Andreas G. Schütz, Klaus Spitzer, Martin Westhofen, Thomas M. Lehmann, Justus F. R. Ilgner Colour Texture Analysis for Quantitative Laryngoscopy
Christoph Palm Integrative Auswertung von Farbe und Textur

2002

Christoph Palm, Thomas M. Lehmann Classification of Color Textures by Gabor Filtering
B. Fischer, Christoph Palm, Thomas M. Lehmann, Klaus Spitzer Selektion von Farbtexturmerkmalen zur Tumorklassifikation dermatoskopischer Fotografien

2001

C. Neuschaefer-Rube, Thomas M. Lehmann, Christoph Palm, J. Bredno, S. Klajman, Klaus Spitzer 3D-Visualisierung glottaler Abduktionsbewegungen
Christoph Palm, Thomas M. Lehmann, J. Bredno, C. Neuschaefer-Rube, S. Klajman, Klaus Spitzer Automated Analysis of Stroboscopic Image Sequences by Vibration Profiles
Thomas M. Lehmann, Christoph Palm Color Line Search for Illuminant Estimation in Real World Scenes

2000

Christoph Palm, Thomas M. Lehmann, Klaus Spitzer Color Texture Analysis of Moving Vocal Cords Using Approaches from Statistics and Signal Theory
Christoph Palm, D. Keysers, Thomas M. Lehmann, Klaus Spitzer Gabor Filtering of Complex Hue/Saturation Images for Color Texture Classification
Christoph Palm, B. Fischer, Thomas M. Lehmann, Klaus Spitzer Hierarchische Wasserscheiden-Transformation zur Lippensegmentierung in Farbbildern
V. Metzler, T. Aach, Christoph Palm, Thomas M. Lehmann Texture Classification of Graylevel Images by Multiscale Cross-Co-Occurrence Matrices

Bildverarbeitung für die Medizin 2020

In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The aim in 2020 is once again to present current research results and to deepen the dialogue between scientists, industry and users. The contributions in this volume – some of them in English – cover all areas of medical image processing, in particular image formation and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time-series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more.

Retrospective Color Shading Correction for Endoscopic Images

Maximilian Weiherer, Martin Zorn, Thomas Wittenberg, Christoph Palm

In this paper, we address the problem of retrospective color shading correction. An extension of the established gray-level shading correction algorithm based on signal envelope (SE) estimation to color images is developed using principal color components. Compared to the probably most general shading correction algorithm based on entropy minimization, SE estimation does not need any computationally expensive optimization and thus can be implemented more efficiently. We tested our new shading correction scheme on artificial as well as real endoscopic images and observed promising results. Additionally, an in-depth analysis of the stop criterion used in the SE estimation algorithm is provided, leading to the conclusion that a fixed, user-defined threshold is generally not feasible. Thus, we present new ideas on how to develop a non-parametric version of the SE estimation algorithm using entropy.
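
As a rough illustration of the general idea of retrospective shading correction (dividing each channel by a smooth estimate of the illumination field), the following Python sketch uses a heavy Gaussian blur as the shading estimate; this is an assumed simplification for illustration, not the SE estimation algorithm of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retrospective_shading_correction(image, sigma=75.0, eps=1e-6):
    """Correct multiplicative shading by dividing each channel by a
    smooth illumination estimate (here: a heavy Gaussian blur).

    image : float array of shape (H, W, 3), values in [0, 1].
    sigma : width of the low-pass filter used as the shading estimate.
    """
    corrected = np.empty_like(image)
    for c in range(image.shape[2]):
        channel = image[..., c]
        shading = gaussian_filter(channel, sigma=sigma) + eps  # smooth shading field
        gain = shading.mean() / shading                        # normalize to mean brightness
        corrected[..., c] = np.clip(channel * gain, 0.0, 1.0)
    return corrected
```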

Semantic Lung Segmentation Using Convolutional Neural Networks

Ching-Sheng Chang, Jin-Fa Lin, Ming-Ching Lee, Christoph Palm

Chest X-ray (CXR) images as part of a non-invasive diagnosis method are commonly used in today's medical workflow. In traditional methods, physicians usually use their experience to interpret CXR images; however, there is a large inter-observer variance. Computer vision may be used as a standard for assisted diagnosis. In this study, we applied an encoder-decoder neural network architecture for automatic lung region detection. We compared a three-class approach (left lung, right lung, background) and a two-class approach (lung, background). The differentiation of left and right lungs as a direct result of semantic segmentation with neural networks, rather than as post-processing of a lung-background segmentation, is done here for the first time. Our evaluation was done on the NIH Chest X-ray dataset, from which 1736 images were extracted and manually annotated. We achieved 94.9% mIoU and 92% mIoU as segmentation quality measures for the two-class model and the three-class model, respectively. This result is very promising for the segmentation of lung regions with the simultaneous classification of left and right lung in mind.
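
For reference, the reported segmentation quality measure corresponds to the standard per-class intersection over union averaged over all classes (standard definition, not quoted from the paper):

```latex
\mathrm{IoU}_c = \frac{|P_c \cap G_c|}{|P_c \cup G_c|}
              = \frac{TP_c}{TP_c + FP_c + FN_c},
\qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C}\mathrm{IoU}_c
```

where P_c and G_c denote the predicted and ground-truth pixel sets of class c, and C is the number of classes.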

Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus

Alanna Ebigbo, Robert Mendel, Andreas Probst, Johannes Manzeneder, Friederike Prinz, Luis Antonio de Souza, João P. Papa, Christoph Palm, Helmut Messmann

Based on previous work by our group with manual annotation of visible Barrett oesophagus (BE) cancer images, a real-time deep learning artificial intelligence (AI) system was developed. While an expert endoscopist conducts the endoscopic assessment of BE, our AI system captures random images from the real-time camera livestream and provides a global prediction (classification), as well as a dense prediction (segmentation) differentiating accurately between normal BE and early oesophageal adenocarcinoma (EAC). The AI system showed an accuracy of 89.9% on 14 cases with neoplastic BE.

Optically tracked and 3D printed haptic phantom hand for surgical training system

Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm

Background: For surgical fixation of bone fractures of the human hand, so-called Kirschner wires (K-wires) are drilled through bone fragments. Because the drilling procedure is minimally invasive, without a view of risk structures like vessels and nerves, thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure intuitive operation, this phantom hand has to be realistic both in its position relative to the drill and in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone to determine the drilling starting point and angle. Optical real-time tracking is used to transfer position and rotation to the training system.
Methods: A metamaterial already developed in previous work is further improved by use of a new unit cell. Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and penetration of air through pores into the volume simulate shiftability of skin layers. For optical tracking, a rotationally symmetrical marker attached to the phantom hand with corresponding reference marker is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested.

Results: Several cuboid and forearm sample prints led to a final 30-centimeter-long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. As validated in an expert surgeon study, this allows, in combination with a displacement of the uppermost skin layer, good palpability of the bones. Tracking of the hand marker in the dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system.

Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system.

Abstract: Imitating Human Soft Tissue with Dual-Material 3D Printing

Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm

Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in the industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner-wires (K-wires), a 3D printed hand phantom must not only be geometrically but also haptically correct. Due to a limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling of specific bony protrusions of the human hand.

Synthesis of Medical Images Using GANs

Luise Middel, Christoph Palm, Marius Erdt

The success of artificial intelligence in medicine depends on the availability of large amounts of high-quality training data. Sharing of medical image data, however, is often restricted by laws such as doctor-patient confidentiality. Although there are publicly available medical datasets, their quality and quantity are often low. Moreover, datasets are often imbalanced and only represent a fraction of the images generated in hospitals or clinics, and can thus usually only be used as training data for specific problems. The introduction of generative adversarial networks (GANs) provides a means to generate artificial images by training two convolutional networks. This paper proposes a method which uses GANs trained on medical images in order to generate a large number of artificial images that could be used to train other artificial intelligence algorithms. This work is a first step towards alleviating data privacy concerns and being able to publicly share data that still contains a substantial amount of the information in the original private data. The method has been evaluated on several public datasets, with quantitative and qualitative tests showing promising results.
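
A minimal sketch of the adversarial training idea described above is given below in PyTorch; the tiny fully connected networks, the 64x64 grayscale image size and all hyper-parameters are assumptions for illustration and stand in for the convolutional architectures used in the paper.

```python
# Minimal GAN training sketch (PyTorch): a generator and a discriminator are
# trained against each other on image vectors. Illustrative only.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(            # maps noise vectors to 64x64 images
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(        # maps images to a "real" logit
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial update; real_images has shape (batch, 64*64) in [-1, 1]."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: push real images towards 1, generated images towards 0.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    loss_d = bce(discriminator(real_images), ones) + bce(discriminator(fake_images), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into predicting 1 for generated images.
    noise = torch.randn(batch, latent_dim)
    loss_g = bce(discriminator(generator(noise)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```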

Bildverarbeitung für die Medizin 2019

In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The aim in 2019 is once again to present current research results and to deepen the dialogue between scientists, industry and users. The contributions in this volume – some of them in English – cover all areas of medical image processing, in particular image formation and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time-series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more.

A technical review of artificial intelligence as applied to gastrointestinal endoscopy: clarifying the terminology

Alanna Ebigbo, Christoph Palm, Andreas Probst, Robert Mendel, Johannes Manzeneder, Friederike Prinz, Luis Antonio de Souza, João P. Papa, Peter Siersema, Helmut Messmann

The growing number of publications on the application of artificial intelligence (AI) in medicine underlines the enormous importance and potential of this emerging field of research.

In gastrointestinal endoscopy, AI has been applied to all segments of the gastrointestinal tract, most importantly in the detection and characterization of colorectal polyps. However, AI research has also been published on the stomach and esophagus for both neoplastic and non-neoplastic disorders.

The various technical as well as medical aspects of AI, however, remain confusing especially for non-expert physicians.

This physician-engineer co-authored review explains the basic technical aspects of AI and provides a comprehensive overview of recent publications on AI in gastrointestinal endoscopy. Finally, a basic insight is offered into understanding publications on AI in gastrointestinal endoscopy.

Barrett’s esophagus analysis using infinity Restricted Boltzmann Machines

Leandro A. Passos, Luis Antonio de Souza, Robert Mendel, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Christoph Palm, João P. Papa

The number of patients with Barrett's esophagus (BE) has increased in the last decades. Considering the dangerousness of the disease and its evolution to adenocarcinoma, an early diagnosis of BE may provide a high probability of cancer remission. However, limitations regarding traditional methods of detection and management of BE demand alternative solutions. As such, computer-aided tools have recently been used to assist in this problem, but the challenge still persists. To manage the problem, we introduce infinity Restricted Boltzmann Machines (iRBMs) for the task of automatic identification of Barrett's esophagus from endoscopic images of the lower esophagus. Moreover, since the iRBM requires a proper selection of its meta-parameters, we also present a discriminative iRBM fine-tuning using six meta-heuristic optimization techniques. We show that iRBMs are suitable for this context, since they provide competitive results, and that the meta-heuristic techniques are appropriate for such a task.

Artificial Intelligence in Early Barrett's Cancer: The Segmentation Task

Alanna Ebigbo, Robert Mendel, Andreas Probst, Johannes Manzeneder, Luis Antonio de Souza, João P. Papa, Christoph Palm, Helmut Messmann

Aims:

The delineation of outer margins of early Barrett's cancer can be challenging even for experienced endoscopists. Artificial intelligence (AI) could assist endoscopists faced with this task. To date, there is very limited experience in this domain. In this study, we demonstrate the measure of overlap (Dice coefficient = D) between highly experienced Barrett endoscopists and an AI system in the delineation of cancer margins (segmentation task).

Methods:

An AI system with a deep convolutional neural network (CNN) was trained and tested on high-definition endoscopic images of early Barrett's cancer (n = 33) and normal Barrett's mucosa (n = 41). The reference standard for the segmentation task was the manual delineation of tumor margins by three highly experienced Barrett endoscopists. Training of the AI system included patch generation, patch augmentation and adjustment of the CNN weights. The segmentation results were then obtained from patch classification and thresholding of the class probabilities. Segmentation results were evaluated using the Dice coefficient (D).

Results:

The Dice coefficient (D), which can range between 0 (no overlap) and 1 (complete overlap), was computed only for images correctly classified by the AI system as cancerous. At a threshold of t = 0.5, a mean value of D = 0.72 was computed.
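
For reference, the Dice coefficient between a predicted segmentation P and the expert reference G follows the standard definition:

```latex
D(P, G) = \frac{2\,|P \cap G|}{|P| + |G|} = \frac{2\,TP}{2\,TP + FP + FN}
```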

Conclusions:

AI with CNN performed reasonably well in the segmentation of the tumor region in Barrett’s cancer, at least when compared with expert Barrett’s endoscopists. AI holds a lot of promise as a tool for better visualization of tumor margins but may need further improvement and enhancement especially in real-time settings.

Large expert-curated database for benchmarking document similarity detection in biomedical literature search

Peter Brown, RELISH Consortium, Yaoqi Zhou

Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
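
A minimal sketch of one of the baseline methods mentioned above (TF-IDF with cosine similarity) is shown below; the toy documents and the ranking logic are purely illustrative assumptions.

```python
# Score a seed article against candidate articles with TF-IDF + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "deep learning for barrett esophagus endoscopy",     # seed article (toy text)
    "convolutional networks detect esophageal cancer",   # candidate 1
    "mass spectrometric imaging of metals in brain",     # candidate 2
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()  # seed vs. candidates
ranking = scores.argsort()[::-1]                         # most similar first
```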

Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis

Luis Antonio de Souza, Luis Claudio Sugi Afonso, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Robert Mendel, Christian Hook, Christoph Palm, João P. Papa

Considering the increase in the number of Barrett's esophagus (BE) cases in the last decade, and its expected continuous increase, methods that can provide an early diagnosis of dysplasia in BE-diagnosed patients may provide a high probability of cancer remission. The limitations related to traditional methods of BE detection and management encourage the creation of computer-aided tools to assist in this problem. In this work, we introduce the unsupervised Optimum-Path Forest (OPF) classifier for learning visual dictionaries in the context of Barrett's esophagus (BE) and automatic adenocarcinoma diagnosis. The proposed approach was validated on two datasets (MICCAI 2015 and Augsburg) using three different feature extractors (SIFT, SURF, and A-KAZE, which had not yet been applied to the BE context), as well as five supervised classifiers, including two variants of the OPF, Support Vector Machines with Radial Basis Function and Linear kernels, and a Bayesian classifier. Concerning the MICCAI 2015 dataset, the best results were obtained using the unsupervised OPF for dictionary generation, the supervised OPF for classification purposes and the SURF feature extractor, with an accuracy of nearly 78% for distinguishing BE patients from adenocarcinoma ones. Regarding the Augsburg dataset, the most accurate results were also obtained using both OPF classifiers, but with A-KAZE as the feature extractor, with an accuracy close to 73%.
The combination of feature extraction and bag-of-visual-words techniques showed results that outperformed others recently obtained in the literature and highlights new advances in the related research area. Reinforcing the significance of this work, to the best of our knowledge, this is the first study to address computer-aided BE identification using bag-of-visual-words and OPF classifiers, with the application of an unsupervised technique to the BE feature calculation being the major contribution of this work. A new BE and adenocarcinoma description using A-KAZE features, not yet applied in the literature, is also proposed.
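
The bag-of-visual-words idea used above can be sketched as follows; for illustration, k-means clustering stands in for the unsupervised OPF dictionary learning of the paper, and the dictionary size is an assumed parameter.

```python
# Bag-of-visual-words sketch: local descriptors are clustered into a visual
# dictionary and each image becomes a normalized histogram over the dictionary.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_descriptors(gray_images):
    """Collect SIFT descriptors from a list of grayscale uint8 images."""
    sift = cv2.SIFT_create()
    per_image = []
    for img in gray_images:
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return per_image

def build_bovw(gray_images, dictionary_size=64):
    per_image = extract_descriptors(gray_images)
    all_desc = np.vstack(per_image)
    codebook = KMeans(n_clusters=dictionary_size, n_init=10, random_state=0).fit(all_desc)
    histograms = []
    for desc in per_image:
        words = codebook.predict(desc) if len(desc) else np.empty(0, int)
        hist, _ = np.histogram(words, bins=np.arange(dictionary_size + 1))
        histograms.append(hist / max(hist.sum(), 1))  # L1-normalized word histogram
    return np.array(histograms), codebook
```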

Force-feedback assisted and virtual fixtures based K-wire drilling simulation

Johannes Maier, Jerome Perret, Martina Simon, Stephanie Schmitt-Rüth, Thomas Wittenberg, Christoph Palm

One common method to fix fractures of the human hand after an accident is osteosynthesis with Kirschner wires (K-wires) to stabilize the bone fragments. The insertion of K-wires is a delicate minimally invasive surgery, because surgeons operate almost without sight. Since realistic training methods are time consuming, costly and insufficient, a virtual-reality (VR) based training system for the placement of K-wires was developed. As part of this, the current work deals with the real-time bone drilling simulation using a haptic force-feedback device.

To simulate the drilling, we introduce a virtual fixture based force-feedback drilling approach. By decomposing the drilling task into individual phases, each phase can be handled individually to perfectly control the drilling procedure. We report on the related finite state machine (FSM), describe the haptic feedback of each state and explain how to avoid jerking of the haptic force-feedback during state transitions.

The usage of the virtual fixture approach results in a good haptic performance and a stable drilling behavior. This was confirmed by 26 expert surgeons, who evaluated the virtual drilling on the simulator and rated it as very realistic. To make the system even more convincing, we determined real drilling feed rates through experimental pig bone drilling and transferred them to our system. Due to a constant simulation thread we can guarantee a precise drilling motion.

Virtual fixtures based force-feedback calculation is able to simulate force-feedback assisted bone drilling with high quality and, thus, will have a great potential in developing medical applications.

Imitating human soft tissue on basis of a dual-material 3D print using a support-filled metamaterial to provide bimanual haptic for a hand surgery training system

Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm

Background: Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in the industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner-wires (K-wires), a 3D-printed hand phantom must not only be geometrically but also haptically correct. Due to a limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling of specific bony protrusions of the human hand.
Methods: The goal of this experiment is to imitate human soft tissue with its haptic and elasticity for a realistic hand phantom fabrication, using only a dual-material 3D printer and support-material-filled metamaterial between skin and bone. We present our workflow to generate lattice structures between hard bone and soft skin with iterative cube edge (CE) or cube face (CF) unit cells. Cuboid and finger shaped sample prints with and without inner hard bone in different lattice thickness are constructed and 3D printed.
Results: The most elastic available rubber-like material is too firm to imitate soft tissue. By reducing the amount of rubber in the inner volume through support material (SUP), objects become significantly softer. Without metamaterial, after disintegration, the SUP can be shifted through the volume and thus the body loses its original shape. Although the CE design increases the elasticity, it cannot restore the fabric form. In contrast to CE, the CF design increases not only the elasticity but also guarantees a local limitation of the SUP. Therefore, the body retains its shape and internal bones remain in its intended place. Various unit cell sizes, lattice thickening and skin thickness regulate the rubber material and SUP ratio. Test prints with higher SUP and lower rubber material percentage appear softer and vice versa. This was confirmed by an expert surgeon evaluation. Subjects adjudged pure rubber-like material as too firm and samples only filled with SUP or lattice structure in CE design as not suitable for imitating tissue. 3D-printed finger samples in CF design were rated as realistic compared to the haptic of human tissue with a good palpable bone structure.
Conclusions: We developed a new dual-material 3D print technique to imitate soft tissue of the human hand with its haptic properties. Blowy SUP is trapped within a lattice structure to soften rubber-like 3D print material, which makes it possible to reproduce a realistic replica of human hand soft tissue.

Barrett’s Esophagus Identification Using Color Co-occurrence Matrices

Luis Antonio de Souza, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Joao P. Papa, Robert Mendel, Christoph Palm

In this work, we propose the use of single-channel Color Co-occurrence Matrices for texture description of Barrett's Esophagus (BE) and adenocarcinoma images. Further classification using supervised learning techniques, such as Optimum-Path Forest (OPF), Support Vector Machines with Radial Basis Function (SVM-RBF) and a Bayesian classifier, supports the context of automatic BE and adenocarcinoma diagnosis. We validated three approaches of classification based on patches, patients and images in two datasets (MICCAI 2015 and Augsburg) using the color-and-texture descriptors and the machine learning techniques. Concerning the MICCAI 2015 dataset, the best results were obtained using the blue channel for the descriptors and the supervised OPF for classification purposes in the patch-based approach, with sensitivity of nearly 73% for positive adenocarcinoma identification and specificity close to 77% for BE (non-cancerous) patch classification. Regarding the Augsburg dataset, the most accurate results were also obtained using both the OPF classifier and the blue channel descriptor for feature extraction, with sensitivity close to 67% and specificity around 76%. Our work highlights new advances in the related research area and provides a promising technique that combines color and texture information, allied to three different approaches of dataset pre-processing aiming to configure robust scenarios for the classification step.
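
The single-channel co-occurrence descriptor can be sketched as follows; the quantization level, distances, angles and the choice of Haralick-style statistics are assumptions for illustration, not the exact configuration of the paper (requires scikit-image >= 0.19).

```python
# Gray-level co-occurrence features computed on one color channel
# (e.g. the blue channel mentioned above), summarized by texture statistics.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_cooccurrence_features(image_rgb, channel=2, levels=32):
    """image_rgb: uint8 array (H, W, 3); channel=2 selects the blue channel."""
    chan = image_rgb[..., channel]
    chan = (chan.astype(np.float32) * (levels - 1) / 255).astype(np.uint8)  # quantize
    glcm = graycomatrix(chan,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).ravel()
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate(feats)  # one feature vector per image or patch
```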

A survey on Barrett’s esophagus analysis using machine learning

Luis Antonio de Souza, Christoph Palm, Robert Mendel, Christian Hook, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Silke Weber, João P. Papa

This work presents a systematic review concerning recent studies and technologies of machine learning for Barrett's esophagus (BE) diagnosis and treatment. The use of artificial intelligence is a brand new and promising way to evaluate such a disease. We compile works published in well-established databases, such as Science Direct, IEEEXplore, PubMed, Plos One, Multidisciplinary Digital Publishing Institute (MDPI), Association for Computing Machinery (ACM), Springer, and Hindawi Publishing Corporation. Each selected work has been analyzed to present its objective, methodology, and results. The BE progression to dysplasia or adenocarcinoma shows a complex pattern to be detected during endoscopic surveillance. Therefore, it is valuable to assist its diagnosis and automatic identification using computer analysis. The evaluation of BE dysplasia can be performed through manual segmentation or automated segmentation using machine learning techniques. Finally, in this survey, we reviewed recent studies focused on the automatic detection of the neoplastic region for classification purposes using machine learning methods.

Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma

Alanna Ebigbo, Robert Mendel, Andreas Probst, Johannes Manzeneder, Luis Antonio de Souza, João P. Papa, Christoph Palm, Helmut Messmann

Computer-aided diagnosis using deep learning (CAD-DL) may be an instrument to improve endoscopic assessment of Barrett's oesophagus (BE) and early oesophageal adenocarcinoma (EAC). Based on still images from two databases, the diagnosis of EAC by CAD-DL reached sensitivities/specificities of 97%/88% (Augsburg data) and 92%/100% (Medical Image Computing and Computer-Assisted Intervention [MICCAI] data) for white light (WL) images and 94%/80% for narrow band images (NBI) (Augsburg data), respectively. Tumour margins delineated by experts into images were detected satisfactorily with a Dice coefficient (D) of 0.72. This could be a first step towards CAD-DL for BE assessment. If developed further, it could become a useful adjunctive tool for patient management.

CT-basiertes virtuelles Fräsen am Felsenbein

Daniela Franz, Maria Dreher, Martin Prinzen, Matthias Teßmann, Christoph Palm, Uwe Katzky, Jerome Perret, Mathias Hofer, Thomas Wittenberg

As part of the development of a haptic-visual training system for milling the petrosal bone, a haptic arm and an autostereoscopic 3D monitor are used to allow surgeons to virtually manipulate bony structures in the context of a so-called serious game. Among other things, resident physicians should be able to practice milling the petrosal bone for the surgical insertion of a cochlear implant during their training. The visualization of the virtual milling must therefore be modeled, implemented and evaluated in real time and as realistically as possible. We use different raycasting methods with linear and nearest-neighbor interpolation and compare the visual quality and frame rates of the methods. All compared methods are real-time capable but differ in their visual quality.

Force-Feedback-assisted Bone Drilling Simulation Based on CT Data

Johannes Maier, Michaela Huber, Uwe Katzky, Jerome Perret, Thomas Wittenberg, Christoph Palm

In order to fix a fracture using minimally invasive surgery approaches, surgeons drill complex and tiny bones with a two-dimensional X-ray as the single imaging modality in the operating room. Our novel haptic force-feedback and visually assisted training system will potentially help hand surgeons to learn the drilling procedure in a realistic visual environment. Within the simulation, the collision detection as well as the interaction between the virtual drill, bone voxels and surfaces are important. In this work, the chai3d collision detection and force calculation algorithms are combined with a physics engine to simulate the bone drilling process. The chosen Bullet Physics Engine provides a stable simulation of rigid bodies if the collision model of the drill and the tool holder is generated as a compound shape. Three haptic points are added to the K-wire tip for removing single voxels from the bone. For the drilling process, three modes are proposed to emulate the different phases of drilling by restricting the movement of the haptic device.
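
A minimal sketch of the voxel-removal step described above (clearing bone voxels around a haptic interaction point at the K-wire tip) is given below; the voxel size, radius and data layout are illustrative assumptions.

```python
# Remove bone voxels whose centers lie within the drill-tip radius.
import numpy as np

def drill_step(bone_mask, tip_position, radius_mm, voxel_size_mm=0.5):
    """bone_mask: boolean (Z, Y, X) volume; tip_position: (z, y, x) in mm."""
    zz, yy, xx = np.indices(bone_mask.shape)
    centers = np.stack([zz, yy, xx], axis=-1) * voxel_size_mm   # voxel centers in mm
    dist = np.linalg.norm(centers - np.asarray(tip_position), axis=-1)
    removed = bone_mask & (dist <= radius_mm)
    bone_mask[removed] = False                                  # carve out material
    return removed.sum()                                        # voxels removed this step
```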

A haptic model for virtual petrosal bone milling

Thomas Eixelberger, Thomas Wittenberg, Jerome Perret, Uwe Katzky, Martina Simon, Stephanie Schmitt-Rüth, Mathias Hofer, M. Sorge, R. Jacob, Felix B. Engel, A. Gostian, Christoph Palm, Daniela Franz

Virtual training of bone milling requires real-time and realistic haptics of the interaction between the "virtual mill" and a "virtual bone". We propose an exponential abrasion model between the virtual bone and the mill bit and combine it with a coarse representation of the virtual bone and the mill shaft for collision detection using the Bullet Physics Engine. We compare our exponential abrasion model to a widely used linear abrasion model and evaluate it quantitatively and qualitatively. The evaluation results show that we can provide virtual milling in real time, with an abrasion behavior similar to that proposed in the literature and with a realistic feeling, as judged by five different surgeons.
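
To make the contrast concrete: a linear abrasion model removes a volume roughly proportional to the contact force, whereas an exponential model grows faster with force. The specific formulas below are an assumed illustration, not the exact model of the paper:

```latex
\Delta V_{\mathrm{lin}} = k\,F\,\Delta t,
\qquad
\Delta V_{\mathrm{exp}} = a\left(e^{\,b\,F} - 1\right)\Delta t
```

with F the contact force between mill bit and bone, Δt the simulation time step, and k, a, b material constants.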

3D Analysis of Osteosyntheses Material using semi-automated CT Segmentation

Rebecca Wöhl, Johannes Maier, Sebastian Gehmert, Christoph Palm, Birgit Riebschläger, Michael Nerlich, Michaela Huber

Background:
Scaphoidectomy and midcarpal fusion can be performed using traditional fixation methods like K-wires, staples, screws or different dorsal (non)locking arthrodesis systems. The aim of this study is to test the Aptus four corner locking plate and to compare the clinical findings to the data revealed by CT scans and semi-automated segmentation.
Methods:
This is a retrospective review of eleven patients suffering from scapholunate advanced collapse (SLAC) or scaphoid non-union advanced collapse (SNAC) wrist, who received a four corner fusion between August 2011 and July 2014. The clinical evaluation consisted of measuring the range of motion (ROM), strength and pain on a visual analogue scale (VAS). Additionally, the Disabilities of the Arm, Shoulder and Hand (QuickDASH) and the Mayo Wrist Score were assessed. A computerized tomography (CT) of the wrist was obtained six weeks postoperatively. After semi-automated segmentation of the CT scans, the models were post processed and surveyed.
Results:
During the six-month follow-up mean range of motion (ROM) of the operated wrist was 60°, consisting of 30° extension and 30° flexion. While pain levels decreased significantly, 54% of grip strength and 89% of pinch strength were preserved compared to the contralateral healthy wrist. Union could be detected in all CT scans of the wrist. While X-ray pictures obtained postoperatively revealed no pathology, two user related technical complications were found through the 3D analysis, which correlated to the clinical outcome.
Conclusion:
Through semi-automated segmentation and 3D analysis it has been shown that the plate design lives up to the manufacturer's promises. Overall, this case series confirmed that the plate can compete with coexisting techniques concerning clinical outcome, union and complication rate.

A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography

Felix Graßmann, Judith Mengelkamp, Caroline Brandl, Sebastian Harsch, Martina E. Zimmermann, Birgit Linkohr, Annette Peters, Iris M. Heid, Christoph Palm, Bernhard H. F. Weber

Purpose
Age-related macular degeneration (AMD) is a common threat to vision. While classification of disease stages is critical to understanding disease risk and progression, several systems based on color fundus photographs are known. Most of these require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm.
Design
Algorithm development for AMD classification based on a large collection of color fundus images. Validation is performed on a cross-sectional, population-based study.
Participants

We included 120 656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, performance of our algorithm was evaluated in 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study.
Methods

We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolutional deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study.
Main Outcome Measures

κ Statistics and accuracy to evaluate the concordance between predicted and expert human grader classification.
Results

A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%–92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals. By restricting the KORA analysis to individuals >55 years of age and prior exclusion of other retinopathies, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm detected 84.2% of all fundus images with definite signs of early or late AMD. Overall, 94.3% of healthy fundus images were classified correctly.
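
For reference, the quadratic weighted κ follows the standard definition over the N ordinal classes, with O the observed and E the chance-expected confusion matrix (standard formula, not quoted from the paper):

```latex
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\,O_{ij}}{\sum_{i,j} w_{ij}\,E_{ij}},
\qquad
w_{ij} = \frac{(i - j)^2}{(N - 1)^2}
```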

Conclusions
Our deep learning algorithm revealed a weighted κ outperforming human graders in the AREDS study and is suitable for classifying AMD fundus images in other datasets of individuals >55 years of age.

Bildverarbeitung für die Medizin 2018

Barrett’s Esophagus Identification Using Optimum-Path Forest

Luis Antonio de Souza, Luis Claudio Sugi Afonso, Christoph Palm, João P. Papa

Computer-assisted analysis of endoscopic images can be helpful for the automatic diagnosis and classification of neoplastic lesions. Barrett's esophagus (BE) is a common type of reflux condition that is not straightforward to detect by endoscopic surveillance and is thus susceptible to erroneous diagnosis, which can lead to cancer when not treated properly. In this work, we introduce the Optimum-Path Forest (OPF) classifier for the task of automatic identification of Barrett's esophagus, with promising results that outperform the well-known Support Vector Machines (SVM) in the aforementioned context. We describe endoscopic images by means of feature extractors based on key point information, such as Speeded Up Robust Features (SURF) and the Scale-Invariant Feature Transform (SIFT), which are used to design a bag-of-visual-words that feeds both the OPF and SVM classifiers. The best results were obtained with the OPF classifier for both feature extractors, with values of 0.732 (SURF) – 0.735 (SIFT) for sensitivity, 0.782 (SURF) – 0.806 (SIFT) for specificity, and 0.738 (SURF) – 0.732 (SIFT) for accuracy.

Barrett’s Esophagus Analysis Using SURF Features

Luis Antonio de Souza, Christian Hook, João P. Papa, Christoph Palm

The development of adenocarcinoma in Barrett's esophagus is difficult to detect by endoscopic surveillance of patients with signs of dysplasia. Computer-assisted diagnosis of endoscopic images (CAD) could therefore be most helpful in the demarcation and classification of neoplastic lesions. In this study we tested the feasibility of a CAD method based on Speeded up Robust Feature Detection (SURF). A given database containing 100 images from 39 patients served as a benchmark for feature-based classification models. Half of the images had previously been diagnosed by five clinical experts as being "cancerous", the other half as "non-cancerous". Cancerous image regions had been visibly delineated (masked) by the clinicians. SURF features acquired from full images as well as from masked areas were utilized for the supervised training and testing of an SVM classifier. The predictive accuracy of the developed CAD system is illustrated by sensitivity and specificity values. Based on full-image matching, results of 0.78 (sensitivity) and 0.82 (specificity) were achieved, while the masked-region approach generated results of 0.90 and 0.95, respectively.

Barrett’s Esophagus Analysis Using Convolutional Neural Networks

Robert Mendel, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Christoph Palm

We propose an automatic approach for early detection of adenocarcinoma in the esophagus. High-definition endoscopic images (50 cancer, 50 Barrett) are partitioned into a dataset containing approximately equal amounts of patches showing cancerous and non-cancerous regions. A deep convolutional neural network is adapted to the data using a transfer learning approach. The final classification of an image is determined by at least one patch for which the probability of being a cancer patch exceeds a given threshold. The model was evaluated with leave-one-patient-out cross-validation. With sensitivity and specificity of 0.94 and 0.88, respectively, our findings improve recently published results on the same image database considerably. Furthermore, the visualization of the class probabilities of each individual patch indicates that our approach might be extensible to the segmentation domain.
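
The image-level decision rule described above (an image counts as cancerous if at least one patch probability exceeds a threshold) can be sketched as follows; the per-patch probabilities are assumed to come from the trained CNN and are passed in directly.

```python
import numpy as np

def classify_image(patch_probs, threshold=0.5):
    """patch_probs: 1D array of per-patch cancer probabilities in [0, 1]."""
    patch_probs = np.asarray(patch_probs)
    is_cancer = bool((patch_probs >= threshold).any())  # at least one positive patch
    probability_map = patch_probs                       # reusable for visualization
    return is_cancer, probability_map

# Example: three patches, one confident cancer patch -> image flagged as cancer.
label, _ = classify_image([0.12, 0.08, 0.91], threshold=0.5)
```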

Improved interactive computer-assisted approach for evaluation of ultrastructural cilia abnormalities

Josef A. Schröder, Matthias Semmelmann, Heiko Siegmund, Claudia Grafe, Matthias Evert, Christoph Palm

The Impact of Semi-Automated Segmentation and 3D Analysis on Testing New Osteosynthesis Material

Rebecca Wöhl, Michaela Huber, Markus Loibl, Birgit Riebschläger, Michael Nerlich, Christoph Palm

A new protocol for testing osteosynthesis material postoperatively, combining semi-automated segmentation and 3D analysis of surface meshes, is proposed. By various steps of transformation and measuring, objective data can be collected. In this study the specifications of a locking plate used for mediocarpal arthrodesis of the wrist were examined. The results show that union of the lunate, triquetrum, hamate and capitate was achieved and that the plate is comparable to coexisting arthrodesis systems. Additionally, it was shown that the detected complications correlate with the clinical outcome. In summary, this protocol is considered beneficial and should be taken into account in further studies.

Haptisches Lernen für Cochlea Implantationen

Daniela Franz, Uwe Katzky, S. Neumann, Jerome Perret, Mathias Hofer, Michaela Huber, Stephanie Schmitt-Rüth, S. Haug, K. Weber, Martin Prinzen, Christoph Palm, Thomas Wittenberg

The implantation of a cochlear implant requires a surgical approach through the petrosal bone and the patient's tympanic cavity. The surgeon has a restricted view of the operating field, which moreover contains many risk structures. To perform a cochlear implantation safely and without errors, extensive theoretical and practical (partly in-service) training as well as many years of experience are necessary. Using real clinical CT/MRI data of the inner and middle ear and interactive segmentation of the structures depicted therein (nerves, cochlea, ossicles, ...), the HaptiVisT project realizes a haptic-visual training system for the implantation of inner and middle ear implants, designed as a so-called serious game with immersive didactics. The evaluation of the demonstrator with respect to its suitability is carried out alongside the development process and in a results-oriented manner in order to uncover possible technical or didactic flaws before the system is completed. Three staggered evaluations focus on surgical, didactic and haptic-ergonomic acceptance criteria.

Interactive Computer-assisted Approach for Evaluation of Ultrastructural Cilia Abnormalities

Christoph Palm, Heiko Siegmund, Matthias Semmelmann, Claudia Grafe, Matthias Evert, Josef A. Schröder

Introduction – Diagnosis of abnormal cilia function is based on ultrastructural analysis of axoneme defects, especially the features of the inner and outer dynein arms, which are the motors of ciliary motility. Sub-optimal biopsy material, methodical, and intrinsic electron microscopy factors pose difficulty in the evaluation of ciliary defects. We present a computer-assisted approach based on state-of-the-art image analysis and object recognition methods yielding a time-saving and efficient diagnosis of cilia dysfunction. Method – The presented approach is based on a pipeline of basic image processing methods like smoothing, thresholding and ellipse fitting. However, integration of application-specific knowledge results in robust segmentations even in cases of image artifacts. The method is built hierarchically, starting with the detection of cilia within the image, followed by the detection of nine doublets within each analyzable cilium, and ending with the detection of the dynein arms of each doublet. The process is concluded by a rough classification of the dynein arms as a basis for computer-assisted diagnosis. Additionally, the interaction possibilities are designed in such a way that the results remain reproducible given the completion report. Results – A qualitative evaluation showed reasonable detection results for cilia, doublets and dynein arms. However, since a ground truth is missing, the variation of the computer-assisted diagnosis should be within the subjective bias of human diagnosticians. The results of a first quantitative evaluation with five human experts and six images with 12 analyzable cilia showed that, with default parameterization, 91.6% of the cilia and 98% of the doublets were found. The computer-assisted approach rated 66% of those inner and outer dynein arms correctly on which all human experts agreed. However, especially the quality of the dynein arm classification may be improved in future work.

Data-Parallel MRI Brain Segmentation in Clinical Use

Joachim Weber, Christian Doenitz, Alexander Brawanski, Christoph Palm

Structural MRI brain analysis and segmentation is a crucial part of the daily routine in neurosurgery for intervention planning. As an example, the free software FSL-FAST (FMRIB’s Segmentation Library – FMRIB’s Automated Segmentation Tool) in version 4 is used for the segmentation of brain tissue types. To speed up the segmentation procedure by parallel execution, we transferred FSL-FAST to a General Purpose Graphics Processing Unit (GPGPU) using the Open Computing Language (OpenCL) [1]. The steps necessary for parallelization initially resulted in substantially different and less useful results. Therefore, the underlying methods were revised and adapted, at the cost of some computational overhead. Nevertheless, we achieved a speed-up factor of 3.59 from CPU to GPGPU execution, while providing similarly useful or even better results.
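
The paper ports FSL-FAST itself; the snippet below is only a generic sketch of the kind of data-parallel, per-voxel step such a port involves, here a nearest-tissue-mean labelling written with PyOpenCL. The kernel, the buffer names and the three class means are assumptions for illustration, not the authors' code.

    # Generic data-parallel per-voxel classification sketch (PyOpenCL);
    # not the actual FSL-FAST port described in the paper.
    import numpy as np
    import pyopencl as cl

    voxels = np.random.rand(128 ** 3).astype(np.float32)   # flattened MRI intensities
    means = np.array([0.2, 0.5, 0.8], dtype=np.float32)    # assumed CSF/GM/WM means

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    v_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=voxels)
    m_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=means)
    labels = np.empty_like(voxels, dtype=np.int32)
    l_buf = cl.Buffer(ctx, mf.WRITE_ONLY, labels.nbytes)

    kernel = """
    __kernel void nearest_mean(__global const float *v, __global const float *m,
                               __global int *label, const int n_classes) {
        int i = get_global_id(0);
        int best = 0; float best_d = fabs(v[i] - m[0]);
        for (int c = 1; c < n_classes; ++c) {
            float d = fabs(v[i] - m[c]);
            if (d < best_d) { best_d = d; best = c; }
        }
        label[i] = best;
    }"""
    prg = cl.Program(ctx, kernel).build()
    prg.nearest_mean(queue, voxels.shape, None, v_buf, m_buf, l_buf, np.int32(len(means)))
    cl.enqueue_copy(queue, labels, l_buf)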

GraphMIC: Medizinische Bildverarbeitung in der Lehre

Alexander Eduard Szalo, Alexander Zehner, Christoph Palm

Teaching medical image processing conveys knowledge of a broad spectrum of methods. Besides the fundamentals of the individual techniques, students should develop a feeling for a suitable execution order and its effect on medical image data. The complexity of the methods requires advanced programming skills, so that even simple operations involve considerable programming effort. The software GraphMIC provides image processing operations in the form of interactive nodes and allows arranging, parameterizing and executing complex processing sequences in a graph. By focusing on the design of a pipeline, away from language- and framework-specific implementation details, fundamental principles of image processing can be learned in an illustrative way. In this contribution we compare visual programming with GraphMIC to the native implementation of equivalent functions. The application, developed in C++, is based on Qt, ITK, OpenCV, VTK and MITK.

GraphMIC: Easy Prototyping of Medical Image Computing Applications

Alexander Zehner, Alexander Eduard Szalo, Christoph Palm

GraphMIC is a cross-platform image processing application utilizing the libraries ITK and OpenCV. The abstract structure of image processing pipelines is visually represented by user interface components based on modern QtQuick technology and allows users to focus on the arrangement and parameterization of operations rather than implementing the equivalent functionality natively in C++. The application’s central goal is to improve and simplify the typical workflow by providing various high-level features and functions like multi-threading, image sequence processing and advanced error handling. A built-in Python interpreter allows the creation of custom nodes, where user-defined algorithms can be integrated to extend the basic functionality. An embedded 2D/3D visualizer gives feedback on the resulting image of an operation or the whole pipeline. User inputs like seed points, contours or regions are forwarded to the processing pipeline as parameters to offer semi-automatic image computing. We report the main concept of the application and introduce several features and their implementation. Finally, the current state of development as well as future perspectives of GraphMIC are discussed.
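
As a purely hypothetical illustration of what a user-defined Python node could look like, the fragment below wraps a single OpenCV operation in a function; it does not reproduce GraphMIC's real node API, which is not documented here.

    # Purely hypothetical custom-node sketch; GraphMIC's actual Python node
    # interface is not reproduced here. The node body itself is plain OpenCV.
    import cv2

    def median_denoise_node(image, kernel_size=5):
        """Hypothetical node: median-filter the incoming image and pass it on."""
        return cv2.medianBlur(image, kernel_size)

    # Standalone usage outside any node graph:
    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)    # illustrative file name
    result = median_denoise_node(img, kernel_size=5)
    cv2.imwrite("denoised.png", result)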

Current standards and new concepts in MRI and PET response assessment of antiangiogenic therapies in high-grade glioma patients

Markus Hutterer, Elke Hattingen, Christoph Palm, Martin Andreas Proescholdt, Peter Hau

Despite multimodal treatment, the prognosis of high-grade gliomas is grim. As tumor growth is critically dependent on new blood vessel formation, antiangiogenic treatment approaches offer an innovative treatment strategy. Bevacizumab, a humanized monoclonal antibody, has been in the spotlight of antiangiogenic approaches for several years. Currently, MRI including contrast-enhanced T1-weighted and T2/fluid-attenuated inversion recovery (FLAIR) images is routinely used to evaluate antiangiogenic treatment response (Response Assessment in Neuro-Oncology criteria). However, by restoring the blood–brain barrier, bevacizumab may reduce T1 contrast enhancement and T2/FLAIR hyperintensity, thereby obscuring the imaging-based detection of progression. The aim of this review is to highlight the recent role of imaging biomarkers from MR and PET imaging on measurement of disease progression and treatment effectiveness in antiangiogenic therapies. Based on the reviewed studies, multimodal imaging combining standard MRI with new physiological MRI techniques and metabolic PET imaging, in particular amino acid tracers, may have the ability to detect antiangiogenic drug susceptibility or resistance prior to morphological changes. As advances occur in the development of therapies that target specific biochemical or molecular pathways and alter tumor physiology in potentially predictable ways, the validation of physiological and metabolic imaging biomarkers will become increasingly important in the near future.

Fusion of Serial 2D Section Images and MRI Reference

Christoph Palm

Serial 2D section images with high resolution, resulting from innovative imaging methods, become even more valuable if they are fused with in vivo volumes. By achieving this goal, the 3D context of the sections is restored, deformations are corrected and artefacts are eliminated. However, registration in this field faces considerable challenges and is not solved in general. On the other hand, several approaches have been introduced that deal with at least some of these difficulties. Here, a brief overview of the topic is given and some of the solutions are presented. It does not claim to be a complete review, but may serve as a starting point for those who are interested in this field.

Viewpoints on Medical Image Processing

Thomas M. Deserno, Heinz Handels, Klaus-Hermann Maier-Hein, Sven Mersmann, Christoph Palm, Thomas Tolxdorff, Gudrun Wagenknecht, Thomas Wittenberg

Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as a field of rapid development with clear trends towards integrated applications in diagnostics, treatment planning and treatment.

Parallelization of FSL-Fast segmentation of MRI brain data

Joachim Weber, Alexander Brawanski, Christoph Palm

Biomedical Image and Signal Computing (BISC 2013)

Christoph Palm, T. Schanze

Novel Approach to the Human Connectome

Markus Axer, Katrin Amunts, David Gräßel, Christoph Palm, Jürgen Dammers, Hubertus Axer, Uwe Pietrzyk, Karl Zilles

Signal transmission between different brain regions requires connecting fiber tracts, the structural basis of the human connectome. In contrast to animal brains, where a multitude of tract-tracing methods can be used, magnetic resonance (MR)-based diffusion imaging is presently the only promising approach to study fiber tracts between specific human brain regions. However, this procedure has various inherent restrictions caused by its relatively low spatial resolution. Here, we introduce 3D-polarized light imaging (3D-PLI) to map the three-dimensional course of fiber tracts in the human brain with a resolution at a submillimeter scale, based on a voxel size of 100 μm isotropic or less. 3D-PLI visualizes nerve fibers by utilizing the intrinsic birefringence of the myelin sheaths surrounding axons. This optical method enables the demonstration of 3D fiber orientations in serial microtome sections of entire human brains. Examples of the feasibility of this novel approach are given here. 3D-PLI enables the study of brain regions of intense fiber crossing in unprecedented detail, and provides an independent evaluation of fiber tracts derived from diffusion imaging data.

IMAGENA: Image Generation and Analysis

Tobias Osterholt, Dagmar Salber, Andreas Matusch, Johanna Sabine Becker, Christoph Palm

Metals are involved in many processes of life. They are needed for enzymatic reactions and are involved in healthy processes, but also give rise to diseases if the metal homeostasis is disordered. Therefore, the interest in assessing the spatial distribution of metals is rising in biomedical science. Imaging metal (and non-metal) isotopes by laser ablation mass spectrometry with inductively coupled plasma (LA-ICP-MS) requires a special software solution to process raw data obtained by scanning a sample line by line. As no ready-to-use software was available, we developed an interactive software tool for Image Generation and Analysis (IMAGENA). Although optimised for LA-ICP-MS, IMAGENA can handle other raw data as well. The general purpose was to reconstruct images from a continuous list of raw data points, to visualise these images, and to convert them into a commonly readable image file format that can be further analysed by standard image analysis software. The generation of the image starts with loading a text file that holds a data column for every measured isotope. General spatial-domain settings like the data offset and the image dimensions are specified by the user, who receives direct feedback by means of a preview image. IMAGENA provides tools for calibration and for correcting a signal drift in the y-direction. Images are visualised in greyscale as well as in pseudo-colours with possibilities for contrast enhancement. Image analysis is performed in terms of smoothed line plots in row and column direction.
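
The core reconstruction step, turning a continuous column of line-by-line measurements into a 2D image given an offset and the scan dimensions, can be sketched in a few lines of NumPy; the file name, offset and dimensions below are assumptions, not IMAGENA's actual file format.

    # Sketch: rebuild a 2D isotope map from a line-by-line LA-ICP-MS data column.
    # Offset, width and height are user-supplied assumptions, as in the preview step.
    import numpy as np

    raw = np.loadtxt("isotope_column.txt")   # one intensity value per measured point
    offset, width, height = 10, 200, 120     # illustrative spatial-domain settings

    pixels = raw[offset:offset + width * height]
    image = pixels.reshape(height, width)    # one row per ablated line

    # Simple drift correction in the y-direction: divide each row by its median.
    image = image / np.median(image, axis=1, keepdims=True)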

Mass spectrometric imaging (MSI) of metals using advanced BrainMet techniques for biomedical research

Johanna Sabine Becker, Andreas Matusch, Julia Susanne Becker, Bei Wu, Christoph Palm, Albert Johann Becker, Dagmar Salber

Mass spectrometric imaging (MSI) is a young, innovative analytical technique that combines different fields of advanced mass spectrometry and biomedical research with the aim of providing maps of elements and molecules, complexes or fragments. Especially essential metals such as zinc, copper, iron and manganese play a functional role in signaling, metabolism and homeostasis of the cell. Due to the high degree of spatial organization of metals in biological systems, their distribution analysis is of key interest in the life sciences. We have developed analytical techniques termed BrainMet, using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) imaging to measure the distribution of trace metals in biological tissues for biomedical research and feasibility studies, including bioaccumulation and bioavailability studies, ecological risk assessment and toxicity studies in humans and other organisms. The analytical BrainMet techniques provide quantitative images of metal distributions in brain tissue slices which can be combined with other imaging modalities such as photomicrography of native or processed tissue (histochemistry, immunostaining) and autoradiography, or with in vivo techniques such as positron emission tomography or magnetic resonance tomography.

Prospective and instrumental developments will be discussed concerning metalloprotein microscopy using a laser microdissection (LMD) apparatus for specific sample introduction into an inductively coupled plasma mass spectrometer (LMD-ICP-MS), or an application of the near-field effect in LA-ICP-MS (NF-LA-ICP-MS). These nano-scale mass spectrometric techniques provide improved spatial resolution down to the single-cell level.

Error Correction using Registration for Blockface Volume Reconstruction of Serial Histological Sections of the Human Brain

Björn Eiben, Christoph Palm, Uwe Pietrzyk, Christos Davatzikos, Katrin Amunts

For the accurate registration of histological sections, blockface images are frequently used as a three-dimensional reference. However, due to the use of endocentric lenses, the images suffer from perspective errors such as scaling and a seemingly relative movement of planes located at different distances parallel to the imaging sensor. The suggested correction of these errors is based on the estimation of scaling factors derived from image registration of regions characterized by differing distances to the point of view in neighboring sections. The correction allows the generation of a consistent three-dimensional blockface volume.

Bioimaging of metals in brain tissue by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and metallomics

Johanna Sabine Becker, Andreas Matusch, Christoph Palm, Dagmar Salber, Kathryn A. Morton, Julia Susanne Becker

Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been developed and established as an emerging technique in the generation of quantitative images of metal distributions in thin tissue sections of brain samples (such as human, rat and mouse brain), with applications in research related to neurodegenerative disorders. A new analytical protocol is described which includes sample preparation by cryo-cutting of thin tissue sections and matrix-matched laboratory standards, mass spectrometric measurements, data acquisition, and quantitative analysis. Specific examples of the bioimaging of metal distributions in normal rodent brains are provided. Differences from normal were assessed in a Parkinson’s disease and a stroke brain model. Furthermore, changes during normal aging were studied. Powerful analytical techniques are also required for the determination and characterization of metal-containing proteins within a large pool of proteins, e.g., after denaturing or non-denaturing electrophoretic separation of proteins in one-dimensional and two-dimensional gels. LA-ICP-MS can be employed to detect metalloproteins in protein bands or spots separated after gel electrophoresis. MALDI-MS can then be used to identify specific metal-containing proteins in these bands or spots. The combination of these techniques is described in the second section.

Bioimaging of Metals by Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS)

Johanna Sabine Becker, Miroslav Zoriy, Andreas Matusch, Bei Wu, Dagmar Salber, Christoph Palm, Julia Susanne Becker

The distribution analysis of (essential, beneficial, or toxic) metals (e.g., Cu, Fe, Zn, Pb, and others), metalloids, and non‐metals in biological tissues is of key interest in life science. Over the past few years, the development and application of several imaging mass spectrometric techniques has been rapidly growing in biology and medicine. Especially, in brain research metalloproteins are in the focus of targeted therapy approaches of neurodegenerative diseases such as Alzheimer’s and Parkinson’s disease, or stroke, or tumor growth. Laser ablation inductively coupled plasma mass spectrometry (LA‐ICP‐MS) using double‐focusing sector field (LA‐ICP‐SFMS) or quadrupole‐based mass spectrometers (LA‐ICP‐QMS) has been successfully applied as a powerful imaging (mapping) technique to produce quantitative images of detailed regionally specific element distributions in thin tissue sections of human or rodent brain. Imaging LA‐ICP‐QMS was also applied to investigate metal distributions in plant and animal sections to study, for example, the uptake and transport of nutrient and toxic elements or environmental contamination. The combination of imaging LA‐ICP‐MS of metals with proteomic studies using biomolecular mass spectrometry identifies metal‐containing proteins and also phosphoproteins. Metal‐containing proteins were imaged in a two‐dimensional gel after electrophoretic separation of proteins (SDS or Blue Native PAGE). Recent progress in LA‐ICP‐MS imaging as a stand‐alone technique and in combination with MALDI/ESI‐MS for selected life science applications is summarized.

Cerebral bio-imaging of Cu, Fe, Zn and Mn in the MPTP mouse model of Parkinsons disease using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS)

Andreas Matusch, Candan Depboylu, Christoph Palm, Bei Wu, Günter U. Höglinger, Martin K.-H. Schäfer, Johanna Sabine Becker

Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been established as a powerful technique for the determination of metal and nonmetal distributions within biological systems with high sensitivity. An imaging LA-ICP-MS technique for Fe, Cu, Zn, and Mn was developed to produce large series of quantitative element maps in native brain sections of mice subchronically intoxicated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) as a model of Parkinson’s disease. Images were calibrated using matrix-matched laboratory standards. A software solution allowing a precise delineation of anatomical structures was implemented. Coronal brain sections were analyzed crossing the striatum and the substantia nigra, respectively. Animals sacrificed 2 h, 7 d, or 28 d after the last MPTP injection and controls were investigated.
We observed significant decreases of Cu concentrations in the periventricular zone and the fascia dentata at 2 h and 7 d, and a recovery or overcompensation at 28 d, most pronounced in the rostral periventricular zone (+40%). In the cortex, Cu decreased slightly (about −10%). Fe increased in the interpeduncular nucleus (+40%) but not in the substantia nigra. This pattern is in line with a differential regulation of periventricular and parenchymal Cu and with the histochemical localization of Fe, and is congruent with regions of preferential MPTP binding described in the rodent brain.
The LA-ICP-MS technique yielded valid and statistically robust results in the present study on 39 slices from 19 animals. Our findings underline the value of routine micro-local analytical techniques in the life sciences and affirm a role of Cu availability in Parkinson’s disease.
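
The calibration against matrix-matched laboratory standards mentioned above amounts to a per-element regression from ion intensity to concentration. A minimal sketch, assuming hypothetical standard intensities, known concentrations and an illustrative file name:

    # Sketch of matrix-matched calibration: fit intensity -> concentration (µg/g)
    # for one element from laboratory standards, then apply it to an element map.
    import numpy as np

    std_intensity = np.array([1.2e4, 2.5e4, 5.1e4, 9.8e4])    # assumed counts/s
    std_concentration = np.array([5.0, 10.0, 20.0, 40.0])     # assumed µg/g

    slope, intercept = np.polyfit(std_intensity, std_concentration, deg=1)

    cu_map_counts = np.load("cu_map_counts.npy")               # illustrative file
    cu_map_ugg = slope * cu_map_counts + intercept             # quantitative image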

Towards ultra-high resolution fibre tract mapping of the human brain

Christoph Palm, Markus Axer, David Gräßel, Jürgen Dammers, Johannes Lindemeyer, Karl Zilles, Uwe Pietrzyk, Katrin Amunts

Polarised light imaging (PLI) utilises the birefringence of the myelin sheaths in order to visualise the orientation of nerve fibres in microtome sections of adult human post-mortem brains at ultra-high spatial resolution. The preparation of post-mortem brains for PLI involves fixation, freezing and cutting into 100-μm-thick sections. Hence, geometrical distortions of histological sections are inevitable and have to be removed for 3D reconstruction and subsequent fibre tracking. Here we present a processing pipeline for 3D reconstruction of these sections using PLI-derived multimodal images of post-mortem brains. Blockface images of the brains were obtained during cutting; they serve as reference data for alignment and elimination of distortion artefacts. In addition to the spatial image transformation, fibre orientation vectors were reoriented using the transformation fields, which consider both affine and subsequent non-linear registration. The application of this registration and reorientation approach results in a smooth fibre vector field, which reflects brain morphology. PLI combined with 3D reconstruction and fibre tracking is a powerful tool for human brain mapping. It can also serve as an independent method for evaluating in vivo fibre tractography.
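
Reorienting the fibre orientation vectors with the spatial transformation can be illustrated, for the affine part, by applying the rotation component of the transform to each unit vector. A sketch under the assumption of a single 3x3 linear matrix and an illustrative vector file; the non-linear part would use the local Jacobian per voxel instead.

    # Sketch: reorient unit fibre-orientation vectors with the rotation part of an
    # affine transform (polar decomposition A = R S); non-linear warps would use
    # the local Jacobian per voxel instead of one global matrix.
    import numpy as np
    from scipy.linalg import polar

    A = np.array([[0.9, 0.1, 0.0],      # assumed affine matrix (linear part only)
                  [-0.1, 1.1, 0.0],
                  [0.0, 0.0, 1.0]])
    R, S = polar(A)                      # R: rotation, S: stretch

    vectors = np.load("fibre_vectors.npy")          # illustrative, shape (N, 3)
    reoriented = vectors @ R.T
    reoriented /= np.linalg.norm(reoriented, axis=1, keepdims=True)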

Signal enhancement in polarized light imaging by means of independent component analysis

Jürgen Dammers, Markus Axer, David Gräßel, Christoph Palm, Karl Zilles, Katrin Amunts, Uwe Pietrzyk

Polarized light imaging (PLI) enables the evaluation of fiber orientations in histological sections of human postmortem brains with ultra-high spatial resolution. PLI is based on the birefringent properties of the myelin sheath of nerve fibers. As a result, the polarization state of light propagating through a rotating polarimeter is changed in such a way that the detected signal at each measurement unit of a charge-coupled device (CCD) camera describes a sinusoidal signal. Vectors of the fiber orientation defined by inclination and direction angles can then directly be derived from the optical signals employing PLI analysis. However, noise, light scatter and filter inhomogeneities interfere with the original sinusoidal PLI signals. We here introduce a novel method using independent component analysis (ICA) to decompose the PLI images into statistically independent component maps. After decomposition, gray and white matter structures can clearly be distinguished from noise and other artifacts. The signal enhancement after artifact rejection is quantitatively evaluated in 134 histological whole brain sections. Thus, the primary sinusoidal signals from polarized light imaging can be effectively restored after noise and artifact rejection utilizing ICA. Our method therefore contributes to the analysis of nerve fiber orientation in the human brain within a micrometer scale.
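
A minimal sketch of the decomposition idea using scikit-learn's FastICA on a stack of polarimeter images, with one image per rotation angle; the array shapes, the number of components and the file name are assumptions, not the authors' processing chain.

    # Sketch: decompose a stack of polarimeter images (one per rotation angle)
    # into statistically independent component maps with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    stack = np.load("pli_stack.npy")           # illustrative, shape (n_angles, H, W)
    n_angles, H, W = stack.shape               # assumes n_angles >= 5
    X = stack.reshape(n_angles, H * W)         # observations: one image per angle

    ica = FastICA(n_components=5, random_state=0)
    sources = ica.fit_transform(X.T)           # (H*W, 5) independent component maps
    component_maps = sources.T.reshape(5, H, W)
    # Artifact components can now be zeroed and the stack re-mixed via ica.mixing_.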

Visualization of Fiber Tracts in the Postmortem Human Brain by Means of Polarized Light

David Gräßel, Markus Axer, Christoph Palm, Jürgen Dammers, Katrin Amunts, Uwe Pietrzyk, Karl Zilles

Reduktion von Rissartefakten durch nicht-lineare Registrierung in histologischen Schnittbildern

Nicole Schubert, Uwe Pietrzyk, Martin Reißel, Christoph Palm

In this work, a method is presented that reduces crack artifacts, which can occur in histological rat brain sections, by means of non-linear registration. To guide the optimization in the crack region, the curvature registration approach is extended by a metric based on a segmentation of the images. Registrations using a segmentation of the crack alone achieved better results than registrations using a segmentation of the entire brain section. Overall, a clear improvement in the crack region is observed, while the remaining, reduced crack is attributable to the smoothness constraints of the regularizer.

Level-Set-Segmentierung von Rattenhirn MRTs

Björn Eiben, Dietmar Kunz, Uwe Pietrzyk, Christoph Palm

In this work, the segmentation of brain tissue from head images of rats by means of level-set methods is proposed. For this purpose, a two-dimensional, contrast-based approach is extended to a three-dimensional segmenter that adapts locally to the image intensity. It is shown that with this true 3D approach the local image structures can be taken into account more effectively. In particular, magnetic resonance images (MRIs) with global intensity gradients, caused for example by surface coils, can be segmented more reliably and without further preprocessing steps in this way. The performance of the algorithm is demonstrated experimentally on three rat brain MRIs.
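
As a rough, generic illustration of the level-set idea (not the contrast-based, locally intensity-adaptive method of the paper), scikit-image's morphological Chan-Vese implementation can be run directly on a 3D volume; the file name and iteration counts are assumptions.

    # Generic level-set illustration (morphological Chan-Vese from scikit-image);
    # this is not the locally intensity-adaptive method proposed in the paper.
    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    volume = np.load("rat_head_mri.npy").astype(float)    # illustrative 3D array
    volume = (volume - volume.min()) / (np.ptp(volume) + 1e-8)

    # 200 iterations, smoothing steps per iteration chosen ad hoc.
    mask = morphological_chan_vese(volume, 200, init_level_set="checkerboard",
                                   smoothing=3)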

Evaluation of Registration Strategies for Multi-modality Images of Rat Brain Slices

Christoph Palm, Andreas Vieten, Dagmar Salber, Uwe Pietrzyk

In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, consistency between 2D slices without cross validation using an inherent 3D modality is frequently presumed to be close to the true morphology due to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies of multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

MR-based attenuation correction for torso-PET/MR imaging

Thomas Beyer, Markus Weigert, Harald H. Quick, Uwe Pietrzyk, Florian Vogt, Christoph Palm, Gerald Antoch, Stefan P. Müller, Andreas Bockisch

Purpose
MR-based attenuation correction (AC) will become an integral part of combined PET/MR systems. Here, we propose a toolbox to validate MR-AC of clinical PET/MRI data sets.
Methods
Torso scans of ten patients were acquired on a combined PET/CT and on a 1.5-T MRI system. MR-based attenuation data were derived from the CT following MR–CT image co-registration and subsequent histogram matching. PET images were reconstructed after CT- (PET/CT) and MR-based AC (PET/MRI). Lesion-to-background (L/B) ratios were estimated on PET/CT and PET/MRI.
Results
MR–CT histogram matching leads to a mean voxel intensity difference between the CT- and MR-based attenuation images of 12% (max). Mean differences between PET/MRI and PET/CT were 19% (max). L/B ratios were similar except for the lung, where local misregistration and intensity transformation lead to a bias in PET/MRI.
Conclusion
Our toolbox can be used to study pitfalls in MR-AC. We found that co-registration accuracy and pixel value transformation determine the accuracy of PET/MRI.
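
The intensity-mapping step described in the Methods (histogram matching between co-registered MR and CT) can be sketched with scikit-image; the array names are assumptions, and the real pipeline then converts the matched values into 511 keV attenuation coefficients.

    # Generic sketch of histogram matching between a co-registered MR and CT volume;
    # not the authors' exact MR-AC pipeline. Array names are illustrative only.
    import numpy as np
    from skimage.exposure import match_histograms

    mr = np.load("mr_coregistered.npy")     # MR volume, already aligned to the CT
    ct = np.load("ct_volume.npy")           # CT volume in Hounsfield units

    mr_as_ct = match_histograms(mr, ct)     # MR intensities mapped to CT-like values
    # mr_as_ct would then be converted to 511 keV attenuation coefficients for AC.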

Time-Dependent Joint Probability Speed Function for Level-Set Segmentation of Rat-Brain Slices

Christoph Palm, Uwe Pietrzyk

The segmentation of rat brain slices suffers from illumination inhomogeneities and staining effects. State-of-the-art level-set methods model slice and background with intensity mixture densities, defining the speed function as the difference between the respective probabilities. Nevertheless, the overlap of these distributions causes an inaccurate stopping at the slice border. In this work, we propose the characterisation of the border area with intensity pairs for inside and outside, estimating joint intensity probabilities. Method – In contrast to global object and background models, we focus on the object border characterised by a joint mixture density. This specifies the probability of the occurrence of an inside and an outside value in direct adjacency. These values are not known beforehand, because inside and outside depend on the level-set evolution and change over time. Therefore, the speed function is computed time-dependently at the position of the current zero level-set. Along this zero level-set curve, the inside and outside values are derived as means along the curvature normal directing inside and outside the object. The advantage of the joint probability distribution is that it resolves the distribution overlaps, because these are assumed not to be located at the same border position. Results – The novel time-dependent joint-probability-based speed function is compared experimentally with single-probability-based speed functions. Two rat brains with about 40 slices each are segmented and the results analysed using manual segmentations and the Tanimoto overlap measure. Improved results are observed for both data sets.

Fusion of Rat Brain Histology and MRI using Weighted Multi-Image Mutual Information

Christoph Palm, Graeme P. Penney, William R. Crum, Julia A. Schnabel, Uwe Pietrzyk, David J. Hawkes

Fusion of histology and MRI is frequently demanded in biomedical research to study in vitro tissue properties in an in vivo reference space. Distortions and artifacts caused by the cutting and staining of histological slices, as well as differences in spatial resolution, make even the rigid fusion a difficult task. State-of-the-art methods start with a mono-modal restacking yielding a histological pseudo-3D volume. The 3D information of the MRI reference is considered subsequently. However, consistency within the histology volume and consistency with the corresponding MRI seem to be diametrically opposed goals. Therefore, we propose a novel fusion framework optimizing histology/histology and histology/MRI consistency at the same time, finding a balance between both goals. Method – Direct slice-to-slice correspondence, even in irregularly spaced cutting sequences, is achieved by registration-based interpolation of the MRI. Introducing a weighted multi-image mutual information metric (WI), adjacent histology and the corresponding MRI are taken into account at the same time. Therefore, the reconstruction of the histological volume as well as the fusion with the MRI is done in a single step. Results – Based on two data sets with more than 110 single registrations in all, the results are evaluated quantitatively based on Tanimoto overlap measures and qualitatively by showing the fused volumes. In comparison to other multi-image metrics, the reconstruction based on WI is significantly improved. We evaluated different parameter settings with emphasis on the weighting term steering the balance between intra- and inter-modality consistency.
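
The building block of such a metric, mutual information between two images estimated from a joint histogram, can be written compactly; the paper's weighted multi-image metric (WI) combines several such terms over adjacent histology slices and the corresponding MRI. A sketch with placeholder data and an assumed bin count:

    # Sketch: mutual information of two images from their joint histogram.
    # The weighted multi-image metric (WI) of the paper combines several such
    # terms over adjacent histology slices and the corresponding MRI.
    import numpy as np

    def mutual_information(a, b, bins=64):
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Example: MI between a histology slice and the interpolated MRI slice.
    hist_slice = np.random.rand(256, 256)   # placeholder data
    mri_slice = np.random.rand(256, 256)
    print(mutual_information(hist_slice, mri_slice))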

Consistency of parametric registration in serial MRI studies of brain tumor progression

Andreas Mang, Julia A. Schnabel, William R. Crum, Marc Modat, Oscar Camara-Rey, Christoph Palm, Gisele Brasil Caseiras, H. Rolf Jäger, Sébastien Ourselin, Thorsten M. Buzug, David J. Hawkes

Object
The consistency of parametric registration in multi-temporal magnetic resonance (MR) imaging studies was evaluated.
Materials and methods
Serial MRI scans of adult patients with a brain tumor (glioma) were aligned by parametric registration. The performance of low-order spatial alignment (6/9/12 degrees of freedom) of different 3D serial MR-weighted images is evaluated. A registration protocol for the alignment of all images to one reference coordinate system at baseline is presented. Registration results were evaluated for both multimodal intra-timepoint and mono-modal multi-temporal registration. The latter case may present a challenge to automatic intensity-based registration algorithms due to ill-defined correspondences. The performance of our algorithm was assessed by testing the inverse registration consistency. Four different similarity measures were evaluated to assess consistency.
Results
Careful visual inspection suggests that images are well aligned, but their consistency may be imperfect. Sub-voxel inconsistency within the brain was found for all similarity measures used for parametric multi-temporal registration. T1-weighted images were most reliable for establishing spatial correspondence between different timepoints.
Conclusions
The parametric registration algorithm is feasible for use in this application. The sub-voxel resolution mean displacement error of registration transformations demonstrates that the algorithm converges to an almost identical solution for forward and reverse registration.

Whole-body PET/CT imaging

Markus Weigert, Uwe Pietrzyk, Stefan P. Müller, Christoph Palm, Thomas Beyer

Aim
Combined whole-body (WB) PET/CT imaging provides better overall co-registration compared to separate CT and PET. However, in clinical routine local PET-CT mis-registration cannot be avoided. Thus, the reconstructed PET tracer distribution may be biased when the misaligned CT transmission data are used for CT-based attenuation correction (CT-AC). We investigate the feasibility of retrospective co-registration techniques to align CT and PET images prior to CT-AC, thus potentially improving the quality of combined PET/CT imaging in clinical routine.
Methods
First, using a commercial software registration package, CT images were aligned to the uncorrected PET data by rigid and non-rigid registration methods. The co-registration accuracy of both alignment approaches was assessed by reviewing the PET tracer uptake patterns (visually and with a linked cursor display) following attenuation correction based on the original and the co-registered CT. Second, we investigated non-rigid registration based on a prototype ITK implementation of the B-spline algorithm on a similarly targeted MR-CT registration task, where it showed promising results.
Results
Manual rigid, landmark-based co-registration introduced unacceptable misalignment, in particular in peripheral areas of the whole-body images. Manual, non-rigid landmark-based co-registration prior to CT-AC was successful with minor loco-regional distortions. Nevertheless, neither rigid nor non-rigid automatic co-registration based on the Mutual Information image-to-image metric succeeded in co-registering the CT and noAC-PET images. In contrast to widely available commercial registration software, our implementation of an alternative automated, non-rigid B-spline co-registration technique yielded promising results in this setting with MR-CT data.
Conclusion
In clinical PET/CT imaging, retrospective registration of CT and uncorrected PET images may improve the quality of the AC-PET images. As of today no validated and clinically viable commercial registration software is in routine use. This has triggered our efforts in pursuing new approaches to a validated, non-rigid co-registration algorithm applicable to whole-body PET/CT imaging of which first results are presented here. This approach appears suitable for applications in retrospective WB-PET/CT alignment.

Visualization of Nerve Fibre Orientation in the Visual Cortex of the Human Brain by Means of Polarized Light

Markus Axer, Hubertus Axer, Christoph Palm, David Gräßel, Karl Zilles, Uwe Pietrzyk

Template for MR-based attenuation correction for whole-body PET/MR imaging

Markus Weigert, Christoph Palm, Harald H. Quick, Stefan P. Müller, Uwe Pietrzyk, Thomas Beyer

Generation of a MRI reference data set for the validation of automatic, non-rigid image co-registration algorithms

Markus Weigert, Thomas Beyer, Harald H. Quick, Uwe Pietrzyk, Christoph Palm, Stefan P. Müller

Application of Fluid and Elastic Registration Methods to Histological Rat Brain Sections

Christoph Palm, William R. Crum, Uwe Pietrzyk, David J. Hawkes

Quantifying the A1AR distribution in peritumoral zones around experimental F98 and C6 rat brain tumours

Markus Dehnhardt, Christoph Palm, Andreas Vieten, Andreas Bauer, Uwe Pietrzyk

Quantification of growth in experimental F98 and C6 rat brain tumours was performed on 51 rat brains, 17 of which were further assessed by 3D tumour reconstruction. Brains were cryosliced and radio-labelled with a ligand of the peripheral-type benzodiazepine receptor (pBR), 3H-PK11195 [(1-(2-chlorophenyl)-N-methyl-N-(1-methyl-propylene)-3-isoquinoline-carboxamide)], by receptor autoradiography. Manually segmented and automatically registered tumours were 3D-reconstructed for volumetric comparison on the basis of 3H-PK11195-based tumour recognition. Furthermore, automatically computed areas of a 300 μm inner (marginal) zone as well as 300 μm and 600 μm outer tumour zones were quantified. These three regions were transferred onto adjacent slices that had been labelled by receptor autoradiography with the A1 adenosine receptor (A1AR) ligand 3H-CPFPX (3H-8-cyclopentyl-3-(3-fluorpropyl)-1-propylxanthine) for quantitative assessment of A1AR in the three tumour zones. Hence, a method is described for quantifying various receptor protein systems in the tumour as well as in the marginal invasive zones around experimentally implanted rat brain tumours, and for representing them in the tumour microenvironment as well as in 3D space. Furthermore, a tool was developed for automatically reading out radio-labelled rat brain slices from autoradiographic films; the slices were reconstructed into a consistent 3D tumour model and the zones around the tumour were visualized. A1AR expression was found to depend on the tumour volume in C6 animals, but to be independent of the time of tumour development. In F98 animals, a significant increase in A1AR receptor protein was found in the peritumoural zone as a function of the time of tumour development and tumour volume.
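
The inner and outer tumour zones can be derived from a segmented tumour mask by morphological erosion and dilation with a radius chosen from the pixel size. A sketch assuming an in-plane resolution of 50 µm per pixel (so 300 µm corresponds to 6 pixels) and an illustrative mask file:

    # Sketch: derive a 300 µm inner (marginal) zone and 300/600 µm outer zones
    # from a binary tumour mask; 50 µm per pixel is an assumed resolution.
    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion

    tumour = np.load("tumour_mask.npy").astype(bool)   # illustrative 2D mask
    px_per_300um = 6                                   # 300 µm / 50 µm per pixel

    inner_300 = tumour & ~binary_erosion(tumour, iterations=px_per_300um)
    outer_300 = binary_dilation(tumour, iterations=px_per_300um) & ~tumour
    outer_600 = (binary_dilation(tumour, iterations=2 * px_per_300um)
                 & ~binary_dilation(tumour, iterations=px_per_300um))
    # Mean A1AR autoradiography signal can then be read out within each zone.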

Evaluierung von Registrierungsstrategien zur multimodalen 3D-Rekonstruktion von Rattenhirnschnitten

Christoph Palm, Andreas Vieten, Dagmar Bauer, Uwe Pietrzyk

In this work, three strategies for the 3D stacking of multimodal section images are presented. The strategies are evaluated experimentally on dual-tracer autoradiographs. For this purpose, new measures describing the consistency within one modality and the consistency between modalities are developed, based on well-known registration metrics. Particularly with respect to the consistency between modalities, two strategies show the best results: (1) alternating multimodal registration and (2) mono-modal reconstruction of one modality followed by multimodal 2D registration of the second modality.

Uptake of F-18-fluoroethyl-L-tyrosine and H-3-L-methionine in focal cortical ischemia

Dagmar Bauer, Gabriele Stoffels, Dirk Pauleit, Christoph Palm, Kurt Hamacher, Heinz H. Coenen, Karl Langen

Objectives: C-11-methionine (MET) is particularly useful in brain tumor diagnosis, but unspecific uptake, e.g. in cerebral ischemia, has been reported (1). The F-18-labeled amino acid O-(2-[F-18]fluoroethyl)-L-tyrosine (FET) shows a clinical potential similar to MET in brain tumor diagnosis but is applicable on a wider clinical scale. The aim of this study was to evaluate the uptake of FET and H-3-MET in focal cortical ischemia in rats by dual-tracer autoradiography.

Methods: Focal cortical ischemia was induced in 12 Fisher CDF rats using the photothrombosis model (PT). One day (n=3), two days (n=5) and 7 days (n=4) after induction of the lesion, FET and H-3-MET were injected intravenously. One hour after tracer injection the animals were killed, and the brains were removed immediately and frozen in 2-methylbutane at -50°C. Brains were cut into coronal sections (thickness: 20 µm) and exposed first to H-3-insensitive photoimager plates to measure the FET distribution. After decay of F-18, the distribution of H-3-MET was determined. The autoradiograms were evaluated by regions of interest (ROIs) placed on areas with increased tracer uptake in the PT and the contralateral brain. Lesion-to-brain ratios (L/B) were calculated by dividing the mean uptake in the lesion by that in the brain. Based on previous studies in gliomas, an L/B ratio > 1.6 was considered pathological for FET.

Results: Variable increased uptake of both tracers was observed in the PT and its demarcation zone at all stages after PT. The cut-off level of 1.6 for FET was exceeded in 9/12 animals. One day after PT the L/B ratios were 2.0 ± 0.6 for FET vs. 2.1 ± 1.0 for MET (mean ± SD); two days after lesion 2.2 ± 0.7 for FET vs. 2.7 ± 1.0 for MET and 7 days after lesion 2.4 ± 0.4 for FET vs. 2.4 ± 0.1 for MET. In single cases discrepancies in the uptake pattern of FET and MET were observed.

Conclusions: FET, like MET, may exhibit significant uptake in infarcted areas or their immediate vicinity, which has to be considered in the differential diagnosis of unknown brain lesions. The discrepancies in the uptake pattern of FET and MET in some cases indicate either differences in the transport mechanisms of both amino acids or a different affinity for certain cellular components.

Towards MR-based attenuation correction for whole-body PET/MR imaging

Thomas Beyer, Markus Weigert, Christoph Palm, Harald H. Quick, Stefan P. Müller, Uwe Pietrzyk, Florian Vogt, M.J. Martinez, Andreas Bockisch

3D rat brain tumor reconstruction

Christoph Palm, Markus Dehnhardt, Andreas Vieten, Uwe Pietrzyk

Fusion strategies in multi-modality imaging

Uwe Pietrzyk, Christoph Palm, Thomas Beyer

3D rat brain tumors

Christoph Palm, Markus Dehnhardt, Andreas Vieten, Uwe Pietrzyk, Andreas Bauer, Karl Zilles

Preferred stereoselective brain uptake of D-serine

Dagmar Bauer, Kurt Hamacher, Stefan Bröer, Dirk Pauleit, Christoph Palm, Karl Zilles, Heinz H. Coenen, Karl-Josef Langen

Although it has long been presumed that d-amino acids are uncommon in mammals, substantial amounts of free d-serine have been detected in the mammalian brain. d-Serine has been demonstrated to be an important modulator of glutamatergic neurotransmission and acts as an agonist at the strychnine-insensitive glycine site of N-methyl-d-aspartate receptors. The blood-to-brain transfer of d-serine is thought to be extremely low, and it is assumed that d-serine is generated by isomerization of l-serine in the brain. Stimulated by the observation of a preferred transport of the d-isomer of proline at the blood–brain barrier, we investigated the differential uptake of [3H]-d-serine and [3H]-l-serine in the rat brain 1 h after intravenous injection using quantitative autoradiography. Surprisingly, brain uptake of [3H]-d-serine was significantly higher than that of [3H]-l-serine, indicating a preferred transport of the d-enantiomer of serine at the blood–brain barrier. This finding indicates that exogenous d-serine may have a direct influence on glutamatergic neurotransmission and associated diseases.

Color Texture Classification by Integrative Co-Occurrence Matrices

Christoph Palm

Integrative co-occurrence matrices are introduced as novel features for color texture classification. The extended co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information gain of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color spaces, the combined color and intensity textures are studied and the existence of intensity-independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results by up to 20% and 32% over the first and second baseline, respectively.
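
The integrative concept, counting co-occurrences of values across two different color channels rather than within one gray-scale image, can be sketched directly in NumPy; the quantization to 32 levels and the single displacement (0, 1) are assumptions for illustration.

    # Sketch: integrative co-occurrence matrix between two color channels
    # (e.g. R and G) for one pixel displacement; quantization level is assumed.
    import numpy as np

    def integrative_cooccurrence(ch_a, ch_b, levels=32, dy=0, dx=1):
        qa = (ch_a.astype(float) / 256 * levels).astype(int)
        qb = (ch_b.astype(float) / 256 * levels).astype(int)
        H, W = qa.shape
        mat = np.zeros((levels, levels), dtype=np.int64)
        src = qa[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
        dst = qb[max(0, dy):H + min(0, dy), max(0, dx):W + min(0, dx)]
        np.add.at(mat, (src.ravel(), dst.ravel()), 1)
        return mat / mat.sum()

    rgb = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)  # placeholder
    c_rg = integrative_cooccurrence(rgb[..., 0], rgb[..., 1])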

Investigation of fusion strategies of multi-modality images

Uwe Pietrzyk, Christoph Palm, Thomas Beyer

Presenting images from different modalities seems to be a trivial task considering the challenges of obtaining registered images as a prerequisite for image fusion. In combined tomographs like PET/CT, image registration is intrinsic. However, informative image fusion mandates careful preparation owing to the large amount of information that is presented to the observer. In complex imaging situations it is required to provide tools that are easy to handle and still powerful enough to help the observer discriminate important details from background patterns. We investigated several options for color tables applied to brain and non-brain images obtained with PET, MRI and CT.

Creating consistent 3D multi-modality data sets from autoradiographic and histological images of the rat brain

Uwe Pietrzyk, Dagmar Bauer, Andreas Vieten, Andreas Bauer, Karl-Josef Langen, Karl Zilles, Christoph Palm

Volumetric representations of autoradiographic and histological images gain ever more interest as a basis for interpreting data obtained with µ-imaging devices like microPET. Beyond supporting spatial orientation within rat brains, autoradiographic images in particular may serve as a basis for quantitatively evaluating the complex uptake patterns of microPET studies with receptor ligands or tumor tracers. They may also serve for the development of rat brain atlases or data models, which can be explored during further image analysis or simulation studies. In all cases a consistent spatial representation of the rat brain, i.e. its anatomy and the corresponding quantitative uptake pattern, is required. This includes both a restacking of the individual two-dimensional images and the exact registration of the respective volumes. We propose strategies for creating these volumes in a consistent way while trying to limit the requirements on data acquisition, i.e. being independent of other sources like video imaging of the block face prior to cutting, high-resolution micro-X-ray CT or micro-MRI.

Colour Texture Analysis for Quantitative Laryngoscopy

Christoph Palm, Andreas G. Schütz, Klaus Spitzer, Martin Westhofen, Thomas M. Lehmann, Justus F. R. Ilgner

Whilst considerable progress has been made in enhancing the quality of indirect laryngoscopy and image processing, the evaluation of clinical findings is still based on the clinician’s judgement. The aim of this paper was to examine the feasibility of an objective computer-based method for evaluating laryngeal disease. Digitally recorded images obtained by 90 degree- and 70 degree-angled indirect rod laryngoscopy using standardized white balance values were made of 16 patients and 19 healthy subjects. The digital images were evaluated manually by the clinician based on a standardized questionnaire, and suspect lesions were marked and classified on the image. Following colour separation, normal vocal cord areas as well as suspect lesions were analyzed automatically using co-occurrence matrices, which compare colour differences between neighbouring pixels over a predefined distance. Whilst colour histograms did not provide sufficient information for distinguishing between healthy and diseased tissues, consideration of the blue content of neighbouring pixels enabled a correct classification in 81.4% of cases. If all colour channels (red, green and blue) were regarded simultaneously, the best classification correctness obtained was 77.1%. Although only a very basic classification differentiating between healthy and diseased tissue was attempted, the results showed progress compared to grey-scale histograms, which have been evaluated before. The results document a first step towards an objective, machine-based classification of laryngeal images, which could provide the basis for further development of an expert system for use in indirect laryngoscopy.

Integrative Auswertung von Farbe und Textur

Christoph Palm

Classification of Color Textures by Gabor Filtering

Christoph Palm, T.M. Lehmann

Selektion von Farbtexturmerkmalen zur Tumorklassifikation dermatoskopischer Fotografien

B. Fischer, Christoph Palm, T.M. Lehmann, K. Spitzer

3D-Visualisierung glottaler Abduktionsbewegungen

C. Neuschaefer-Rube, Thomas M. Lehmann, Christoph Palm, J. Bredno, S. Klajman, Klaus Spitzer

Automated Analysis of Stroboscopic Image Sequences by Vibration Profiles

Christoph Palm, T.M. Lehmann, J. Bredno, C. Neuschaefer-Rube, S. Klajman, K. Spitzer

A method for automated segmentation of the vocal cords in stroboscopic video sequences is presented. In contrast to earlier approaches, the inner and outer contours of the vocal cords are independently delineated. Automatic segmentation of the low-contrast images is carried out by connecting the shape constraint of a point distribution model to a multi-channel region-based balloon model. This enables us to robustly compute a vibration profile that is used as a new diagnostic tool to visualize several vibration parameters in only one graphic. The vibration profiles are studied in two cases: one physiological vibration and one functional pathology.

Color Line Search for Illuminant Estimation in Real World Scenes

T.M. Lehmann, Christoph Palm

The estimation of illuminant color is mandatory for many applications in the field of color image quantification. However, it is an unresolved problem if no additional heuristics or restrictive assumptions apply. Assuming uniformly colored and roundly shaped objects, Lee has presented a theory and a method for computing the scene-illuminant chromaticity from specular highlights [H. C. Lee, J. Opt. Soc. Am. A 3, 1694 (1986)]. However, Lee’s method, called image path search, is less robust to noise and is limited in the handling of microtextured surfaces. We introduce a novel approach to estimate the color of a single illuminant for noisy and microtextured images, which frequently occur in real-world scenes. Using dichromatic regions of different colored surfaces, our approach, named color line search, reverses Lee’s strategy of image path search. Reliable color lines are determined directly in the domain of the color diagrams by three steps. First, regions of interest are automatically detected around specular highlights, and local color diagrams are computed. Second, color lines are determined according to the dichromatic reflection model by Hough transform of the color diagrams. Third, a consistency check is applied by a corresponding path search in the image domain. Our method is evaluated on 40 natural images of fruit and vegetables. In comparison with those of Lee’s method, accuracy and stability are substantially improved. In addition, the color line search approach can easily be extended to scenes of objects with macrotextured surfaces.
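
The second step of the approach, finding color lines in a chromaticity diagram via the Hough transform, can be sketched with scikit-image; the binarized color diagram below is a placeholder, and the intersection of the detected dichromatic lines would then estimate the illuminant chromaticity.

    # Sketch of the Hough step: detect dominant straight lines in a binarized
    # chromaticity (color) diagram; the diagram itself is a placeholder here.
    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    color_diagram = np.load("rg_chromaticity_hist.npy") > 0   # assumed binary diagram

    h, theta, d = hough_line(color_diagram)
    accums, angles, dists = hough_line_peaks(h, theta, d, num_peaks=2)

    # Each (angle, dist) pair is one candidate dichromatic color line; the
    # intersection point of two such lines estimates the illuminant chromaticity.
    for angle, dist in zip(angles, dists):
        print(f"line: angle={np.degrees(angle):.1f} deg, dist={dist:.1f}")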