<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Instrument segmentation in hybrid 3-D endoscopy using multi-sensor super-resolution</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
<persName><forename type="first">S</forename><surname>Haase</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Computer Science</orgName>
								<orgName type="laboratory">Pattern Recognition Lab</orgName>
								<orgName type="institution">Friedrich-Alexander-Universität Erlangen-Nürnberg</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
<persName><forename type="first">T</forename><surname>Köhler</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Computer Science</orgName>
								<orgName type="laboratory">Pattern Recognition Lab</orgName>
								<orgName type="institution">Friedrich-Alexander-Universität Erlangen-Nürnberg</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Erlangen Graduate School in Advanced Optical Technologies (SAOT)</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
<orgName type="department">Div. Medical and Biological Informatics Junior Group: Computer-assisted Interventions</orgName>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="department">German Cancer Research Center (DKFZ)</orgName>
								<address>
									<settlement>Heidelberg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
<persName><forename type="first">T</forename><surname>Kilgus</surname></persName>
						</author>
						<author>
<persName><forename type="first">L</forename><surname>Maier-Hein</surname></persName>
						</author>
						<author>
<persName><forename type="first">J</forename><surname>Hornegger</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Computer Science</orgName>
								<orgName type="laboratory">Pattern Recognition Lab</orgName>
								<orgName type="institution">Friedrich-Alexander-Universität Erlangen-Nürnberg</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Erlangen Graduate School in Advanced Optical Technologies (SAOT)</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
<orgName type="department">Div. Medical and Biological Informatics Junior Group: Computer-assisted Interventions</orgName>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="department">German Cancer Research Center (DKFZ)</orgName>
								<address>
									<settlement>Heidelberg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">H</forename><surname>Feußner</surname></persName>
							<affiliation key="aff4">
								<orgName type="department">Research Group Minimally-invasive interdisciplinary therapeutical intervention</orgName>
							</affiliation>
							<affiliation key="aff5">
<orgName type="institution">Klinikum rechts der Isar, Technical University Munich</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Instrument segmentation in hybrid 3-D endoscopy using multi-sensor super-resolution</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">CB5E7A78E4B46BD2F10533585195F294</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Time-of-Flight</term>
					<term>3-D Endoscopy</term>
					<term>Super-Resolution</term>
					<term>Segmentation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In hybrid 3-D endoscopy, photometric information is augmented by range data for guidance in minimally invasive procedures. In this paper, we propose a method for instrument segmentation that exploits sensor data fusion between range data and complementary photometric information. For improved robustness, and to overcome the limited spatial resolution of range sensors, we employ multi-sensor super-resolution to obtain high-quality range images. The data of both modalities is then segmented separately using thresholding techniques, and the results are consolidated into a common segmentation mask. Our approach was evaluated on real image data acquired from a liver phantom against manually labeled ground truth. Compared to purely color-driven segmentation, we improved the F-score from 0.61 to 0.73.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Problem Statement</head><p>3-D endoscopy has gained considerable attention as it enables new applications in minimally invasive surgery <ref type="bibr" target="#b0">[1]</ref>. Besides structured light <ref type="bibr" target="#b1">[2]</ref> and stereo vision <ref type="bibr" target="#b2">[3]</ref>, Time-of-Flight (ToF) technology has recently been integrated into a first hybrid 3-D endoscope prototype. In comparison to stereo vision, ToF is independent of texture information. Hence, the endoscope acquires range images at a constant resolution of 64×48 px. As the ToF sensor is built into a conventional endoscope, we additionally acquire high-resolution color images of 640×480 px through a common optical system using a beam splitter. Both range and complementary color information can be used to develop robust algorithms for image-guided surgery. Haase et al. <ref type="bibr" target="#b3">[4]</ref> proposed a tool localization framework that exploits range and color information for increased robustness. Nevertheless, as ToF technology exhibits a low signal-to-noise ratio, preprocessing is a required first step; different preprocessing techniques for ToF range images have recently been proposed <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. For instrument segmentation, approaches based on geometric information <ref type="bibr" target="#b6">[7]</ref> or color information <ref type="bibr" target="#b7">[8]</ref> have been investigated. The segmentation result can then be used for further applications, e.g. the avoidance of risk situations as proposed in <ref type="bibr" target="#b8">[9]</ref>. In comparison to purely 2-D driven approaches, however, we are able to incorporate 3-D surface data as well as 2-D photometric data to improve robustness. Our preliminary framework describes a first approach towards entire instrument segmentation on 3-D surface information using a ToF/RGB endoscope.
We propose a multi-sensor instrument segmentation framework that uses super-resolution to denoise ToF data and increase its spatial resolution <ref type="bibr" target="#b5">[6]</ref>. Our framework exploits data fusion of range and color images <ref type="bibr" target="#b9">[10]</ref>. After upsampling the ToF data, segmentation is performed on both modalities and the results are consolidated into a common segmentation mask.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Material and Methods</head><p>The proposed segmentation framework is illustrated in Fig. <ref type="figure" target="#fig_0">1</ref> and is subdivided into (a) super-resolution and (b) multi-sensor segmentation. Preprocessing is applied according to our previous publication <ref type="bibr" target="#b5">[6]</ref>. Our approach requires data fusion of range and color images. As the prototype uses a beam splitter to deliver the signal to both the RGB and the ToF sensor, we map color information onto the surface data using a homography that is estimated beforehand with a modified checkerboard <ref type="bibr" target="#b9">[10]</ref>. This mapping yields a high-quality RGB image that is aligned with the range image up to a scale factor.</p></div>
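The homography-based color-to-range mapping can be sketched as follows. This is a minimal NumPy illustration with nearest-neighbour lookup, assuming a 3×3 homography `H` estimated beforehand; the matrix in the usage example is an illustrative scale factor, not the calibration of the actual prototype:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of 2-D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

def map_color_to_range(color_img, H, range_shape):
    """For every pixel of the low-resolution range grid, look up the
    color pixel that the homography maps it to (nearest neighbour)."""
    h, w = range_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = np.rint(warp_points(H, pts)).astype(int)
    src[:, 0] = np.clip(src[:, 0], 0, color_img.shape[1] - 1)
    src[:, 1] = np.clip(src[:, 1], 0, color_img.shape[0] - 1)
    return color_img[src[:, 1], src[:, 0]].reshape(h, w, -1)

# Usage: a pure scaling homography from the 64x48 range grid
# into a 640x480 color image (illustrative values only).
H = np.diag([10.0, 10.0, 1.0])
color = np.zeros((480, 640, 3))
aligned = map_color_to_range(color, H, (48, 64))
```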
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Super-Resolution for Range Image Preprocessing</head><p>We cope with the low signal-to-noise ratio of the ToF images by applying super-resolution as described in <ref type="bibr" target="#b5">[6]</ref>. The approach is subdivided into motion estimation, range correction and numerical optimization. Multi-frame super-resolution employs subpixel displacements between consecutive frames as a cue to obtain a super-resolved image from multiple low-resolution frames; these displacements are induced by navigating the endoscope. Our objective function to obtain a maximum a-posteriori (MAP) estimate x̂ for a high-resolution image x is described by:</p><formula xml:id="formula_0">x̂ = argmin_x Σ_{k=1}^{K} ‖ y^{(k)} − W^{(k)} x ‖_2^2 + λ Σ_{i=1}^{N} φ( [S x]_i )</formula><p>The first sum denotes the data term and the second sum is a regularizer based on a pseudo-Huber loss function φ applied to a high-pass filtered version S x of the image x. λ weights the regularizer, K denotes the number of low-resolution input frames, and N denotes the number of pixels in the super-resolved output image. The data term describes the distance between the k-th low-resolution input frame y^{(k)} and a mathematical model of our image acquisition.</p><p>The system matrix W^{(k)} incorporates the blur induced by the point spread function, downsampling and the displacement field of the high-resolution image x. As the low signal-to-noise ratio of ToF data limits the accuracy of displacement field estimation, we exploit data fusion to estimate a high-quality displacement field in the color domain using optical flow <ref type="bibr" target="#b10">[11]</ref> and transfer it into the range domain. As we acquire images from different angles and distances, we have to correct the range data so that all low-resolution range images lie in the same plane. This correction is modeled by per-frame multiplicative and additive correction terms.</p><p>For more details on the multi-sensor super-resolution see Köhler et al. <ref type="bibr" target="#b5">[6]</ref>.</p></div>
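As an illustration of this objective, the following sketch evaluates the MAP energy on a toy 1-D signal. The pseudo-Huber definition and the finite-difference high-pass filter are simplifying assumptions for illustration, not the exact operators of [6]:

```python
import numpy as np

def pseudo_huber(t, delta=1.0):
    """Pseudo-Huber loss: quadratic near zero, linear for large |t|."""
    return delta**2 * (np.sqrt(1.0 + (t / delta)**2) - 1.0)

def sr_energy(x, frames, systems, lam=0.1, delta=1.0):
    """MAP super-resolution objective: squared residuals between each
    low-resolution frame y_k and W_k @ x (the data term), plus a
    pseudo-Huber regularizer on a high-pass filtered x (here a simple
    finite difference stands in for the high-pass operator)."""
    data = sum(np.sum((y - W @ x)**2) for y, W in zip(frames, systems))
    highpass = np.diff(x)
    return data + lam * np.sum(pseudo_huber(highpass, delta))

# Usage: one 2x-downsampling system matrix built by pair averaging;
# a flat signal that reproduces its frame exactly has zero energy.
x = np.ones(8)
W = np.kron(np.eye(4), [0.5, 0.5])   # (4, 8) averaging matrix
y = W @ x
energy = sr_energy(x, [y], [W], lam=0.1)
```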
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Multi-Sensor Segmentation</head><p>Based on the output of the preprocessing, we apply instrument segmentation to the data of both modalities. We distinguish between instruments and background by different thresholding techniques <ref type="bibr" target="#b7">[8]</ref>. Our segmentation exploits the facts that instruments are usually closer to the sensor and that instruments are usually grayish. Due to the data fusion in our hybrid 3-D endoscope, we can not only exploit the range data but also incorporate the color information into the segmentation process, similar to <ref type="bibr" target="#b8">[9]</ref>. A range value z is considered an instrument pixel if z ≤ θ_z. In the color domain we exploit the saturation and value channels of the HSV color space to segment the instrument. Here, a pixel p is considered an instrument pixel if I_S(p) ≤ θ_S and I_V(p) ≥ θ_V, where I_S and I_V denote the saturation channel and the value channel of the color image, respectively. Both binary results are then consolidated into a common segmentation mask by multiplication. To remove outliers caused by noisy data, we apply morphological operators to close small holes and remove separated areas with fewer than 1000 instrument pixels as false instrument candidates.</p></div>
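The two thresholding steps and their consolidation can be sketched as follows; the threshold values are illustrative placeholders, not the empirically tuned parameters of the paper:

```python
import numpy as np

def segment_instruments(range_img, sat, val,
                        z_max=60.0, s_max=0.25, v_min=0.4):
    """Fuse range-based and color-based thresholding into one mask.
    Threshold values are illustrative placeholders."""
    mask_range = range_img <= z_max               # instruments are close
    mask_color = (sat <= s_max) & (val >= v_min)  # grayish and bright
    return mask_range & mask_color                # consolidate (multiply)

# Usage on tiny 2x2 toy channels:
rng = np.array([[10.0, 80.0], [10.0, 10.0]])
sat = np.array([[0.1, 0.1], [0.9, 0.1]])
val = np.array([[0.8, 0.8], [0.8, 0.2]])
mask = segment_instruments(rng, sat, val)
```

The subsequent morphological closing and removal of connected components below 1000 pixels described above would follow as a postprocessing step (e.g. with scipy.ndimage) and is omitted here.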
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Experimental Setup</head><p>Our algorithm is evaluated on real data with a realistic liver phantom. Data was acquired with a ToF/RGB endoscope manufactured by Richard Wolf GmbH, Knittlingen, Germany. We assembled realistic scenarios including two different endoscopic instruments. For evaluation we investigated the results in two different scenarios, with 6 frames each. The upsampled images had a resolution of 240×160 px. Our instrument segmentation is compared to segmentation on each modality separately. For ground truth, the endoscopic instruments were manually segmented by an expert in the color domain. The threshold parameters were set empirically by analyzing the first frame; this frame was excluded from further evaluation to separate training and evaluation data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Results</head><p>For quantitative evaluation, we analyzed the sensitivity, the specificity and the F-score of our approach in Table <ref type="table">1</ref>. Here, we compare our segmentation results to those of our framework for a purely range-driven approach based on super-resolution and for a purely color-driven approach.</p><p>For qualitative evaluation we illustrate the results of all three approaches in Fig. <ref type="figure" target="#fig_2">2</ref>. The benefit of super-resolution for our noisy range data is shown in Fig. <ref type="figure" target="#fig_3">3</ref>, with the color overlay encoding the segmentation result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Discussion</head><p>Table <ref type="table">1</ref> illustrates that our approach yields the best specificity, i.e. only few background pixels are considered instrument pixels. Both single-sensor approaches achieve satisfying sensitivities, i.e. only few instrument pixels are missed. Nevertheless, the F-score as a measure of accuracy indicates a more reliable performance of our approach. Furthermore, as our approach consolidates both modalities, it is more robust with respect to the choice of threshold parameters: oversegmentation in one modality can be compensated by the other. The qualitative results confirm the comparison in Tab. 1 and highlight that both single modalities oversegment the image in areas close to the sensor with surface normals pointing directly at the camera. In those areas the instruments are too close to the tissue to be distinguished in the range image, while specular highlights likewise exclude the use of the color image. Our approach achieves a reasonable compromise, where only few instrument pixels are missed and oversegmentation is reduced. The 3-D reconstructions show that most parts of the instruments are segmented correctly by our approach and that preprocessing is required to provide an intuitive visualization.</p></div>
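The reported measures can be computed from binary masks as in this short sketch:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Sensitivity, specificity and F-score of a binary segmentation
    against a ground-truth mask."""
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_score
```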
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Summary</head><p>In this paper we proposed an instrument segmentation framework for 3-D ToF/RGB endoscopy. Our method applies robust multi-sensor super-resolution, based on motion estimation in high-resolution RGB images, to upsample and denoise low-resolution range images. Due to the improved signal-to-noise ratio of the range images, we can apply instrument segmentation using thresholding techniques and consolidate the results of both modalities. Compared to purely color-driven segmentation we improved the F-score from 0.61 to 0.73. Future work will consider different segmentation techniques and refinements of our super-resolution for further denoising. For the consolidation of both sensor results, additional weighting factors will be taken into account as proposed in <ref type="bibr" target="#b3">[4]</ref>. In experiments on real organs, we will investigate the robustness of our segmentation in real medical scenarios.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: Flowchart of our instrument segmentation framework. First, sensor fusion and super-resolution are performed. Second, RGB data and range data are segmented separately. Third, both results are consolidated.</figDesc><graphic coords="2,40.05,43.73,331.26,115.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Table 1:</head><label>1</label><figDesc>Comparison of our approach to segmentation on super-resolved data only and on color data only.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: Input data (first and second column) and comparison of three segmentations: on super-resolved range data only (third column), on color data only (fourth column), and our approach on color and range data (last column).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: 3-D meshes with (left) and without (right) the use of super-resolution. The greenish overlay in both images is the segmentation mask of the proposed approach. [See the electronic publication for a color version of this figure.]</figDesc><graphic coords="3,214.49,325.93,161.72,109.23" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. HO 1791/7-1. This research was funded/ supported by the Graduate School of Information Science in Health (GSISH) and the TUM Graduate School. The authors gratefully acknowledge funding of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the DFG in the framework of the German excellence initiative. We thank the Metrilus GmbH for their support. This project was supported by the research training group 1126 funded by the DFG.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Real-time surface reconstruction from stereo endoscopic images for intraoperative registration</title>
		<author>
			<persName><forename type="first">S</forename><surname>Röhl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bodenstedt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Suwelack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kenngott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mueller-Stich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dillmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Speidel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc SPIE</title>
				<meeting>SPIE</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">7964</biblScope>
			<biblScope unit="page" from="796414" to="796414" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An Endoscopic 3D Scanner based on Structured Light</title>
		<author>
			<persName><forename type="first">C</forename><surname>Schmalz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Forster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Angelopoulou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Med Image Anal</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1063" to="1072" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Stereo Endoscopy as a 3-D Measurement Tool</title>
		<author>
			<persName><forename type="first">M</forename><surname>Field</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Clarke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Strup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Seales</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">EMBC</title>
		<imprint>
			<biblScope unit="page" from="5748" to="5751" />
<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Laparoscopic Instrument Localization using a 3-D Time-of-Flight/RGB Endoscope</title>
		<author>
			<persName><forename type="first">S</forename><surname>Haase</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wasza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kilgus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hornegger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">WACV</title>
		<imprint>
			<biblScope unit="volume">2013</biblScope>
			<biblScope unit="page" from="449" to="454" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Denoising Strategies for Time-of-Flight Data</title>
<author>
			<persName><forename type="first">F</forename><surname>Lenzen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">I</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Nair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Meister</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schäfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Becker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Garbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Theobalt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Time-of-Flight Imaging: Algorithms, Sensors and Applications</title>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">ToF Meets RGB: Novel Multi-Sensor Super-Resolution for Hybrid 3-D Endoscopy</title>
		<author>
			<persName><forename type="first">T</forename><surname>Köhler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Haase</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wasza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kilgus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Maier-Hein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Feußner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hornegger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">MICCAI</title>
		<imprint>
			<biblScope unit="volume">8149</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note>LNCS. To Appear</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Automatic Instrument Localization in Laparoscopic Surgery</title>
		<author>
			<persName><forename type="first">J</forename><surname>Climent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mares</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronic Letters on Computer Vision and Image Analysis</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="21" to="31" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Detection of grey regions in color images : application to the segmentation of a surgical instrument in robotized laparoscopy</title>
		<author>
			<persName><forename type="first">C</forename><surname>Doignon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Nageotte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Mathelin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc of IROS</title>
				<meeting>of IROS</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="3394" to="3399" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Recognition of Risk Situations Based on Endoscopic Instrument Tracking and Knowledge Based Situation Modeling</title>
		<author>
			<persName><forename type="first">S</forename><surname>Speidel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sudra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Senemaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Drentschew</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">P</forename><surname>Müller-Stich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gutt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dillmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc SPIE</title>
				<meeting>SPIE</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">6918</biblScope>
			<biblScope unit="page" from="69180X" to="69180X" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">ToF/RGB Sensor Fusion for 3-D Endoscopy</title>
		<author>
			<persName><forename type="first">S</forename><surname>Haase</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Forman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kilgus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Maier-Hein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hornegger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Current Medical Imaging Reviews</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="113" to="119" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Beyond Pixels: Exploring New Representations and Applications for Motion Analysis</title>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
		<respStmt>
			<orgName>Massachusetts Institute of Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
