<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Pyramid-Focus-Augmentation: Medical Image Segmentation with Step-Wise Focus</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Vajira</forename><surname>Thambawita</surname></persName>
							<email>vajira@simula.no</email>
							<affiliation key="aff0">
								<address>
									<settlement>SimulaMet</settlement>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Oslo Metropolitan University</orgName>
								<address>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Steven</forename><surname>Hicks</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>SimulaMet</settlement>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Oslo Metropolitan University</orgName>
								<address>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pål</forename><surname>Halvorsen</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>SimulaMet</settlement>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Oslo Metropolitan University</orgName>
								<address>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><forename type="middle">A</forename><surname>Riegler</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>SimulaMet</settlement>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Pyramid-Focus-Augmentation: Medical Image Segmentation with Step-Wise Focus</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C072AF133FCAA4FF9592D67A15ECA346</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Segmentation of findings in the gastrointestinal tract is a challenging but also important task which is an important building stone for sufficient automatic decision support systems. In this work, we present our solution for the Medico 2020 task, which focused on the problem of colon polyp segmentation. We present our simple but efficient idea of using an augmentation method that uses grids in a pyramid-like manner (large to small) for segmentation. Our results show that the proposed methods work as indented and can also lead to comparable results when competing with other methods.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">METHOD</head><p>Our method has two main steps: data augmentation with PYRA using pre-defined grid sizes followed by training of a DL model with the resulting augmented data. The source code for our method can be found in our GitHub 1 repository. The development dataset <ref type="bibr" target="#b4">[5]</ref> </p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Segmented polyp regions in Gastrointestinal Tract (GI) images <ref type="bibr" target="#b0">[1]</ref> can provide detailed analysis to doctors to identify correct areas to proceed with treatments compared to other computer-aided analysis such as classification <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref> and detection <ref type="bibr" target="#b6">[7]</ref> which provide less detailed information about the exact region and size of the affected area. However, training Deep Learning (DL) models to perform segmentation for medical data is challenging because of the lack of medical domain images as a result of tight privacy restrictions, the high cost for annotating medical data using experts, and a lower number of true positive findings compared to true negatives. In this paper, we present our approach for the participation in the 2020 Medico Segmentation Challenge <ref type="bibr" target="#b3">[4]</ref>, for which we introduce a novel augmentation technique called pyramidfocus-augmentation (PYRA). PYRA can be used to improve the performance of segmentation tasks when we have a small dataset to train our DL models or if the number of positive findings is small. Further, our method can focus doctors' attention to regions of polyps gradually. In addition to that the output of the method is also adjustable meaning, we could present a lower resolution of the grid if this is sufficient for the task at hand which can help to save processing time. Finally, our technique can also be applied to any segmentation task using any deep learning segmentation model.  provided by the organizers has 1000 polyp images with corresponding ground truth masks. We divided it into two parts such that 800 images are used for model training and 200 for testing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">PYRA Data Augmentation</head><p>As the first step in PYRA, we generate checker board grids as illustrated in the first row of Figure <ref type="figure" target="#fig_2">2</ref> with sizes of 𝑁 × 𝑁 with 𝑁 values of 4, 8, 16, 32, 64, 128 and 256. 𝑁 should be selected such that 𝑖𝑚𝑎𝑔𝑒_𝑠𝑖𝑧𝑒 % 𝑁 = 0. Applying these eight grid augmentations to the training dataset with 800 images increases the training data to 800 × 8 = 6400 images.</p><p>For the second step, we convert the Ground Truth (GT) segmentation masks into a grid-based representation of the GT corresponding to the grid sizes. For example, if the grid size is 8 × 8, then the corresponding GT is a 8 × 8 converted GT.</p><p>The transformation of the ground truth masks to gridded masks is performed as following: (i) we divide the gt into the input grid size, (ii) we counted true pixels of each grid cell, (iii) if the number of true pixels is larger than 0, we converted the whole cell into a true cell. An example of a converted GT is depicted on the top of Figure <ref type="figure" target="#fig_1">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Experimental Setup and Model training</head><p>We have set up four experiments: Exp-1, Exp-2, Exp-3, and Exp-4 to show the performance of PYRA. Exp-1 and Exp-2 represent two baseline experiments. Exp-1 uses only the 800 training images without any augmentations. In Exp-2, we used general augmentations such as Affine, Coarse Dropout, and Additive Gaussian Noise from the library called imgaug <ref type="bibr" target="#b5">[6]</ref>. Exp-3 and Exp-4 are using our PYRA with the data from Exp-1 and Exp-2, respectively. The training dataset size was changed from 800 to 6400 after applying PYRA. However, we validated our experiments only using 200 images reserved for testing. We have used one data loader for all experiments to maintain a fair evaluation. The baseline experiments Exp-1 and Exp-2 used the data loader with a grid size of 256 × 256 which represents the original GT masks without any conversion.</p><p>MediaEval'20, December 14-15 2020, Online   We have used the Unet architecture <ref type="bibr" target="#b7">[8]</ref> as our DL model to perform the polyp segmentation task. We trained the Unet model with a stacked input using a polyp image and a random grid image selected from the eight sizes. Then, the model was trained to predict converted GT which were formed by converting the real GT into a grid-based GT as in the previous section.</p><p>The Unet model used dropout layers with the probability of 0.5. Then, we used our Unet model as a stochastic model to perform Monte Carlo sampling for the validation data. We kept our Unet model in the training state to perform this sampling while predicting the output for the validation data. In the Pytorch library, which is used for all our implementations, we can do this simply by keeping the model state in the model.train() state. We iterated 50 times for a single input to predict the output. We calculated the mean from these 50 predictions, which is used as the final prediction for the competition and Standard Deviation (std) images to know the model's confidence for the predictions. The whole training process is illustrated in Figure <ref type="figure" target="#fig_1">1</ref> with an example image and a grid size of 8 × 8 as an input. However, we submitted the predicted mean images for the gird size of 256 × 256 which generate predictions with the size of true GT (without any transformations). All the experiments used a fixed learning rate of 0.001 with the RMSprop optimizer <ref type="bibr" target="#b2">[3]</ref>, which were selected from preliminary experiments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">RESULT AND DISCUSSION</head><p>Table <ref type="table" target="#tab_1">1</ref> summarizes the Mean Intersection over Union (mIoU) and the Dice Coefficient (DC) for the validation dataset and the test dataset. The final results to the competition were collected from mean images calculated by sampling 50 outputs for the same input with the grid size of 256. Additionally, we have calculated std images for the validation dataset to show the benefits of using PYRA. Example outputs for a given input image are illustrated in Figure <ref type="figure" target="#fig_2">2</ref>.</p><p>According to the results in Table <ref type="table" target="#tab_1">1</ref>, Exp-3 which use only Pyramidfocus-augmentation shows the best validation results with mIoU of 0.7693 and DC of 0.8447, and the best test results with mIoU of 0.6981 and DC of 0.7887. The advantage of our Pyramid-focusaugmentation can be identified using the third row of Figure <ref type="figure" target="#fig_2">2</ref> along the fourth row of the same figure. We can see that our model can focus on polyp regions step by step. The third row of Figure <ref type="figure" target="#fig_2">2</ref> shows how our model predicts correct polyp cells in 2 × 2, 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64, 128 × 128 and 256 × 256 grid sizes, respectively. When we compare this row with the last row of the images of std, we can see that the model has high confidence for the identified polyp regions. For example, it shows high confidence (black color region) for the middle part of the polyps. In contrast, our model shows less confidence (yellow color region) for a polyps' outer borders.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">CONCLUSION AND FUTURE WORK</head><p>In this paper, we presented a novel augmentation method called Pyramid-focus-augmentation (PYRA), which can be used to train segmentation DL methods. Our method shows a large benefit in the medical diagnosis use-case, by focusing a doctors' attention on regions with findings step by step.</p><p>Our experiments did not use post-processing to clean up output corresponding to the input grid. In future work, we will evaluate our approach with additional post-processing steps for smaller grid sizes. For example, we can do convolution operations to the output using a convolutional window equal to the input grid size to clean the results. However, post-processing techniques will not improve the final results when the grid size equals the input images' resolution.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">ACKNOWLEDGMENT</head><p>The research has benefited from the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by the Research Council of Norway under contract 270053.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Training steps for a segmentation model with the new augmentation technique.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: A representation of input and corresponding outputs of grid-augmentation-based segmentation. The first row shows an input image and all grid sizes used as stacked grid image with the input image. The second row represent ground truth. The third and fourth rows show predicted mean and std output images calculated from 30 samples.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Thambawita et al.</figDesc><table><row><cell>Image</cell><cell>2 × 2</cell><cell>4 × 4</cell><cell>8 × 8</cell><cell>16 × 16</cell><cell>32 × 32</cell><cell>64 × 64</cell><cell>128 × 128 256 × 256</cell></row><row><cell>Ground</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Truth</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Predictions</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Std from 30</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>samples</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Result collected from validation data and test data. All test data results were provided by organizers of Medico task in MediaEval 2020.</figDesc><table><row><cell></cell><cell cols="4">Validation results Official test results</cell></row><row><cell cols="2">Method mIOU</cell><cell>Dice</cell><cell>mIOU</cell><cell>Dice</cell></row><row><cell>Exp-1</cell><cell>0.7640</cell><cell>0.8422</cell><cell>0.6934</cell><cell>0.7817</cell></row><row><cell>Exp-2</cell><cell>0.7077</cell><cell>0.7957</cell><cell>0.6759</cell><cell>0.7700</cell></row><row><cell>Exp-3</cell><cell cols="3">0.7693 0.8447 0.6981</cell><cell>0.7887</cell></row><row><cell>Exp-4</cell><cell>0.6898</cell><cell>0.7822</cell><cell>0.6696</cell><cell>0.7665</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Polyp Segmentation in Colonoscopy Images Using Fully Convolutional Network</title>
		<author>
			<persName><forename type="first">M</forename><surname>Akbari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mohrekesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Nasr-Esfahani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M R</forename><surname>Soroushmehr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Karimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Samavi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Najarian</surname></persName>
		</author>
		<idno type="DOI">10.1109/EMBC.2018.8512197</idno>
		<ptr target="https://doi.org/10.1109/EMBC.2018.8512197" />
	</analytic>
	<monogr>
		<title level="m">40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="69" to="72" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Deep Learning Based Disease Detection Using Domain Specific Transfer Learning</title>
		<author>
			<persName><forename type="first">Steven</forename><forename type="middle">Alexander</forename><surname>Hicks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pia</forename><forename type="middle">H</forename><surname>Smedsrud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pål</forename><surname>Halvorsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Riegler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of MediaEval</title>
				<meeting>of MediaEval</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Neural networks for machine learning lecture 6a overview of mini-batch gradient descent</title>
		<author>
			<persName><forename type="first">Geoffrey</forename><surname>Hinton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nitish</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kevin</forename><surname>Swersky</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012. 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Medico Multimedia Task at MediaEval 2020: Automatic Polyp Segmentation</title>
		<author>
			<persName><forename type="first">Debesh</forename><surname>Jha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><forename type="middle">A</forename><surname>Hicks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krister</forename><surname>Emanuelsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Håvard</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dag</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>De Lange</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">A</forename><surname>Riegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pål</forename><surname>Halvorsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the MediaEval 2020 Workshop</title>
				<meeting>of the MediaEval 2020 Workshop</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Kvasir-seg: A segmented polyp dataset</title>
		<author>
			<persName><forename type="first">Debesh</forename><surname>Jha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pia</forename><forename type="middle">H</forename><surname>Smedsrud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">A</forename><surname>Riegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pål</forename><surname>Halvorsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>De Lange</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dag</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Håvard</forename><forename type="middle">D</forename><surname>Johansen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Multimedia Modeling</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="451" to="462" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">Alexander</forename><forename type="middle">B</forename><surname>Jung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kentaro</forename><surname>Wada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jon</forename><surname>Crall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Satoshi</forename><surname>Tanaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jake</forename><surname>Graving</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Reinders</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sarthak</forename><surname>Yadav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joy</forename><surname>Banerjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gábor</forename><surname>Vecsei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adam</forename><surname>Kraft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zheng</forename><surname>Rui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jirka</forename><surname>Borovec</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Vallentin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Semen</forename><surname>Zhydenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kilian</forename><surname>Pfeiffer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ben</forename><surname>Cook</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ismael</forename><surname>Fernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">François-Michel</forename><surname>De Rainville</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chi-Hung</forename><surname>Weng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Abner</forename><surname>Ayala-Acevedo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Raphael</forename><surname>Meudec</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matias</forename><surname>Laporte</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
		<ptr target="https://github.com/aleju/imgaug." />
		<title level="m">imgaug</title>
				<imprint>
			<date type="published" when="2020-01">2020. 2020. 01-Nov-2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Real-time detection of colon polyps during colonoscopy using deep learning: systematic validation with four independent datasets</title>
		<author>
			<persName><forename type="first">Ji</forename><forename type="middle">Young</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jinhoon</forename><surname>Jeong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eun</forename><forename type="middle">Mi</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chunae</forename><surname>Ha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hyo</forename><forename type="middle">Jeong</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ja</forename><forename type="middle">Eun</forename><surname>Koo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dong-Hoon</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Namkug</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeong-Sik</forename><surname>Byeon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific Reports</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">8379</biblScope>
			<date type="published" when="2020">2020. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">U-net: Convolutional networks for biomedical image segmentation</title>
		<author>
			<persName><forename type="first">Olaf</forename><surname>Ronneberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Philipp</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Brox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Medical image computing and computer-assisted intervention</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="234" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning Applied to Gastrointestinal Tract Abnormality Classification</title>
		<author>
			<persName><forename type="first">Vajira</forename><surname>Thambawita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Debesh</forename><surname>Jha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hugo</forename><forename type="middle">Lewi</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Håvard</forename><forename type="middle">D</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dag</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pål</forename><surname>Halvorsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">A</forename><surname>Riegler</surname></persName>
		</author>
		<idno type="DOI">10.1145/3386295</idno>
		<ptr target="https://doi.org/10.1145/3386295" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Comput. Healthcare</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">29</biblScope>
			<date type="published" when="2020-06">2020. June 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The medico-task 2018: Disease detection in the gastrointestinal tract using global features and deep learning</title>
		<author>
			<persName><forename type="first">Vajira</forename><surname>Thambawita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Debesh</forename><surname>Jha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Riegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pål</forename><surname>Halvorsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hugo</forename><forename type="middle">Lewi</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Håvard</forename><forename type="middle">D</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dag</forename><surname>Johansen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of MediaEval</title>
				<meeting>of MediaEval</meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
