<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Medico Multimedia Task at MediaEval 2020: Automatic Polyp Segmentation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Debesh Jha</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Steven A. Hicks</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Krister Emanuelsen</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Håvard Johansen</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dag Johansen</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thomas de Lange</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael A. Riegler</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pål Halvorsen</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Augere Medical AS</institution>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Baerum Hospital</institution>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Oslo Metropolitan University</institution>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Sahlgrenska University Hospital</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>SimulaMet</institution>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>UiT The Arctic University of Norway</institution>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>14</fpage>
      <lpage>15</lpage>
      <abstract>
        <p>Colorectal cancer is the third most common cause of cancer worldwide. According to Global Cancer Statistics 2018, the incidence of colorectal cancer is increasing in both developing and developed countries. Early detection of colon anomalies such as polyps is important for cancer prevention, and automatic polyp segmentation can play a crucial role in this. Regardless of the recent advancements in early detection and treatment options, the estimated polyp miss rate is still around 20%. Support via an automated computer-aided diagnosis system could be one of the potential solutions for the overlooked polyps. Such detection systems can enable low-cost design solutions and save doctors’ time, which they could, for example, use to perform more patient examinations. In this paper, we introduce the 2020 Medico challenge, provide some information on related work and the dataset, describe the task and evaluation metrics, and discuss the necessity of organizing the Medico challenge.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>The goal of the Medico automatic polyp segmentation challenge is to
benchmark polyp segmentation algorithms on new test images, i.e.,
algorithms that can detect and mask out polyps (including irregular,
small, or flat polyps) with high accuracy. The main goal of the challenge
is to benchmark different computer vision and machine learning algorithms
on the same dataset, which could promote the development of novel methods
that are potentially useful in clinical settings. Moreover, we emphasize
the robustness and generalization of the methods in order to address the
limitations related to data availability and method comparison. The
detailed challenge description can be found at
https://multimediaeval.github.io/editions/2020/tasks/medico/.</p>
      <p>
        After three years of organizing the Medico Multimedia Task [
        <xref ref-type="bibr" rid="ref17 ref18 ref6">6,
17, 18</xref>
        ], we present the fourth iteration in the series. With a focus on
assessing human semen quality last year [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], this year we build on
the 2017 [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and 2018 [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] challenges of automatically detecting
anomalies in video and image data from the gastrointestinal (GI) tract.
We introduce a new task for automatic polyp segmentation. In the prior
GI challenges, we classified the images into various classes; in this
challenge, we are instead interested in identifying each pixel of the
lesions in the provided polyp images.
      </p>
      <p>
        The task is important because colorectal cancer (CRC) is the third
leading cause of cancer and the fourth most prevalent cancer in terms of
incidence globally [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Regular screening through
colonoscopy is a prerequisite for early cancer detection and
prevention of CRC. Regardless of the achievement of colonoscopy
examinations, the estimated polyp miss rate is still around 20% [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
and there are large inter-observer variabilities [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. An automated
computer-aided diagnosis (CADx) system that detects and highlights polyps
could greatly help improve the average endoscopist’s performance.
      </p>
      <p>
        In recent years, convolutional neural networks (CNNs) have
advanced medical image segmentation algorithms. However, it
is essential to understand the strengths and weaknesses of the
different approaches via performance comparison on a common
dataset. There are a large number of available studies on automatic
polyp segmentation [
        <xref ref-type="bibr" rid="ref11 ref14 ref20 ref3 ref4 ref5 ref8 ref9">3–5, 8, 9, 11, 14, 20</xref>
        ]. However, most of the
conducted studies were performed on restricted datasets, which makes
benchmarking, algorithm development, and reproducibility difficult.
Our challenge utilizes the publicly available
Kvasir-SEG dataset [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The entire Kvasir-SEG dataset is used for
training, and an additional unseen test dataset is used for benchmarking
the algorithms.
      </p>
      <p>In summary, the Medico 2020 challenge can support building future
systems and foster open, comparable, and reproducible results. The
objective of the task is to find efficient solutions for automatic polyp
segmentation, both in terms of pixel-wise accuracy and processing
speed.</p>
      <p>
        For the clinical translation of technologies, it is essential to design
methods on multi-centered and multi-modal datasets. We have
recently released several gastrointestinal endoscopy [
        <xref ref-type="bibr" rid="ref1 ref15 ref16">1, 15, 16</xref>
        ],
wireless capsule endoscopy [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], endoscopic instrument [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and
polyp datasets [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Thus, we have put significant effort into addressing the
challenges related to the lack of publicly available datasets in the field
of GI endoscopy.
      </p>
    </sec>
    <sec id="sec-2">
      <title>DATASET</title>
      <p>
        The Kvasir-SEG [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] training dataset can be downloaded from
https://datasets.simula.no/kvasir-seg/. It contains 1,000 polyp images and
their corresponding ground truth masks, as shown in Figure 1. The
dataset was collected from real routine clinical examinations at
Baerum Hospital in Norway by expert gastroenterologists. The
resolution of the images varies from 332 × 487 to 1920 × 1072 pixels.
Some of the images contain a green thumbnail in the lower-left
corner showing the scope position marking from the
ScopeGuide (Olympus) (see Figure 2). We annotated another separate
dataset consisting of 160 new polyp images and use it as the
test set to benchmark the participants’ approaches.
Figure 2 shows some examples of test images used in the challenge.
      </p>
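      <p>As an illustration, pairing each training image with its ground
truth mask can be sketched as follows. This is a minimal example assuming
the dataset’s images and masks are stored in two separate directories with
identical filenames (the directory names here are hypothetical):</p>

```python
from pathlib import Path


def pair_images_with_masks(image_dir: Path, mask_dir: Path):
    """Pair each polyp image with its ground-truth mask by filename.

    Assumes images and masks share filenames across two folders
    (e.g. images/ and masks/); images without a matching mask are skipped.
    """
    pairs = []
    for image_path in sorted(image_dir.glob("*.jpg")):
        mask_path = mask_dir / image_path.name
        if mask_path.exists():
            pairs.append((image_path, mask_path))
    return pairs
```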
    </sec>
    <sec id="sec-3">
      <title>TASK DESCRIPTION</title>
      <p>The participants are invited to submit their solutions for the two
following tasks: segmentation and efficiency (speed).</p>
    </sec>
    <sec id="sec-4">
      <title>The automatic polyp segmentation task</title>
      <p>This task invites participants to develop new algorithms for the
segmentation of polyps. The main focus is to develop a system that is
efficient in terms of diagnostic ability and processing speed and that
accurately segments the maximum polyp area in a frame from the provided
colonoscopic images.</p>
      <p>There are several ways to evaluate segmentation accuracy.
The metrics most commonly used by the wider medical imaging community
are the Dice similarity coefficient (DSC), also called the overlap
index, and the mean Intersection over Union (mIoU), also known as the
Jaccard index. In clinical applications, gastroenterologists are
interested in extracting pixel-wise detailed information from the
potential lesions. Metrics such as DSC and mIoU compare the pixel-wise
similarity between the predicted segmentation maps and the original
ground truth of the lesions.</p>
      <p>The DSC is a metric for comparing the similarity between two given
samples. If tp, tn, fp, and fn represent the number of true positive, true
negative, false positive, and false negative per-pixel predictions for an
image, respectively, then the DSC is given as:</p>
      <p>DSC = (2 · tp) / (2 · tp + fp + fn)</p>
      <p>Furthermore, the IoU is defined as the ratio of the intersection of
the predicted and ground truth masks over their union. The mean IoU
computes the IoU for each semantic class of an image and takes the mean
over the classes. The IoU is defined as:</p>
      <p>IoU = tp / (tp + fp + fn)</p>
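      <p>These two metrics can be expressed directly in code. The following
is a minimal sketch (the helper names are our own); the last function
encodes the standard algebraic relationship DSC = 2 · IoU / (1 + IoU) that
links the two metrics:</p>

```python
def dice_score(tp: int, fp: int, fn: int) -> float:
    """DSC = 2*tp / (2*tp + fp + fn), computed per image."""
    return 2 * tp / (2 * tp + fp + fn)


def iou_score(tp: int, fp: int, fn: int) -> float:
    """IoU (Jaccard index) = tp / (tp + fp + fn)."""
    return tp / (tp + fp + fn)


def dsc_from_iou(iou: float) -> float:
    """Algebraic link between the two metrics: DSC = 2*IoU / (1 + IoU)."""
    return 2 * iou / (1 + iou)
```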
      <p>Moreover, in the polyp image segmentation task (i.e., a binary
segmentation task), precision (positive predictive value) indicates
over-segmentation, and recall (true positive rate) indicates
under-segmentation. Over-segmentation means that the predicted mask
covers more area than the ground truth in some part of the frame.
Under-segmentation implies that the algorithm has predicted less polyp
content in some portion of the image compared to the corresponding
ground truth. We also encourage participants to calculate precision and
recall, which are given by:</p>
      <p>Precision = tp / (tp + fp)</p>
      <p>Recall = tp / (tp + fn)</p>
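      <p>A minimal sketch of computing per-pixel precision and recall from
two binary masks (the function name and the list-of-lists mask
representation are our own illustrative choices):</p>

```python
def precision_recall(pred, gt):
    """Per-pixel precision and recall for two same-shaped binary masks.

    pred and gt are 2D lists of 0/1 values.
    Precision = tp / (tp + fp); Recall = tp / (tp + fn).
    """
    tp = fp = fn = 0
    for pred_row, gt_row in zip(pred, gt):
        for p, g in zip(pred_row, gt_row):
            if p and g:
                tp += 1
            elif p and not g:
                fp += 1  # predicted polyp where there is none (over-segmentation)
            elif g and not p:
                fn += 1  # missed polyp pixel (under-segmentation)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```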
      <p>The main metric for evaluation and ranking of the teams is mIoU.
There is a direct correlation between mIoU and DSC; therefore, we use only
one metric for ranking. If teams have the same mIoU value, they will be
further ranked by the higher DSC value. For the evaluation, we ask the
participants to submit the predicted masks in a zip file. The resolution of
the predicted masks must be equal to that of the test images.</p>
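      <p>Since the predicted masks must match the resolution of the test
images, a simple pre-submission check might look as follows; the function
and the filename-to-size mapping are illustrative assumptions on our part,
not part of the official evaluation tooling:</p>

```python
def find_resolution_mismatches(test_sizes, predicted_sizes):
    """Return filenames whose predicted mask resolution differs from the
    corresponding test image, or which are missing entirely.

    Both arguments map filename to a (width, height) tuple.
    """
    mismatches = []
    for name, size in test_sizes.items():
        if predicted_sizes.get(name) != size:
            mismatches.append(name)
    return sorted(mismatches)
```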
    </sec>
    <sec id="sec-5">
      <title>The algorithm speed efficiency task</title>
      <p>Real-time polyp detection is required for live patient examinations
in the clinic, as it can draw the gastroenterologist’s attention to the
region of interest. Thus, we also ask participants to take part in the
efficiency task. The algorithm efficiency task is similar to the previous
task, but it puts a stronger emphasis on the algorithm’s speed in
terms of frames-per-second.</p>
      <p>Submissions for this task will be evaluated based on both the
algorithm’s speed and segmentation performance. The segmentation
performance (the segmentation accuracy) will be measured using
the same mIoU metric as described above for the first task, whereas
speed will be measured by frames-per-second (FPS) according
to the following formula:</p>
      <p>FPS = number of processed frames / total processing time (in
seconds)</p>
      <p>For this task, we require participants to submit their proposed
algorithm as part of a Docker image so that we can evaluate it on
our hardware. We evaluate the performance of the algorithms on an
Nvidia GeForce GTX 1080 system. For the team ranking, we set a certain
mIoU threshold for considering a submission a valid efficient
segmentation solution and rank the valid submissions according to
FPS.</p>
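      <p>The FPS measurement can be sketched as follows; this is an
illustrative timing harness (the names are our own), not the official
evaluation script:</p>

```python
import time


def measure_fps(segment, frames):
    """Estimate frames-per-second: FPS = number of frames / elapsed seconds.

    `segment` is any callable that processes one frame; `frames` is the
    sequence of frames to run it over.
    """
    start = time.perf_counter()
    for frame in frames:
        segment(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```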
    </sec>
    <sec id="sec-6">
      <title>DISCUSSION AND OUTLOOK</title>
      <p>Currently, there is a growing interest in the development of CADx
systems that could act as a second observer and digital assistant for
endoscopists. Algorithmic benchmarking is an efficient approach to
analyzing the results of different methods. A comparison of different
approaches can help us identify challenging cases in the data. We can
then discriminate the image frames into simple, moderate, and
challenging images. Later on, we can target model development at the
challenging images, which are usually missed during routine
examinations, to design better CADx systems. We hope that this approach
will help us design better-performing algorithms/models that may
increase the efficiency of the health system.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Hanna</given-names>
            <surname>Borgli</surname>
          </string-name>
          , Vajira Thambawita, Pia H Smedsrud, Steven Hicks, Debesh Jha, Sigrun L Eskeland, Kristin Ranheim Randel, Konstantin Pogorelov, Mathias Lux,
          <source>Duc Tien Dang Nguyen</source>
          , et al.
          <year>2020</year>
          .
          <article-title>HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy</article-title>
          .
          <source>Scientific Data</source>
          <volume>7</volume>
          ,
          <issue>1</issue>
          (
          <year>2020</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Freddie</given-names>
            <surname>Bray</surname>
          </string-name>
          , Jacques Ferlay, Isabelle Soerjomataram, Rebecca L Siegel,
          <article-title>Lindsey A Torre,</article-title>
          and
          <string-name>
            <given-names>Ahmedin</given-names>
            <surname>Jemal</surname>
          </string-name>
          .
          <year>2018</year>
          . Global cancer statistics
          <year>2018</year>
          :
          <article-title>GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries</article-title>
          . CA:
          <article-title>a cancer journal for clinicians 68,</article-title>
          <issue>6</issue>
          (
          <year>2018</year>
          ),
          <fpage>394</fpage>
          -
          <lpage>424</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Deng-Ping</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ge-Peng</given-names>
            <surname>Ji</surname>
          </string-name>
          , Tao Zhou, Geng Chen, Huazhu Fu,
          <string-name>
            <given-names>Jianbing</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and Ling</given-names>
            <surname>Shao</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Pranet: Parallel reverse attention network for polyp segmentation</article-title>
          . arXiv preprint arXiv:
          <year>2006</year>
          .
          <volume>11392</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Yunbo</given-names>
            <surname>Guo</surname>
          </string-name>
          , Jorge Bernal, and
          <string-name>
            <surname>Bogdan</surname>
          </string-name>
          J Matuszewski.
          <year>2020</year>
          .
          <article-title>Polyp Segmentation with Fully Convolutional Deep Neural Networks-Extended Evaluation Study</article-title>
          .
          <source>Journal of Imaging 6</source>
          ,
          <issue>7</issue>
          (
          <year>2020</year>
          ),
          <fpage>69</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Yun Bo</given-names>
            <surname>Guo</surname>
          </string-name>
          and
          <string-name>
            <given-names>Bogdan</given-names>
            <surname>Matuszewski</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>GIANA Polyp Segmentation with Fully Convolutional Dilation Neural Networks</article-title>
          .
          <source>In Proc. of International Joint Conference on Computer Vision</source>
          , Imaging and
          <source>Computer Graphics Theory and Applications</source>
          .
          <volume>632</volume>
          -
          <fpage>641</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Steven</given-names>
            <surname>Hicks</surname>
          </string-name>
          , Michael Riegler, Pia Smedsrud, Trine B Haugen, Kristin Ranheim Randel, Konstantin Pogorelov, Håkon Kvale Stensland,
          <string-name>
            <surname>Duc-Tien</surname>
            Dang-Nguyen, Mathias Lux,
            <given-names>Andreas</given-names>
          </string-name>
          <string-name>
            <surname>Petlund</surname>
          </string-name>
          , et al.
          <year>2019</year>
          .
          <article-title>Acm multimedia biomedia 2019 grand challenge overview</article-title>
          .
          <source>In Proc. of the ACM International Conference on Multimedia. 2563-2567.</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Debesh</given-names>
            <surname>Jha</surname>
          </string-name>
          , Sharib Ali, Krister Emanuelsen, Steven Hicks, Vajira Thambawita,
          <string-name>
            <surname>Riegler Michael A Garcia-Ceja</surname>
            , Enrique, Lange Thomas de, Peter T. Schmidt, Johansen Håvard, Dag Johansen, and
            <given-names>Halvorsen</given-names>
          </string-name>
          <string-name>
            <surname>Pål</surname>
          </string-name>
          .
          <year>2021</year>
          .
          <article-title>Kvasir-Instrument: Diagnostic and Therapeutictool Segmentation Dataset in Gastrointestinal Endoscopy</article-title>
          .
          <source>In Proc. of International Conference on Multimedia Modeling.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Debesh</given-names>
            <surname>Jha</surname>
          </string-name>
          , Sharib Ali,
          <string-name>
            <given-names>Håvard D.</given-names>
            <surname>Johansen</surname>
          </string-name>
          , Dag Johansen, Jens Rittscher,
          <string-name>
            <given-names>Michael A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Pål</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Real-Time Polyp Detection, Localisation and Segmentation in Colonoscopy Using Deep Learning</article-title>
          . arXiv preprint arXiv:
          <year>2006</year>
          .
          <volume>11392</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Debesh</given-names>
            <surname>Jha</surname>
          </string-name>
          , Michael A Riegler, Dag Johansen, Pål Halvorsen, and
          <string-name>
            <given-names>Håvard D</given-names>
            <surname>Johansen</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation</article-title>
          .
          <source>In Proc. of International Symposium on Computer-Based Medical Systems</source>
          .
          <volume>558</volume>
          -
          <fpage>564</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Debesh</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <surname>Pia H Smedsrud</surname>
          </string-name>
          ,
          <article-title>Michael A Riegler, Pål Halvorsen</article-title>
          , Thomas de Lange, Dag Johansen, and
          <string-name>
            <given-names>Håvard D</given-names>
            <surname>Johansen</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Kvasir-SEG: A segmented polyp dataset</article-title>
          .
          <source>In Proc. of International Conference on Multimedia Modeling</source>
          .
          <fpage>451</fpage>
          -
          <lpage>462</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Debesh</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <surname>Pia H Smedsrud</surname>
          </string-name>
          ,
          <article-title>Michael A Riegler, Dag Johansen</article-title>
          , Thomas De Lange, Pål Halvorsen, and
          <string-name>
            <given-names>Håvard D</given-names>
            <surname>Johansen</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>ResUNet++: An Advanced Architecture for Medical Image Segmentation</article-title>
          .
          <source>In Proc. of International Symposium on Multimedia</source>
          .
          <volume>225</volume>
          -
          <fpage>230</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Michal F</given-names>
            <surname>Kaminski</surname>
          </string-name>
          ,
          <string-name>
            <surname>Jaroslaw Regula</surname>
            , Ewa Kraszewska, Marcin Polkowski, Urszula Wojciechowska, Joanna Didkowska, Maria Zwierko, Maciej Rupinski, Marek P Nowacki, and
            <given-names>Eugeniusz</given-names>
          </string-name>
          <string-name>
            <surname>Butruk</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Quality indicators for colonoscopy and the risk of interval cancer</article-title>
          .
          <source>New England Journal of Medicine</source>
          <volume>362</volume>
          ,
          <issue>19</issue>
          (
          <year>2010</year>
          ),
          <fpage>1795</fpage>
          -
          <lpage>1803</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Nadim</given-names>
            <surname>Mahmud</surname>
          </string-name>
          , Jonah Cohen, Kleovoulos Tsourides, and
          <string-name>
            <surname>Tyler M Berzin</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Computer vision and augmented reality in gastrointestinal endoscopy</article-title>
          .
          <source>Gastroenterology report 3</source>
          ,
          <issue>3</issue>
          (
          <year>2015</year>
          ),
          <fpage>179</fpage>
          -
          <lpage>184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Tanvir</given-names>
            <surname>Mahmud</surname>
          </string-name>
          , Bishmoy Paul, and Shaikh Anowarul Fattah.
          <year>2020</year>
          .
          <article-title>PolypSegNet: A Modified Encoder-Decoder Architecture for Automated Polyp Segmentation from Colonoscopy Images</article-title>
          .
          <source>Computers in Biology and Medicine</source>
          (
          <year>2020</year>
          ),
          <fpage>104119</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Konstantin</given-names>
            <surname>Pogorelov</surname>
          </string-name>
          , Kristin Ranheim Randel, Thomas de Lange, Sigrun Losada Eskeland, Carsten Griwodz, Dag Johansen, Concetto Spampinato, Mario Taschwer, Mathias Lux,
          <string-name>
            <surname>Peter Thelin Schmidt</surname>
          </string-name>
          , et al.
          <year>2017</year>
          .
          <article-title>Nerthus: A Bowel Preparation Quality Video Dataset</article-title>
          .
          <source>In Proceedings of the ACM on Multimedia Systems Conference</source>
          .
          <volume>170</volume>
          -
          <fpage>174</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Konstantin</given-names>
            <surname>Pogorelov</surname>
          </string-name>
          , Kristin Ranheim Randel, Carsten Griwodz, Sigrun Losada Eskeland, Thomas de Lange, Dag Johansen, Concetto Spampinato,
          <string-name>
            <surname>Duc-Tien</surname>
          </string-name>
          Dang-Nguyen, Mathias Lux,
          <string-name>
            <surname>Peter Thelin Schmidt</surname>
          </string-name>
          , et al.
          <year>2017</year>
          .
          <article-title>Kvasir: A multiclass image dataset for computer aided gastrointestinal disease detection</article-title>
          .
          <source>In Proc. of the ACM on Multimedia Systems Conference</source>
          .
          <volume>164</volume>
          -
          <fpage>169</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Konstantin</given-names>
            <surname>Pogorelov</surname>
          </string-name>
          , Michael Riegler, Pål Halvorsen, Steven Hicks, Kristin Ranheim Randel, Duc Tien Dang Nguyen, Mathias Lux, Olga Ostroukhova, and Thomas de Lange.
          <year>2018</year>
          .
          <article-title>Medico multimedia task at MediaEval 2018</article-title>
          .
          <source>In Proc. of MediaEval 2018 CEUR Workshop.</source>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Riegler</surname>
          </string-name>
          , Konstantin Pogorelov, Pål Halvorsen, Carsten Griwodz, Thomas Lange, Kristin Randel, Sigrun Eskeland, Duc Tien Dang Nguyen, Mathias Lux, and
          <string-name>
            <given-names>Concetto</given-names>
            <surname>Spampinato</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Multimedia for medicine: the medico task at Mediaeval 2017</article-title>
          .
          <source>In Proc. CEUR Worksh. Multim. Bench. Worksh.</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Pia H</given-names>
            <surname>Smedsrud</surname>
          </string-name>
          , Henrik L Gjestang, Oda O Nedrejord, Espen Naess, Vajira Thambawita, Steven Hicks, Hanna Borgli, Debesh Jha, Tor Jan Derek Berstad,
          <string-name>
            <surname>Sigrun L Eskeland</surname>
          </string-name>
          , et al.
          <year>2020</year>
          .
          <article-title>Kvasir-Capsule, a video capsule endoscopy dataset</article-title>
          . (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Pu</given-names>
            <surname>Wang</surname>
          </string-name>
          , Xiao Xiao,
          <string-name>
            <surname>Jeremy R Glissen Brown</surname>
            , Tyler M Berzin,
            <given-names>Mengtian</given-names>
          </string-name>
          <string-name>
            <surname>Tu</surname>
            , Fei Xiong, Xiao Hu, Peixi Liu, Yan Song,
            <given-names>Di</given-names>
          </string-name>
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          , et al.
          <year>2018</year>
          .
          <article-title>Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy</article-title>
          .
          <source>Nature biomedical engineering 2</source>
          ,
          <issue>10</issue>
          (
          <year>2018</year>
          ),
          <fpage>741</fpage>
          -
          <lpage>748</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>