<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Nice, France; gorkem.polat@metu.edu.tr (G. Polat); eceisik@metu.edu.tr (E. Isik-Polat); kerem.kayabay@metu.edu.tr (K. Kayabay); atemizel@metu.edu.tr (A. Temizel)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Polyp Detection in Colonoscopy Images using Deep Learning and Bootstrap Aggregation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gorkem Polat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ece Isik-Polat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kerem Kayabay</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alptekin Temizel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Graduate School of Informatics, Middle East Technical University</institution>
          ,
          <addr-line>Ankara</addr-line>
          ,
          <country country="TR">Turkey</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Neuroscience and Neurotechnology Center of Excellence</institution>
          ,
          <addr-line>Ankara</addr-line>
          ,
          <country country="TR">Turkey</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>Computer-aided polyp detection plays an increasingly important role in the colonoscopy procedure. Although many methods have been proposed to tackle the polyp detection problem, their out-of-distribution test results, which are an important indicator of clinical readiness, are not demonstrated. In this study, we propose an ensemble-based polyp detection pipeline for detecting polyps in colonoscopy images. We train various models from the EfficientDet family on both the EndoCV2021 and the Kvasir-SEG datasets, and evaluate their performance on these datasets in both in-distribution and out-of-distribution settings. The proposed architecture works in near real-time, owing to the efficiency of the EfficientDet architectures, even when used in an ensemble setting.</p>
      </abstract>
      <kwd-group>
        <kwd>Polyp detection</kwd>
        <kwd>Colonoscopy</kwd>
        <kwd>Medical image processing</kwd>
        <kwd>Deep Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recent studies show that colorectal cancer is the third most common cancer type, with the
second-highest mortality rate [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. It is estimated that there were more than 1.9 million new diagnoses
of colorectal cancer and 935,000 deaths in 2020. It accounts for approximately one-tenth of
all cancer cases and deaths [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Colonoscopy is the most common screening test for
early-stage detection of colorectal cancer and for the removal of polyps and adenomas,
potentially reducing mortality [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ]. However, effective polyp detection depends on the
gastroenterologist’s practical skills, and it is reported that approximately 20% of polyps may be
missed [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Therefore, automatic and accurate detection of polyps is an essential aid to medical
practice [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>
        Although there are many studies utilizing advanced machine learning methods for polyp
detection, it has been shown that deep learning models can overfit, resulting in institutional
biases [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. While these models show good performance on test data from the same
institution, they may not generalize well to external data from different institutions. In this work, we
propose an ensemble-based polyp detection architecture for detecting polyps in colonoscopy
images. We evaluate the detection performance on both the EndoCV2021 and Kvasir-SEG datasets.
The results show that the proposed architecture improves the detection performance compared
to the baseline methods. In addition, it has favorable out-of-distribution test results and near
real-time processing speed.
      </p>
      <p>The rest of the paper is organized as follows. The related work in the literature is reviewed in
Section 2. In Section 3, the proposed approach and the details of the methodology are presented.
The experimental design and training details are given in Section 4. The experiment results and
discussion are given in Section 5. Finally, the conclusion of this study is given in Section 6.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Automatic detection of polyps is an extensively studied subject. Earlier studies
mainly utilize classical machine learning methods and focus on the shape, texture, or color
properties of polyps [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ]. The method proposed in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] uses an ensemble
architecture combining the results of convolutional neural network (CNN) models specialized for
each polyp feature (color, texture, shape, and temporal information). Various approaches and
methodologies, proposed in the context of the Automatic Polyp Detection Challenge [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], based
on handcrafted features, end-to-end learning using CNNs, and their combinations are reported
in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], a region-based CNN (i.e., Inception ResNet) is trained on both colonoscopy
images and videos for the polyp detection task. Also, data augmentation methods such as
rotating, scaling, shearing, blurring, and brightening were applied to increase the number of
training samples. In [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], a multi-threaded deep learning algorithm that uses a smaller receptive
field focusing on local features has been proposed for polyp detection using a large amount of
data with varying morphology. In [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], various methods that use deep learning and hand-crafted
global features have been proposed to perform pixel-based, frame-based, and block-based
segmentation and detection. In [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], state-of-the-art deep learning methods for polyp detection,
localization, and segmentation were compared on the basis of both accuracy and speed with
extensive experiments on the Kvasir-SEG [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] dataset and a new method, ColonSegNet, was
proposed. This method is based on an encoder-decoder architecture for segmentation and
provides promising performance results in real-time, in comparison to the state-of-the-art
methods.
      </p>
      <p>Studies in the literature generally focus on single-model performance evaluated on a dataset from the same
distribution. In this study, we propose a bagging-type ensemble-based method and
evaluate the proposed architecture on datasets coming from different institutions to investigate
its generalizability.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Method</title>
      <sec id="sec-3-1">
        <title>3.1. Base Models</title>
        <p>
          In this study, we use the EfficientDet [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] model family, due to its prominent speed and detection
performance. EfficientDet models use EfficientNet [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] as the backbone, a weighted bi-directional
feature pyramid network (BiFPN) for multiscale feature fusion, and shared class/box prediction
networks for bounding box classification and regression. The compound scaling method of
the model jointly scales the resolution, depth, and width of the backbone, feature network, and box/class
prediction networks. In this study, we experimented with the D0, D1, D2, and
D3 versions and also evaluated their ensemble using bagging, as described in the next section.
        </p>
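        <p>As an aside, the compound-scaling relations from the EfficientDet paper [21] can be sketched as follows. The coefficients (input size 512 + 128·φ, BiFPN width 64·1.35^φ, BiFPN depth 3 + φ) are taken from that paper, not from this study, and the published BiFPN widths are rounded there, so treat this as an indicative sketch only:</p>

```python
def efficientdet_scaling(phi):
    """Approximate compound-scaling relations for EfficientDet-D{phi}.

    Coefficients follow the EfficientDet paper [21]; the paper rounds
    the BiFPN widths, so these values are indicative, not exact.
    """
    return {
        "input_size": 512 + 128 * phi,           # D0: 512 ... D3: 896
        "bifpn_width": round(64 * 1.35 ** phi),  # channels, approximate
        "bifpn_depth": 3 + phi,                  # number of BiFPN layers
    }

# The four versions used in this study:
configs = {f"D{phi}": efficientdet_scaling(phi) for phi in range(4)}
```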
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Ensemble of Models</title>
        <p>
          Bootstrap aggregating (bagging) [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] is a type of ensemble method. The key idea of bagging
is to aggregate multiple versions of a predictor, each trained on a bootstrap replicate of
the training set. Although bootstrap samples are generally formed by randomly picking data
points with replacement, after reserving a part of the dataset for the test set, we split the rest
into four cross-validation folds so that the folds are as different from each other as possible. Each
model is trained on a different fold, using 75% of the samples in the fold for training and
25% for validation. When individual models are trained on a fixed training-validation split, the
validation set is not seen by any model, which restricts each model to its respective training set.
In contrast, a bagging-style ensemble ensures that the combination of models trained on
different folds covers the whole training dataset.
        </p>
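        <p>A minimal sketch of the fold construction described above, assuming a standard four-fold arrangement in which each model validates on one fold and trains on the remaining three (yielding the 75%/25% split). The exact fold-generation procedure, which aims to make folds as different from each other as possible, is not specified in the text, so a random permutation is assumed here:</p>

```python
import numpy as np

def bagging_cv_splits(n_samples, n_folds=4, seed=0):
    """Assign each model a train/validation split: model i validates
    on fold i and trains on the remaining folds (75%/25% when
    n_folds=4). Together, the models' training sets cover all data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    splits = []
    for i in range(n_folds):
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        splits.append({"train": train, "val": folds[i]})
    return splits
```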
        <p>
          Bounding boxes generated individually by each model are then merged using weighted
box fusion [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. Unlike previous fusion techniques used in this domain, which are generally based on keeping
the highest-confidence box and removing the others [
          <xref ref-type="bibr" rid="ref14 ref25">14, 25</xref>
          ], weighted box
fusion constructs an averaged box utilizing the confidence scores of the predicted boxes. In
the previous approaches (NMS [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], soft-NMS [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]), the highest-confidence box may not be
the best rectangle for the ground-truth object, and the quality of the fused box is significantly
improved with weighted box fusion [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. The overall architecture of the proposed approach
is given in Figure 1.
        </p>
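        <p>The fusion step can be illustrated with a simplified, greedy sketch of weighted box fusion [24]. The reference implementation differs in details (per-model weights, score rescaling), so this is indicative only:</p>

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def weighted_box_fusion(boxes, scores, iou_thr=0.55):
    """Merge overlapping boxes into confidence-weighted average boxes,
    instead of keeping only the highest-confidence box as in NMS."""
    order = np.argsort(scores)[::-1]   # highest confidence first
    clusters = []                      # each entry: [box list, score list]
    for i in order:
        for c in clusters:
            fused = np.average(c[0], axis=0, weights=c[1])
            if iou(fused, boxes[i]) > iou_thr:
                c[0].append(boxes[i])
                c[1].append(scores[i])
                break
        else:
            clusters.append([[boxes[i]], [scores[i]]])
    fused_boxes = np.array([np.average(c[0], axis=0, weights=c[1]) for c in clusters])
    fused_scores = np.array([np.mean(c[1]) for c in clusters])
    return fused_boxes, fused_scores
```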
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Design</title>
      <p>In order to evaluate the performance of the individual models and compare them with the
proposed bagging ensemble architecture, we designed two sets of experiments. First, the models
are trained on the EndoCV2021 dataset and results are reported on both the EndoCV2021 test
set, samples of which are from the same distribution, and Kvasir-SEG, samples of which are
from another institution. Then the same models are trained on the Kvasir-SEG and results are
reported on the Kvasir-SEG test set as well as the whole EndoCV2021.</p>
      <p>
        The EndoCV2021 dataset [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] consists of 1449 images from 5 different centers (center-1:
256, center-2: 301, center-3: 457, center-4: 227, center-5: 208). The resolution of the images
varies between 572 × 498 and 1920 × 1080. In addition, the organizers provided 1793 frames from
15 different video segments. Since subsequent frames are very similar, we sampled only 75
of the 1793 frames to prevent bias towards any video segment due to over-representation.
In total, 1524 images were split into training, validation, and test sets as 70%, 15%, and 15%,
respectively. The validation set is used for hyperparameter tuning and the results are reported
using the withheld test set. An ensemble of the models trained on the fixed training set is also evaluated
for comparison with the bagging ensemble. For the bagging ensemble, the training and validation
sets, which correspond to 85% of the overall dataset, are combined and cross-validation folds
are formed within it. The Kvasir-SEG dataset contains 1000 polyp images and their corresponding
ground-truth. The resolution of the images varies between 332 × 487 and 1920 × 1072. All
1000 images are used as an out-of-distribution test set. Results are given in Table 1.
      </p>
      <p>In addition, we compared the proposed architecture’s performance by training on the
Kvasir-SEG dataset, which was split into training and test sets as 80% and 20%, respectively. The
training set was then split into four cross-validation folds for the training of the individual
models. The proposed architecture was evaluated on the folds in the same way as
for the EndoCV2021 dataset. The performances of the two bagging ensembles, trained
on the Kvasir-SEG and EndoCV2021 datasets respectively, on the Kvasir-SEG test set are given in Table 2.
We also evaluated the performance of the single EfficientDet models and the bagging ensemble
model trained on the Kvasir-SEG dataset on the whole EndoCV2021 dataset to compare
cross-dataset performance.</p>
      <p>To increase data variance and improve generalization performance, the following
data augmentation techniques were used: scale jittering (0.2, 2.0), horizontal flipping,
and rotation (0°-360°). During the experiments, it was observed that brightness and contrast
augmentations have a detrimental effect on performance; hence, they were excluded.
Since the original images have varying resolutions, the maximum dimension of each image (width
or height) is rescaled to the input size of the respective EfficientDet version; the smaller
dimension is then rescaled at the same rate to preserve the aspect ratio. Since EfficientDet models
require square inputs, zero padding is applied.</p>
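      <p>The rescaling-and-padding step can be sketched as follows. Nearest-neighbour resizing is used here for self-containment; the interpolation method actually used is not stated in the text:</p>

```python
import numpy as np

def resize_pad_square(img, target):
    """Rescale so the longer side equals `target` (preserving aspect
    ratio), then zero-pad to a square `target` x `target` input.
    Nearest-neighbour resizing is used here for brevity."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    new_h = int(round(h * scale))
    new_w = int(round(w * scale))
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    resized = img[rows][:, cols]           # nearest-neighbour lookup
    out = np.zeros((target, target) + img.shape[2:], dtype=img.dtype)
    out[:new_h, :new_w] = resized          # zero padding fills the rest
    return out
```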
      <p>
          Models were optimized with the Adam optimizer [
        <xref ref-type="bibr" rid="ref29">29</xref>
          ]. Learning rate scheduling was
used, decreasing the learning rate by a factor of 0.2 whenever the validation loss did not
decrease for ten consecutive epochs. Early stopping was used to terminate training
when there was no decrease in validation loss for 25 consecutive epochs. 2× NVIDIA RTX
2080 GPUs were used for training the D0 and D1 models, and 4× NVIDIA V100 16GB GPUs
were used for training the D2 and D3 models. The input size of each model and its inference
speed, in frames per second (FPS) on an NVIDIA RTX 2080 GPU, are given in Table 3.
The processing time per frame is calculated as the sum of data transfer time from CPU to GPU,
forward-pass processing through the model, and applying post-processing steps (confidence
thresholding and NMS operation). Weighted box fusion adds 0.012 seconds of processing time on
average. Processing speeds were calculated by averaging over 1000 images.
      </p>
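      <p>The learning-rate schedule and early-stopping logic can be sketched framework-independently; this hypothetical helper mirrors the stated settings (factor 0.2, scheduler patience 10, stopping patience 25) and is not the authors' code:</p>

```python
class TrainingController:
    """Reduce the learning rate by a factor of 0.2 after every 10 epochs
    without validation-loss improvement; stop after 25 such epochs."""

    def __init__(self, lr, factor=0.2, lr_patience=10, stop_patience=25):
        self.lr = lr
        self.factor = factor
        self.lr_patience = lr_patience
        self.stop_patience = stop_patience
        self.best = float("inf")
        self.since_best = 0

    def step(self, val_loss):
        # Returns False once early stopping should terminate training.
        if self.best - val_loss > 0:          # validation loss improved
            self.best = val_loss
            self.since_best = 0
        else:
            self.since_best += 1
            if self.since_best % self.lr_patience == 0:
                self.lr *= self.factor        # decay the learning rate
        return self.stop_patience - self.since_best > 0
```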
    </sec>
    <sec id="sec-5">
      <title>5. Experimental Results &amp; Discussion</title>
      <p>The experimental results in Table 1 show that the performance of the EfficientDet models increases
in line with their scale, as expected. Fusion of different models trained on a fixed
training-validation split through weighted box fusion results in increased performance. The
best overall detection performance on both test sets is obtained when the proposed bagging
ensemble approach is used. A similar trend is observed when the proposed approach is tested
on the Kvasir-SEG dataset (Table 2). In addition, the bagging ensemble architecture trained on
the EndoCV2021 dataset achieves very competitive accuracy on the Kvasir-SEG test set. When
the EfficientDet and bagging ensemble models trained on the Kvasir-SEG dataset are
tested on the EndoCV2021 dataset, the bagging ensemble achieves the best detection performance.
This shows that, in addition to improving performance on the same-distribution test set,
the bagging ensemble approach also improves detection performance on an out-of-distribution
test set.</p>
      <p>
        The experimental results in Table 1 and Table 2 show that the bagging ensemble approach improves
the detection performance on both datasets. In this approach, different bootstrap samples create
perturbations for unstable models and improve ensemble performance in comparison to
the fixed-training-set ensemble [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Moreover, in the case of a fixed training-validation split for
the ensemble, a significant part of the dataset is held out as the validation set, limiting the
number of samples that can be used for training. A bagging-style ensemble ensures that the fusion
of different models, each trained on its respective training set, covers the whole dataset available
for training. We cross-dataset validated the proposed approach by training on EndoCV2021
and testing on Kvasir-SEG, and vice versa. The results show that the bagging-type ensemble improves
the baseline results on out-of-distribution data as well (mAP of 0.603 vs. mAP of 0.621 for training
on EndoCV2021 and testing on Kvasir-SEG, and mAP of 0.411 vs. mAP of 0.446 for training on
Kvasir-SEG and testing on EndoCV2021). A significant observation in the cross-dataset validation
on out-of-distribution samples is that training on the EndoCV2021 dataset yields better
generalization performance than training on Kvasir-SEG (mAP of 0.611 vs. mAP
of 0.446). Two possible reasons for this result are training set size (1191 in EndoCV2021 vs.
800 in Kvasir-SEG) and dataset heterogeneity. Images in the EndoCV2021 dataset come from 5
different centers, which makes it more diverse than the Kvasir-SEG dataset,
whose samples are from a single institution. This result highlights the importance of sample
heterogeneity for out-of-distribution performance. The EndoCV2021 leaderboard also
incorporated generalisation metrics similar to the detection generalisation defined in [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] to
quantify the performance gaps.
      </p>
      <p>
        An analysis of the detection results reveals that many glare artifacts are falsely detected as
polyps (Figure 2 (b) and (c)). A separate artifact detector designed to work on endoscopic images
[
        <xref ref-type="bibr" rid="ref25 ref31">25, 31</xref>
        ] may be incorporated into the system to reduce false positives and improve overall
detection performance.
      </p>
      <p>
        A drawback of weighted box fusion is that it keeps all the bounding boxes on the image,
similar to the affirmative-type ensemble used in [
        <xref ref-type="bibr" rid="ref25">25</xref>
          ]. This type of ensemble tends to increase
false positives; therefore, a consensus-type ensemble, which keeps a bounding box only
when the majority of the models agree, can be incorporated into weighted box fusion as future
work.
      </p>
      <p>
        With the advancements in hardware and software systems, parallel processing systems have
become commonplace for real-time video processing applications [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. If the individual models
are run in parallel, the inference time is upper-bounded by the slowest model,
EfficientDet-D3. Adding the weighted box fusion processing time on top of the inference time,
the total processing time is 65.2 milliseconds per frame, corresponding to 15.4 FPS,
making the system feasible for near real-time processing.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this study, we have used a bagging-style ensemble method in combination with different
polyp detection networks. We have shown that the proposed architecture increases the
overall detection performance on both same-distribution and out-of-distribution test
sets, which is important for assessing the readiness of these methods for clinical use. While the
proposed approach has been validated on one of the largest publicly available polyp datasets,
it should be tested on larger and more diverse datasets for a more comprehensive evaluation. The
proposed method can be extended in the future by integrating an artefact detector, and a
consensus-type ensemble may be used to reduce the number of false positives. In addition, the
use of larger-scale EfficientDet models could be investigated, which could potentially provide
better detection performance in exchange for higher computational cost.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work has been supported by Middle East Technical University Scientific Research Projects
Coordination Unit under grant number GAP-704-2020-10071. The numerical calculations
reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and
Grid Computing Center (TRUBA resources).</p>
      <p>[30] …of detection and segmentation algorithms for artefacts in clinical endoscopy, Scientific
Reports 10 (2020) 2748. doi:10.1038/s41598-020-59413-5.</p>
      <p>[31] S. Ali, M. Dmitrieva, N. Ghatwary, S. Bano, G. Polat, A. Temizel, A. Krenzer, A. Hekalo,
Y. B. Guo, B. Matuszewski, M. Gridach, I. Voiculescu, V. Yoganand, A. Chavan, A. Raj,
N. T. Nguyen, D. Q. Tran, L. D. Huynh, N. Boutry, S. Rezvy, H. Chen, Y. H. Choi,
A. Subramanian, V. Balasubramanian, X. W. Gao, H. Hu, Y. Liao, D. Stoyanov, C. Daul,
S. Realdon, R. Cannizzaro, D. Lamarque, T. Tran-Nguyen, A. Bailey, B. Braden, J. E.
East, J. Rittscher, Deep learning for detection and segmentation of artefact and
disease instances in gastrointestinal endoscopy, Medical Image Analysis 70 (2021) 102002.
doi:10.1016/j.media.2021.102002.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Parkin</surname>
          </string-name>
          ,
          <article-title>Global cancer statistics in the year 2000</article-title>
          ,
          <source>The Lancet Oncology</source>
          <volume>2</volume>
          (
          <year>2001</year>
          )
          <fpage>533</fpage>
          -
          <lpage>543</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>WHO</surname>
          </string-name>
          ,
          <article-title>Cancer</article-title>
          ,
          <year>2021</year>
          . URL: https://www.who.int/news-room/fact-sheets/detail/cancer.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Winawer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Zauber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Ho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>O'Brien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Gottlieb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Sternberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Waye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schapiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Bond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Panish</surname>
          </string-name>
          , et al.,
          <article-title>Prevention of colorectal cancer by colonoscopic polypectomy</article-title>
          ,
          <source>New England Journal of Medicine</source>
          <volume>329</volume>
          (
          <year>1993</year>
          )
          <fpage>1977</fpage>
          -
          <lpage>1981</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Zauber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Winawer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>O'Brien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lansdorp-Vogelaar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>van Ballegooijen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Hankey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Bond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schapiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Panish</surname>
          </string-name>
          , et al.,
          <article-title>Colonoscopic polypectomy and long-term prevention of colorectal-cancer deaths</article-title>
          ,
          <source>New England Journal of Medicine</source>
          <volume>366</volume>
          (
          <year>2012</year>
          )
          <fpage>687</fpage>
          -
          <lpage>696</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Lieberman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Rex</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Winawer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Giardiello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Levin</surname>
          </string-name>
          ,
          <article-title>Guidelines for colonoscopy surveillance after screening and polypectomy: a consensus update by the US Multi-Society Task Force on colorectal cancer</article-title>
          ,
          <source>Gastroenterology</source>
          <volume>143</volume>
          (
          <year>2012</year>
          )
          <fpage>844</fpage>
          -
          <lpage>857</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Kaminski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Regula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kraszewska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Polkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Wojciechowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Didkowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zwierko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rupinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Nowacki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Butruk</surname>
          </string-name>
          ,
          <article-title>Quality indicators for colonoscopy and the risk of interval cancer</article-title>
          ,
          <source>New England Journal of Medicine</source>
          <volume>362</volume>
          (
          <year>2010</year>
          )
          <fpage>1795</fpage>
          -
          <lpage>1803</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Iwahori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hattori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Adachi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Bhuyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Woodham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kasugai</surname>
          </string-name>
          ,
          <article-title>Automatic detection of polyp using hessian filter and hog features</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>60</volume>
          (
          <year>2015</year>
          )
          <fpage>730</fpage>
          -
          <lpage>739</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Zech</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Badgeley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Titano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. K.</given-names>
            <surname>Oermann</surname>
          </string-name>
          ,
          <article-title>Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study</article-title>
          ,
          <source>PLoS medicine 15</source>
          (
          <year>2018</year>
          )
          <fpage>e1002683</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Tavanapong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>De Groen</surname>
          </string-name>
          ,
          <article-title>Polyp detection in colonoscopy video using elliptical shape feature</article-title>
          ,
          <source>in: 2007 IEEE International Conference on Image Processing</source>
          , volume
          <volume>2</volume>
          ,
          <year>2007</year>
          , pp.
          <fpage>II-465</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Karkanis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Iakovidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Maroulis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tzivras</surname>
          </string-name>
          ,
          <article-title>Computer-aided tumor detection in endoscopic video using color wavelet features</article-title>
          ,
          <source>IEEE transactions on information technology in biomedicine 7</source>
          (
          <year>2003</year>
          )
          <fpage>141</fpage>
          -
          <lpage>152</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Mamonov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. N.</given-names>
            <surname>Figueiredo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Figueiredo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-H. R.</given-names>
            <surname>Tsai</surname>
          </string-name>
          ,
          <article-title>Automated polyp detection in colon capsule endoscopy</article-title>
          ,
          <source>IEEE transactions on medical imaging 33</source>
          (
          <year>2014</year>
          )
          <fpage>1488</fpage>
          -
          <lpage>1502</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ameling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wirth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Paulus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lacey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vilarino</surname>
          </string-name>
          ,
          <article-title>Texture-based polyp detection in colonoscopy</article-title>
          ,
          <source>in: Bildverarbeitung für die Medizin 2009</source>
          , Springer,
          <year>2009</year>
          , pp.
          <fpage>346</fpage>
          -
          <lpage>350</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tajbakhsh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Gurudu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <article-title>Automatic polyp detection in colonoscopy videos using an ensemble of convolutional neural networks</article-title>
          ,
          <source>in: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI)</source>
          , IEEE,
          <year>2015</year>
          , pp.
          <fpage>79</fpage>
          -
          <lpage>83</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <source>Automatic polyp detection challenge</source>
          ,
          <year>2015</year>
          . URL: https://endovis.grand-challenge.org/.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bernal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tajkbaksh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Matuszewski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Angermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Romain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rustad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Balasingham</surname>
          </string-name>
          , et al.,
          <article-title>Comparative validation of polyp detection methods in video colonoscopy: results from the MICCAI 2015 endoscopic vision challenge</article-title>
          ,
          <source>IEEE transactions on medical imaging 36</source>
          (
          <year>2017</year>
          )
          <fpage>1231</fpage>
          -
          <lpage>1249</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Qadir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Aabakken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bergsland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Balasingham</surname>
          </string-name>
          ,
          <article-title>Automatic colon polyp detection using region based deep cnn and post learning approaches</article-title>
          ,
          <source>IEEE Access 6</source>
          (
          <year>2018</year>
          )
          <fpage>40950</fpage>
          -
          <lpage>40962</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R. G.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Berzin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <article-title>Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy</article-title>
          ,
          <source>Nature biomedical engineering 2</source>
          (
          <year>2018</year>
          )
          <fpage>741</fpage>
          -
          <lpage>748</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K.</given-names>
            <surname>Pogorelov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Ostroukhova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jeppsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Espeland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Griwodz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>de Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <article-title>Deep learning and hand-crafted feature based approaches for polyp detection in medical videos</article-title>
          ,
          <source>in: 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>381</fpage>
          -
          <lpage>386</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. D.</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rittscher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <article-title>Real-time polyp detection, localisation and segmentation in colonoscopy using deep learning</article-title>
          ,
          <source>arXiv preprint arXiv:2011.07631</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. H.</given-names>
            <surname>Smedsrud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>de Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. D.</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <article-title>Kvasir-seg: A segmented polyp dataset</article-title>
          ,
          <source>in: International Conference on Multimedia Modeling</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>451</fpage>
          -
          <lpage>462</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Efficientdet: Scalable and efficient object detection</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>10781</fpage>
          -
          <lpage>10790</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Efficientnet: Rethinking model scaling for convolutional neural networks</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6105</fpage>
          -
          <lpage>6114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>L.</given-names>
            <surname>Breiman</surname>
          </string-name>
          ,
          <article-title>Bagging predictors</article-title>
          ,
          <source>Machine learning 24</source>
          (
          <year>1996</year>
          )
          <fpage>123</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Solovyev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gabruseva</surname>
          </string-name>
          ,
          <article-title>Weighted boxes fusion: Ensembling boxes from different object detection models</article-title>
          ,
          <source>Image and Vision Computing</source>
          <volume>107</volume>
          (
          <year>2021</year>
          )
          <fpage>104117</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>G.</given-names>
            <surname>Polat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Inci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Temizel</surname>
          </string-name>
          ,
          <article-title>Endoscopic artefact detection with ensemble of deep neural networks and false positive elimination</article-title>
          ,
          <source>in: Proc. International Workshop and Challenge on Computer Vision in Endoscopy (EndoCV2020) in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI2020)</source>
          , volume
          <volume>2595</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Neubeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Van Gool</surname>
          </string-name>
          ,
          <article-title>Efficient non-maximum suppression</article-title>
          ,
          <source>in: 18th International Conference on Pattern Recognition (ICPR'06)</source>
          , volume
          <volume>3</volume>
          , IEEE,
          <year>2006</year>
          , pp.
          <fpage>850</fpage>
          -
          <lpage>855</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>N.</given-names>
            <surname>Bodla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chellappa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>Soft-nms: improving object detection with one line of code</article-title>
          ,
          <source>in: Proceedings of the IEEE international conference on computer vision</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>5561</fpage>
          -
          <lpage>5569</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ghatwary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Realdon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cannizzaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Daul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rittscher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. E.</given-names>
            <surname>Salem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lamarque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>de Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>East</surname>
          </string-name>
          ,
          <article-title>Polypgen: A multi-center polyp detection and segmentation dataset for generalisability assessment</article-title>
          ,
          <source>arXiv</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Kingma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ba</surname>
          </string-name>
          ,
          <article-title>Adam: A method for stochastic optimization</article-title>
          ,
          <source>arXiv preprint arXiv:1412.6980</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Braden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bailey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kayser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Soberanis-Mukul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Albarqouni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Watanabe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Oksuz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Ning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. W.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Realdon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Loshchenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Schnabel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>East</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wagnieres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Loschenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Grisan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Daul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rittscher</surname>
          </string-name>
          , An objective comparison
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>