<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Overview of the ImageCLEFcoral 2020 Task: Automated Coral Reef Image Annotation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jon Chamberlain</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Campello</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jessica Wright</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Louis Clift</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Adrian Clark</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alba García Seco de Herrera</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science and Electronic Engineering, University of Essex</institution>
          ,
          <addr-line>Colchester</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Wellcome Trust</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents an overview of the ImageCLEFcoral 2020 task that was organised as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2020. The task addresses the problem of automatically segmenting and labelling a collection of underwater images that can be used in combination to create 3D models for the monitoring of coral reefs. The data set comprises 440 human-annotated training images, with 12,082 hand-annotated substrates, from a single geographical region. The test set comprises a further 400 test images, with 8,640 substrates annotated, from four geographical regions ranging in geographical similarity and ecological connectedness to the training data (100 images per subset). 15 teams registered, of which 4 teams submitted 53 runs. The majority of submissions used deep neural networks, generally convolutional ones. Participants' entries showed that some level of automatically annotating corals and benthic substrates was possible, despite this being a difficult task due to the variation of colour, texture and morphology between and within classification types.</p>
      </abstract>
      <kwd-group>
        <kwd>ImageCLEF</kwd>
        <kwd>image annotation</kwd>
        <kwd>image labelling</kwd>
        <kwd>classification</kwd>
        <kwd>segmentation</kwd>
        <kwd>coral reef image annotation</kwd>
        <kwd>marine image annotation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Coral reef systems are delicate natural environments, formed of highly complex
non-uniform structures that support the biodiversity found in tropical coral reefs.
Coral reefs also form a vital source of income and food for over 500 million
people, providing ecological goods and services such as food, coastal protection,
new biochemical compounds, and recreation with an estimated value of around
$352,000 ha<sup>-1</sup> y<sup>-1</sup> [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        However, there has been a steady decline in coral reefs in recent years [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Coral reefs are threatened by global stressors such as climate change and
subsequent extreme weather events, as well as by local anthropogenic threats such
as overfishing and destructive fishing, watershed pollution, and reef removal for
coastal development. Currently, more than 85% of the reefs within the Coral
Triangle region are at risk of disappearing [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>
        Coral reef community composition is an essential element for monitoring reef
health, and the importance of automated data collection, 3D analysis and
large-scale data processing is increasingly being recognised [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In 2017, Chamberlain
et al. at the University of Essex developed a novel multi-camera system to scale
up previous data capture approaches [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] by acquiring imagery from several
viewpoints simultaneously. Results showed that accurate data models were created
in a fraction of the time and complex structures were more accurately
reconstructed. The increasing use of large-scale modelling of environments has driven
the need to have such models labelled, with annotated data essential for machine
learning techniques to automatically identify areas of interest, assess community
composition and monitor phase shifts within functional groups.
      </p>
      <p>
        The composition of marine life on a coral reef varies globally. Within the
Coral Triangle, a region that encloses more than 86,500 km<sup>2</sup> of coral reef area
and includes the world's highest marine biodiversity, there are over 76% of all
coral species and more than 3,000 fish species [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The Western Indian Ocean,
and more specifically the Northern Mozambique Channel (NMC), is a centre
of high diversity for hard corals and reef fauna [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and forms an evolutionarily
distinct region within the Indian Ocean, though its diversity shows a high resemblance
to that found in the Coral Triangle region. Coral reef fauna from the
Caribbean, within the Atlantic Ocean, is strongly delineated from (and shows
low affinity with the biodiversity found in) the Indian Ocean [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>Geographically distinct regions can contain the same species or genera with
entirely different morphological features and traits. The variety in both
environmental conditions and competitive niche filling can lead to changes in phenotypic
expression, which makes the task of identifying them difficult without an
extensive training image set.</p>
      <p>
        As part of ImageCLEF 2019 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the ImageCLEFcoral task required
participants to automatically annotate and localise a collection of images with types
of benthic substrate, such as hard coral and sponge. The training set and test
sets contained images from the same coral reef [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>Participants' entries showed that some level of automatically annotating
corals and benthic substrates was possible, despite this being a difficult task
due to the variation of colour, texture and morphology between and within
classification types.</p>
      <p>
        This year, as part of ImageCLEF 2020 [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the volume of training data
was increased and there were four subsets of test data ranging in
geographical similarity and ecological connectedness to the training data. The intention
was not only to assess how accurately the images could be annotated, but also
how transferable the algorithms were between datasets collected from different
geographical regions with different community compositions.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Tasks</title>
      <p>The annotation task is different from other image classification and marine
substrate classification tasks [12-14]. Firstly, the images are collected using low-cost
action cameras (approx. $200 per camera) with a fixed lens, firing on
timelapse or extracted as stills from video. The effect of this on the imagery is that
there is some blurring, the colour balance is not always correct (as the
camera adjusts the white balance automatically based on changing environmental
variables) and final image quality is lower than what could be achieved using
high-end action cameras or DSLRs. However, the images can be used for
reconstructing a 3D model and therefore carry useful information for the pipeline. Low-cost
cameras were used to show this approach could be replicated affordably for
future projects.</p>
      <p>
        Following the success of the first edition of the ImageCLEFcoral task [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], in
2020 participants were again asked to devise and implement algorithms for
automatically annotating regions in a collection of images containing several types
of benthic substrate, such as hard coral or sponge. The images were captured
using an underwater multi-camera system developed at the Marine Technology
Research Unit (MTRU) at the University of Essex, UK (https://essexnlip.uk/marine-technology-research-unit/).
      </p>
      <p>
        The ground truth annotations of the training and test sets were made by
a combination of marine biology MSc students at the University of Essex and
experienced marine researchers. All annotations were double-checked by an
experienced coral reef researcher. The annotations were performed using a web-based
tool, initially developed in a collaborative project with London-based company
Filament Ltd and subsequently extended by one of the organisers. This tool was
designed to be simple to learn, quick to use and allows many people to work
concurrently (full details are presented in the ImageCLEFcoral 2019 overview
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]).
      </p>
      <p>The overall task comprises two subtasks:
- Subtask 1: Coral reef image annotation and localisation;
- Subtask 2: Coral reef image pixel-wise parsing.</p>
      <p>In the "coral reef image annotation and localisation" subtask, the annotation
is a bounding box, with sides parallel to the edges of the image, around each identified
feature. In the "coral reef image pixel-wise parsing" subtask, participants
submit a series of boundary image coordinates which form a single polygon around
each identified feature (these polygons should not have self-intersections).
Participants were invited to make submissions for either or both tasks.</p>
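      <p>To illustrate the polygon constraint, a minimal Python sketch (assuming the shapely library is available; the coordinate list is a hypothetical annotation, not the task's actual submission format) that checks a boundary for self-intersections:</p>
      <preformat>
from shapely.geometry import Polygon

# Hypothetical boundary: image coordinates of one annotated feature.
boundary = [(10, 10), (120, 15), (130, 90), (40, 110)]

poly = Polygon(boundary)
# A valid polygon has no self-intersections and encloses a positive area.
assert poly.is_valid, "polygon self-intersects or is degenerate"
print(poly.area)
      </preformat>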
      <p>As in the first edition, algorithmic performance is evaluated on the unseen test
data using the popular intersection over union metric from the PASCAL VOC
exercise (http://host.robots.ox.ac.uk/pascal/VOC/). This computes the area of intersection of the output of an algorithm
and the corresponding ground truth, normalising it by the area of their union
so that its maximum value is bounded.</p>
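      <p>A minimal Python sketch of this measure for axis-aligned bounding boxes, each given as an (x1, y1, x2, y2) tuple (the example coordinates are illustrative only):</p>
      <preformat>
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, approximately 0.143
      </preformat>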
    </sec>
    <sec id="sec-3">
      <title>Collection</title>
      <p>The data set comprises 440 human-annotated training images, with 12,082
substrates, from the Wakatobi Marine Reserve, Indonesia; these are the complete
training and test sets used in the ImageCLEFcoral 2019 task. The test set comprises
a further 400 test images (see Figure 1), with 8,640 substrates annotated, from
four geographical regions, 100 images per subset:
1. Wakatobi Marine Reserve, Indonesia: the same location as the training
images;
2. Spermonde archipelago, Indonesia: a location geographically similar to the
training set;
3. Seychelles, Indian Ocean: a geographically distinct but ecologically connected
coral reef;
4. Dominica, Caribbean: a geographically and ecologically distinct rocky reef.</p>
      <p>The images are part of a monitoring collection and therefore many have a
tape measure running through a portion of the image. As in 2019, the data
set comprises an area of underwater terrain. Many images contain the same
ground features captured from different viewpoints. Each image contains some
of the same thirteen types of benthic substrates as in 2019, namely: hard coral
(branching, submassive, boulder, encrusting, table, foliose, mushroom); soft coral;
gorgonian sea fan (soft coral); sponge; barrel sponge; fire coral (millepora); algae
(macro or leaves).</p>
      <p>The test set from the same area as the training set will give an indication as to
how well a submitted algorithm can localise and classify marine substrate, i.e.,
the maximum performance. We hypothesise that performance will deteriorate
with other test subsets as the composition, morphology and identifying features
of the substrate change and exhibit less similarity with the training data.
</p>
      <sec id="sec-3-1">
        <title>Collection Analysis</title>
        <p>An important consideration when testing across the datasets is that the benthic
composition differs between locations: different species and morphologies are
present, and the total coverage of benthic fauna
(represented by the total coverage of pixels in an image) varies.</p>
      <p>
        Analysis shows that the community distribution in the same-location
test dataset is similar to that of the training dataset, both in terms of structure and cover.
The similar-location test dataset shows a much higher distribution of hard corals
and a lower distribution of soft corals and sponge, with considerably higher
coverage, indicative of a healthy coral reef. The geographically distinct but
ecologically connected test set had a high distribution of hard corals in composition
and similar coverage, indicative of a recovering coral reef. The geographically
and ecologically distinct test set had a higher distribution of sponge and algae,
commonly found in Caribbean reefs that suffer human and environmental impacts,
and higher coverage, indicative of a phase shift away from hard coral towards a
sponge/algae dominated reef (see Table 1).
      </p>
      </sec>
    </sec>
    <sec id="sec-3b">
      <title>Evaluation</title>
      <p>
        The task was evaluated using the methodology of previous ImageCLEF
annotation tasks [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ], which follows a PASCAL-style metric of intersection over
union (IoU). We used the following two measures:
MAP 0.5 IoU: the localised Mean Average Precision (MAP) for each
submitted method, using the performance measure of IoU &gt;= 0.5 against the ground
truth;
MAP 0 IoU: the image annotation average for each method, in which the
concept is detected in the image without any localisation.
      </p>
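      <p>A sketch, under simplifying assumptions, of how the detections of one class in one image might be scored at IoU &gt;= 0.5 (greedy matching of score-ranked detections against unused ground-truth boxes, reusing the iou() helper sketched earlier; this is not the organisers' evaluation code):</p>
      <preformat>
def average_precision(detections, truths, iou_fn, thresh=0.5):
    """detections: list of (score, box); truths: list of ground-truth boxes."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched, hits = set(), []
    for score, box in detections:
        # Greedily match against the best-overlapping unused ground-truth box.
        overlaps = [(iou_fn(box, gt), j) for j, gt in enumerate(truths)
                    if j not in matched]
        best_iou, best_j = max(overlaps, default=(0.0, None))
        if best_j is not None and best_iou &gt;= thresh:
            matched.add(best_j)   # true positive
            hits.append(1)
        else:
            hits.append(0)        # false positive
    # Average the precision observed at the rank of each true positive.
    precisions, tp = [], 0
    for rank, hit in enumerate(hits, start=1):
        tp += hit
        if hit:
            precisions.append(tp / rank)
    return sum(precisions) / len(truths) if truths else 0.0
      </preformat>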
      <p>
        In addition, to further analyse the results per type of benthic substrate, the
accuracy per class measure was used [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], in which the segmentation accuracy
for a substrate was assessed using the number of correctly labelled pixels of that
substrate, divided by the number of pixels labelled with that class (in either the
ground truth labelling or the inferred labelling).
agreement per class = # true positives / (# false positives + # false negatives + # true positives)
      </p>
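      <p>A minimal numpy sketch of this per-class agreement, computed on integer label maps (illustrative only, not the organisers' implementation):</p>
      <preformat>
import numpy as np

def agreement_per_class(truth, pred, class_id):
    """truth, pred: integer label maps of identical shape."""
    t = truth == class_id
    p = pred == class_id
    tp = np.logical_and(t, p).sum()
    # Union of ground-truth and inferred pixels for the class = TP + FP + FN.
    union = np.logical_or(t, p).sum()
    return tp / union if union else 1.0
      </preformat>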
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <p>
In 2020, 15 teams registered for the second edition of the ImageCLEFcoral task.
Four individual teams submitted 53 runs. Table 2 gives an overview of all
participants and their runs. There was a limit of at most 10 runs per team and
subtask.
subtask.</p>
      <sec id="sec-4-1">
        <title>Subtask 1: Coral Reef Image Annotation and Localisation</title>
        <p>[...] performed well across multiple classes. The highest IoU score (0.512) was for the
soft coral class from FAV ZCU PiVa.</p>
      <p>Table 5 presents the pixel accuracy per location, per team, across classes for
Subtask 1. No individual team performed best across all classes. The highest
pixel accuracy scores were 0.5925 in the hard coral branching class from FHD
and 0.5116 in the soft coral class by FAV ZCU PiVa. Overall performance was best
with the same location test subset; however, the accuracy for hard coral branching
in the ecologically similar region was very good.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Subtask 2: Coral Reef Image Pixel-wise Parsing</title>
        <p>[...] scores were 0.545 for the soft coral class and 0.505 for the hard coral mushroom
class from FHD.</p>
      <p>Table 8 presents the pixel accuracy per location, per team, across classes for
Subtask 2. FHD performed highest in all classes except hard coral submassive.
The highest pixel accuracy scores were 0.718 in the hard coral branching class,
0.562 for the sponge barrel class, 0.547 for the hard coral boulder class and 0.556
for the soft coral class from FHD. Overall performance was best with the same
location test subset, with the exception of the hard coral branching class, which
was identified considerably more accurately within the ecologically similar test
set. This is a good indication that transfer learning may at least be possible in
some classes of substrate.
</p>
      <p>[Table: performance per class of all runs submitted by the participant; the recovered column headers cover the substrate classes (algae macro or leaves; fire coral millepora; hard coral boulder, branching, encrusting, foliose, mushroom, submassive and table), per dataset and team.]</p>
      <p>
FHD performed well in the pixel accuracy but not as well when considering
the MAP scores and this may be indicative of their approach identifying large
polygons well but missing many of the smaller polygon objects.
</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Discussion</title>
      <p>
        FAV ZCU CV [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] worked with two neural networks for the
first task, SSD [22]
and a Mask R-CNN [23]; for the second task, they worked with only the latter.
Both of these used the implementation in Keras [24], pre-trained on the Pascal
VOC 2007 dataset [25].
      </p>
      <p>They partitioned the training data into distinct training and validation sets
containing roughly 85% and 15% of the total number of training images. As some
types of coral were relatively rare in the training set, there were as few as 16
instances for training and 3 for validation. To train neural networks, more data
are clearly needed, so they augmented the images with horizontal and vertical
flips, resizing and Gaussian blurring. They also noted that some of the images had
a blueish tint while others featured a greenish one, and simulated these effects
too.</p>
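      <p>A sketch of this style of augmentation (assuming Pillow and numpy; the tint factors are illustrative guesses, not the team's values):</p>
      <preformat>
import numpy as np
from PIL import Image, ImageFilter, ImageOps

def augment(img):
    """Yield simple variants of one training image: flips, resize, blur, tints."""
    yield ImageOps.mirror(img)                       # horizontal flip
    yield ImageOps.flip(img)                         # vertical flip
    yield img.resize((img.width // 2, img.height // 2))
    yield img.filter(ImageFilter.GaussianBlur(radius=2))
    arr = np.asarray(img).astype(np.float32)
    for tint in ([0.9, 1.0, 1.2], [0.9, 1.2, 1.0]):  # blueish / greenish casts
        yield Image.fromarray(np.clip(arr * tint, 0, 255).astype(np.uint8))
      </preformat>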
      <p>For training SSD, all training images were resized to 512 × 512, while for
Mask R-CNN they were reduced to 1024 × 1024. It was found that Mask R-CNN
detects many more bounding boxes than SSD, most of which are false positives:
of the regions detected, 44.7% were true positives with the former, while 71.3%
was achieved with the latter. In terms of average precision, figures as high as
62.17% were achieved (SSD for barrel sponges) but five coral classes were not
found by either.</p>
      <p>Interestingly, both trained models performed better on the unseen test
imagery than on the images they had retained for validation, and by a fairly large
margin. The best mean average precision obtained was 49%, for localisation
using SSD. This phenomenon is particularly surprising given that the test set
contains imagery from ocean regions not present in the training set: the designers
of the dataset did not anticipate that this would be the case. It is a particularly
promising result given that the ultimate aim of the research is to equip marine
biologists and ecologists with a recognition system that can be taken anywhere
in the world and expected to work.</p>
      <p>
        FAV ZCU PiVa [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] also employed a Mask R-CNN but included a number
of refinements in their training; they believe it is this set of refinements that led
to the improvements in performance they achieved.
      </p>
      <p>
        In forming their validation set, this team selected every eleventh image and
substituted some of them so that the training and validation sets had similar
distributions. As with other teams, they augmented the provided training set,
using transformations similar to those in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
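      <p>A rough sketch of that selection strategy (labels_of, a lookup from an image to its annotated substrate labels, is a hypothetical helper; the distribution-matching substitution criterion is only indicated in a comment):</p>
      <preformat>
from collections import Counter

def split_every_nth(images, n=11):
    """Hold out every n-th image for validation."""
    val = images[::n]
    val_set = set(val)
    train = [img for img in images if img not in val_set]
    return train, val

def class_counts(subset, labels_of):
    """labels_of(img) returns the substrate labels annotated in that image."""
    counts = Counter()
    for img in subset:
        counts.update(labels_of(img))
    return counts

# Swapping individual images between train and val until class_counts(train)
# and class_counts(val) are roughly proportional is the substitution step the
# team describes; the exact criterion is theirs to define.
      </preformat>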
      <p>The underlying approach was transfer learning, and several 'backbone'
networks were examined, including ResNet-50 and Inception-ResNet-V2. One
refinement employed was a 'pseudo-labelling' approach inspired by [26], using a
trained network to assign weak labels to the test data. 'Accumulated
gradient normalisation' [27] is credited as providing a considerable improvement
in performance. An ensemble approach was ultimately used, in which
multiple networks classified the same input and their majority vote yielded the final
classification.</p>
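      <p>A minimal sketch of such majority voting (numpy; model outputs are assumed to be integer class ids for the same set of regions):</p>
      <preformat>
import numpy as np

def majority_vote(predictions):
    """predictions: array of shape (n_models, n_regions) of class ids."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per region, then take the most-voted class id.
    votes = np.apply_along_axis(np.bincount, 0, predictions, None, n_classes)
    return votes.argmax(axis=0)

print(majority_vote([[1, 2, 0], [1, 0, 0], [2, 2, 0]]))  # [1 2 0]
      </preformat>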
      <p>
        HHUD [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] explored two approaches. The first was a refined and improved
version of the approach they took for the 2019 exercise, while the second was based
around RetinaNet [28].
      </p>
      <p>The team used 80% of the identified regions for training and 20% for
validation, again swapping individual regions between the two sets until they exhibited
similar distributions. The difficulties inherent in underwater photography, due
to the severe attenuation of the red end of the spectrum, were considered, and
RD [29] was ultimately demonstrated to be the more effective.</p>
      <p>The team used a version of Yolo [30], though they suffered from some
difficulties with the training data as initially released, which meant that the annotations
were inconsistent; there was not enough time to re-train after these were
identified and corrected. The constraints on image size with their GPU-based
implementation are also thought to have had an effect. RetinaNet was also used, comprising
a feature pyramid network based on ResNet [31], a regressor and a classifier.</p>
      <p>The authors also explored more classical approaches. In 2019, a k-NN
classifier was used; this year, it was enhanced: PCA was used to identify the
best features, and a naïve Bayes approach was used for locating and classifying substrates.
It was found that the combination of PCA and the naïve Bayes classifier improved
performance, though despite this, the neural approaches still out-performed
classical ones.</p>
      <p>
        The authors' best performance was achieved using an ensemble of RetinaNet
and Yolo v3, using RD-enhanced images for training. The authors' paper [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] has
an interesting discussion on the interplay between thresholds, training epochs
and performance.
      </p>
      <p>One of the key aspects the dataset creators were keen to explore was whether
the training dataset, which was acquired from a single coral reef, made it
possible for trained classifiers to perform well on data sourced from geographically
distinct reefs. In this case, this 'geographic generalisation' was not found, though
the number of test images from the di erent geographical regions was quite small.</p>
      <p>FHD [21] went to some lengths to counter the attenuation of red illumination
in the images and the blurriness of some of them, achieving impressive visual
improvements in some cases. Further improvement was obtained by enhancement
in HSV space based on the notion of Rayleigh scattering.</p>
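      <p>For illustration only, a much simpler per-channel contrast stretch in the same spirit (a generic compensation for the attenuated red channel, not the Rayleigh-based HSV enhancement FHD actually used):</p>
      <preformat>
import numpy as np

def stretch_channels(img, low=1, high=99):
    """img: uint8 array of shape (H, W, 3); stretch each channel between percentiles."""
    out = np.empty_like(img, dtype=np.float32)
    for c in range(3):
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = (img[..., c] - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
      </preformat>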
      <p>The classification architecture was again based around Mask R-CNN,
implemented using Keras and TensorFlow and with ResNet-101 pre-trained on the
COCO dataset [32], with the training images reduced to 1536 × 1536 pixels.
The training data were augmented using similar transformations to the other
groups. As expected, data augmentation reduced over-fitting. Colour correction
led to poorer mean average precision values but better average accuracy. It was
observed that the models do not detect objects as well as some other groups'
submissions, but those that are detected are classified very well.</p>
      <p>Interestingly, the authors found their algorithms' performance on subtask 1
(bounding boxes) could be improved simply by re-defining their bounding boxes.
This is really an indication that bounding boxes are a poor way of describing
the output of processing that involves both segmentation and classification,
exacerbated by the extended nature of some types of coral. This suggests that
bounding boxes should not form part of ImageCLEFcoral in future years.</p>
      <p>The analysis of the results in this paper explores the interplay between the
performance measures used and the relative rankings of results. It is not known,
of course, whether these apparent performance differences are statistically
significant, but this is an area that the designers of the ImageCLEFcoral task will
explore in future releases.</p>
      <p>
        The approach taken by [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] proved to be the most effective, as their approach
yielded the highest scores, as measured by mean average precision, for both
tasks: their submission 8 won the annotation and localisation task, while their
submission 2 won the pixel-wise parsing task, with scores of about 0.58 and
0.68 respectively, a significant improvement on the best that was achieved in
the 2019 exercise, where the equivalent figures were 0.24 and 0.04 respectively,
though the above-mentioned inconsistencies between image and annotation
present in the 2019 dataset will have affected these figures. The authors consider that
the increased size of the training set in the 2020 exercise played an important
part in the improvements in performance that they were able to achieve.
      </p>
      <p>The MAP 0.5 IoU score from FAV of 0.582 over the entire test set is
excellent, bearing in mind both the difficulty of the problem and that the problem
involved 13 classes, some of which are sparsely represented. There is a significant
performance margin between the best run from the second-placed team, FAV
ZCU CV, and the other teams' best submissions, which are closely spaced. FAV
also made the best-ranked submission for MAP 0 IoU, but the other teams'
best-scoring submissions are much closer to this. However, the best-scoring
submission for MAP 0.5 IoU does not yield the highest accuracy of all the submissions.
Clearly then, there is some inconsistency in the evaluation measures employed,
and this is more an indication that the performance evaluation measures
in widespread use in the vision research community are imperfect.</p>
      <p>It is interesting to review the scores obtained from the four categories of
test data. For the geographic regions which are similar in nature, performance is
generally similar. However, performance drops off for other regions, showing that
the differences present in the imagery affect the ability to classify the substrates.
This shows how difficult it will be to develop a system for marine biologists
to automatically classify substrate without significant training resources (i.e.,
labelled datasets) from that area.</p>
      <p>For the pixel-wise parsing task, the MAP 0.5 IoU score of the best-placed
team, FAV, is actually higher than for the bounding box task, showing that
their approach is able to identify the boundaries of the image features somewhat
better than those of the other teams. This makes the performance gap between
first- and second-placed teams somewhat larger than for the first task. Again, the
best-scoring run in terms of MAP 0.5 IoU is not the best in terms of accuracy.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>The results of the 2020 coral exercise demonstrate how effective modern deep
neural networks are at a range of problems: a performance approaching 70% for
a 13-class problem is excellent. The results show that the best pixel-wise
parsing technique out-performed the best bounding box one, suggesting that future
exercises should concentrate on pixel-wise parsing. There are always difficulties
with overlapping bounding boxes and other types of feature in the background
of bounding boxes which together reduce the value of that type of annotation.</p>
      <p>It is clear that there are genuine performance differences between the four
geographical categories of test images described above. This is an important
practical problem for coral annotation, as well as for vision systems in general. We
anticipate future coral annotation tasks will explore ways to overcome this
difficulty. Close examination of the ground truth annotations for the pixel-parsing
task shows that annotators tend to place the bounding polygons just outside the
boundaries of the features being annotated. We are considering producing other
annotations that lie within feature boundaries and encourage teams in a future
exercise to train the same architecture with both, then see which works best.
That would give us the opportunity to learn something about how annotations
should be produced.</p>
      <p>The fact that different measures rank-order the different runs differently does
not come as a surprise but does show how difficult it is to devise a simple
measure that encapsulates performance well. There is clearly research to be done
in this regard. Although there are performance differences between the runs,
there is no indication as to whether they are statistically significant or not. This
analysis shall be explored in future work. Bearing in mind the point made about
performance measures in the previous paragraph, it will be especially interesting
to ascertain whether different performance measures yield statistically significant
but inconsistent results.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The authors would like to thank those teams who have expended substantial
amounts of time and effort in developing solutions to this task. The images used
in this task were gathered thanks to funding from the University of
Essex and the ESRC Impact Acceleration Account, as well as logistical support
from Operation Wallacea. We would also like to thank the MSc Tropical Marine
Biology students who participated in the annotation of the test set and Dr Van
Der Ven and Dr McKew for facilitating their internship.</p>
      <p>21. Arendt, M., Rückert, J., Brüngel, R., Brumann, C., Friedrich, C.M.: The effects of
colour enhancement and IoU optimisation on object detection and segmentation
of coral reef structures. In: CLEF2020 Working Notes. CEUR Workshop
Proceedings, Thessaloniki, Greece, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September
22-25 2020)
22. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C., Berg, A.: SSD:
Single shot multibox detector. In: Proceedings of the European Conference on
Computer Vision, Springer (2016) 21-27
23. He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask R-CNN. In: Proceedings of
the International Conference on Computer Vision, IEEE (2017) 2961-2969
24. Chollet, F., et al.: Keras. https://keras.io (2015)
25. Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: The
PASCAL Visual Object Classes Challenge 2007 (VOC2007) results.
http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html (2007)
26. Arazo, E., Ortego, D., Albert, P., O'Connor, N., McGuinness, K.:
Pseudo-labeling and confirmation bias in deep semi-supervised learning. arXiv preprint
arXiv:1908.02983 (2019)
27. Hermans, J., Spanakis, G., Möckel, R.: Accumulated gradient normalization.
arXiv preprint arXiv:1710.02368 (2017)
28. Lin, T., Goyal, P., Girshick, R., He, K., Dollar, P.: Focal loss for dense object
detection. In: Proceedings of the International Conference on Computer Vision,
https://doi.org/10.1109/iccv.2017.324 (October 2017)
29. Ghani, A., Isa, N.: Underwater image quality enhancement through composition
of dual-intensity images and Rayleigh-stretching. SpringerPlus 3(1) (2014) 757
30. Redmon, J., Farhadi, A.: YOLOv3: An incremental improvement. arXiv preprint
arXiv:1804.02767 (2018)
31. Lin, T., Dollar, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature
pyramid networks for object detection. http://arxiv.org/abs/1612.03144 (2016)
32. Lin, T., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P.,
Zitnick, C.: Microsoft COCO: Common objects in context. In: Proceedings of the
European Conference on Computer Vision, Springer (2014) 740-755</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Moberg</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Folke</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Ecological goods and services of coral reef ecosystems</article-title>
          .
          <source>Ecological Economics</source>
          <volume>29</volume>
          (
          <issue>2</issue>
          ) (
          <year>1999</year>
          )
          <fpage>215</fpage>
          -
          <lpage>233</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>De'ath</surname>
          </string-name>
          , G.,
          <string-name>
            <surname>Fabricius</surname>
            ,
            <given-names>K.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sweatman</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Puotinen</surname>
          </string-name>
          , H.:
          <article-title>The 27-year decline of coral cover on the Great Barrier Reef and its causes</article-title>
          .
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>109</volume>
          (
          <year>2012</year>
          )
          <fpage>17995</fpage>
          -
          <lpage>17999</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Burke</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reytar</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spalding</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perry</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Reefs at risk revisited</article-title>
          . https://pdf.wri.org/reefs_at_risk_revisited.pdf
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Hoegh-Guldberg</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poloczanska</surname>
            ,
            <given-names>E.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skirving</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dove</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Coral reef ecosystems under climate change and ocean acidification</article-title>
          .
          <source>Frontiers in Marine Science</source>
          <volume>4</volume>
          (
          <year>2017</year>
          )
          <fpage>158</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Obura</surname>
            ,
            <given-names>D.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aeby</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amornthammarong</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Appeltans</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bax</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bishop</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brainard</surname>
            ,
            <given-names>R.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fletcher</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gordon</surname>
            ,
            <given-names>T.A.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gramer</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gudka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hendee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hodgson</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jankulak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kimura</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miloslavich</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chou</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muller-Karger</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Osuka</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Samoilys</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simpson</surname>
            ,
            <given-names>S.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tun</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wongbusarakum</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Coral reef monitoring, reef assessment technologies, and ecosystem-based management</article-title>
          .
          <source>Frontiers in Marine Science</source>
          <volume>6</volume>
          (
          <year>2019</year>
          )
          <fpage>580</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>G.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rogers</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Exton</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Cost and time-effective method for multi-scale measures of rugosity, fractal dimension, and vector dispersion from coral reef 3D models</article-title>
          .
          <source>PLOS ONE 12(4)</source>
          (04
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Obura</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>The diversity and biogeography of western Indian Ocean reef-building corals</article-title>
          .
          <source>PLOS ONE 7</source>
          (
          <issue>9</issue>
          ) (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Veron</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stafford-Smith</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DeVantier</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turak</surname>
          </string-name>
          , E.:
          <article-title>Overview of distribution patterns of zooxanthellate scleractinia</article-title>
          .
          <source>Frontiers in Marine Science</source>
          <volume>1</volume>
          (
          <year>2015</year>
          )
          <fpage>81</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Peteri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Dicente</given-names>
            <surname>Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Klimuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Tarasau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Ben</surname>
          </string-name>
          <string-name>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Hasan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.A.</given-names>
            ,
            <surname>Datla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>DemnerFushman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Dang-Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.T.</given-names>
            ,
            <surname>Piras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.T.</given-names>
            ,
            <surname>Lux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Gurrin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Pelka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            ,
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.M.</given-names>
            ,
            <surname>García</surname>
          </string-name>
          Seco de Herrera,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Kavallieratou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>del Blanco</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.R.</surname>
          </string-name>
          , Cuevas Rodríguez,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Vasillopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Karampidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Chamberlain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          :
          <article-title>ImageCLEF 2019: Multimedia retrieval in medicine, lifelogging, security and nature</article-title>
          . In:
          <article-title>Experimental IR Meets Multilinguality, Multimodality, and Interaction</article-title>
          .
          <source>Proceedings of the 10th International Conference of the CLEF Association (CLEF</source>
          <year>2019</year>
          ), Lugano, Switzerland,
          <source>LNCS Lecture Notes in Computer Science</source>
          ,
          <source>Springer (September 9-12</source>
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Chamberlain</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Campello</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wright</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clift</surname>
            ,
            <given-names>L.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clark</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Garc a Seco de Herrera, A.:
          <article-title>Overview of ImageCLEFcoral 2019 task</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Peteri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abacha</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Datla</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hasan</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DemnerFushman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kozlovski</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cid</surname>
            ,
            <given-names>Y.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pelka</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friedrich</surname>
            ,
            <given-names>C.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Herrera</surname>
            ,
            <given-names>A.G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Halvorsen,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.T.</given-names>
            ,
            <surname>Lux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Gurrin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Dang-Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.T.</given-names>
            ,
            <surname>Chamberlain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Fichou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Berari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Brie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Stefan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.D.</given-names>
            ,
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.G.</surname>
          </string-name>
          :
          <article-title>Overview of the ImageCLEF 2020: Multimedia retrieval in medical, lifelogging, nature, and internet applications</article-title>
          .
          <source>In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Volume 12260 of Proceedings of the 11th International Conference of the CLEF Association (CLEF</source>
          <year>2020</year>
          ).,
          Thessaloniki, Greece,
          <source>LNCS Lecture Notes in Computer Science</source>
          , Springer (September
          <volume>22</volume>
          -25
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Schoening</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergmann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Purser</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dannheim</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gutt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nattkemper</surname>
            ,
            <given-names>T.W.</given-names>
          </string-name>
          :
          <article-title>Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN</article-title>
          .
          <source>PLoS ONE</source>
          <volume>7</volume>
          (
          <issue>6</issue>
          ) (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Culverhouse</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reguera</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herry</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalez-Gil</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Do experts make mistakes? A comparison of human and machine identification of dinoflagellates</article-title>
          .
          <source>Marine Ecology Progress Series</source>
          <volume>247</volume>
          (
          <year>2003</year>
          )
          <fpage>17</fpage>
          -
          <lpage>25</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Beijbom</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Edmunds</surname>
            ,
            <given-names>P.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kline</surname>
            ,
            <given-names>D.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mitchell</surname>
            ,
            <given-names>B.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kriegman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Automated annotation of coral reef survey images</article-title>
          .
          <source>In: Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR'12)</source>
          , Providence, Rhode Island (
          <year>June 2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gilbert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramisa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dellandrea</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gaizauskas</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikolajczyk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Overview of the ImageCLEF 2016 scalable concept image annotation task</article-title>
          .
          <source>In: CLEF Working Notes</source>
          . (
          <year>2016</year>
          )
          <fpage>254</fpage>
          -
          <lpage>278</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Gilbert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dellandrea</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gaizauskas</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikolajczyk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Overview of the ImageCLEF 2015 scalable image annotation, localization and sentence generation task</article-title>
          .
          <source>In: CLEF Working Notes</source>
          . (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Everingham</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eslami</surname>
            ,
            <given-names>S.M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Gool</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>C.K.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winn</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisserman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The Pascal visual object classes challenge: A retrospective</article-title>
          .
          <source>International Journal of Computer Vision</source>
          <volume>111</volume>
          (
          <issue>1</issue>
          ) (
          <year>January 2015</year>
          )
          <fpage>98</fpage>
          -
          <lpage>136</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Picek</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Říha</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zita</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Coral reef annotation, localisation and pixel-wise classification using Mask-RCNN and bag of tricks</article-title>
          .
          <source>In: CLEF2020 Working Notes. CEUR Workshop Proceedings</source>
          , Thessaloniki, Greece, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 22-25
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Gruber</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Straka</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Automatic coral detection using neural networks</article-title>
          .
          <source>In: CLEF2020 Working Notes. CEUR Workshop Proceedings</source>
          , Thessaloniki, Greece, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 22-25
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Bogomasov</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grawe</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conrad</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Enhanced localization and classification of coral reef structures and compositions</article-title>
          .
          <source>In: CLEF2020 Working Notes. CEUR Workshop Proceedings</source>
          , Thessaloniki, Greece, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 22-25
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>