<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Pixelwise annotation of coral reef substrates</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jessica Wright</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ioana-Lia Palosanu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Louis Clift</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alba García Seco de Herrera</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jon Chamberlain</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science and Electronic Engineering, University of Essex</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Coral reef substrate composition is regularly surveyed for ecosystem health monitoring. The current method of visual assessment is slow and limited in scale. ImageCLEFcoral aims to identify reef areas of interest and annotate them appropriately. We present an adaptation of a submission to the 2019 ImageCLEFcoral task that uses a semantic segmentation model, DeepLabV3, with a ResNet-101 backbone. We implemented pre-training image colour enhancement and supplemented the available training data with that of NOAA NCEI for specific runs. Our runs showed no overall improvement over the 2019 code, though they did predict submassive corals and table corals with greater accuracy (+3.022% and +0.353%). Though none of our model runs had the highest precision or accuracy, we best predicted submassive corals (3.022%), boulder corals (12.787%), table corals (0.353%), foliose corals (0.097%), gorgonian soft corals (0.002%) and algae (0.027%) across 3 of our 4 runs. Image colour enhancement benefited the prediction accuracy of boulder corals (+1.209% to +5.026%), encrusting corals (+1.7% to +2.578%) and algae (+0.027%), most likely by making them more distinct from their surroundings. Adding NOAA data enhanced the precision of encrusting coral, soft coral and gorgonian predictions despite only providing additional annotations for encrusting and foliose corals. Our results suggest that a more balanced approach to data augmentation combined with image-specific colour improvements may provide a more desirable outcome, particularly when paired with a model that is fine-tuned to the data set used.</p>
      </abstract>
      <kwd-group>
        <kwd>Image segmentation</kwd>
        <kwd>automatic annotation</kwd>
        <kwd>coral reef annotation</kwd>
        <kwd>semantic segmentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Coral reefs are vital marine systems that are known to provide many ecosystem functions and
services [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] while supporting one third of marine species [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Their decline has been widely
reported and tracked [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Current monitoring of coral reef benthic communities relies on in-situ
data collection, sometimes followed by ex-situ video analysis, requiring time and expertise to
analyse correctly [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Automatic annotation from video stills or photographs would greatly
increase the speed and scale of feasible monitoring, and could free up reef experts to focus on
other areas to gain a wider view of shifting coral reef dynamics.
      </p>
      <p>
        Deep learning algorithms provide an answer to automatic annotation [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The underlying
architecture of most is the Convolutional Neural Network, often used for image and pattern
recognition [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Image segmentation models have been the most successful in the ImageCLEFcoral
pixel-wise parsing task [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ], where each pixel is predicted as a particular class.
      </p>
      <p>
        This is the third iteration of an annual ImageCLEF task [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11</xref>
        ] which has subtasks looking
into (1) Coral reef image annotation and localisation and (2) Coral reef image pixel-wise parsing.
Considering the value of each subtask in terms of practical use in monitoring reef systems
accurately, we focused on subtask 2.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Data</title>
      <p>
        The initial data provided were split into training and test images of coral reef systems. The
training set was annotated (Fig. 1) with the morphological substrate classes set in the task [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
and the test set was not annotated. The training set was then provided to build and train a
network, with the test set given later to generate submission runs. More details about the dataset can
be found in Chamberlain et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>2.1. The training set</title>
        <p>879 images with a combined set of 21,748 annotations were provided as the training data. The
annotations were not evenly split across classes, likely as some are more prevalent than others
in reef systems (Fig. 2).</p>
        <p>Each substrate morphology can be indistinct from others due to the variation in that class’
species. This is particularly true of classes that are not broken down into morphological groups,
e.g. “soft coral”, and less of an issue with classes that are split, e.g. each “hard coral” group.</p>
        <p>The use of NOAA NCEI1 and/or CoralNet2 data was recommended for the task. We chose to
utilise the NOAA data set for some experiments. Of a possible 15,019 NOAA images, 3,032 were
downloaded, due to time limitations on our machines. The NOAA data set contains a greater
number of classification labels than the ImageCLEFcoral classes. These classifications are also
of a single pixel (10 pixels per image) so did not provide enough information for our image
analysis and recognition algorithms. We developed a NOAA Translation processor to capture
the classification types within the data set and translate them via an expert-defined translation
matrix into the ImageCLEFcoral classes, which we made available through the ImageCLEFcoral
website for other participants. The processor then created an adjustable Region Of Interest
(ROI) around the annotated pixel to provide an image patch, typically a 10x10 pixel area, that enabled
our machine learning routines to adapt to the NOAA data sets.
1https://www.ncei.noaa.gov/access/metadata/landing-page/bin/iso?id=gov.noaa.nodc:0211063
2https://coralnet.ucsd.edu/</p>
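        <p>To illustrate the translation step, a minimal sketch of the point-to-patch expansion is given below. The mapping entries, class-name strings and the 10x10 default patch size are illustrative assumptions; the full expert-defined translation matrix is the one made available through the ImageCLEFcoral website.</p>
        <preformat>
# Hypothetical sketch of the NOAA Translation processor described above.
# Label and class names below are illustrative assumptions.

# Expert-defined mapping from NOAA labels to ImageCLEFcoral classes
NOAA_TO_IMAGECLEF = {
    "CCA": None,                                   # no ImageCLEF equivalent: discard
    "Encrusting coral": "c_hard_coral_encrusting",
    "Foliose coral": "c_hard_coral_foliose",
}

def point_to_patch(x, y, width, height, half=5):
    """Expand a single annotated pixel (x, y) into a square ROI
    (10x10 by default), clipped to the image bounds."""
    x0, y0 = max(x - half, 0), max(y - half, 0)
    x1, y1 = min(x + half, width), min(y + half, height)
    return x0, y0, x1, y1

def translate(noaa_points, width, height):
    """Convert NOAA single-pixel labels [(x, y, label), ...]
    into ImageCLEF-style patch annotations."""
    patches = []
    for x, y, label in noaa_points:
        cls = NOAA_TO_IMAGECLEF.get(label)
        if cls is not None:
            patches.append((cls, point_to_patch(x, y, width, height)))
    return patches
        </preformat>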
        <p>5 substrate classes were then selected to refine the number of images to a more manageable
amount: Fire Coral - Millepora, Hard Coral - Foliose, Hard Coral - Table, Hard Coral -
SubMassive, and Hard Coral - Encrusting. These classes had a lower number of annotations than
others and were chosen to increase accuracy. Despite low incidence, Soft Coral - Gorgonian,
Hard Coral - Mushroom, and Sponge - Barrel were not chosen from the NOAA data set as they
have more distinct morphologies than the selected classes so were more likely to be predicted
despite relatively few occurrences. Algae - Macro or Leaves were also not selected from the
NOAA data set despite low incidence. The algae classification of the ImageCLEF set only accounted
for large-leaf macroalgae, whereas the NOAA data set also included other types, such as turf
and CCA, so conflicting annotations could have hampered the model predictions.</p>
        <p>502 viable NOAA images were identified, within which 2 of the 5 selected classes were present:
Hard Coral - Encrusting and Hard Coral - Foliose. This almost doubled the processing time per
epoch and pushed the entire model training time from 10 hours to 17.5 hours (10 epochs total),
and increased the total number of substrate annotations from 21,748 to 22,403 (Fig. 2).</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Image enhancement</title>
        <p>
          Underwater imagery is often lower quality than that taken on land. Light attenuation distorts
colour detection and water turbidity can reduce image quality, and with all underwater imagery
there is a greater chance of blurred or unfocused photographs. Taking steps to investigate,
process and augment the provided data is expected to improve the data quality and subsequent
network results [
          <xref ref-type="bibr" rid="ref12 ref7">12, 7</xref>
          ].
        </p>
        <p>Images were visually assessed and split into those with accurate colouring and contrast, those
with a heavy green tint and those with a heavy blue tint. Accurate images were not altered in
any way. Green and blue images were passed through an RGB histogram leveller followed by
an RGB channel mixer, generalised to green or blue images for speed (Fig. 3). This would allow
all the images to be processed easily but would not allow for image-specific editing.</p>
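        <p>A rough sketch of this generalised two-step correction is given below, assuming NumPy and Pillow. The mixing weights shown are placeholders for a green-tinted image, not the exact values used in our runs.</p>
        <preformat>
import numpy as np
from PIL import Image, ImageOps

def level_histogram(img):
    # Stretch each RGB channel to span the full 0-255 range
    return ImageOps.autocontrast(img, cutoff=1)

def mix_channels(img, matrix):
    # Recombine the RGB channels with a 3x3 mixing matrix
    arr = np.asarray(img).astype(np.float32)
    mixed = arr @ np.asarray(matrix, dtype=np.float32).T
    return Image.fromarray(np.clip(mixed, 0, 255).astype(np.uint8))

# Placeholder weights: damp green, slightly boost red and blue
GREEN_TINT_MATRIX = [[1.2, 0.0, 0.0],
                     [0.0, 0.8, 0.0],
                     [0.0, 0.1, 1.0]]

img = Image.open("reef.jpg").convert("RGB")   # hypothetical input image
corrected = mix_channels(level_histogram(img), GREEN_TINT_MATRIX)
        </preformat>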
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Data augmentation</title>
        <p>Before training the model, each image was cropped into 12 squares which were each then
cropped at a random point to a 480px square. Random horizontal flips were also utilized due
to the limited amount of data. These pre-processing techniques are used to present the model
with different iterations of the same images, increasing the size of the data set available.</p>
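        <p>A minimal sketch of this pre-processing pipeline is shown below; the 4x3 tiling grid and the use of Pillow are assumptions made for illustration, and tiles are assumed to be at least 480px on each side.</p>
        <preformat>
import random
from PIL import Image

def tile_image(img, cols=4, rows=3):
    """Split an image into cols x rows tiles (12 by default)."""
    w, h = img.size
    tw, th = w // cols, h // rows
    return [img.crop((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
            for r in range(rows) for c in range(cols)]

def random_square_crop(tile, size=480):
    """Crop a 480px square at a random point within a tile."""
    w, h = tile.size
    x = random.randint(0, max(w - size, 0))
    y = random.randint(0, max(h - size, 0))
    return tile.crop((x, y, x + size, y + size))

def augment(img):
    """Yield 12 randomly cropped, randomly flipped 480px squares."""
    for tile in tile_image(img):
        crop = random_square_crop(tile)
        if random.random() &lt; 0.5:            # random horizontal flip
            crop = crop.transpose(Image.FLIP_LEFT_RIGHT)
        yield crop
        </preformat>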
      </sec>
      <sec id="sec-2-4">
        <title>2.4. The test set</title>
        <p>The provided test set included 485 unannotated images from 4 different regions:
Region 1: The training set location.</p>
        <p>Region 2: A geographically and biologically similar region.</p>
        <p>Region 3: A geographically distinct but biologically similar region.</p>
        <p>Region 4: A region that is both geographically and biologically distinct.</p>
        <p>The test images were also cropped into 12 squares to match the training images used on the
model. Each crop was then resized to a 520px square, which allowed us to predict all test
images despite system limitations.</p>
        <p>The predicted pixel array of each test image had to be resized to its original dimensions
before submission to match the ground truth annotation mask. This was carried out using
spline interpolation through the zoom function in SciPy3.
3https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.zoom.html</p>
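        <p>A minimal sketch of this resizing step is shown below, assuming the prediction is a 2D array of class indices; rounding the interpolated values back to integer labels is our assumption.</p>
        <preformat>
import numpy as np
from scipy.ndimage import zoom

def resize_mask(mask, orig_h, orig_w):
    """Rescale a predicted pixel array back to the original image
    dimensions using spline interpolation (scipy.ndimage.zoom)."""
    factors = (orig_h / mask.shape[0], orig_w / mask.shape[1])
    resized = zoom(mask.astype(np.float32), factors, order=3)  # cubic spline
    return np.rint(resized).astype(mask.dtype)
        </preformat>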
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The model</title>
      <p>
        We used the DeepLabV3 model based on a previous submission [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. It makes use of a ResNet-101
backbone and the application of both atrous convolution and Atrous Spatial Pyramid Pooling
(ASPP). ResNet-101 is used for feature extraction before atrous convolution and ASPP are
applied. Atrous convolution increases the field of view in the last layer of ResNet-101 by
inserting 0-values between filter values used in the network layer [<xref ref-type="bibr" rid="ref13">13</xref>]. The atrous rate utilised
corresponds to the number of 0-values inserted: the higher the rate, the larger the field of view
becomes. ASPP is then applied to assign a label to each pixel using 4 atrous convolution rates.
This enables the model to utilise different aspects of the objects it is identifying, ensuring that
when pixels are labelled the network has seen multiple perspectives of field of view.
      </p>
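      <p>The backbone and head combination described above is available off the shelf in torchvision; a sketch of how such a model can be instantiated is shown below. The class count of 14 (13 substrate classes plus background) is our assumption.</p>
      <preformat>
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# DeepLabV3 with a ResNet-101 backbone, atrous convolution and ASPP
model = deeplabv3_resnet101(num_classes=14)
model.eval()

with torch.no_grad():
    batch = torch.randn(4, 3, 480, 480)   # batch size 4, 480px crops
    logits = model(batch)["out"]          # shape (4, 14, 480, 480)
    pred = logits.argmax(dim=1)           # one class index per pixel
      </preformat>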
      <p>The model was evaluated using different crop and batch sizes during training. Batch size
4 was used in each run as it had the best performance within our system limitations. A crop
size of 480px was selected as, when combined with batch size 4, it provided the greatest overall
accuracy (per mAP0.0 and mAP0.5) of all tested crop size combinations.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Submission</title>
      <p>Each team in the competition was allowed to submit up to 10 runs per task using the
collaboration platform AICrowd4. We chose to submit 4 files to the pixel-wise parsing subtask only,
representing 4 individual runs:
4https://www.aicrowd.com/</p>
      <p>
        MTRU1: the “baseline” run, using the 2019 submission [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] that was rewritten and
fine-tuned by experimenting on crop × batch size combinations. Batch size
4 with crop size 480 was found to give the best results and was used in
this run.
      </p>
      <p>MTRU2: the edited ImageCLEF run, using the same settings as MTRU1. Poorly
coloured training images were enhanced to represent more accurate
colouring of the coral reefs.</p>
      <p>MTRU3: the NOAA run, using additional data from NOAA in three different
substrates. The images were not enhanced or edited in any way, and the same
settings from MTRU1 were used.</p>
      <p>MTRU4: the fully edited run, using the same settings as MTRU1, with both the additional
NOAA data and image colour enhancements where needed.</p>
      <sec id="sec-4-1">
        <title>4.1. Self-intersecting polygons</title>
        <p>All 4 runs predicted some images containing self-intersecting polygons. These polygons
invalidate a run and are not permitted in the submission file so must be removed. The evaluation
script was used to identify any images with self-intersecting polygons and the substrate type of
the polygon. This process involved removing each polygon of the relevant substrate type one
by one, re-running the evaluation script each time to check if the error was resolved.</p>
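        <p>The validity check itself could be automated; a hedged sketch using shapely (not the official evaluation script) is given below.</p>
        <preformat>
from shapely.geometry import Polygon

def self_intersecting(points):
    """True if a polygon given as [(x, y), ...] is invalid,
    e.g. because its edges cross."""
    return not Polygon(points).is_valid

# A "bow-tie" polygon whose edges cross is flagged; a plain square is not
assert self_intersecting([(0, 0), (2, 2), (2, 0), (0, 2)])
assert not self_intersecting([(0, 0), (2, 0), (2, 2), (0, 2)])
        </preformat>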
        <p>Initial images were checked polygon by polygon in this manner to minimise any impact on
model accuracy but the time constraints of the challenge required faster processing of the latter
runs. Images in these runs were checked in polygon "batches", where several at a time would be
deleted before running the evaluation script. While this did increase the speed of evaluation
before submission, it is likely that a significant proportion of the deleted polygons were not
self-intersecting and as such the mean average precision (mAP) of the runs would be both lower
and less accurate.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Blank predictions</title>
        <p>3 of the 4 runs (MTRU2, MTRU3 and MTRU4) predicted images with no substrate classes. While
clearly an error as all images were of coral reef substratum, these predictions were a part of our
model outcome and therefore our submitted runs. The evaluation script used upon submission
blocks these images and deems runs containing them failures, so each had to be altered. As all images
from the test set must be used, the blank images could not be removed. Our solution to this
was to include a small square annotation in the centre of the blank images and label it as Fire
Coral - Millepora. This class was used as it had the lowest number of annotations and had no
additional data added from the NOAA images so was likely to be the least accurate class, limiting
the effect on overall accuracy as much as possible.</p>
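        <p>A sketch of this workaround is shown below; the 10px square size, the label string and the coordinate format are assumptions for illustration.</p>
        <preformat>
def placeholder_annotation(width, height, half=5,
                           label="Fire Coral - Millepora"):
    """Return a small square polygon at the image centre, labelled
    with the least-annotated class, for an otherwise blank prediction."""
    cx, cy = width // 2, height // 2
    square = [(cx - half, cy - half), (cx + half, cy - half),
              (cx + half, cy + half), (cx - half, cy + half)]
    return label, square
        </preformat>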
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results and discussion</title>
      <p>Results provided by ImageCLEFcoral after submission used 2 metrics. mAP0.5 showed the
localised mean average precision using IoU ≥ 0.5. Accuracy per substrate calculated the
segmentation accuracy as the number of correctly labelled pixels of a class over the number of
pixels labelled with that class in the ground truth.</p>
      <p>Overall results of the pixel-wise parsing subtask (Table 1) show that our runs were less
accurate and precise than those of the other participating team. When considering the accuracy per class,
however, there were some substrate categories that were better predicted by our model.</p>
      <p>Across the MTRU runs, we saw the highest accuracy of submassive coral, table coral and
foliose coral predictions when images were run unedited and without additional NOAA data. The
greatest prediction accuracy of boulder corals and algae across the subtask occurred when
images were colour corrected, and of gorgonian soft corals occurred when unedited ImageCLEF
data and NOAA data were used. MTRU3 was the only instance of gorgonian predictions with
positive accuracy (0.002%) across all submissions. Similarly, MTRU1 was the only instance of
positive accuracy in foliose coral prediction (0.097%). None of our runs predicted mushroom
corals, sponges, barrel sponges or fire coral accurately.</p>
      <p>Of our runs, the greatest precision was seen in MTRU1 (mAP0.5 = 0.021), though it did not
have the highest accuracy (2.767%). MTRU4 was most accurate (2.951%) despite having the
lowest overall precision (mAP0.5 = 0.011).</p>
      <p>
        Overall precision and average accuracy were also lower than the 2019 run of this model [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
however, we did show improvement in the prediction of submassive corals (MTRU1 = 0.030221,
2019 = 0) and table corals (MTRU1 = 0.003534, 2019 = 0), neither of which were predicted
with any accuracy in 2019.
      </p>
      <sec id="sec-5-1">
        <title>5.1. Image colour enhancement</title>
        <p>The colour adjustments made to the images increased the prediction accuracy of boulder
corals, encrusting corals and algae (Table 1). For boulder corals, colour enhancement may have
distinguished them from other reef substrates and enabled greater recognition of the coral over
rocks and other substratum that they can easily resemble. Encrusting corals would benefit for
the same reasons. Algae would likely show improvement with colour enhancement due to the
removal of green image tints, which would allow the natural green of the algae to become more
defined and clear. Brown and red algae would also benefit from the red channel correction to
make them more distinct from surrounding substrate.</p>
        <p>[Table 1: mAP0.5 and average accuracy, overall and per substrate category (Hard Coral - Branching, Submassive, Boulder, Encrusting, Table, Foliose, Mushroom; Soft Coral; Gorgonian; Sponge; Barrel Sponge; Fire Coral - Millepora; Algae), for each run.]</p>
        <p>Submassive corals were less accurately predicted with image enhancement, as well as table
corals, foliose corals, soft corals and gorgonians. Any loss in predictive power is likely due to
the general nature of the colour correction performed. While some images would improve with
the balancing and mixing at the levels set, others may have had colour blow outs or excessive
input from one or more RGB channels. This could have a blur-like effect, wherein neighbouring
substrate categories look indistinct from each other due to a lack of colour definition.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Augmenting annotations with NOAA data</title>
        <p>The NOAA data used was from a different location than the ImageCLEF data, which could
greatly impact mAP0.5 and prediction accuracy, as substrates from different geographic regions
can show vastly different morphologies. Of the 2 categories with increased annotations from
the NOAA data set, encrusting corals saw a greater accuracy while foliose corals had less
prediction accuracy. Encrusting corals are very similar globally despite varying conditions, so
increasing the number of annotations would likely improve the model's predictive power by
adding distinctive pixels to train on. This is not the case with foliose corals, which are more
likely to show differing morphologies as they are not flat to the substrate. Foliose corals also
have extensive structures that often appear layered, with many shadows
that could hamper the training capabilities of the model. Any shadows would appear as black,
probably flat-textured, regions of the image. These would provide no benefit to the model
and may cause it to relate any dark spots to foliose corals or to fail to recognise them at all.</p>
        <p>Adding NOAA data had a detrimental effect on the accuracy of most other substrate categories.
Where prediction accuracy was &gt; 0 without NOAA data (MTRU1 and MTRU2), adding NOAA
annotations reduced the prediction accuracy of submassive, boulder, and table corals as well
as algae. This could occur if the additional NOAA annotations skewed the model's perception
of each category and altered the predictions made as a result. Accuracy also decreased for
branching corals between MTRU1 and MTRU3 (unedited images), but increased between the
colour enhanced runs (MTRU2 and MTRU4) by 5.026%. Predictions were also more accurate for
soft coral (+0.228%) and gorgonians (+0.002%) when NOAA data was added but no colour
enhancement was performed. These substrate categories form distinctive morphologies
across all locations, which may have become easier to recognise with a more balanced data set, at
the expense of the other classes. Although the soft coral category encompasses several distinct
organisms with different morphologies, the abundance of annotations likely compensated by
providing many examples of each structure.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>Image colour enhancements can increase the accuracy of coral reef substrate predictions when
those substrates are otherwise difficult to distinguish from the surrounding environment. They
can also be detrimental when the editing performed is generalised instead of image-specific.
Similarly, augmenting the training data set with NOAA annotations can improve the predictions
of substrates that are either morphologically consistent across different geographical regions or
those that form distinct structures despite changing geography. Large increases in the number
of annotations should be reflected in a subsequent increase of accuracy in the represented
classes. When this does not occur, the abundance of data can impair the predictive power of the
model by blurring the line between substrate categories through incorrect annotation or by
skewing the predictions made as a result of an imbalanced data set.</p>
      <p>A combination of an augmented data set with distinct image enhancement pathways for
either different geographic locations or substrate categories may provide a more accurate and
precise prediction array. Combining these steps with improved hyperparameters would enhance
model performance and provide a coral reef substrate prediction tool that would be applicable
to reefs across the globe.</p>
      <sec id="sec-6-1">
        <title>6.1. Limitations of the model</title>
        <p>The use of a dedicated GPU greatly increases the computational power of machine learning
models. Training time can then be diminished and hyperparameters can be improved. The
machine we used to run our model was affected by a lack of GPU memory, which can only be
rectified by changing the graphics card to a more powerful one. The memory limitation heavily
impacted batch size testing, limiting tests to batch size 4 at most. DeepLabV3 works best with
a batch size of 16 (demonstrated on the PASCAL VOC data set [<xref ref-type="bibr" rid="ref13">13</xref>]). Using a computer with a
better GPU would allow for a greater batch size to be used which would improve the model
parameters and strengthen the power of the predictions.</p>
        <p>In the future we plan to include a greater volume of NOAA data when training the model.
This would increase the number of annotations per class across the training data. More
specific pixel expansion would also have enabled us to be more precise in training and may
have provided more pixels per class than otherwise achieved. A potential method could have
different expansion shapes set by class (e.g. boulder coral expands as a circle) and a pixel
selection/rejection threshold based on annotated pixel value.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Improving the approach</title>
        <p>
          Images and predictions would likely benefit from a more tailored colour correction approach.
This could be performed with the commonly used Rayleigh distribution [
          <xref ref-type="bibr" rid="ref12">12, 14, 15</xref>
          ] or with a
diferent approach such as red channel weighted compensations [ 16] that leverage the other
colour input channels to colour balance an image with accuracy.
        </p>
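        <p>As a sketch of what a Rayleigh-based correction could look like, each channel's empirical histogram can be remapped onto a Rayleigh CDF; the sigma value below is a tuning assumption and this is an illustration rather than the exact method of [14, 15].</p>
        <preformat>
import numpy as np

def rayleigh_stretch(channel, sigma=0.35):
    """Remap an 8-bit channel so its cumulative histogram
    follows a Rayleigh distribution."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist) / channel.size                 # empirical CDF in [0, 1]
    cdf = np.clip(cdf, 0.0, 1.0 - 1e-6)                  # avoid log(0)
    lut = np.sqrt(-2.0 * sigma**2 * np.log(1.0 - cdf))   # inverse Rayleigh CDF
    lut = lut / lut.max() * 255.0                        # rescale to 8-bit range
    return lut[channel].astype(np.uint8)
        </preformat>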
        <p>Leveraging the results from this approach, developing a staggered pipeline may improve
prediction accuracy in the future. A bounding box approach to gain a generalised location of
each substrate could be used to then send images through different processing steps, such as
colour correction, blur reduction, contrast changes, etc., based on the class found. This could
then feed into a pixel-wise prediction model to find the precise locations of substrate classes within
an image.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>
        We would like to thank the team that developed the 2019 base code that we used [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], particularly
Antonio Campello for his support and advice throughout this process. We would also like to
thank NOAA and the MTRU team of participants at the 2020 NOAA hackathon https://www.
gpuhackathons.org/event/noaa-gpu-hackathon, when we began working on this project.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Moberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Folke</surname>
          </string-name>
          ,
          <article-title>Ecological goods and services of coral reef systems</article-title>
          ,
          <source>Ecological Economics</source>
          <volume>29</volume>
          (
          <year>1999</year>
          )
          <fpage>215</fpage>
          -
          <lpage>233</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Bowen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rocha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Toonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Karl</surname>
          </string-name>
          ,
          <article-title>The origins of tropical marine biodiversity</article-title>
          ,
          <source>Trends in Ecology and Evolution</source>
          <volume>28</volume>
          (
          <year>2013</year>
          )
          <fpage>359</fpage>
          -
          <lpage>366</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mannion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farnsworth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Valdes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kelland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Allison</surname>
          </string-name>
          ,
          <article-title>Coupling of palaeontological and neontological reef coral data improves forecasts of biodiversity responses under global climatic change</article-title>
          ,
          <source>Royal Society Open Science</source>
          <volume>6</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wilkinson</surname>
          </string-name>
          ,
          <source>Methods for Ecological Monitoring of Coral Reefs</source>
          , 1 ed., Australian Institute of Marine Science, Townsville, Australia,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bennamoun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sohel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Boussaid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hovey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Kendrick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <article-title>Automatic annotation of coral reefs using deep learning</article-title>
          , in: OCEANS 2016 MTS/IEEE Monterey,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>O'Shea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nash</surname>
          </string-name>
          ,
          <article-title>An Introduction to Convolutional Neural Networks</article-title>
          ,
          <year>2015</year>
          . arXiv:1511.08458.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Říha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zita</surname>
          </string-name>
          ,
          <article-title>Coral reef annotation, localisation and pixel-wise classification using Mask R-CNN and Bag of Tricks</article-title>
          , in: CLEF2020 Working Notes, volume
          <volume>2696</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Steffens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ravenscroft</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hagras</surname>
          </string-name>
          ,
          <article-title>Deep segmentation: Using deep convolutional networks for coral reef pixel-wise parsing</article-title>
          ,
          <source>in: CLEF2019 Working Notes</source>
          , volume
          <volume>2380</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chamberlain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Wright</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. G.</given-names>
            <surname>Clift</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <article-title>Overview of ImageCLEFcoral 2019 task</article-title>
          , in: CLEF2019 Working Notes, volume
          <volume>2380</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chamberlain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Wright</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. G.</given-names>
            <surname>Clift</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <article-title>Overview of the ImageCLEFcoral 2020 task: Automated coral reef image annotation</article-title>
          ,
          <source>in: CLEF2020 Working Notes</source>
          , volume
          <volume>2696</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chamberlain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Oliver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Moustahfid</surname>
          </string-name>
          ,
          <article-title>Overview of the ImageCLEFcoral 2021 task: Coral reef image annotation of a 3D environment</article-title>
          , in: CLEF2021 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bucharest, Romania,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arendt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Brumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <article-title>The effects of colour enhancement and IoU optimisation on object detection and segmentation of coral reef structures</article-title>
          ,
          <source>in: CLEF2020 Working Notes</source>
          , volume
          <volume>2696</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.-C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Papandreou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schroff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Adam</surname>
          </string-name>
          ,
          <article-title>Rethinking Atrous Convolution for Semantic Image Segmentation</article-title>
          ,
          <year>2017</year>
          . arXiv:1706.05587.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdul Ghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mat Isa</surname>
          </string-name>
          ,
          <article-title>Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching</article-title>
          ,
          <source>SpringerPlus</source>
          <volume>3</volume>
          (
          <year>2014</year>
          )
          <fpage>757</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdul Ghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mat Isa</surname>
          </string-name>
          ,
          <article-title>Underwater image quality enhancement through integrated color model with Rayleigh distribution</article-title>
          ,
          <source>Applied Soft Computing</source>
          <volume>27</volume>
          (
          <year>2014</year>
          )
          <fpage>219</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>W.</given-names>
            <surname>Xiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Underwater image enhancement based on red channel weighted compensation and gamma correction model</article-title>
          ,
          <source>Opto-Electronic Advances</source>
          <volume>1</volume>
          (
          <year>2018</year>
          )
          <fpage>180024</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>