<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Floristic participation at LifeCLEF 2016 Plant Identification Task</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Julien Champ</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hervé Goëau</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexis Joly</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IRD, UMR AMAP</institution>
          ,
          <addr-line>Montpellier</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Inria ZENITH team</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>LIRMM</institution>
          ,
          <addr-line>Montpellier</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes the participation of the Floristic consortium in the LifeCLEF 2016 plant identification challenge [18]. The aim of the task was to produce a list of relevant species for a large set of plant images related to 1000 species of trees, herbs and ferns living in Western Europe, knowing that some of these images belonged to categories unseen in the training set, such as plant species from other areas, horticultural plants or even off-topic images (people, keyboards, animals, etc.). To address this challenge, we first experimented, as a baseline without any rejection procedure, with a Convolutional Neural Network (CNN) approach based on a slightly modified GoogLeNet model. In a second run, we applied a simple rejection criterion based on probability threshold estimation on the output of the CNN, one threshold per species, in order to automatically remove species propositions judged irrelevant. In the third run, rather than definitively eliminating some species predictions at the risk of removing false negative propositions, we applied various attenuation factors in order to revise the probability distributions given by the CNN into confidence scores expressing how much a query was related or not to the known species. More precisely, for this last run we used the geographical information and several cohesion measures in terms of observation, "organ" tags and taxonomy (genus and family levels), based on k-nn similarity search results within the training set.</p>
      </abstract>
      <kwd-group>
        <kwd>LifeCLEF</kwd>
        <kwd>plant</kwd>
        <kwd>leaves</kwd>
        <kwd>leaf</kwd>
        <kwd>flower</kwd>
        <kwd>fruit</kwd>
        <kwd>bark</kwd>
        <kwd>stem</kwd>
        <kwd>branch</kwd>
        <kwd>species</kwd>
        <kwd>retrieval</kwd>
        <kwd>images</kwd>
        <kwd>collection</kwd>
        <kwd>species identification</kwd>
        <kwd>citizen-science</kwd>
        <kwd>fine-grained classification</kwd>
        <kwd>evaluation</kwd>
        <kwd>benchmark</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Identification services are particularly promising for setting up massive
ecological monitoring systems, involving thousands of contributors at a very low cost.
A first step in this direction was achieved by the US consortium behind
LeafSnap (http://leafsnap.com/), an iPhone application allowing the identification of 184 common
American plant species based on pictures of cut leaves on a uniform background
(see [21] for more details). Then, the French consortium supporting Pl@ntNet
([17]) went one step further by building an interactive image-based plant
identification application that is continuously enriched by the members of a social
network specialized in botany. Inspired by the principles of citizen science and
participatory sensing, this project quickly reached a large public, with more than
1M downloads of the mobile applications ([8,7]). A related initiative is the plant
identification evaluation task organized since 2011 in the context of the
international evaluation forum CLEF (http://www.clef-initiative.eu/), which is based on the data collected within
Pl@ntNet.</p>
      <p>In recent years, deep convolutional neural networks have repeatedly demonstrated
record-breaking results in generic object recognition problems such as ImageNet
[20] and attract more and more interest in the computer vision and multimedia
communities. The promising effectiveness of this kind of approach on more
specific and fine-grained classification problems like plant identification was
confirmed last year [9], with impressive results regarding the fineness of the classes (at
species level) and the unbalanced data in terms of available images per species.
Rather than extracting features according to hand-tuned or psycho-vision
oriented filters, such methods work directly on the image signal. The weights
learned by the first convolutional layers allow relevant image filters to be built
automatically, whereas the intermediate layers are in charge of pooling these raw
responses into high-level visual patterns. The last fully connected layers work
more traditionally, as any discriminative classifier, on the image representation
resulting from the previous layers.</p>
      <p>A known drawback of deep convolutional neural networks is that they
require a lot of training data, mainly because of the huge number of parameters
to be learned. This is particularly true here, where the training set is highly
unbalanced and includes many classes with few instances. The possibility to
efficiently fine-tune an already learned model, i.e. to adapt the architecture and resume
training from the already learned model weights, is one of the main strengths of
CNNs. This is one key to explaining the results obtained last year on the plant
identification task.</p>
      <p>However, this year the task introduced an additional challenge by considering
an open-set classification problem, i.e. one where some of the queries of the test set do
not belong to the known species [6]. More precisely, according to the description
of the task, these unseen images came from the Pl@ntNet mobile application and
reflect the diversity of the visual content that users produce, despite
the application being dedicated to wild plants from Western Europe. More
precisely, these pictures can be:
- off-topic pictures such as people, keyboards, landscapes, etc.,
- horticultural plants (house &amp; garden plants, vegetables &amp; fruits),
- and wild plants, but observed from all around the world and outside
the list of known species in the training set.</p>
      <p>Considering the off-topic pictures, one can guess that it must be rather easy
to build a system predicting low or scattered probabilities over the 1000 known
species, since the visual content should be very different from the training dataset.
Indeed, strong lines and corners from a manufactured object will certainly produce
visual features very different from the textured and mostly green visual content
learned from the training dataset. The difficulty of the task is most probably
concentrated on the queries related to horticultural and wild plants, whose images
share more visual similarities with the training set.</p>
      <p>That said, CNNs, like the vast majority of machine learning tools and
recognition systems, are designed for a static closed world, where the primary assumption
is that all categories are known. We can admit that this classification problem is
not much explored in computer vision, while it is a frequent use case in the
real world, even if some previous work has already been done in this direction with
CNNs [1].</p>
    </sec>
    <sec id="sec-2">
      <title>Description of the submitted runs</title>
      <sec id="sec-2-1">
        <title>Floristic Run 1</title>
        <p>To address this challenge, we used a CNN model without any rejection procedure
in order to obtain a first run, considered here as a baseline, with the expectation
that a query related to a known species will obtain a probability distribution
concentrated on one or a few relevant species (for instance species of the
same genus). The opposite expectation is that a query related to an unseen class
will obtain a probability distribution spread over many classes.</p>
        <p>We used Caffe [14], a deep learning framework allowing us to use
CNN architectures and models from the literature. We chose and slightly
modified the "GoogLeNet GPU implementation" model from the Caffe Model Zoo,
based on Google's winning architecture in the ImageNet 2014 challenge [25].
The GoogLeNet architecture consists of a 22-layer deep network with a softmax
loss as the top classifier. It is composed of "inception modules" stacked
on top of each other. Intermediate inception modules are connected to
auxiliary classifiers during training, so as to encourage discrimination in the lower
stages of the classifier, increase the gradient signal that gets propagated back,
and provide additional regularization. These auxiliary classifiers are only used
during training, and then discarded.</p>
        <p>We modified this network by adding batch normalisation at each
level between the pooling and the Local Response Normalization layers in order
to accelerate the learning phase [13]. As mentioned in that paper, we also
removed the dropout layers. Combined with Parametric Rectified Linear Units
(PReLU) instead of ReLU layers, this model finally prevents the risk of overfitting
[11]. Since we did not find such a GoogLeNet implementation, we learned this model
on the ImageNet 2014 dataset (one week, 1,100,000 iterations with a batch size
of 32, reaching a final training loss of around 0.12).</p>
        <p>Finally, we fine-tuned this model on the LifeCLEF 2016 Plant Task
training dataset. For each image in the training and test sets, we cropped
the largest square in the center and resized it to 256x256 pixels. As
implemented in the Caffe library, we also used a simple data
augmentation technique, consisting in randomly cropping a 224x224 pixel image and
mirroring it horizontally.</p>
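        <p>The cropping and augmentation geometry described above can be sketched as follows (a minimal sketch in Python; the function names and the pure-coordinate formulation are ours, not part of the authors' Caffe pipeline):</p>
        <preformat>
```python
import random

def center_square_box(w, h):
    """Return (left, top, right, bottom) of the largest centered square."""
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    return (left, top, left + side, top + side)

def random_crop_box(size=256, crop=224, rng=random):
    """Random 224x224 crop offsets within a 256x256 image, plus a random
    horizontal-mirror flag, as in the data augmentation described above."""
    x = rng.randint(0, size - crop)   # 256 - 224 = 32 possible offsets
    y = rng.randint(0, size - crop)
    mirror = rng.random() > 0.5
    return (x, y, x + crop, y + crop), mirror
```
        </preformat>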
        <p>As a reminder, here are the most important Caffe parameters for our
first submitted run, "Floristic Run 1". The base learning rate was
set to 0.0075, which is rather high compared to the usual learning rates applied to
models without batch normalisation. The learning rate is divided by 10 every
42451 iterations with a batch size of 16, which means that each training image
passes 6 times during a step (113204 images x 6 / 16 gives the step size). We used
only 2 steps and finally stopped the training after 90k iterations. For information,
this fine-tuned model stopped with a top-1 accuracy on the training set itself
of 0.9378.</p>
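        <p>The step size stated above can be checked with the arithmetic from the text (integer division, as used by Caffe's solver):</p>
        <preformat>
```python
# Step size arithmetic: each training image passes 6 times during a step,
# with a batch size of 16.
train_images = 113204
epochs_per_step = 6
batch_size = 16

step_size = train_images * epochs_per_step // batch_size
print(step_size)  # 42451
```
        </preformat>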
        <p>To obtain the first run, "Floristic Run 1", we directly applied this fine-tuned
model to the 8000 test images and limited the responses to the first 50 predicted
species for each (when necessary).</p>
      </sec>
      <sec id="sec-2-2">
        <title>Floristic Run 2: run 1 + rejection procedure</title>
        <p>In a second run, we added to the first approach a simple rejection procedure based
on the estimation of probability thresholds, one per species. The main idea
was to detect and remove species predictions judged irrelevant. If the
thresholds are correctly estimated, a query related to an unseen category should
not be associated with any prediction and thus should not occur in the run file.</p>
        <p>Given a species, to estimate its probability threshold we computed the
probability for each of its training images and then selected the lowest value as
the threshold. This threshold represents, in a way, the limit of the visual knowledge
of the species according to the model and its available training images.</p>
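        <p>The threshold estimation and its application to a ranked prediction list can be sketched as follows (a minimal sketch; the data structures and function names are ours):</p>
        <preformat>
```python
def species_thresholds(probs_by_species):
    """For each species, take the lowest softmax probability obtained on its
    own training images as its rejection threshold."""
    return {sp: min(probs) for sp, probs in probs_by_species.items()}

def apply_rejection(predictions, thresholds):
    """Keep only (species, probability) predictions at or above that
    species' estimated threshold."""
    return [(sp, p) for sp, p in predictions if p >= thresholds.get(sp, 0.0)]
```
        </preformat>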
        <p>Finally, we produced the run file "Floristic Run 2" by applying the estimated
thresholds to the predictions given by run 1. This approach halved, on
average, the number of species predictions per query, with numerous queries
associated with only one species prediction (1470 queries among 8000, while
run 1 contained only 83 queries with a response size of 1). In the end, however, none of
the queries were entirely rejected.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Floristic Run 3: run 1 + mitigating factors</title>
        <p>The risk of a rejection procedure like the one in run 2 is to definitively
remove false negative species propositions, notably if the thresholds are too high
while the probability of a correct species on a query is too low. Therefore, in
the third run, we preferred to keep all the species predictions produced in run 1
and apply various attenuation factors to them. Indeed, the metric of the task
is the classification MAP, i.e. the Mean of the Average Precision of each class
taken individually: given a class, all the queries are sorted by their probability
for this class, and the Average Precision depends directly on the ranks
of the queries which really belong to this class. The main idea here was to reorder
the list of queries by downscaling their initial probability values by several
factors between [0.9, 1.0] (the lower bound fixed arbitrarily to 0.9), with the expectation that
irrelevant queries would finally be pushed to the tail of the list while relevant
queries would maintain their rank.</p>
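        <p>The task metric described above can be sketched as follows (a minimal illustration of per-class Average Precision and classification MAP; the function names are ours):</p>
        <preformat>
```python
def average_precision(ranked_query_ids, relevant_ids):
    """AP for one class: queries are sorted by decreasing probability for that
    class; precision is accumulated at each rank holding a relevant query."""
    relevant = set(relevant_ids)
    hits = 0
    total = 0.0
    for rank, qid in enumerate(ranked_query_ids, start=1):
        if qid in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(per_class_rankings, per_class_relevant):
    """Classification MAP: the mean of the per-class average precisions."""
    aps = [average_precision(per_class_rankings[c], per_class_relevant[c])
           for c in per_class_relevant]
    return sum(aps) / len(aps)
```
        </preformat>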
        <p>For each query, five distinct factors were applied, mixing information
available in the metadata provided with the dataset and consistency
measures computed on the response of a visual similarity search.
The similarity search is produced by a fast nearest-neighbor indexing and search
method applied to the 1024-dimensional high-level feature vector extracted with
the CNN model from the second-to-last layer, "pool5/7x7 s1". Each image
feature is compressed with RMMH [15] (Random Maximum Margin Hashing) and
its approximate k nearest neighbors are searched by probing multiple
neighboring buckets in the consulted hash table (according to the a posteriori multi-probe
algorithm described in [16]). In that way, the k-nn search gives a complementary
view of the training dataset, from which we re-examine the species predictions
given by the softmax output of the CNN model. More precisely, we can
compare the metadata of the k-nns returned by the system with the metadata of
a query in order to compute several factors (five here) reporting various contextual
information:
- a factor Sclasses based on the classes returned by the k-nns,
- a factor Sorgans based on the "organ" tags,
- two "taxonomic" factors Sgenus and Sfamily at the genus and family levels,
- and a geolocalisation factor Sgeoloc.</p>
        <p>Factors are estimated individually for each query i, with values belonging to
[0.9, 1.0], and are directly applied to the probability distribution Pi in order to
obtain confidence scores Ci:</p>
        <p>Ci = Pi × Sclasses × Sorgans × Sgenus × Sfamily × Sgeoloc</p>
        <sec id="sec-2-3-4">
          <title>Computing the factors</title>
          <p>To compute these factors, we chose to select the most visually
similar images belonging to distinct observations. We did not directly take the
5 most similar images, because these can be near-duplicate images
belonging to a same observation and thus carry poor contextual information.</p>
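          <p>The selection of neighbors from distinct observations can be sketched as follows (a minimal sketch; the observation identifiers are assumed to come from the dataset metadata):</p>
          <preformat>
```python
def top_k_distinct_observations(neighbors, k=5):
    """neighbors: (observation_id, metadata) pairs sorted by decreasing
    visual similarity. Keep the best match of each observation until k are
    collected, so near-duplicates of one observation count only once."""
    seen = set()
    picked = []
    for obs_id, meta in neighbors:
        if obs_id not in seen:
            seen.add(obs_id)
            picked.append((obs_id, meta))
            if len(picked) == k:
                break
    return picked
```
          </preformat>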
          <p>Class distribution factor Sclasses: this factor represents whether the k-nns
converge to a same class or not: if the k-nns all belong to the same class, the factor
is neutral (Sclasses = 1), while the more distinct the returned classes are, the more
the factor tends towards 0.9. Based on the occurrences of the classes appearing
among the k-nns, we can compute a probability distribution P and then compute the
entropy Hc defined by:</p>
          <p>Hc = - sum_{i=1}^{k} Pi log2(Pi)</p>
          <p>with k = 5 observations. The entropy Hc is equal to 0 when all the k-nns
belong to a same class, while it is equal to its maximal value Hcmax =
log2(k) = log2(5) when each k-nn belongs to a different class. An affine
function then directly gives the factor:</p>
          <p>Sclasses = 1 - 0.1 × Hc / log2(5)</p>
          <p>Organ factor Sorgans: following the same approach, we count the
number of distinct "organ" tags reported by the k-nns among the available tags
(flower, fruit, leaf, scan, stem, entire, branch), extract a probability distribution
over these organs, compute the entropy Ho and finally compute the factor:</p>
          <p>Sorgans = 1 - 0.1 × Ho / log2(5)</p>
          <p>Taxonomic factors Sgenus and Sfamily: following the same
formulas, from the occurrences of the distinct genera (respectively families) reported by the nns,
we extract a probability distribution over the genera (families), compute the
entropy Hg (respectively Hf) and finally compute the factors:</p>
          <p>Sgenus = 1 - 0.1 × Hg / log2(5)</p>
          <p>Sfamily = 1 - 0.1 × Hf / log2(5)</p>
          <p>Geolocalisation factor Sgeoloc: here we did not use a visual similarity k-nn
search, but directly computed a factor based on the great-circle distance dist
between the GPS coordinates given by the metadata of a query and
coordinates representing roughly the center of France (latitude = 46.3, longitude
= 2.3): Sgeoloc = 1 - dist / distancemax, where distancemax = 20000 km is roughly
the farthest distance on Earth from the center of France. By default dist = 500
km if the metadata of a query does not contain GPS coordinates, which
gives a factor of Sgeoloc = 0.975.</p>
          <p>Official results: Table 1 reports the scores of the 29 submitted runs, and Figure 2 gives a
complementary graphical overview of all the results obtained by the participants.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conclusion</title>
      <p>The Floristic team submitted 3 runs: the first run, "Floristic Run 1", was based on the
well-known GoogLeNet CNN architecture, slightly modified with the use
of Parametric Rectified Linear Units and of batch normalisation
layers in order to accelerate training and prevent overfitting of the learned model. This
first approach obtained an intermediate MAP of 0.619, while the best system
obtained a MAP of 0.742. Unfortunately, by adding the rejection criterion, we
slightly degraded the MAP (down to 0.611, obtained by "Floristic Run 2"). This
rejection criterion was certainly too strong, with estimated probability thresholds
that were too high, and probably removed too many correct species predictions. On
the other hand, the contextual information exploited in "Floristic Run 3" for revising
the species predictions slightly improved the MAP (up to 0.627), but not enough
to reach the performance of the best systems.</p>
      <p>11. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing
human-level performance on ImageNet classification. CoRR abs/1502.01852 (2015), http://arxiv.org/abs/1502.01852
12. Hsu, T.H., Lee, C.H., Chen, L.H.: An interactive flower image recognition system.</p>
      <p>Multimedia Tools Appl. 53(1), 53-73 (May 2011), http://dx.doi.org/10.1007/s11042-010-0490-6
13. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training
by reducing internal covariate shift. CoRR abs/1502.03167 (2015), http://arxiv.org/abs/1502.03167
14. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R.,
Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding.
arXiv preprint arXiv:1408.5093 (2014)
15. Joly, A., Buisson, O.: Random maximum margin hashing. In: Computer Vision
and Pattern Recognition (CVPR), 2011 IEEE Conference on. pp. 873-880 (June 2011)
16. Joly, A., Buisson, O.: A posteriori multi-probe locality sensitive hashing. In:
Proceedings of the 16th ACM International Conference on Multimedia. pp. 209-218.
MM '08, ACM, New York, NY, USA (2008), http://doi.acm.org/10.1145/1459359.1459388
17. Joly, A., Goeau, H., Bonnet, P., Bakic, V., Barbe, J., Selmi, S., Yahiaoui, I., Carre,
J., Mouysset, E., Molino, J.F., et al.: Interactive plant identification based on social
image data. Ecological Informatics 23, 22-34 (2014)
18. Joly, A., Goeau, H., Glotin, H., Spampinato, C., Bonnet, P., Vellinga, W.P.,
Champ, J., Planque, R., Palazzo, S., Muller, H.: LifeCLEF 2016: multimedia life
species identification challenges. In: Proceedings of CLEF 2016 (2016)
19. Kebapci, H., Yanikoglu, B., Unal, G.: Plant image retrieval using color, shape and
texture features. Comput. J. 54(9), 1475-1490 (Sep 2011), http://dx.doi.org/10.1093/comjnl/bxq037
20. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep
convolutional neural networks. In: Advances in Neural Information Processing Systems.
pp. 1097-1105 (2012)
21. Kumar, N., Belhumeur, P.N., Biswas, A., Jacobs, D.W., Kress, W.J., Lopez, I.C.,
Soares, J.V.: Leafsnap: A computer vision system for automatic plant species
identification. In: Computer Vision - ECCV 2012, pp. 502-516. Springer (2012)
22. Mouine, S., Yahiaoui, I., Verroust-Blondet, A.: Advanced shape context for plant
species identification using leaf image retrieval. In: Ip, H.H.S., Rui, Y. (eds.) ICMR
'12 - 2nd ACM International Conference on Multimedia Retrieval. ACM, Hong
Kong, China (Jun 2012), https://hal.inria.fr/hal-00726785
23. Nilsback, M.E., Zisserman, A.: Automated flower classification over a large number
of classes. In: Computer Vision, Graphics, Image Processing, 2008. ICVGIP '08.</p>
      <p>Sixth Indian Conference on. pp. 722-729 (Dec 2008)
24. Spampinato, C., Mezaris, V., van Ossenbruggen, J.: Multimedia analysis for
ecological data. In: Proceedings of the 20th ACM International Conference on Multimedia.
pp. 1507-1508. ACM (2012)
25. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D.,
Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. CoRR abs/1409.4842
(2014), http://arxiv.org/abs/1409.4842
26. Trifa, V.M., Kirschel, A.N.G., Taylor, C.E., Vallejo, E.E.: Automated species
recognition of antbirds in a Mexican rainforest using hidden Markov models. Journal of
the Acoustical Society of America 123 (2008)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bendale</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boult</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Towards open set deep networks</article-title>
          .
          <source>arXiv preprint arXiv:1511.06233</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ee</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pham</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roe</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , Zhang, J.:
          <article-title>Sensor network for the monitoring of ecosystem: Bird species recognition</article-title>
          .
          <source>In: Intelligent Sensors, Sensor Networks and Information</source>
          ,
          <year>2007</year>
          .
          <source>ISSNIP</source>
          <year>2007</year>
          . 3rd International Conference on. pp.
          <volume>293</volume>
          {
          <issue>298</issue>
          (Dec
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Cerutti</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tougne</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vacavant</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coquin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>A Parametric Active Polygon for Leaf Segmentation and Shape Estimation</article-title>
          .
          <source>In: 7th International Symposium on Visual Computing</source>
          . p.
          <fpage>1</fpage>
          .
          <string-name>
            <given-names>Las</given-names>
            <surname>Vegas</surname>
          </string-name>
          , United
          <string-name>
            <surname>States</surname>
          </string-name>
          (
          <year>Sep 2011</year>
          ), https://hal. archives-ouvertes.fr/hal-00622269
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ellison</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farnsworth</surname>
            ,
            <given-names>E.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kress</surname>
            ,
            <given-names>W.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neill</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Best</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pickering</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stevenson</surname>
            ,
            <given-names>R.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Courtney</surname>
            ,
            <given-names>G.W.</given-names>
          </string-name>
          , VanDyk,
          <string-name>
            <surname>J.K.</surname>
          </string-name>
          :
          <article-title>Next-generation field guides</article-title>
          .
          <source>BioScience</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Gaston</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>O'Neill</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <source>Automated species identification: why not? Philosophical Transactions of the Royal Society of London B: Biological Sciences</source>
          <volume>359</volume>
          (
          <issue>1444</issue>
          ),
          <volume>655</volume>
          {
          <fpage>667</fpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Goeau, H.,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Plant identification in an open-world (LifeCLEF 2016)</article-title>
          .
          <source>In: CLEF working notes 2016</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. Goeau, H.,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Affouard</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bakic</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barbe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dufour</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Selmi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yahiaoui</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vignau</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , et al.:
          <article-title>Pl@ntNet mobile 2014: Android port and new features</article-title>
          .
          <source>In: Proceedings of International Conference on Multimedia Retrieval</source>
          . p.
          <fpage>527</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. Goeau, H.,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bakic</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barbe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yahiaoui</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Selmi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carre</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barthelemy</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boujemaa</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , et al.:
          <article-title>Pl@ntNet mobile app</article-title>
          .
          <source>In: Proceedings of the 21st ACM international conference on Multimedia</source>
          . pp.
          <volume>423</volume>
          {
          <fpage>424</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. Goeau, H.,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>LifeCLEF plant identification task 2015</article-title>
          . In: CLEF working notes
          <year>2015</year>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. Goeau, H.,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Selmi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mouysset</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joyeux</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Visual-based plant species identification from crowdsourced data</article-title>
          .
          <source>In: MM'11 - ACM Multimedia</source>
          <year>2011</year>
          . pp.
          <volume>0</volume>
          {
          <issue>0</issue>
          . ACM, Scottsdale, United
          <string-name>
            <surname>States</surname>
          </string-name>
          (
          <year>Nov 2011</year>
          ), https://hal.inria. fr/hal-00642236
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>