<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Conference and Labs of the Evaluation Forum, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Overview of FungiCLEF 2024: Revisiting Fungi Species Recognition Beyond 0-1 Cost</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lukas Picek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Milan Šulc</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jiří Matas</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Cybernetics, Faculty of Applied Sciences, University of West Bohemia</institution>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Second Foundation</institution>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>The Center for Machine Perception Dept. of Cybernetics, FEE, Czech Technical University in Prague</institution>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>0</volume>
      <fpage>9</fpage>
      <lpage>12</lpage>
      <abstract>
        <p>The third edition of the fungi recognition challenge, FungiCLEF 2024, organized within LifeCLEF, advances the field of mushroom species identification using computer vision and machine learning. Building on the Danish Fungi 2020 dataset and incorporating new data from the CzechFungi app, FungiCLEF 2024 challenges participants to recognize fungi species from images and metadata, focusing on efficient inference and the minimization of confusion between edible and poisonous species. Strict limits on computational complexity ensure that the resulting solutions are practical for use in real-world settings with limited computational resources. The competition attracted seven teams, with five outperforming the provided baseline, which was based on a pre-trained EfficientNet-B1 model. This overview paper provides (i) a comprehensive description of the challenge and the provided baseline method, (ii) detailed characteristics of the dataset and task specifications, (iii) an examination of the methods employed by contestants, and (iv) a discussion of the competition outcomes. The results highlight incremental advancements in fungi recognition, showcasing innovative approaches and techniques that push the limits of previous work.</p>
      </abstract>
      <kwd-group>
        <kwd>LifeCLEF</kwd>
        <kwd>FungiCLEF</kwd>
        <kwd>fine-grained visual categorization</kwd>
        <kwd>metadata</kwd>
        <kwd>open-set recognition</kwd>
        <kwd>fungi</kwd>
        <kwd>species identification</kwd>
        <kwd>machine learning</kwd>
        <kwd>computer vision</kwd>
        <kwd>classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Fungi recognition systems based on computer vision and machine learning [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] are transforming
the field of mycology, making it easier than ever for researchers, enthusiasts, and professionals to
identify mushroom species. Tasks that once required extensive expertise, i.e., studying the existing
literature, can now be accomplished in a few seconds. The fungi identification service offered by
the Atlas of Danish Fungi [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] exemplifies this advancement: users simply capture an image of their
specimen, and the system promptly generates a list of probable species matches. This facilitates
efficient manual verification by allowing users to compare their observations with reference photos and
species descriptions. Additionally, it encourages users to contribute valuable biodiversity observations,
enhancing the overall understanding and documentation of fungal diversity.
      </p>
      <p>
        Despite the impressive performance of automatic fungi species recognition systems, significant
challenges remain due to the complexity and diversity of fungal species. One major challenge is the
vast number of fine-grained categories (species) that exist. Many of these species exhibit high visual
similarities, making it difficult to distinguish between them even though they may not be genetically
related (see Figure 1). This visual similarity can easily lead to misidentification, as the algorithm might
not reliably discern subtle differences in color, shape, or texture. Additionally, there is significant
intra-class variance within species. The appearance of fungal specimens can vary widely based on
several factors, including genotype, age, seasonal conditions, and the local environment. For instance, a
mushroom of the same species can look markedly different when it is young compared to when it is
mature. Seasonal variations can affect the color and size of fungi, while local environmental conditions
such as humidity, light exposure, and soil type can further influence their appearance [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. These
variations pose a considerable challenge for automatic recognition systems, which must be robust
enough to account for such differences to ensure accurate identification. Moreover, the quality and
resolution of images submitted for recognition can vary significantly, impacting the system’s ability to
accurately classify the observations. Images taken in the wild might suffer from poor lighting, occlusions,
or background noise, adding another layer of complexity to the recognition task [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. These challenges
underscore the importance of developing new methods capable of handling the complex details of
fungal diversity, pushing the boundaries of the current state of the art in fungi species recognition and
fine-grained visual categorization in general.
      </p>
      <p>
        To allow continual incremental improvements in fungi recognition, we organize an annual research
competition – FungiCLEF. The latest edition, which was part of LifeCLEF 2024 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and the FGVC11
workshop at CVPR 2024, builds on the foundations laid by its predecessors [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. This year’s challenge
introduces an updated dataset and continues to emphasize model efficiency by imposing computational
and memory constraints. The competition presents participants with fungi recognition scenarios that
address real-world applications, including fungi species identification and distinguishing between
poisonous and edible mushrooms.
      </p>
      <p>By promoting innovation in this field, FungiCLEF 2024 aims to further bridge the gap between
computer vision capabilities and practical mycological needs, potentially impacting areas ranging from
biodiversity research to public health and beyond.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Challenge Description</title>
      <p>
        Efficient and scalable species recognition is essential for large-scale initiatives such as citizen science
projects [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], which often operate with limited computational resources. Species identification typically
relies not just on the visual features of the specimen but also on additional contextual data about habitat,
substrate, location, etc. For the FungiCLEF 2024 competition, we have developed a benchmark with rich
metadata and expert-verified labels for testing the performance of systems that combine visual and
contextual information. Given that mushrooms are often foraged for consumption, the competition
also addresses scenarios related to misclassification between edible and poisonous species. This ensures
robust outcomes in fungal species recognition, enhancing scientific research and public safety.
      </p>
      <p>To enable use in practical applications, all participants had to submit their models via the HuggingFace
evaluation platform, and all models had to satisfy computational limits. Each classification model had to
complete its predictions within a 120-minute time limit on a given HuggingFace server instance (Nvidia T4
small: 4 vCPU, 15GB RAM, 16GB VRAM).</p>
      <sec id="sec-2-1">
        <title>2.1. Dataset</title>
        <p>
The FungiCLEF 2024 dataset builds upon the previous editions of FungiCLEF [
          <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
          ], LifeCLEF
[
          <xref ref-type="bibr" rid="ref14 ref15 ref16">14, 15, 16</xref>
          ], and the Danish Fungi 2020 dataset [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. All training data is derived from a citizen science
platform – the Atlas of Danish Fungi. Each fungi observation in the provided dataset has undergone
expert validation, ensuring high-quality species labels. The dataset features rich observation metadata,
i.e., information about habitat, substrate, timestamp, location, etc. The provided subsets (training,
validation, and test) are briefly described below, and their detailed statistics are listed in Table 1.
The training set is based on 177,170 real and expert-verified fungi observations of 1,604 species. The
dataset is built exclusively from the Danish Fungi 2020 data by combining the training and public test
sets and includes 295,938 images.
        </p>
        <p>The validation set comprises expert-validated observations with species labels collected solely in
2022. This subset includes 3,299 fungi species and contains 45,021 observations with many "unknown"
species.</p>
      <p>The test set is based on two subsets originating from two applications (the Atlas of Danish Fungi and
CzechFungi) and two countries, Denmark and the Czech Republic, respectively. The CzechFungi subset
is small, containing only around 200 submissions, and is included primarily as a control set to
prevent cheating. The test set was split 80/20 for public and private evaluation, respectively.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Evaluation Protocol</title>
        <p>
          As in the previous year [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], the challenge aims at predicting the species given a visual observation
and metadata, and considers scenarios focusing on correct species classification as well as on classifying
edible vs. poisonous mushrooms. Namely, the goal is to minimize the empirical loss ℒ for decisions q(x)
over observations x and true labels y, given a cost function W(y, q(x)).
        </p>
        <p>ℒ = ∑ᵢ W(yᵢ, q(xᵢ)). (1)</p>
        <p>Different recognition scenarios and their cost functions W(y, q(x)) are described together with their
motivation in the points below:
• Track 1: Standard classification with "unknown" category (an open-set scenario). The first
metric was the standard classification accuracy, i.e., the average correctness of the predicted class.
All species not represented in the training set had to be correctly classified as an "unknown"
category. The cost function is the standard 0/1 loss, i.e.,
W₁(y, q(x)) = 0 if q(x) = y, and 1 otherwise. (2)
• Track 2: Cost for confusing edible species for poisonous and vice versa. Let us have a
function p that indicates dangerous (poisonous) species as p(y) = 1 if species y is poisonous, and
p(y) = 0 otherwise. Let us denote PSC the cost for poisonous species confusion (if a poisonous
observation was misclassified as edible) and ESC the cost for edible species confusion (if an edible
observation was misclassified as poisonous).</p>
        <p>W₂(y, q(x)) =
⎧ 0 if p(y) = p(q(x)),
⎨ PSC if p(y) = 1 and p(q(x)) = 0,
⎩ ESC otherwise.
(3)</p>
        <p>For the benchmark, we set ESC = 1 and PSC = 100.
• Track 3: A user-focused loss composed of both the classification error and the
poisonous/edible confusion. Assuming the user is interested both in the species classification
as well as in low edible-to-poisonous species confusion (and vice versa), the third cost function
simply combines W₁ and W₂:
W₃(y, q(x)) = W₁(y, q(x)) + W₂(y, q(x)). (4)</p>
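        <p>To make the three cost functions concrete, here is an illustrative pure-Python sketch (not the official evaluation code; the integer label encoding and the `poisonous` lookup are assumptions):

```python
# Illustrative sketch of the FungiCLEF 2024 track losses (not the official
# evaluation code). Labels are integer species ids; `poisonous` maps a
# species id to 1 (poisonous) or 0 (edible/harmless).
ESC, PSC = 1, 100  # edible / poisonous species confusion costs

def w1(y, q):
    """Track 1: standard 0/1 classification loss."""
    return 0 if q == y else 1

def w2(y, q, poisonous):
    """Track 2: asymmetric poisonous/edible confusion cost."""
    if poisonous[y] == poisonous[q]:
        return 0
    return PSC if poisonous[y] == 1 else ESC

def w3(y, q, poisonous):
    """Track 3 (the official ranking metric): sum of the two costs above."""
    return w1(y, q) + w2(y, q, poisonous)

def empirical_loss(labels, preds, poisonous):
    """Empirical Track 3 loss summed over all observations."""
    return sum(w3(y, q, poisonous) for y, q in zip(labels, preds))
```

With ESC = 1 and PSC = 100, a single poisonous-as-edible error costs as much as one hundred edible-as-poisonous errors, which is what motivates the poison-aware strategies described in Section 4.</p>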
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Baseline</title>
        <p>
          To enable an easier start for all participants and straightforward model evaluation, we provide a weak
baseline based on the pre-trained PyTorch EfficientNet-B1 [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] model wrapped into a HuggingFace
repository, allowing direct evaluation on the private test set. This repository includes the model
weights and inference scripts. In addition to the PyTorch-based submission, we offer an example for
submitting an ONNX model. This ONNX model was initially provided for another LifeCLEF [
          <xref ref-type="bibr" rid="ref18 ref9">9, 18</xref>
          ]
competition, SnakeCLEF 2024 [19]. The pre-trained EfficientNet-B1 model used in our baseline was
originally published in the Danish Fungi 2020 dataset [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This model has demonstrated relatively
strong performance in fungi species classification and serves as a robust starting point. Overall, our
goal was to offer a comprehensive and accessible starting point for all participants, enabling them to
focus on developing novel solutions and improving upon the provided baseline.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Timeline</title>
        <p>The FungiCLEF 2024 competition was launched on March 13, 2024, and was promoted through the
LifeCLEF, HuggingFace, and FGVC challenge web pages, inviting participants to register. The
competition ran for approximately three months, with the final submission deadline on May 24. Similar
to the previous year, the test data remained confidential. Participants were allowed to make up to
five submissions per day using the HuggingFace evaluation platform to assess their models on the
test set. Two weeks before the deadline, the submission limit was increased to ten per day. After the
competition concluded, all participants had the opportunity to submit post-competition entries for
further evaluation of their ablations.</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Working Notes</title>
        <p>Participants were strongly encouraged to submit both their code and a detailed technical report (Working
Notes) to ensure their results can be fully reproduced. All the submitted Working Notes underwent
thorough review and were given comprehensive feedback by 2-3 experts with extensive publication records
in Computer Vision and Machine Learning. This rigorous review process was designed to guarantee
reproducibility and maintain quality standards. The review was single-blind, allowing participants to
respond with up to two rebuttals to address any feedback or concerns raised by the reviewers. These
working notes provide an in-depth analysis of the techniques employed, including hyperparameter
tuning, model ensembling, and loss function selection, offering valuable insights into the development
of methods for fungal image classification.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Challenge Results</title>
      <p>This year, the three tracks of FungiCLEF have three different best-performing submissions by three
different teams¹. However, for the official ranking, the Track 3 score was selected. The best-performing
submission in Track 1, by Jack Etheredge [20], achieved a score of 0.240. The best score in Track 2
was achieved by team upupup [21] with a score of 0.072. Finally, the best score for Track 3, the main
competition track, was achieved by team IES [22] with a score of 0.362. The official challenge results,
in terms of the Track 1, Track 2, and Track 3 metrics, are reported in Figure 2. For completeness, we note
that a post-competition submission by team DS@GT [23] achieved even higher recognition scores,
highlighting the competitive and evolving nature of the challenge. This post-competition effort serves
as a testament to the ongoing advancements and innovations in the field, extending beyond the official
competition period.</p>
      <p>[Figure 2: Official challenge results; the vertical axis shows the Track 1 metric (0.0–0.6).]</p>
    </sec>
    <sec id="sec-4">
      <title>4. Participants and Methods</title>
      <p>This year, a total of seven teams participated in the FungiCLEF 2024 challenge; of these, five
outperformed the baseline EfficientNet-B1 in Track 3, and five submitted working notes, of which
four passed the review process and were accepted for publication. The methodologies varied and
included a range of techniques, from state-of-the-art neural network architectures to sophisticated
strategies for incorporating the metadata. Details of the best methods and systems used are synthesized
below and further developed in the participants' working notes [20, 21, 22, 23].</p>
      <p>
        Team IES [22] (Top1) utilized a Swin Transformer V2 Base [24] architecture as a feature extractor
and used a metadata-integration approach similar to that of Ren et al. [25] from the previous edition of
FungiCLEF [
        <xref ref-type="bibr" rid="ref13">13</xref>
          ]. In addition, they introduced (i) a poisonous re-ranking that prevents predicting an edible
species when there is a significant chance of the sample being poisonous, and (ii) a genus loss that
improves the regularization of the feature space.
¹Each team usually had different "best" submissions for each track.
      </p>
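      <p>A poison-aware re-ranking of this kind could be sketched along the following lines (a hypothetical illustration, not team IES's published implementation; the threshold value and the probability representation are assumptions):

```python
# Hypothetical sketch of a poison-aware re-ranking step (not team IES's
# published implementation). `species_probs` holds one probability per
# species id; `poisonous` flags each species id as poisonous (True) or not.
def poison_aware_rerank(species_probs, poisonous, threshold=0.3):
    """Restrict the arg-max to poisonous candidates whenever the total
    probability mass on poisonous species exceeds `threshold` (illustrative)."""
    poison_mass = sum(p for i, p in enumerate(species_probs) if poisonous[i])
    candidates = list(range(len(species_probs)))
    if poison_mass > threshold:
        candidates = [i for i in candidates if poisonous[i]]
    return max(candidates, key=species_probs.__getitem__)
```

Under the Track 2/3 costs, with PSC = 100, accepting a less likely but poisonous prediction is often cheaper in expectation than risking a poisonous-as-edible error.</p>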
      <p>Jack Etheredge [20] (Top2) combined visual information with metadata using MetaFormer-0 and
MetaFormer-2 [26], further improved the ensemble with a vision-only CAFormer-S18 [27], and
proposed a novel application of OpenGAN [28] for open-set recognition of fine-grained images,
utilizing WGAN-GP [29].</p>
      <p>Team upupup [21] (Top3) used Dynamic MLP [30] to fuse image features and metadata, identified
unknown classes using an entropy-based approach, and trained with a marginal expected loss for
recognizing poisonous mushrooms while maintaining accuracy.</p>
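      <p>An entropy-based unknown-class rule of this sort can be sketched as follows (an illustrative sketch; the threshold value and the "unknown" encoding are assumptions, not team upupup's actual choices):

```python
import math

def predict_with_unknown(probs, threshold=2.0, unknown=-1):
    """Return the arg-max class, or `unknown` when the Shannon entropy of the
    predicted distribution is high, i.e., the model is uncertain (illustrative
    entropy-based open-set rule; the threshold here is an assumption)."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    if entropy > threshold:
        return unknown
    return max(range(len(probs)), key=probs.__getitem__)
```

A confident, peaked distribution has low entropy and keeps its arg-max label, while a near-uniform distribution exceeds the threshold and is mapped to the "unknown" category.</p>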
      <p>Team DS@GT [23] (Top8²) utilized DINOv2 visual embeddings [31] (namely the dinov2-large model
with registers for the final submission), which were combined with metadata in a classifier head. The model
was trained with a composite loss function consisting of the Seesaw loss [32] and a binary cross-entropy
loss for poisonous species classification.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Discussion</title>
      <p>This paper presents an overview and results evaluation of the third edition of the FungiCLEF challenge,
organized in conjunction with the LifeCLEF lab at CLEF and with FGVC11, the Eleventh Workshop on
Fine-Grained Visual Categorization, held within the CVPR conference. This challenge continues to push
the boundaries of fine-grained visual categorization by bringing together diverse methodologies and
innovative approaches from leading research teams worldwide.</p>
      <p>By introducing an updated dataset, emphasizing model efficiency, and addressing both species
recognition and poisonous mushroom identification, FungiCLEF 2024 has fostered innovation and
practical solutions for fungi recognition. The diverse approaches employed by the top-performing
teams illustrate the evolving landscape of fungi recognition challenges. From sophisticated model
architectures to novel techniques for handling unknown species and balancing species classification
with poisonous species identification, participants showcased groundbreaking solutions. They built on
the findings of previous challenges, particularly in the encoding of metadata, demonstrating continuous
advancement in the field.</p>
      <p>The strict computational constraints imposed on submissions ensured that the resulting models were
not only accurate but also practical for deployment in resource-limited environments and motivated
participants to focus on principal improvements rather than training large ensembles of complex models.
However, we also observed that enforcing the computational limits through a submission system caused
additional technical difficulties with submission for some participants. Future editions should strive to
further simplify the submission process for the participants.</p>
      <p>With the advances in recognition accuracy for well-known species, we propose that future work
should focus on the more challenging cases, specifically few-shot classification techniques. This
approach, with its potential to push fungal species recognition forward, would enable more robust
identification of rare or newly discovered fungal species with limited training data. As the field
progresses, the ultimate goal remains to develop robust, accurate, and accessible fungi recognition systems
that can support both expert mycologists and citizen scientists in documenting and understanding
fungal biodiversity.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>LP and JM were supported by the Technology Agency of the Czech Republic, project No. SS05010008
and project No. SS73020004.
²This team encountered some issues while submitting to HuggingFace, but achieved better results than Top1 in their
post-competition submissions.
[18] C. Leblanc, et al., LifeCLEF 2024 teaser: Challenges on species distribution prediction and
identification, in: European Conference on Information Retrieval, Springer, 2024, pp. 19–27.
[19] L. Picek, M. Hruz, A. M. Durso, Overview of SnakeCLEF 2024: Revisiting snake species
identification in medically important scenarios, in: Working Notes of CLEF 2024 - Conference and Labs of
the Evaluation Forum, 2024.
[20] J. Etheredge, OpenWGAN-GP for fine-grained open-set fungi classification, in: Working Notes of</p>
      <p>CLEF 2024 - Conference and Labs of the Evaluation Forum, 2024.
[21] B.-F. Tan, Y.-Y. Li, P. Wang, L. Zhao, X.-S. Wei, Say no to the poisonous fungi: An effective strategy
for reducing 0-1 cost in FungiCLEF 2024, in: Working Notes of CLEF 2024 - Conference and Labs of
the Evaluation Forum, 2024.
[22] S. Wolf, P. Thelen, J. Beyerer, Poison-aware open-set fungi classification: Reducing the risk of
poisonous confusion, in: Working Notes of CLEF 2024 - Conference and Labs of the Evaluation
Forum, 2024.
[23] C. Chiu, M. Heil, T. Kim, A. Miyaguchi, Fine-grained classification for poisonous fungi identification
with transfer learning, in: Working Notes of CLEF 2024 - Conference and Labs of the Evaluation
Forum, 2024.
[24] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, et al., Swin
transformer v2: Scaling up capacity and resolution, in: Proceedings of the IEEE/CVF conference
on computer vision and pattern recognition, 2022, pp. 12009–12019.
[25] H. Ren, H. Jiang, W. Luo, M. Meng, T. Zhang, Entropy-guided open-set fine-grained fungi
recognition, in: Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum,
2023.
[26] Q. Diao, Y. Jiang, B. Wen, J. Sun, Z. Yuan, Metaformer: A unified meta framework for fine-grained
recognition, arXiv preprint arXiv:2203.02751 (2022).
[27] W. Yu, C. Si, P. Zhou, M. Luo, Y. Zhou, J. Feng, S. Yan, X. Wang, Metaformer baselines for vision,</p>
      <p>IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
[28] S. Kong, D. Ramanan, Opengan: Open-set recognition via open data generation, in: Proceedings
of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 813–822.
[29] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. C. Courville, Improved training of wasserstein
gans, Advances in neural information processing systems 30 (2017).
[30] L. Yang, X. Li, R. Song, B. Zhao, J. Tao, S. Zhou, J. Liang, J. Yang, Dynamic mlp for fine-grained
image classification by leveraging geographical and temporal information, in: Proceedings of the
IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 10945–10954.
[31] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza,
F. Massa, A. El-Nouby, et al., Dinov2: Learning robust visual features without supervision, arXiv
preprint arXiv:2304.07193 (2023).
[32] J. Wang, W. Zhang, Y. Zang, Y. Cao, J. Pang, T. Gong, K. Chen, Z. Liu, C. C. Loy, D. Lin, Seesaw loss
for long-tailed instance segmentation, in: Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, 2021, pp. 9695–9704.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Jeppesen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
          <article-title>Fungi recognition: A practical use case</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>2316</fpage>
          -
          <lpage>2324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Jeppesen</surname>
          </string-name>
          , E. Lind,
          <article-title>Automatic fungi recognition: Deep learning meets mycology</article-title>
          ,
          <source>Sensors</source>
          <volume>22</volume>
          (
          <year>2022</year>
          )
          <fpage>633</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T. G.</given-names>
            <surname>Frøslev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Laessøe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Petersen</surname>
          </string-name>
          , U. Søchting,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Jeppesen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vesterholt</surname>
          </string-name>
          , Danish mycological society, fungal records database (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , T. Bruns,
          <article-title>Community structure of ectomycorrhizal fungi in a pinus muricata forest: minimal overlap between the mature forest and resistant propagule communities</article-title>
          ,
          <source>Molecular Ecology</source>
          <volume>8</volume>
          (
          <year>1999</year>
          )
          <fpage>1837</fpage>
          -
          <lpage>1850</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Boddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <article-title>Interactions between basidiomycota and invertebrates</article-title>
          ,
          <source>in: British mycological society symposia series</source>
          , volume
          <volume>28</volume>
          ,
          Elsevier
          ,
          <year>2008</year>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>179</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <article-title>Plant recognition by inception networks with test-time class prior estimation</article-title>
          .,
          <source>in: CLEF (Working Notes)</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.-H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-T.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Plant species classification based on hyperspectral imaging via a lightweight convolutional neural network model</article-title>
          ,
          <source>Frontiers in Plant Science</source>
          <volume>13</volume>
          (
          <year>2022</year>
          )
          <fpage>855660</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Jeppesen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Laessøe</surname>
          </string-name>
,
<string-name>
  <given-names>T.</given-names>
  <surname>Frøslev</surname>
</string-name>
,
<article-title>Danish Fungi 2020 - Not just another image recognition dataset</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1525</fpage>
          -
          <lpage>1535</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Espitalier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Botella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Deneu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marcos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Estopinan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Leblanc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Larcher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hrúz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Servajean</surname>
          </string-name>
          , et al.,
<article-title>Overview of LifeCLEF 2024: Challenges on species distribution prediction and identification</article-title>
,
          <source>in: International Conference of the Cross-Language Evaluation Forum for European Languages</source>
          , Springer,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
<article-title>Overview of FungiCLEF 2022: Fungi recognition as an open set classification problem</article-title>
          ,
          <source>CEUR-WS</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chamidullin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
,
<article-title>Overview of FungiCLEF 2023: Fungi recognition beyond 0-1 cost</article-title>
,
<source>in: CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
,
<article-title>Overview of FungiCLEF 2022: Fungi recognition as an open set classification problem</article-title>
          ,
          <source>in: Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heilmann-Clausen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
,
<article-title>Overview of FungiCLEF 2023: Fungi recognition beyond 0-1 cost</article-title>
          ,
          <source>in: Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lorieul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cole</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Deneu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Servajean</surname>
          </string-name>
          ,
<string-name>
  <given-names>A.</given-names>
  <surname>Durso</surname>
</string-name>
,
<string-name>
  <given-names>I.</given-names>
  <surname>Bolon</surname>
</string-name>
,
<string-name>
  <given-names>H.</given-names>
  <surname>Glotin</surname>
</string-name>
,
<string-name>
  <given-names>R.</given-names>
  <surname>Planqué</surname>
</string-name>
,
<string-name>
  <given-names>R. R.</given-names>
  <surname>de Castaneda</surname>
</string-name>
,
          <string-name>
            <given-names>W.-P.</given-names>
            <surname>Vellinga</surname>
          </string-name>
, et al.,
<article-title>Overview of LifeCLEF 2021: An evaluation of machine-learning based species identification and species distribution prediction</article-title>
,
<source>in: Experimental IR Meets Multilinguality, Multimodality, and Interaction: 12th International Conference of the CLEF Association, CLEF 2021</source>
,
Virtual Event, September 21-24, 2021, Proceedings, volume
<volume>12880</volume>
, Springer Nature,
<year>2021</year>
          , p.
          <fpage>371</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Deneu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Servajean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cole</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
<string-name>
  <given-names>R. R.</given-names>
  <surname>De Castaneda</surname>
</string-name>
,
<string-name>
  <given-names>I.</given-names>
  <surname>Bolon</surname>
</string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Durso</surname>
          </string-name>
          , et al.,
<article-title>Overview of LifeCLEF 2020: A system-oriented evaluation of automated species identification and species distribution prediction</article-title>
          ,
<source>in: International Conference of the Cross-Language Evaluation Forum for European Languages</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>342</fpage>
          -
          <lpage>363</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Botella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marcos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hrúz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lorieul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Moussi</surname>
          </string-name>
          , et al.,
<article-title>LifeCLEF 2023 teaser: Species identification and prediction challenges</article-title>
          ,
          <source>in: European Conference on Information Retrieval</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>568</fpage>
          -
          <lpage>576</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
,
<article-title>EfficientNet: Rethinking model scaling for convolutional neural networks</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6105</fpage>
          -
          <lpage>6114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Espitalier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Botella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Deneu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marcos</surname>
          </string-name>
          , J. Estopinan,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>