<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Multi-Label Plant Species Identification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nelly Semenova</string-name>
          <email>nelli.semenova@mail.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Moscow Pedagogical State University (MPGU University)</institution>,
          <addr-line>1/1 Malaya Pirogovskaya St., Moscow, 119435, Russian Federation</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents an ecology-oriented post-processing pipeline that optimizes a pre-trained Vision Transformer (DINOv2) for the PlantCLEF 2025 challenge. The task requires predicting the complete species list in vegetation-plot images (0.25 m², 2–12 MP), whereas the model was trained almost exclusively on single-plant images, which results in a pronounced domain shift. The pipeline comprises (i) multi-scale tiling with test-time augmentation, (ii) artifact down-weighting via zero-shot segmentation, and (iii) ecological correction of prediction scores using Global Biodiversity Information Facility (GBIF) occurrence statistics, seasonal windows and niche similarity derived from Ecological Indicator Values for Europe (EIVE). Without retraining the Vision Transformer on any task-specific data, the public F1-score rises from 21.84 to 38.13 (+16.3 pp) and the private score rises from 20.12 to 33.45, ranking 1st of 38 teams on the public leaderboard and 4th on the private one. Three consecutive ecological filters contribute approximately +4 pp to this improvement. These results show that ecology-aware post-processing is a reproducible alternative to costly model retraining for multi-species identification.</p>
      </abstract>
      <kwd-group>
        <kwd>vision transformer</kwd>
        <kwd>vegetation classification</kwd>
        <kwd>multi-species identification</kwd>
        <kwd>ecological niche</kwd>
        <kwd>test-time augmentation</kwd>
        <kwd>species co-occurrence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Automatic identification of all plant species in a vegetation-plot image is a challenging multi-label classification task. In the PlantCLEF 2025 challenge participants must predict the complete species list for every top-down photograph of a vegetation plot (0.5 × 0.5 m, ≈ 0.25 m²) [1, 2].</p>
      <p>These results confirm that lightweight ecology-aware post-processing can substantially boost
pre-trained ViTs for large-scale multi-label plant-species identification, and remaining limitations are
discussed.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Data and Baseline Model</title>
      <sec id="sec-2-1">
        <title>2.1. Pre-trained Vision Transformer</title>
        <p>
          The baseline rests on the checkpoint dinov2_patch14_reg4_onlyclassifier_then_all distributed
by the organisers on Kaggle [6]. Its backbone is a DINOv2 ViT-B/14 with four learnable register tokens,
originally pre-trained on 142 M images [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. A two-stage supervised phase followed: first, a linear head
was fitted on the 1.4 M single-plant pictures of PlantCLEF 2024; afterward the entire network was
fine-tuned on the same data. The final classifier therefore produces logits for 7806 European taxa, the
exact label set of the 2025 challenge.
        </p>
        <p>All experiments keep these weights frozen. The model expects an input of 518 × 518 px, which
coincides with the crop size used in both tiling schemes (Section 3.1). Each window yields a
7806-element logit vector; applying the sigmoid transforms it into class-wise confidences. Only the five
highest scores are retained per window to minimize I/O without information loss for later aggregation.
No further fine-tuning, domain adaptation or ensembling at the feature level is attempted; every
subsequent improvement described in Sections 3–4 operates solely on the fixed predictions of this
Vision Transformer.</p>
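        <p>The per-window scoring rule (sigmoid over the logits, then keep the five best classes) can be sketched in a few lines of plain Python; the function name and the toy six-class logit vector are illustrative, standing in for the real 7806-dimensional output.</p>

```python
import math

def window_top5(logits):
    """Turn one window's logit vector into class-wise confidences via
    the sigmoid and keep only the five best (class_id, confidence)
    pairs for later aggregation."""
    confs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    ranked = sorted(enumerate(confs), key=lambda kv: kv[1], reverse=True)
    return ranked[:5]

# Toy 6-class logit vector standing in for the real 7806-dim output:
top5 = window_top5([2.0, -1.0, 0.5, 3.0, -2.0, 1.0])
```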
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Competition data</title>
        <p>
          The official test set comprises 2105 JPEG images taken vertically above vegetation plots of roughly 0.25
m². Native resolutions range between 2 MP and 12 MP; the organizers distributed the files exactly as
recorded in the field. Each file name encodes a persistent plot identifier followed by the acquisition
date [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. These two tokens enable later aggregation of images that depict the
same location in different years, yet the images themselves are completely unannotated: no geographic
coordinates, species labels, masks or bounding boxes accompany the pictures. Every method must
therefore infer the full multi-species content from a single high-resolution frame without spatial
supervision.
        </p>
        <p>Alongside the test set the organizers provided two additional resources: a 1.4-million-image
single-plant collection covering 7806 European species, and an archive of 212,782 unlabelled pseudo-quadrat
views derived from LUCAS Cover photographs. Neither resource is used in the present study. All
experiments are conducted with the competition’s pre-trained Vision Transformer, and no further
fine-tuning or self-supervised adaptation is performed.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. External ecological context</title>
        <p>To supply the ecological background that the image files themselves lack, two public data sets are linked
to the baseline predictions. Daily and monthly sighting counts for every target species were extracted
from the European section of GBIF [4]; these later allow the method to judge whether a taxon suggested
by the network is seasonally plausible for the date embedded in the file name. In addition, the five
numerical Ellenberg indicator values compiled in the EIVE project (light, temperature, moisture, soil
reaction and nitrogen supply) are available for most of the species list. These figures are converted into
a quantitative measure of potential niche overlap and will be used to assess the ecological compatibility
of the candidate labels produced by the Vision Transformer. A coarse spatial prior derived from a
five-kilometre raster of GBIF occurrences was also evaluated but provided no measurable benefit, so
seasonal frequency and niche overlap constitute the only contextual signals carried forward into the
methodological section that follows.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Image pre-processing and tiling</title>
        <p>Each test image is analyzed at its original resolution. A 150-pixel margin on every side is simply skipped
during the sliding-window pass, so that no crop ever includes the wooden frame, ruler, or color card
lying along the borders. Two window sizes are employed, forming a multi-scale (double-scale) tiling
strategy.</p>
        <p>The fine-scale pass (Scheme A) sweeps the interior with 518 × 518 px squares taken every 172 px;
across the 2105 plots this produces 303,558 fragments, on average 144 per image (median 154, minimum
4, maximum 272). The coarse pass (Scheme B) uses 732 × 732 px windows on the same grid; each
fragment is down-scaled so that its longer side equals 518 px, yielding roughly one third as many tiles
and capturing larger leaves or inflorescences that may be split across fine crops.</p>
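        <p>The sliding-window grid can be reproduced with simple integer arithmetic. The helper below is a sketch with Scheme A defaults; it assumes windows are placed on a regular stride starting immediately after the skipped margin, an assumption about boundary handling that the text does not spell out.</p>

```python
def tile_origins(width, height, win=518, stride=172, margin=150):
    """Top-left corners of sliding windows that stay inside the image
    after skipping a fixed border margin (Scheme A defaults:
    518 px window, 172 px stride, 150 px margin)."""
    xs = list(range(margin, width - margin - win + 1, stride))
    ys = list(range(margin, height - margin - win + 1, stride))
    return [(x, y) for y in ys for x in xs]

# A hypothetical 2000 x 1500 px plot photograph:
coords = tile_origins(2000, 1500)
```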
        <p>For every tile the frozen DINOv2 ViT returns class confidences; the five highest scores and their
species IDs are stored. Tile-wise probabilities are summed per scale, and the 18 most confident species
are retained. Empirical tuning shows that a single-scale configuration is optimal when taxa below
0.20–0.30 confidence are discarded; with Scheme A alone, a 0.26 threshold followed by limiting the
output to eight labels per plot yields 20.12/21.84 F1-score (private/public score).</p>
        <p>The baseline used throughout the paper combines both scales: Scheme A filtered at 0.30 and Scheme
B at 0.26; their probabilities are summed and the eight highest species are submitted. This multi-scale
ensemble attains 29.15 private score and 31.30 public score, whereas percentile cut-offs or different
limits on the number of submitted labels proved consistently weaker.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Visual inference pipeline</title>
        <p>Every test image is processed at two spatial scales. The fine-scale stream (Scheme A) slides 518 × 518 px
crops across the usable area with a stride of 172 px, yielding on average 144 windows per plot (median
154, min 4, max 272). The coarse stream (Scheme B) extracts 732 × 732 px crops on the same grid and
rescales each crop so that its longer side equals 518 px, capturing broader plant structures that might be
split across fine crops.</p>
        <p>Each crop is forwarded to the frozen ViT-B/14 DINOv2 model introduced in Section 2.1. The model
returns class confidences, of which only the five highest are stored. A species is retained for a plot
if its best score in Scheme A reaches 0.30 or its best score in Scheme B reaches 0.26. The two
scale-specific lists are merged by taking, for every taxon, the single highest confidence observed in either
stream; probabilities are not summed. The merged list is truncated to the eight most confident taxa and
constitutes the baseline label set that subsequent stages refine.</p>
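        <p>The merge rule above can be sketched directly, with illustrative species keys and the per-scale best scores kept as plain dicts:</p>

```python
def merge_scales(scheme_a, scheme_b, thr_a=0.30, thr_b=0.26, top_k=8):
    """Baseline fusion: keep a species if its best Scheme A score
    reaches thr_a or its best Scheme B score reaches thr_b, take the
    higher of the two confidences (no summing), truncate to top_k."""
    merged = {}
    for sp, c in scheme_a.items():
        if c >= thr_a:
            merged[sp] = max(merged.get(sp, 0.0), c)
    for sp, c in scheme_b.items():
        if c >= thr_b:
            merged[sp] = max(merged.get(sp, 0.0), c)
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]
```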
        <p>Alternative aggregation rules were examined. Global percentile cut-offs (80–95%) and hybrid
strategies that combined a percentile threshold for one scale with a fixed threshold for the other both
lowered the overall public score. The simple per-scale thresholds described above therefore represent the
strongest purely visual baseline.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Thirteen-crop self-ensemble</title>
        <p>
          For each 518 × 518 window generated by Scheme A a fixed 13-crop set was created. The first element
is the window itself; the remaining twelve crops are derived from it: four concentric centre crops
covering 90%, 80%, 70% and 60%; eight corner crops extracted at 80% and 70% of the shorter side (top-left,
top-right, bottom-left, bottom-right for each scale) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>Each crop is passed through the evaluation transform, which rescales it to the model’s native 518
px input if necessary. The frozen ViT returns a logit vector for every crop; the arithmetic mean of the
13 logit vectors is computed, and only then is a softmax applied to obtain class confidences. A single
threshold of 0.265, tuned on the public score, is used to filter these probabilities. The coarse 732 px
stream remains unchanged and keeps its 0.30 threshold. After both scales are processed, their outputs
are merged exactly as in Section 3.1 and truncated to the eight most confident species.</p>
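        <p>The 13-crop geometry can be written down explicitly as pixel boxes (left, top, right, bottom). The helper below is a sketch assuming the centre crops keep the window's aspect ratio and the corner crops are squares sized by the stated fractions of the shorter side.</p>

```python
def thirteen_crop_boxes(w, h):
    """The fixed 13-crop set for one window: the window itself, four
    concentric centre crops (90/80/70/60 %), and corner crops at 80 %
    and 70 % of the shorter side (four corners per scale)."""
    boxes = [(0, 0, w, h)]
    for f in (0.9, 0.8, 0.7, 0.6):             # centre crops
        cw, ch = int(w * f), int(h * f)
        x, y = (w - cw) // 2, (h - ch) // 2
        boxes.append((x, y, x + cw, y + ch))
    for f in (0.8, 0.7):                       # corner crops
        s = int(min(w, h) * f)
        boxes += [(0, 0, s, s), (w - s, 0, w, s),
                  (0, h - s, s, h), (w - s, h - s, w, h)]
    return boxes
```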
        <p>Alternative reductions were evaluated: the geometric mean of logits, a hand-tuned weighted mean
favouring central crops, and the element-wise median of logits. None of these surpassed the simple
arithmetic average on either leaderboard, so the arithmetic-logit ensemble is retained as the default; the
median variant is revisited later in the cascade described in Section 3.7.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Multi-scale fusion</title>
        <p>After the introduction of the 13-crop self-ensemble, each test image is analyzed twice: once with
518 × 518 px windows enhanced by test-time augmentation and once with larger 732 × 732 px windows that
are down-scaled to 518 px. For the fine-scale pass, the logits of the thirteen geometric variants are
averaged, passed through a softmax and filtered with a confidence threshold of 0.265. For the coarse
pass, the single forward prediction is accepted when its confidence reaches 0.30. The two tiling schemes
are then reconciled by a simple rule: for every species the higher of the two confidences is taken;
probabilities are neither summed nor renormalized. The resulting list is sorted and trimmed to the nine
most confident taxa, a slight relaxation compared with the eight-label limit used in the baseline.</p>
        <p>This “highest-confidence” fusion exploits the complementary strengths of the two spatial resolutions.
The TTA-augmented fine windows are sensitive to small or partially occluded plants, whereas the
coarse windows stabilize predictions on larger leaves or inflorescences that may be split across finer
crops. Alternative fusion strategies (probability summation, geometric or weighted averaging, as well
as percentile cut-offs) were systematically evaluated but always yielded a lower public score. With the
adopted thresholds of 0.265 for the fine scale and 0.30 for the coarse scale, and with the Top-9 restriction,
the purely visual stage already attains 33.50 public score and 29.66 private score, providing a strong
basis for the ecological post-processing introduced in the following sections.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Cross-year plot aggregation</title>
        <p>Because every test image name encodes a persistent PlotID, plots imaged more than once can be
identified, often in different years. All images that share the same identifier are therefore grouped and
treated as a temporal series of the same physical quadrat.</p>
        <p>Within such a series, the confidence scores already produced for each image are examined, and
the single taxon attaining the highest confidence in any member of the group is selected. This most
reliable “anchor” species is then added to the prediction list of every other image of the same plot if it is
not already present. Copying exactly one label in this way consistently raises the leaderboard score,
whereas propagating two or more labels degrades performance; the aggregation is therefore limited to
a single species per series. The operation is applied after the purely visual stage yet before the seasonal
and niche-based filters discussed in Section 3.6.2, so that the inherited label can still be removed if
subsequent ecological checks deem it implausible.</p>
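        <p>A sketch of this single-anchor propagation, assuming per-image predictions are kept as species-to-confidence dicts and that the copied label reuses the anchor's score (the text does not state which confidence the inherited label receives):</p>

```python
def propagate_anchor(series):
    """series maps image_id -> {species: confidence} for one PlotID
    photographed in several years.  The single most confident species
    anywhere in the series is copied into every image that lacks it;
    exactly one label is propagated, as in the paper."""
    anchor, best = None, 0.0
    for preds in series.values():
        for sp, conf in preds.items():
            if conf > best:
                anchor, best = sp, conf
    if anchor is not None:
        for preds in series.values():
            preds.setdefault(anchor, best)   # assumption: reuse score
    return series
```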
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Noise-aware weighting with SAM and GroundingDINO</title>
        <p>A visual inspection of the test images revealed that many frames contain substantial non-botanical
objects (wooden plot frames, rulers, stones, pieces of plastic or metal) that occupy a noticeable fraction
of the field of view. Their texture sometimes activates the Vision Transformer and produces false species
labels. Simply discarding contaminated windows, however, deprives the classifier of genuine plant
information along the borders of those windows. A soft penalty that down‑weights, rather than deletes,
the evidence coming from noisy regions was therefore chosen.</p>
        <p>
          To locate the unwanted objects in a domain-agnostic manner the full test image is first processed by
GroundingDINO [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], using the open-vocabulary prompt “stone, shell, roulette, plastic, metal, hand, ice,
snow, measure, ruler, wood, board, paper”. For every bounding box returned by the detector, the Segment
Anything Model (SAM) [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] produces a pixel-accurate mask. The masks are stored as a thirteen-channel
tensor, one channel per prompt, so that the contribution of each query can be inspected visually and
disabled if necessary. In the production pipeline the channels are merged by a logical OR into a single
binary “noise” map.
        </p>
        <p>
          When the image is later divided into windows by schemes A and B, the fraction of noise pixels inside
a window is denoted ρ ∈ [0, 1]. The confidence of every species predicted in that window is then
rescaled according to
p′ = p (1 − α ρ), α = 0.35.
        </p>
        <p>Thus the penalty grows linearly with the proportion of contamination, yet the window is discarded
entirely only when it is fully covered by the mask (ρ = 1). The adjusted confidences are subjected to
the same thresholds as before, namely 0.265 for the 518 px windows and 0.30 for the 732 px windows
that are subsequently resized to 518 px, and then pass on to the remaining stages of the pipeline.</p>
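        <p>The down-weighting rule amounts to one multiplication per window; a minimal sketch with the penalty factor α = 0.35 from the text (function name illustrative):</p>

```python
ALPHA = 0.35  # empirically tuned penalty factor from the paper

def downweight(confidence, noise_fraction, alpha=ALPHA):
    """Rescale a window's class confidence by the fraction rho of its
    pixels covered by the merged SAM 'noise' mask:
    p' = p * (1 - alpha * rho)."""
    return confidence * (1.0 - alpha * noise_fraction)

# A clean window is untouched; a half-contaminated one loses 17.5 %:
clean = downweight(0.40, 0.0)
noisy = downweight(0.40, 0.5)
```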
        <p>This linear weighting proved more reliable than both hard rejection and non-linear penalty functions:
it consistently reduced false positives originating from frames and rulers while preserving the recall of
true plant instances that share the window. The setting α = 0.35 offered the best trade-off on the public
score; larger values harmed recall, whereas smaller values left many noisy detections untouched.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Ecological post-processing</title>
        <p>Most plant species exhibit pronounced seasonality: even widespread taxa are usually observable only
in the months when their vegetative or reproductive organs are visible in photographs. In addition,
species that co-occur within a 0.25 m² plot typically share similar requirements for light, moisture, and
other abiotic factors. These observations motivate an ecological post-processing filter that complements
the purely visual pipeline, discarding predictions that are clearly out of season and removing taxa that
are ecologically incompatible with the rest of the plot’s label set.</p>
        <p>
          Seasonal consistency is assessed with five-year occurrence statistics from GBIF Europe, providing a
month-wise plausibility check without the need for detailed spatial range modelling [12]. The same
criterion applies to both rare and common species, an advantage when the training data are strongly
imbalanced.
        </p>
        <p>Niche compatibility is evaluated with the numerical Ellenberg indicators supplied by EIVE. Each
species is represented as a multidimensional Gaussian cloud on the axes of light, temperature, soil
moisture, soil reaction, and nitrogen supply; the extent to which two clouds overlap gives a direct
measure of their likelihood of co-occurrence. This screening step simultaneously increases recall, by
reinstating rare but ecologically plausible taxa, and reduces false positives that arise from visually
similar yet ecologically unsuitable species.</p>
        <sec id="sec-3-6-1">
          <title>3.6.1. Seasonal filtering of candidate species</title>
          <p>
            Every test image encodes its acquisition date in the file name (YYYYMMDD), providing the month of
observation [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]. For each of the 7806 target taxa, GBIF Europe statistics were compiled over the past
five years (2020–2024), yielding monthly and daily counts of confirmed occurrences. Two variant filters
were designed.
          </p>
          <p>The hard seasonal filter excludes a taxon when its European record count for the month of the
photograph falls between 1 and 10 inclusive; counts of zero are preserved to avoid removing species
that may have been mis-matched in GBIF. This rule produced the most stable improvement on the
private leaderboard, and exhibited further gains when species with exactly zero observations were also
discarded.</p>
          <p>The soft seasonal filter applies the same 1–10 threshold to a three-month window centred on the
month of acquisition. Although this broader window achieved a larger increase on the public score, the
effect on the private set proved inconsistent, so the hard variant is retained as the default while the soft
variant is reserved for sensitivity analysis.</p>
          <p>Two alternative designs were abandoned. A day-level filter, which removed species never observed
on the exact calendar day, reduced performance because daily statistics are too sparse. Restricting
predictions to the thousand most frequent species of the month likewise failed to yield any benefit.
Seasonal filtering is applied directly after the visual fusion described in Sections 3.3–3.4 and before the
rarity and niche-overlap procedures outlined in Section 3.6.2.</p>
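          <p>The hard variant reduces to a range test on the monthly GBIF counts; a sketch with illustrative species keys:</p>

```python
def hard_seasonal_filter(predictions, month_counts, month):
    """Drop a taxon whose European GBIF record count for the month of
    the photograph lies between 1 and 10 inclusive; zero counts are
    preserved (the species may simply be mis-matched in GBIF)."""
    kept = {}
    for sp, conf in predictions.items():
        n = month_counts.get(sp, {}).get(month, 0)
        if 1 <= n <= 10:
            continue  # seasonally implausible: too few sightings
        kept[sp] = conf
    return kept
```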
        </sec>
        <sec id="sec-3-6-2">
          <title>3.6.2. Niche overlap filtering based on EIVE indicators</title>
          <p>
            The multi-scale fusion step (Sec. 3.3) keeps the nine most confident taxa for each image and also
records a reserve list of the next-best nine candidates, yielding at most 18 species per plot. Ecological
consistency among those taxa is assessed with a Niche Similarity Index S ∈ [0, 1] that is computed from
the five Ellenberg indicator values supplied by EIVE (soil moisture, nitrogen, soil reaction, light,
temperature) [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ].
          </p>
          <p>Similarity measure. For species i and j, let μi and μj denote their consensus niche positions
(indicator means) and σi and σj the corresponding niche widths (rather than statistical standard
deviations) on every axis where both species are defined. Two complementary notions of overlap are
combined, and the niche-similarity index is computed in three steps:</p>
          <p>centre overlap: Δ = exp(−½ D²),
shape overlap: BC = [ ∏ₖ 2 σi,k σj,k / (σi,k² + σj,k²) ]^(1/4),
similarity: S = (Δ + BC) / 2.</p>
          <p>Here Δ converts the squared Mahalanobis distance D² between the two niche centres into a kernel,
so that values close to 1 correspond to centres that are virtually coincident, whereas small values
indicate large Mahalanobis separation.</p>
          <p>The second term BC, the Bhattacharyya coefficient, quantifies how strongly the two diagonal
multivariate normal clouds interpenetrate, falling from 1 (perfect overlap) towards 0 as the clouds
drift apart or narrow. Both components therefore lie in the interval [0, 1], and their arithmetic mean
inherits the same scale, offering an interpretable measure of ecological similarity.</p>
          <p>Δ reflects how far the niche centres lie apart, whereas BC measures the actual overlap of the
Gaussian “clouds”. A value of S = 1 indicates virtually identical ecological conditions; scores near 0
imply almost complete separation. If two species share no common indicator, S is left undefined.</p>
          <p>Although the underlying ecological distributions are neither strictly normal nor necessarily
symmetric, the subsequent similarity calculation treats each niche as a multivariate Gaussian centred at μ
with variance σ²; this simplification proved adequate for fast large-scale filtering while retaining an
interpretable overlap score.</p>
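          <p>Under these Gaussian assumptions the index can be computed axis by axis. The sketch below is one plausible reading in which D² uses the per-axis pooled variance (σi² + σj²)/2, an assumption the text does not pin down, and missing indicators (None) are simply skipped.</p>

```python
import math

def niche_similarity(mu_i, sg_i, mu_j, sg_j):
    """Niche Similarity Index S on the axes where both species are
    defined (None marks a missing indicator).  D^2 uses the per-axis
    pooled variance (sg_i^2 + sg_j^2)/2 -- an assumption, since the
    paper does not spell out the Mahalanobis covariance."""
    d2, bc_prod, shared = 0.0, 1.0, 0
    for mi, si, mj, sj in zip(mu_i, sg_i, mu_j, sg_j):
        if None in (mi, si, mj, sj):
            continue                      # missing axes never penalise
        shared += 1
        pooled = (si * si + sj * sj) / 2.0
        d2 += (mi - mj) ** 2 / pooled
        bc_prod *= 2.0 * si * sj / (si * si + sj * sj)
    if shared == 0:
        return None                       # S undefined: no common axis
    delta = math.exp(-0.5 * d2)           # centre overlap
    bc = bc_prod ** 0.25                  # shape overlap
    return (delta + bc) / 2.0             # arithmetic mean, S in [0, 1]
```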
        </sec>
        <sec id="sec-3-6-3">
          <title>Filtering procedure</title>
          <p>For every image, the algorithm first considers the set of taxa that survive
the visual and seasonal stages. For each species in this set the niche-similarity index is averaged over
its partners in the set; whenever this mean similarity falls below 0.015, the species is judged ecologically
incompatible with the rest of the community and is discarded. After this pruning step the algorithm
turns to the “reserve” list, the next nine candidates that were kept only for ecological checks. For every
taxon in the reserve list the average of its similarity scores to the species still present in the set is
calculated; if that average reaches at least 0.750, the taxon is inserted into the prediction set. Similarities
contribute to these averages only on axes where both species have indicator values, so missing data
never penalise a comparison. In practice a single pass of removal followed by addition is sufficient;
further iterations do not change the composition.</p>
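          <p>The one-pass removal-then-addition procedure can be sketched as follows; the similarity callback sim(a, b), returning S or None, and the two threshold defaults follow the values given above.</p>

```python
def ecological_prune_and_rescue(predicted, reserve, sim,
                                drop_thr=0.015, add_thr=0.750):
    """One pass of the EIVE filter: drop predicted species whose mean
    similarity to the other predicted taxa falls below drop_thr, then
    promote reserve species whose mean similarity to the surviving
    set reaches add_thr.  sim(a, b) returns S in [0, 1] or None."""
    def mean_sim(sp, others):
        vals = [sim(sp, o) for o in others if o != sp]
        vals = [v for v in vals if v is not None]
        return sum(vals) / len(vals) if vals else None

    # Removal: undefined similarity never penalises a species.
    kept = [sp for sp in predicted
            if (m := mean_sim(sp, predicted)) is None or m >= drop_thr]
    # Addition from the reserve list of ecological candidates.
    for sp in reserve:
        m = mean_sim(sp, kept)
        if m is not None and m >= add_thr:
            kept.append(sp)
    return kept
```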
        </sec>
        <sec id="sec-3-6-4">
          <title>Rationale</title>
          <p>
            Neighboring plants on a 0.25 m² plot rarely exhibit radically different indicator profiles,
whereas taxa with highly similar niches often co-occur even when visual evidence is weak. Combining
a distance kernel with the Bhattacharyya coefficient keeps S in the convenient range [0, 1], remains
continuous, and captures both the location and the breadth of a niche without discretising the
environmental axes. The twin thresholds (0.015 for removal and 0.750 for addition) proved to reduce false
positives, particularly when visually plausible but ecologically impossible species were present, yet
preserved recall by reinstating rare taxa whose niches closely match those already accepted.
          </p>
        </sec>
      </sec>
      <sec id="sec-3-7">
        <title>3.7. Cascade completion with the median-logit stream (Scheme C)</title>
        <p>In addition to the arithmetic 13-crop stream of Scheme A (Sec. 3.2), a parallel set of predictions was
produced in which the logits of the thirteen image crops were aggregated by the coordinate-wise
median. This “median-logit” variant, hereafter Scheme C, delivered a top-1 species that differed from
the arithmetic stream on many plots; its role in completing the cascade is described in Step 10 of
Section 4.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
      <p>The solution that ranked 4th on the PlantCLEF 2025 private leaderboard (1st on the public leaderboard)
can be decomposed into ten incremental steps, each raising the final score; Table 1 lists all steps and
their resulting metrics.</p>
      <p>
        Steps 1–3. Earlier PlantCLEF 2024 papers already explored tiling; experiments identified 518 × 518 px
windows with a 172 px stride and a 150 px border margin as the most effective setting [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. A
larger window (Scheme B) was introduced for multi-scale fusion (Sec. 3.3), and confidence thresholds were
tuned separately for each scheme. Smaller tiles or non-overlapping grids yielded lower public scores
than the combined Scheme A + Scheme B setup. Randomly positioned windows appeared promising but
were not evaluated. Limiting predictions to the eight most confident species per plot further improved
performance. After steps 1–3, the baseline reached 31.30 public score.
      </p>
      <p>Step 4. A new prediction set was generated for the same Scheme A windows using 13 test-time
augmentations; the logits from all transforms were averaged arithmetically, followed by a softmax (Sec.
3.2), and the resulting probabilities were subjected to the same confidence threshold as in the baseline.
This arithmetic-logit stream replaced the original Scheme A branch in the ensemble. Experiments
further showed that retaining the top-9 species per plot outperformed the previous top-8 limit. The
score gained an additional +2.20 points at this stage.</p>
      <p>Step 5. Cross-year plot aggregation offered a modest improvement of +0.49 pp: for each plot the top-1
species predicted for the same PlotID in other years was added to the current list (Sec. 3.4). Including
the top-2 candidates, however, reduced the score, so the aggregation was limited to a single additional
taxon.</p>
      <p>Step 6. To mitigate noise-related errors, each image was first processed by GroundingDINO, which
detected bounding boxes for objects frequently observed on the test set (stone, shell, roulette, plastic,
metal, hand, ice, snow, tape measure, ruler, wood, board, and paper) (Sec. 3.5). Pixel-accurate masks
for these detections were then produced by SAM. The most effective strategy was a soft penalty: the
confidence of every window was multiplied by (1 − 0.35 × mask coverage), where “mask coverage” is
the fraction of masked pixels inside the window. The penalty factor 0.35 was found empirically. This
noise-aware weighting increased the public score by +2.61 pp; an illustration is provided in Fig. 3.</p>
      <p>Step 7. Monthly seasonality was quantified from five years of GBIF Europe occurrence data: for every
target species the number of confirmed records was tallied for each calendar month (Sec. 3.6.1). Among
several tested rules, only the soft-month filter delivered a gain: a species is discarded when its European
record count does not exceed ten in the observation month and in each of the two adjacent months.
This constraint raised the public score by +0.15 pp.</p>
      <p>Steps 8–9. For every test plot, pairwise ecological similarity was computed among the taxa already
predicted. The calculation relied on a composite Niche Similarity Index S that merges the Mahalanobis
distance with the Bhattacharyya coefficient (Sec. 3.6.2), using the calibrated Ellenberg indicators from
EIVE 1.0. Within each plot the species whose mean S to all other candidates was the lowest (and fell
below the empirically fixed threshold of 0.015) was removed. Attempts to remove more than one taxon
per plot yielded smaller gains, so the pruning was restricted to a single removal.</p>
      <p>After the least compatible taxon had been dropped, the mean S was recomputed between the remaining
prediction set and every species in the original top-18 list. Whenever a previously filtered taxon achieved
S &gt; 0.75, it was reinstated (Sec. 3.6.2). Together, the removal-and-add cycle improved the public score
by +0.97 pp.</p>
      <p>The procedure yields ecologically interpretable results: manual inspections confirmed that pairs with
high S share similar environmental preferences, whereas taxa with low values occupy contrasting niches.
However, Fig. 4 reveals that the calibrated Ellenberg distributions deviate from normality and are often
asymmetric around the mean. The Niche Similarity Index S treats each niche as a symmetric Gaussian; a
more faithful model that captures these asymmetries (and estimates indicator spread individually for
every taxon) may further refine the similarity metric.</p>
      <p>Step 10. Scheme C, which aggregates window logits by the median and is therefore more robust to
outliers than Scheme A, frequently ranked a previously filtered taxon as its top-1 candidate. The final
cascade iteratively inserted the most confident Scheme C predictions into the submission—up to three
passes, stopping once the public score no longer increased. This procedure increased the public score
by +0.41 pp, yet lowered the private score, indicating a degree of overfitting to the public leaderboard.</p>
      <p>The pipeline achieved a private F1-score of 33.447, securing 4th place in PlantCLEF 2025 [1].</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>The presented approach demonstrates that a combination of a ViT classifier, tiling, test‑time
augmentation, zero‑shot segmentation and multi‑step ecological post‑processing can substantially improve
multi‑label plant identification accuracy without additional network training. Nevertheless, several
aspects of the methodology require critical reassessment and further development.</p>
      <p>Computational costs. Test‑time augmentation (Sec. 3.2) yields a gain in F1 (Tab. 1) but prolongs
inference by ≈ 25 GPU‑hours per dataset. To accelerate inference, two or three of the twelve available
augmentations should be selected experimentally, together with a logit‑averaging scheme that yields
the maximum improvement.</p>
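      <p>One way to perform the proposed selection is to score every small augmentation subset under a proxy
metric and keep the best; a minimal sketch, with hypothetical names and a caller‑supplied score function
(the real search would run over validation images, not toy logits):</p>

```python
from itertools import combinations

def best_tta_subset(per_aug_logits, score_fn, max_size=3):
    """Exhaustive search over small TTA subsets.
    per_aug_logits: {aug_name: list of logits for one image batch}.
    score_fn: proxy metric applied to the averaged logits (higher is better).
    Returns the best (subset, score) among all subsets of size up to max_size."""
    names = list(per_aug_logits)
    best = (None, float("-inf"))
    for r in range(1, max_size + 1):
        for subset in combinations(names, r):
            n = len(per_aug_logits[subset[0]])
            # simple mean of logits; other averaging schemes could be swapped in here
            avg = [sum(per_aug_logits[a][i] for a in subset) / len(subset) for i in range(n)]
            s = score_fn(avg)
            if s > best[1]:
                best = (subset, s)
    return best
```

      <p>With twelve augmentations and subsets of size at most three there are only 298 candidates, so the
search itself is cheap relative to the ≈ 25 GPU‑hours of full TTA inference it would replace.</p>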
      <p>
        Biological plausibility. Cross‑year aggregation (copying the top‑1 species from previous seasons,
Sec. 3.4) adds +0.1 pp, yet may mask local extinctions and shifts in taxonomic diversity observable in
long‑term vegetation resurvey studies [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Ellenberg/EIVE indicators are primarily calibrated for Central Europe; application elsewhere requires
local gradient tables or estimation of missing indicator values for species of the new region [<xref ref-type="bibr" rid="ref14">14</xref>].</p>
      <p>The niche approximation by a single Gaussian (Sec. 3.6.2) may inadequately represent multimodal
or asymmetric distributions and thus distort the similarity metric; Schoener's D may therefore be
considered [15]. Future work should explore more precise metrics based on kernel density estimation
(KDE), which constructs a continuous density without assuming normality, or Gaussian mixture models
(GMM), which approximate the distribution by a sum of Gaussians and often provide a more compact
representation than KDE at comparable accuracy.</p>
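      <p>A pure‑Python sketch of the KDE route combined with Schoener's D, 1 − 0.5·Σ|p − q| over normalized
densities (bandwidth, grid range and function names are illustrative assumptions, not an implemented
part of the pipeline):</p>

```python
import math

def kde_pdf(samples, bandwidth=0.5):
    """1-D Gaussian KDE: a continuous density built directly from observed
    indicator values, without assuming normality."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return pdf

def schoener_d(pdf_a, pdf_b, lo=1.0, hi=9.0, steps=200):
    """Schoener's D on a shared indicator grid (here an assumed 1-9 Ellenberg
    scale): both densities are renormalized over the grid, then
    D = 1 - 0.5 * sum(|p - q|), so D = 1 for identical niches and 0 for disjoint ones."""
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    pa = [pdf_a(x) for x in xs]
    pb = [pdf_b(x) for x in xs]
    pa = [v / sum(pa) for v in pa]
    pb = [v / sum(pb) for v in pb]
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(pa, pb))
```

      <p>Because the KDE follows the data, skewed or bimodal indicator distributions are represented directly,
which is precisely what the single‑Gaussian S cannot do.</p>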
      <sec id="sec-5-1">
        <title>Overfitting and lack of independent validation</title>
        <p>Local validation was not performed, although the single‑plant close‑up dataset (≈ 1.4 million images)
is formally available within the challenge. Because the team was unable to download this volume in time
due to technical constraints, hyperparameter tuning relied exclusively on the public score, leading to
overfitting and a loss of three positions on the private leaderboard.</p>
        <p>Several methods can locally estimate model quality, detect overfitting and guide hyperparameter
selection on unlabelled data. Under PlantCLEF 2025 conditions it would have been reasonable to select the
output limiter and confidence threshold on a small subset (about 50k) of single‑plant close‑up images using
proxy metrics such as F1@k. For automatic optimization of these two hyperparameters in future work,
a single‑plant validation set is proposed. It has been theoretically shown that, for a narrow distribution
of ground‑truth label counts, such optimization transfers to multi‑label images without bias [16].</p>
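        <p>A minimal sketch of the proposed proxy: since each close‑up carries exactly one ground‑truth species,
F1@k reduces to 2·hit/(k + 1), and the output limiter k can then be chosen to maximize its mean over the
proxy set (function names and the candidate grid are hypothetical):</p>

```python
def f1_at_k(true_label, ranked_preds, k):
    """F1@k for a single-label close-up: one ground-truth species against
    the top-k predictions. Precision = hit/k, recall = hit/1, so
    F1 = 2*hit/(k+1), where hit is 0 or 1."""
    hit = 1 if true_label in ranked_preds[:k] else 0
    return 2.0 * hit / (k + 1)

def choose_output_limit(val_set, k_candidates=(1, 2, 3, 5)):
    """Pick the output limiter k maximizing mean F1@k on the proxy set.
    val_set: list of (true_label, ranked_prediction_list) pairs."""
    def mean_f1(k):
        return sum(f1_at_k(t, p, k) for t, p in val_set) / len(val_set)
    return max(k_candidates, key=mean_f1)
```

        <p>The same loop extends to the confidence threshold by truncating each ranked list at the threshold
before scoring, turning both hyperparameters into a small grid search that never touches the leaderboard.</p>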
      </sec>
      <sec id="sec-5-2">
        <title>Complexity and redundancy of certain components</title>
        <p>Although the GroundingDINO + SAM combination effectively suppresses artificial objects (Sec. 3.5),
it increases memory consumption and inference time; preliminary experiments with simple rectangular
masks resulted in a reduction of merely 0.3 pp in F1, indicating that the segmentation step can be
simplified.</p>
        <p>The cascade scheme with three additional species (Scheme C, Sec. 3.7) increased the public score but
reduced the private score, representing further overfitting that could have been avoided through local
validation.</p>
        <p>Use of external data. The PlantCLEF rules permit external data; however, the ecological layer
described in this work extends beyond a purely computer‑vision task and limits transferability to other
regions and taxonomic groups.</p>
        <p>Future work is planned to: (i) develop and implement a local validation scheme based solely on
label‑free proxy metrics; (ii) create code for computing an ecological neighborhood validity metric based on
niche intersection (KDE/GMM hypervolumes) and global co‑occurrence statistics (GBIF); and (iii) openly
publish a repository containing the full ecological post‑processing pipeline
(https://github.com/nellysemenova, release planned for Q3 2025). These developments should enhance reproducibility and extend
applicability to new regions and datasets.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Sincere gratitude is extended to all naturalists, researchers, and experts who contribute data to and
support the work of the Global Biodiversity Information Facility (GBIF) worldwide; their tireless efforts
and commitment to open science have made the results presented in this paper possible.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the following Generative AI tool was employed:
ChatGPT o3 (OpenAI, May 2025 model), used for translation of the paper from Russian to English,
grammar and spelling checks, and improvements to the writing style.</p>
      <p>All AI-generated suggestions were reviewed and edited manually; the authors assume full
responsibility for the final content. No Generative AI system was used to create original scientific ideas,
analyse data, or draw conclusions.</p>
      <p>[15] D. Warren, R. Glor, M. Turelli, ENMTools: A toolbox for comparative studies of environmental
niche models, Ecography 33 (2010) 607-611. doi:10.1111/j.1600-0587.2009.06142.x.</p>
      <p>[16] N. Xu, C. Qiao, J. Lv, X. Geng, M.-L. Zhang, One positive label is sufficient: Single-positive
multi-label learning with label enhancement, 2022. URL: https://arxiv.org/abs/2206.00517. arXiv:2206.00517.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Martellucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vinatier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <article-title>Overview of PlantCLEF 2025: Multi-species plant identification in vegetation quadrat images</article-title>
          ,
          <source>in: Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Adam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Larcher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Leblanc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Servajean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Janoušková</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Čermák</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Papafitsoros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Planqué</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-P.</given-names>
            <surname>Vellinga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Klinck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Denton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Cañas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Martellucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vinatier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <article-title>Overview of LifeCLEF 2025: Challenges on species presence prediction and identification, and individual animal identification</article-title>
          ,
          <source>in: International Conference of the Cross-Language Evaluation Forum for European Languages</source>
          , Springer,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Espitalier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <article-title>Overview of PlantCLEF 2024: Multi-species plant identification in vegetation plot images</article-title>
          ,
          <source>in: Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>GBIF.Org</given-names>
            <surname>User</surname>
          </string-name>
          , Occurrence download,
          <year>2025</year>
          . URL: https://www.gbif.org/occurrence/download/0003757-250227182430271. doi:10.15468/DL.QT3PHR.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dengler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Chusova</surname>
          </string-name>
          , E. Hüllbusch,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Nobis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. V.</given-names>
            <surname>Meerbeek</surname>
          </string-name>
          , et al.,
          <article-title>Ecological Indicator Values for Europe (EIVE) 1.0</article-title>
          ,
          <source>Vegetation Classification and Survey</source>
          <volume>4</volume>
          (
          <year>2023</year>
          )
          <fpage>7</fpage>
          -
          <lpage>29</lpage>
          . doi:10.3897/VCS.98324.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Goëau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-C.</given-names>
            <surname>Lombardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Afouard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Espitalier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <article-title>PlantCLEF 2024 pretrained models on the flora of the south western Europe based on a subset of Pl@ntNet collaborative images and a ViT base patch 14 dinoV2</article-title>
          ,
          <year>2024</year>
          . URL: https://doi.org/10.5281/zenodo.10848263. doi:10.5281/zenodo.10848263.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Oquab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Darcet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Moutakanni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. V.</given-names>
            <surname>Vo</surname>
          </string-name>
          , et al.,
          <article-title>DINOv2: Learning robust visual features without supervision</article-title>
          ,
          <source>arXiv:2304.07193</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Foy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>McLoughlin</surname>
          </string-name>
          ,
          <article-title>Utilising DINOv2 for domain adaptation in vegetation plot analysis (PlantCLEF 2024)</article-title>
          ,
          <source>in: Working Notes of CLEF 2024</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Picek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šulc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Matas</surname>
          </string-name>
          ,
          <article-title>Plant recognition by AI: Deep neural nets, transformers and kNN in deep embeddings</article-title>
          ,
          <source>Frontiers in Plant Science</source>
          <volume>13</volume>
          (
          <year>2022</year>
          )
          <fpage>787527</fpage>
          . doi:10.3389/fpls.2022.787527.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2303.05499. arXiv:2303.05499.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kirillov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mintun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rolland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gustafson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Whitehead</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Berg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-Y.</given-names>
            <surname>Lo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollár</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          , Segment anything,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2304.02643. arXiv:2304.02643.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H. C.</given-names>
            <surname>Wittich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Seeland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wäldchen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rzanny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mäder</surname>
          </string-name>
          ,
          <article-title>Recommending plant taxa for supporting on-site species identification</article-title>
          ,
          <source>BMC Bioinformatics</source>
          <volume>19</volume>
          (
          <year>2018</year>
          )
          <fpage>190</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>U.</given-names>
            <surname>Jandt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bruelheide</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Berg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bernhardt-Römermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Blueml</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bode</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dengler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Diekmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Dierschke</surname>
          </string-name>
          , I. Doerfler,
          <string-name>
            <given-names>U.</given-names>
            <surname>Döring</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dullinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Haerdtle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Haider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Heinken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Horchler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kudernatsch</surname>
          </string-name>
          , G. Kuhn,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wulf</surname>
          </string-name>
          , ResurveyGermany:
          <article-title>Vegetation-plot time-series over the past hundred years in Germany</article-title>
          ,
          <source>Scientific Data</source>
          <volume>9</volume>
          (
          <year>2022</year>
          )
          <fpage>631</fpage>
          . doi:10.1038/s41597-022-01688-6.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Leccese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fanelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. E.</given-names>
            <surname>Cambria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Massimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Attorre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alfò</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aćić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Bergmeier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Čarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Custerevska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dimopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hoda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mullaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Šilc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Skvorc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Stancic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. Dajic</given-names>
            <surname>Stevanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tzonev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vassilev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Malatesta</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. De Sanctis</surname>
          </string-name>
          ,
          <article-title>Estimation of missing Ellenberg indicator values for tree species in south-eastern Europe: a comparison of methods</article-title>
          ,
          <source>Ecological Indicators</source>
          <volume>160</volume>
          (
          <year>2024</year>
          )
          <fpage>111851</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1470160X2400308X. doi:10.1016/j.ecolind.2024.111851.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>