<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Tighnari: Multi-modal Plant Species Prediction Based on Hierarchical Cross-Attention Using Graph-Based and Vision Backbone-Extracted Features Notebook for the &lt;LifeCLEF&gt; Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Haixu</forename><surname>Liu</surname></persName>
							<email>liuhaixu1998@foxmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">The University of Sydney</orgName>
								<address>
									<addrLine>Camperdown Campus</addrLine>
									<postCode>2006 NSW</postCode>
									<settlement>Sydney</settlement>
									<country key="AU">Australia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Penghao</forename><surname>Jiang</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">The University of Sydney</orgName>
								<address>
									<addrLine>Camperdown Campus</addrLine>
									<postCode>2006 NSW</postCode>
									<settlement>Sydney</settlement>
									<country key="AU">Australia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Zerui</forename><surname>Tao</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">The University of Sydney</orgName>
								<address>
									<addrLine>Camperdown Campus</addrLine>
									<postCode>2006 NSW</postCode>
									<settlement>Sydney</settlement>
									<country key="AU">Australia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Muyan</forename><surname>Wan</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">The University of Sydney</orgName>
								<address>
									<addrLine>Camperdown Campus</addrLine>
									<postCode>2006 NSW</postCode>
									<settlement>Sydney</settlement>
									<country key="AU">Australia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Qiuzhuang</forename><surname>Sun</surname></persName>
							<email>qiuzhuang.sun@sydney.edu.au</email>
							<affiliation key="aff0">
								<orgName type="institution">The University of Sydney</orgName>
								<address>
									<addrLine>Camperdown Campus</addrLine>
									<postCode>2006 NSW</postCode>
									<settlement>Sydney</settlement>
									<country key="AU">Australia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Tighnari: Multi-modal Plant Species Prediction Based on Hierarchical Cross-Attention Using Graph-Based and Vision Backbone-Extracted Features Notebook for the &lt;LifeCLEF&gt; Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">F337E774CF5BC43D6CEDA34AF499295B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:03+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Swin-Transformer</term>
					<term>Computer Vision Backbone</term>
					<term>Graph Feature Extract</term>
					<term>Hierarchical Cross-Attention Fusion Mechanism</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Predicting plant species composition in specific spatiotemporal contexts plays an important role in biodiversity management and conservation, as well as in improving species identification tools. Our work utilizes 88,987 plant survey records conducted in specific spatiotemporal contexts across Europe. We also use the corresponding satellite images, time series data, climate time series, and other rasterized environmental data such as land cover, human footprint, bioclimatic, and soil variables as training data to train the model to predict the outcomes of 4,716 plant surveys. We propose a feature construction and result correction method based on the graph structure. Through comparative experiments, we select the best-performing backbone networks for feature extraction in both temporal and image modalities. In this process, we built a backbone network based on the Swin-Transformer Block for extracting temporal Cubes features. We then design a hierarchical cross-attention mechanism capable of robustly fusing features from multiple modalities. During training, we adopt a 10-fold cross-fusion method based on fine-tuning and use a Threshold Top-K method for post-processing. Ablation experiments demonstrate the improvements in model performance brought by our proposed solution pipeline. This work achieves a private leaderboard score of 0.36242 in the GeoLifeCLEF 2024 LifeCLEF &amp; CVPR-FGVC Competition, securing third place in the rankings (Team name: Miss Qiu).</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction: Background and Related Literature</head><p>Predicting the composition of plant species over segmented time and spatial scales plays an important role in managing and protecting ecosystem biodiversity and improving species identification tools. Therefore, the LifeCLEF lab of the CLEF conference and the FGVC11 workshop of CVPR jointly host the GeoLifeCLEF 2024 competition centered around this task <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>We review the strategies of past winners of this competition. The fourth-place entry <ref type="bibr" target="#b2">[3]</ref> in 2022 describes a "Patched" approach, in which variables from the eight modalities provided that year are aligned into a single image format of (256, 256, 3). These modalities are then processed separately by ResNet50 <ref type="bibr" target="#b3">[4]</ref>, and the output features are concatenated and passed through a linear layer, with Top-K processing applied to the outputs. In contrast, the second-place entry from the same year <ref type="bibr" target="#b4">[5]</ref> abandons data from other modalities and uses only remote sensing data, creating NDVI image-format features from NIR and RGB data. They train multiple models based on ResNet50, DenseNet201, and Inception-v4, and fuse the logits. The champion's strategy in 2023 <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref> introduces three backbone networks based on ResNet. The first network solely extracts features from bioclimatic raster data, while</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Exploratory Data Analysis</head><p>Exploratory Data Analysis (EDA) is essential for unveiling the motivations behind our choice of modeling techniques. We divide our analysis into several steps, starting with the visualization of unstructured data (shown in Figure <ref type="figure" target="#fig_1">1</ref> and Figure <ref type="figure" target="#fig_3">2</ref>). We find that time series data exhibit strong periodicity, which inspires us to consider whether folding the time series according to its periodicity, transforming 1D data into 2D data, might facilitate more effective feature extraction.</p><p>Subsequently, we perform outlier detection in tabular data, followed by data cleaning and imputation of missing values. We group the data by year and region to observe the distribution of features and the correlations between them under different groupings. The analysis reveals significant variations in feature distribution and correlations across different regions, leading us to hypothesize that these variations could influence the distribution of species (shown in Figure <ref type="figure" target="#fig_4">3</ref> and Figure <ref type="figure" target="#fig_5">4</ref>). This hypothesis is confirmed by further examining the prevalence of leading species in different regions. Additionally, we note that feature distribution and correlations do not show significant differences between close years, but observable differences emerge across more widely separated years. Hence, we conclude that sharing the same year and region is a prerequisite for establishing an edge between two nodes (supporting label aggregation).</p><p>Next, we visualize the geographic locations of PA and PO Survey IDs to explore whether they exhibit any clustering tendencies (shown in Figure <ref type="figure" target="#fig_6">5</ref>). The visualization indicates a tendency for the survey locations to cluster. 
We analyze some of these smaller clusters, calculate their radii, and estimate that the radius around each small cluster center is approximately 10 kilometers. To prevent a node from being overwhelmed by an excessive number of adjacent nodes, which could reduce the variance of the aggregated feature vectors, we introduce a further constraint on edge creation: no edge exists between nodes whose geographic distance exceeds 10 kilometers.</p><p>We then visualize the labels, observing the frequency of species occurrences and the frequency distribution of the number of species recorded per survey (shown in Figure <ref type="figure" target="#fig_7">6</ref>). This analysis aids us in setting a reasonable range for K in the Top-K process. It can be observed that the optimal value of K should be between 0 and 100. Finally, using survey IDs as nodes, we construct a graph based on the aforementioned rules. We examine the distribution of each node's degree (the sum of edge weights) on the graph. We find the distribution to be highly uneven, which could degrade the quality of aggregated features on the graph. Direct normalization of the aggregated feature vectors by the degree could excessively diminish the values corresponding to rarer species. Inspired by the Attention mechanism <ref type="bibr" target="#b12">[13]</ref>, which normalizes attention scores using a square root transformation, we adopt this technique and use the square root of the degree as the divisor for normalizing aggregated feature vectors. Through visualization, we discover that this approach allows the divisor used to normalize the feature vectors of each node to approximate a normal distribution, and also yields a smaller range of divisors (shown in Figure <ref type="figure" target="#fig_8">7</ref>).</p></div>
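The square-root degree normalization described above can be sketched as follows (a minimal numpy illustration with random stand-in data; variable names and sizes are ours, not the paper's):

```python
import numpy as np

# Illustrative sketch of square-root degree normalization of aggregated labels.
rng = np.random.default_rng(0)

num_nodes, num_species = 6, 10
# 0-1 label vectors, one row per node.
labels = rng.integers(0, 2, size=(num_nodes, num_species)).astype(float)
# Symmetric-style edge weights between nodes (dense here for simplicity).
weights = rng.uniform(0.0, 10.0, size=(num_nodes, num_nodes))
np.fill_diagonal(weights, 0.0)          # no self-edges

degree = weights.sum(axis=1)            # degree = sum of edge weights, as above
aggregated = weights @ labels           # weighted sum of neighbour label vectors
# Divide by sqrt(degree) rather than degree, so values for rarer species
# are not excessively diminished.
gfv = aggregated / np.sqrt(degree)[:, None]
```

Dividing by the square root keeps the divisor in a narrower range than the raw degree, which is the effect observed in the visualization above.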
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>In this section, we provide a detailed description of each step in the modeling process and the motivations behind them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Table Data Cleaning and Missing Value Imputation</head><p>Initially, we group the metadata for both PA and PO by Survey ID. We then replace outliers in the grouped metadata of PA and the test set with null values, and impute missing values with the mean. Categorical data are then one-hot encoded. Subsequently, for the label (speciesId), we encode each element according to its corresponding number using 0-1 encoding.</p><p>Next, we access all tabular data for the training and test sets from the EnvironmentalRasters folder. We notice that the human footprint column contains −1 and extreme outliers. Based on the observed data distribution, we set all values greater than 255 to 255 and those less than 0 to 1. Subsequently, we merge all tables by Survey ID, replacing all infinite and infinitesimal values with nulls, and again impute all missing values with the mean. Finally, the PA training set and the test set are merged again with the cleansed EnvironmentalRasters data by Survey ID to produce the cleaned tabular modality data. In the subsequent sections, we represent the features extracted from tables as 𝐹 .</p><p>Following this, we clean the time series data by folding it into cubes. In the baseline method, all null values in the Tensor are replaced with zeros. We change this replacement value to the mean of the Tensor. 
Additionally, as the Swin-Transformer cannot accept prime numbers as the dimensions of input image matrices, we trim the shapes of the two sets of time series cubes from (4, 19, 12) and (6, 4, 21) to (4, 18, 12) and (6, 4, 20), respectively. This is justified because the last year in the series already has many missing values, so trimming directly does not result in significant information loss.</p></div>
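The folding, mean imputation, and trimming steps can be sketched as follows (an illustrative numpy sketch on synthetic data; the actual pipeline details may differ):

```python
import numpy as np

# Fold a 1-D series into a (channels, years, months) cube, impute NaNs with
# the tensor mean, and trim the odd year axis: (4, 19, 12) -> (4, 18, 12).
series = np.arange(4 * 19 * 12, dtype=float)   # synthetic stand-in data
series[::50] = np.nan                          # simulate missing values
cube = series.reshape(4, 19, 12)               # fold by the 12-month period

# Replace NaNs with the mean of the tensor rather than zero (as described above).
cube = np.where(np.isnan(cube), np.nanmean(cube), cube)

# Trim the 19-year axis to 18; the last year is mostly missing anyway.
cube = cube[:, :18, :]
```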
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Graph Construction and Utilization</head><p>Graph construction and utilization are highlights of our work. To establish graph relationships among samples, we base our approach on two fundamental assumptions. First, we assume a clustering tendency in the spatial distribution of individual species, meaning that if a species appears in nodes surrounding a particular Survey ID, the likelihood of its occurrence at that Survey ID increases. Second, we hypothesize that within the same ecological environment, there is a correlation between the spatiotemporal distributions of different species, implying that samples (nodes) close in geographical location and year exhibit higher similarity in species composition. Our earlier data visualization validates these assumptions.</p><p>Our graph construction process is two-fold. The first step involves establishing a base graph for aggregating labels from adjacent nodes. Our rationale and actions are as follows: visualization reveals significant differences in ecological characteristics and species distribution among different regions, and even within the same region across widely separated years. Consequently, we determine that nodes sharing the same year and region is a prerequisite for an edge (supporting label aggregation).</p><p>However, simply adding edges between nodes of the same year and region can result in highly consistent feature vectors after label aggregation (with minimal variance), and the varying numbers of nodes across different regions and years can lead to significant imbalances in feature vector values, thus affecting feature quality. To counter this, we further restrict the edge creation conditions. Visualization shows clustering tendencies in survey locations; we identify some smaller clusters and calculate their radii, with an estimated radius of about 10 kilometers for each cluster center. 
Therefore, we stipulate that nodes more than 10 kilometers apart are not connected by an edge. Moreover, we want the edge weight between two nodes to increase as their distance decreases, hence we set the edge weight as the maximum allowable distance (10 kilometers) minus the actual distance between the two nodes. Based on these criteria, we add an edge for every pair of nodes that meets the conditions, thus forming the graph.</p><p>Next, we need to establish rules for generating a graph feature vector (GFV) for each node through label aggregation from adjacent nodes. We convert each node's neighboring node labels into 0-1 encoded vectors, where the presence of a species is marked as 1 and absence as 0, resulting in each node's label being an 11,255-dimensional vector. For any node, the weighted sum of all its neighboring nodes' label vectors and corresponding edge weights constitutes the node's GFV. To avoid the severe imbalance in the numerical distribution of the GFV mentioned in the previous section, we adopt a square-root normalization trick similar to that used in attention mechanisms, using the square root of the degree as the divisor for the GFV. Additionally, to prevent global imbalances in the numerical distribution of GFVs among all nodes, we normalize all nodes' GFVs and reassign them accordingly.</p><p>Finally, we add each Survey ID from the test set as a node to this graph, determining whether to create edges with existing training set nodes based on the established criteria (note that edges should not be generated between test set nodes, to avoid affecting the statistics of node degrees). We then aggregate the GFVs for inference on the test set nodes. 
The specific calculation formula is as follows:</p><formula xml:id="formula_0">𝐺𝐹 𝑉 𝑖 = ∑︀ 𝐷 𝑖 𝑗=1 𝐿 𝑖𝑗 × 𝑊 𝑖𝑗 √ 𝐷 𝑖<label>(1)</label></formula><formula xml:id="formula_1">𝑊 𝑖𝑗 = 10 − 6371 × 𝑅𝑎𝑑 𝑖𝑗 ,<label>(2)</label></formula><p>where 𝑖 represents the ID of the current node, 𝑗 represents the ID of a node connected to it, 𝑊 represents the edge weight between these two nodes, 𝐿 represents the label vector passed from a node adjacent to the current node, 𝐷 represents the degree of the current node, and 𝑅𝑎𝑑 represents the radian distance calculated between two points based on their latitude and longitude (multiplying by the Earth radius of 6,371 km converts it to kilometers). In the second step, we clone the graph with its nodes, edges, weights, years, regions, coordinates, and labels into a new graph, marking training and test set nodes with category labels. We then label the auxiliary nodes from the PO metadata table grouped by Survey ID according to the first step's edge creation conditions and create edges with qualifying test nodes. For each test node, we select the 𝑁 adjacent training nodes with the highest weights (closest geographical locations) and identify species appearing more than 𝐿 times, generating a list. We then select the 𝐴 auxiliary nodes with the highest weights, identify species appearing more than 𝑀 times, and compile another list. Merging and deduplicating these two lists provides a high-probability species list for each test node, used for correcting model output in post-processing. The default settings are: 𝑁 = 5, 𝐿 = 4, 𝐴 = 10, 𝑀 = 8. </p></div>
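Equations (1)-(2) can be sketched in plain Python as follows (assuming a haversine radian distance and an Earth radius of 6,371 km; function and variable names are our own illustrations, and the degree is taken here as the neighbour count):

```python
import math

def radian_distance(lat1, lon1, lat2, lon2):
    """Haversine central angle (in radians) between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * math.asin(math.sqrt(a))

def edge_weight(lat1, lon1, lat2, lon2, max_km=10.0, earth_radius_km=6371.0):
    """Eq. (2): weight = max allowed distance minus actual distance; None if too far."""
    km = earth_radius_km * radian_distance(lat1, lon1, lat2, lon2)
    if km > max_km:
        return None        # no edge beyond 10 km
    return max_km - km

def gfv(neighbor_labels, weights):
    """Eq. (1): weighted sum of neighbour label vectors over sqrt(degree)."""
    degree = len(weights)  # illustrative choice; the text also uses summed weights
    total = [0.0] * len(neighbor_labels[0])
    for lab, w in zip(neighbor_labels, weights):
        for k, v in enumerate(lab):
            total[k] += v * w
    return [t / math.sqrt(degree) for t in total]
```

For example, two surveys at the same coordinates get the maximum weight of 10, while nodes more than 10 km apart get no edge at all.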
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 2 Complete Graph Construction and Species List Compilation with Sorting</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Temporal Feature Extraction</head><p>Another highlight of our work is the development of a temporal feature extraction method based on the Swin-Transformer Block. This approach is inspired by Haixu Wu et al., who used a CNN-based Inception backbone network to extract features from folded time series data in TimesNet <ref type="bibr" target="#b13">[14]</ref>. Wu and colleagues argue that this method, compared to traditional time series neural networks, is not only better at capturing multi-scale sequential relationships but also has a stronger capability for spatio-temporal information fusion. It shows superior performance across various time series analysis tasks and offers more efficient training and inference. Our goal in processing time series is to obtain higher quality features that are more conducive to modality fusion, rather than making better predictions about future time points. Therefore, we believe that using a visual backbone network to process time series cubes should yield better feature extraction results than traditional time series models.</p><p>In the current research on deep learning technology, whether for image processing or time series prediction tasks, methods based on Transformers are considered superior to those based on CNNs. 
Consequently, we experiment with a backbone network specially designed for extracting temporal features using Swin-Transformer Blocks and Vision Transformer Blocks.</p><p>Taking the Swin-Transformer as an example, for time series cubes cropped to sizes (4, 18, 12) and (6, 4, 20), we set the Patch size to (3, 3) and (2, 5), and the Window size to (3, 2) and (2, 3), respectively. We stack a Swin-Transformer Stage with a depth of 2 and 12 attention heads and another with a depth of 6 and 24 attention heads to create the backbone network for handling our specific time series cubes. The attention function can be defined mathematically by the equation:</p><formula xml:id="formula_2">Attention(𝑄, 𝐾, 𝑉 ) = SoftMax (︂ 𝑄𝐾 𝑇 √ 𝑑 + 𝐵 )︂ 𝑉 ,<label>(3)</label></formula><p>where 𝑄 represents the query matrix, 𝐾 represents the key matrix, 𝑉 represents the value matrix, 𝑑 is the dimensionality of the keys and queries, typically used for scaling, and 𝐵 represents the positional encoding for time sequences. Unlike the classic Swin-Transformer, where a two-dimensional vector indicates the absolute position of patches in an image, here we employ a one-dimensional vector to represent the position of each patch in a flattened sequence. This modification better adapts to temporal tasks. Apart from that, the meanings of the other symbols and formulas are the same as those in the classic Swin-Transformer, and are not reiterated here <ref type="bibr" target="#b14">[15]</ref>. We depart from the standard practice of stacking four stages in the Swin-Transformer backbone network because the dimensions of the time series cubes are too small. 
With each stacked stage, the dimensions for the subsequent stage are halved. Moreover, the Swin-Transformer Block does not accept odd dimensions for input feature maps, making two blocks the most logical configuration for our current time series cube dimensions.</p><p>The depths of the two stacked blocks are chosen to be 2 and 6, corresponding to the depths of the second and third stages in the classic Swin-Transformer backbone network. This configuration means that they extract shallow and deep features, respectively. This 1:3 depth ratio has also been adopted by subsequent backbone networks, such as ConvNeXt <ref type="bibr" target="#b15">[16]</ref>.</p><p>We select 12 and 24 as the attention head counts for the two stages, following the counts used in the third and fourth stages of the classic Swin-Transformer backbone network. Increasing the number of attention heads allows the model to independently capture features across more subspaces. Since the feature map sizes entering the third and fourth stages in the original model are already quite small, similar to the size of our time series cubes, having many attention heads does not overly increase the computational burden. Therefore, we use as many attention heads as possible to more comprehensively extract features from the time series cubes. The input and output representations of the final model are as follows:</p><formula xml:id="formula_3">𝑇 = Temporal-Swin-T(𝑈 ),<label>(4)</label></formula><p>where 𝑈 ∈ R 𝐻×𝑊 ×𝐶 , 𝐻 represents the number of seasons or months, 𝑊 represents the number of years, 𝐶 represents the number of channels, and 𝑇 are the features after Swin-Transformer processing. We give a comparison diagram to reveal the differences and connections between our model and the classic Swin-Transformer (shown in Figure <ref type="figure" target="#fig_11">9</ref>).</p><p>Based on this rationale, we also design a Vision Transformer (ViT) backbone network for extracting temporal information. 
To validate our approach, we design rigorous comparative experiments using our new temporal feature extraction network to replace the ResNet18 <ref type="bibr" target="#b3">[4]</ref> used in the baseline for extracting time series features. Given the small size of the time series cubes relative to images, we primarily choose small backbone networks to prevent overfitting and gradient vanishing.</p><p>Our comparative backbone networks include the baseline's ResNet18 <ref type="bibr" target="#b3">[4]</ref>, Xception41 <ref type="bibr" target="#b16">[17]</ref> (an improved version of the Inception backbone used in TimesNet <ref type="bibr" target="#b13">[14]</ref>), the CNN-based lightweight backbone networks EfficientNet-B0 <ref type="bibr" target="#b17">[18]</ref> and MobileNetV3 <ref type="bibr" target="#b18">[19]</ref>, as well as our designed Swin-Transformer and Vision Transformer backbone networks for time series. The final experimental results demonstrate that our custom-designed temporal feature extraction networks perform optimally, with the Swin-Transformer showing the fastest gradient descent and the best results on the private leaderboard.</p></div>
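The biased attention of Eq. (3) can be illustrated with a minimal numpy sketch (the bias B here is random; in the real model it is a learned bias indexed by one-dimensional patch position, and all shapes are our own assumptions):

```python
import numpy as np

def attention(Q, K, V, B):
    """Scaled dot-product attention with an additive positional bias, per Eq. (3)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + B                     # (n_q, n_k) logits
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    probs = np.exp(scores)
    probs = probs / probs.sum(axis=-1, keepdims=True)
    return probs @ V

rng = np.random.default_rng(0)
n, d = 6, 8                                  # 6 flattened time-cube patches
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
# Stand-in for the learned 1-D positional bias of the flattened patch sequence.
B = rng.standard_normal((n, n)) * 0.1
out = attention(Q, K, V, B)
```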
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Image Feature Extraction</head><p>Although this is a multi-task classification problem, the low resolution of satellite images clearly does not allow for accurate prediction of plant species in a given region. Therefore, the primary motivation for processing images is still to extract high-quality features that are conducive to modality fusion. In the baseline, the image feature extraction network employed is the tiny version of the Swin-Transformer, which was presented in a Best Paper at ICCV 2021 <ref type="bibr" target="#b14">[15]</ref>. We note that most models are pre-trained with input sizes of (224, 224) or (384, 384). To better utilize the weights preserved in the pre-trained model and retain the original information carried by the satellite images, we resize the satellite images from (128, 128) to (224, 224) before inputting them into the pre-trained model for fine-tuning. Experimental evidence shows that this adjustment significantly enhances the model's performance.</p><p>To explore whether there are better alternatives, we conduct comparative experiments with other models such as EfficientNet-B0 <ref type="bibr" target="#b17">[18]</ref>, ConvNeXt-Base <ref type="bibr" target="#b15">[16]</ref>, and ViT-Base <ref type="bibr" target="#b19">[20]</ref>, all pre-trained with an input size of (224, 224). The choice of ConvNeXt-Base and ViT-Base is based on their status, along with the Swin-Transformer, as current state-of-the-art (SOTA) computer vision backbone networks, and they are comparable to the tiny version of the Swin-Transformer in terms of the number of parameters. EfficientNet-B0 is selected partly because it is one of the most powerful feature extraction networks among traditional CNN-based backbones, seen as a superior alternative to the ResNet scheme. 
Moreover, EfficientNet-B0 is exceptionally lightweight and converges quickly, in terms of both parameter count and training time, allowing for significant optimization of the overall model training and inference time without substantial performance loss.</p><p>Our comparative experiments reveal that using the ResNet18 scheme for both temporal and image feature extraction achieves the highest accuracy. </p><formula xml:id="formula_4">𝑆 = Swin-T(𝐼),<label>(5)</label></formula><p>where 𝑆 is the output feature map, 𝐼 ∈ R 𝐻×𝑊 ×𝐶 is the satellite image, 𝐻 represents the height of satellite images, 𝑊 represents the width, and 𝐶 represents the number of channels.</p></div>
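The resize step from (128, 128) to (224, 224) can be sketched as follows (a nearest-neighbour version for clarity; in practice a library bilinear resize would typically be applied before feeding the pre-trained backbone, and all names here are our own):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) array to (out_h, out_w, C)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

# Upsample a (128, 128, 3) satellite image to the pre-training size (224, 224).
img = np.zeros((128, 128, 3), dtype=np.uint8)
big = resize_nearest(img, 224, 224)
```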
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Hierarchical Cross-Attention Fusion Mechanism (HCAM)</head><p>Another significant highlight of our work is the introduction of a hierarchical cross-attention fusion mechanism to address the challenge of efficiently fusing feature vectors with varying information densities extracted from different modalities. In the aforementioned steps, we have extracted information (𝑇 , 𝑆, 𝐹 ′ , 𝐺 ′ ) from four modalities, including the time-series modality (𝑈 ), the satellite imagery modality (𝐼), and the tabular modality features (𝐹 ), as well as the graph modality features (𝐺) we constructed and extracted ourselves. 𝐹 ′ and 𝐺 ′ represent 𝐹 and 𝐺 after being processed by fully connected layers.</p><p>In terms of information density, satellite imagery modal features are the densest, because the backbone network essentially compresses information carried by multiple channels of an image into a limited set of features for classification mapping via a fully connected layer. Time-series and tabular modal features are less dense, as in our model we attempt to map them to feature vectors of the same or even higher dimensions. Graph modal features are the sparsest; although we aggregate features from different nodes, the majority of elements in a graph feature vector, relative to all 11,255 dimensions, are still marked as 0.</p><p>Initially, we try to use concatenation for modality fusion similar to the baseline, but due to the high dimensionality and sparsity of the graph feature vectors, the model's loss reduction process is unstable.</p><p>Consequently, we decide to use cross-attention, more commonly seen in multimodal learning, to attempt fusion of these modalities. However, cross-attention supports only the fusion of two modalities at a time, requiring six uses for pairwise fusion among four modalities, and still necessitating concatenation to integrate each cross-attention output. 
This not only increases computational overhead but also fails to ensure a controllable reduction in training loss. After multiple adjustments using cross-attention, we determine the optimal modality fusion structure, with specific operations and motivations as follows:</p><p>Firstly, the features extracted from the satellite imagery modal are the densest in information. However, observations of the raw data reveal that satellite images primarily provide features of the landscape and vegetation cover, such as color and density of foliage, at the location of the current SurveyID. These features may map to higher-order latent features such as seasonal climate and ecological environment. Meanwhile, the time-series features record climate characteristics and vegetation changes of the area, and the tabular features mainly document the ecological environment characteristics of the area. We consider the time-series and tabular features as two sets of queries, querying keys of climate and vegetation change features, and ecological environment features respectively, both derived from the image features through two parallel linear layers. This setup allows the calculation of attention scores for the time-series and image modalities on climate and vegetation changes, and for the image and tabular modalities on ecological features. The outputs of the cross-attention from these two groups are then concatenated after being calculated from two sets of values mapped from the image features through two parallel linear layers. This concatenated output serves as the final output of the cross-attention calculation for these three modalities. Simultaneously, when the features of the three modalities are fed into this cross-attention module, they are concatenated and mapped to the same dimension as the output of the cross-attention through a linear layer serving as a cutoff, and added together, forming a residual connection. 
This addition enhances the robustness of the cross-attention module during training. The cross-attention (CA) function can be defined mathematically by the equation:</p><formula xml:id="formula_5">𝑄 1 = 𝑇 × 𝑊 𝑄𝑇 , 𝑄 2 = 𝐹 ′ × 𝑊 𝑄𝐹 , 𝐾 1 = 𝑆 × 𝑊 𝐾1 , 𝐾 2 = 𝑆 × 𝑊 𝐾2 , 𝑉 1 = 𝑆 × 𝑊 𝑉 1 , 𝑉 2 = 𝑆 × 𝑊 𝑉 2 , 𝐴 1 = Softmax (︂ 𝑄 1 𝐾 𝑇 1 √ 𝑑 𝑘1 )︂ 𝑉 1 , 𝐴 2 = Softmax (︂ 𝑄 2 𝐾 𝑇 2 √ 𝑑 𝑘2 )︂ 𝑉 2 , 𝑂 = Concat(𝐴 1 , 𝐴 2 ), 𝑂 2 = Concat(𝑇, 𝐹, 𝑆), 𝑂 CA = 𝑂 + Linear(𝑂 2 ).<label>(6)</label></formula><p>In our initial concept, we consider using the graph modality as the primary modality, given that its features are directly aggregated from the labels of adjacent nodes. Based on the assumptions stated at the beginning of Section 3.2, these features should closely approximate the current node's label. Therefore, we intend to use the features from the other modalities to correct the graph modality, aiming to achieve higher accuracy. However, during training, the difference in information density between the graph modal features and the other modal features is too great. Whether through direct concatenation, mapping the three modalities' features to the graph feature vector's dimension for addition, or through cross-attention, we are unable to guide the graph feature modality to generate accurate labels for prediction samples (nodes).</p><p>In ecology, the composition of vegetation species in a specific spacetime is often determined by factors such as climate conditions, ecological environments, and vegetation changes. Considering that the species appearing in a node's adjacent nodes can represent these three major features of the node's spacetime, albeit in too sparse a representational form, we decide to reduce the dimensionality of the graph feature vector. We compress it to the same dimension as the concatenated vector of the other three modalities, then perform another multi-head cross-attention (MHCA) calculation. 
Here, the concatenated vector of the other three modalities' features serves as the Query, while the compressed graph feature vector is mapped to the Key and Value by two parallel linear layers; the multi-head cross-attention is then calculated and output. Our rationale for this choice is that the concatenated vector of the three modalities' features represents the measured characteristics of the climate conditions, ecological environment, and vegetation changes in the current spacetime, while the compressed graph feature vector represents how these characteristics manifest in the plant species combinations we are focusing on. We aim to reveal the causal relationships between them through cross-attention. The multi-head cross-attention function can be defined mathematically by the equation:</p><formula xml:id="formula_6">𝐺 ′ = Linear(𝐺), 𝐶 = Concat(𝐹, 𝑆, 𝑇 ), 𝑄 = 𝐶 × 𝑊 𝑄 , 𝐾 = 𝐺 ′ × 𝑊 𝐾 , 𝑉 = 𝐺 ′ × 𝑊 𝑉 , 𝑂 MHCA (𝑄, 𝐾, 𝑉 ) = Softmax (︂ 𝑄𝐾 𝑇 √ 𝑑 𝑘 )︂ 𝑉.<label>(7)</label></formula><p>Ultimately, we concatenate the output of this multi-head cross-attention with the output of the trimodal cross-attention module described previously, and organize it into the final output through a fully connected layer:</p><p>𝑂 final = Linear(Concat(𝑂 MHCA , 𝑂 CA )),</p><formula xml:id="formula_7">𝑂 output = Sigmoid(𝑂 final ).<label>(8)</label></formula><p>This approach significantly improves the model's performance during training. The loss curves for the training and validation sets show that overfitting is markedly reduced and that the validation loss can decrease further.</p><p>Throughout the process, we use two cross-attention layers, performing three cross-attention calculations. Initially, we select the image modality's features, which have the highest information density, and calculate cross-attention with the time-series and tabular modalities' features, which have relatively high information densities. 
Subsequently, we calculate the cross-attention between the features of the three modalities and the compressed graph modality's features in a single step. The selection of modality features for each cross-attention layer reflects a hierarchical relationship, so we name this approach the Hierarchical Cross-Attention Fusion Mechanism (HCAM). The schematic diagram of HCAM is as follows: </p></div>
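To make the two-stage structure concrete, the following PyTorch sketch assembles the trimodal cross-attention (Eq. 6) and the graph-querying MHCA (Eqs. 7-8) into one module. This is a minimal illustration under our own assumptions; the token layout, layer widths, and head count are not specified in the paper, and each modality's features are treated here as token sequences of a shared width d.

```python
import torch
import torch.nn as nn

class TrimodalCA(nn.Module):
    """First stage (Eq. 6): time-series (T) and tabular (F) features query two
    parallel key/value projections of the satellite image features (S), with a
    linear shortcut over the concatenated inputs as a residual connection."""
    def __init__(self, d):
        super().__init__()
        self.q_t, self.q_f = nn.Linear(d, d), nn.Linear(d, d)
        self.k1, self.k2 = nn.Linear(d, d), nn.Linear(d, d)
        self.v1, self.v2 = nn.Linear(d, d), nn.Linear(d, d)
        self.shortcut = nn.Linear(3 * d, 2 * d)
        self.scale = d ** 0.5

    def forward(self, T, F_, S):
        # All inputs: (batch, tokens, d)
        def attend(q, k, v):
            w = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
            return w @ v
        a1 = attend(self.q_t(T), self.k1(S), self.v1(S))   # climate / vegetation change
        a2 = attend(self.q_f(F_), self.k2(S), self.v2(S))  # ecological environment
        o = torch.cat([a1, a2], dim=-1)                    # O = Concat(A1, A2)
        return o + self.shortcut(torch.cat([T, F_, S], dim=-1))  # residual

class HCAM(nn.Module):
    """Second stage (Eqs. 7-8): the concatenated tri-modal features query the
    compressed graph feature vector (GFV) via multi-head cross-attention."""
    def __init__(self, d, d_graph, n_species, n_heads=4):
        super().__init__()
        self.ca = TrimodalCA(d)
        self.g_compress = nn.Linear(d_graph, 3 * d)  # compress GFV to concat width
        self.kv_k = nn.Linear(3 * d, 3 * d)          # two parallel K/V projections
        self.kv_v = nn.Linear(3 * d, 3 * d)
        self.mhca = nn.MultiheadAttention(3 * d, n_heads, batch_first=True)
        self.head = nn.Linear(2 * d + 3 * d, n_species)

    def forward(self, T, F_, S, G):
        o_ca = self.ca(T, F_, S)                     # (batch, tokens, 2d)
        c = torch.cat([F_, S, T], dim=-1)            # query C = Concat(F, S, T)
        g = self.g_compress(G)
        o_mhca, _ = self.mhca(c, self.kv_k(g), self.kv_v(g))
        logits = self.head(torch.cat([o_mhca, o_ca], dim=-1))
        return torch.sigmoid(logits)                 # Eq. (8)
```

Note that `nn.MultiheadAttention` carries its own internal projections; the explicit `kv_k`/`kv_v` layers mirror the "two parallel linear layers" of the text and are a design choice of this sketch.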
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Mixup + 10-Fold Cross-Fusion Training Strategy</head><p>We adopt the Mixup training strategy provided in the baseline, a very common data augmentation method. Specifically, the training data and labels in the current batch are randomly shuffled and then added, with weights, to the unshuffled training data and labels. The input matrices 𝑇 , 𝑆, 𝐹 , and 𝐺, as well as the label matrix 𝐿, are shuffled to create 𝑇 ˜, 𝑆 ˜, 𝐹 ˜, 𝐺 ˜, and 𝐿 ˜:</p><formula xml:id="formula_8">𝑇 mix = 𝜆𝑇 + (1 − 𝜆)𝑇 ˜, 𝑆 mix = 𝜆𝑆 + (1 − 𝜆)𝑆 ˜, 𝐹 mix = 𝜆𝐹 + (1 − 𝜆)𝐹 ˜, 𝐺 mix = 𝜆𝐺 + (1 − 𝜆)𝐺 ˜, 𝐿 mix = 𝜆𝐿 + (1 − 𝜆)𝐿 ˜. (<label>9</label></formula><formula xml:id="formula_9">)</formula><p>This approach enhances the robustness of the training process, improves the model's generalization, smooths the distribution of samples across categories, and makes the originally sparse labels relatively dense.</p><p>Moreover, to fully utilize the training data and reduce overfitting, we employ a ten-fold cross-fusion technique, an improvement over ten-fold cross-validation. The dataset is divided into ten parts, each serving once as the validation set while the other nine parts are used to train a brand-new model; the logits output by these ten models are then averaged. However, this approach also reduces training efficiency roughly tenfold. To address this drawback, we reason that the training sets of the ten models overlap heavily, so the training of each model after the first can be viewed as fine-tuning the first model on a slightly altered dataset. Motivated by this, we optimize our training strategy. 
For the first model, we initialize the parameters and train with an early stopping strategy. For each subsequent model, we clone the parameters of the first model and fine-tune them on a newly combined training set. This reduces the number of training epochs for the subsequent models, thereby enhancing training efficiency. The original dataset is 𝐷; we divide it into ten parts {𝐷 1 , 𝐷 2 , . . . , 𝐷 10 }. We train a model 𝑀 𝑖 by setting each 𝐷 𝑖 aside as the validation set: 𝑀 𝑖 = Train (𝐷∖𝐷 𝑖 ). Then we compute the average of the logits from these models:</p><formula xml:id="formula_10">𝐿 = 1 10 10 ∑︁ 𝑖=1 𝑀 𝑖 (𝑥), (<label>10</label></formula><formula xml:id="formula_11">)</formula><p>where 𝑥 is the model input. The training process and results for each model are visualized in Figure 10.</p><p>The binary cross-entropy (BCE) loss is one of the most common loss functions for multi-label learning <ref type="bibr" target="#b5">[6]</ref>. For an observed data point (𝑥 𝑛 , 𝑦 𝑛 ) with complete positive and negative classes, the BCE loss is calculated as follows:</p><formula xml:id="formula_12">𝐿 BCE (𝑓 𝑛 , 𝑦 𝑛 ) = − 1 𝐿 𝐿 ∑︁ 𝑖=1 [︀ 1 𝑦 𝑛,𝑖 =1 log(𝑓 𝑛,𝑖 ) + 1 𝑦 𝑛,𝑖 =0 log(1 − 𝑓 𝑛,𝑖 ) ]︀ ,<label>(11)</label></formula><p>where 𝑓 𝑛,𝑖 ∈ [0, 1] is the model's predicted probability of presence for species 𝑖 under input 𝑥 𝑛 , and 1 denotes the indicator function, i.e., 1 𝑘 = 1 if assertion 𝑘 is true, or 0 otherwise.</p></div>
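The Mixup step (Eq. 9) and the warm-started ten-fold cross-fusion (Eq. 10) can be sketched as follows. The helper names `make_model`, `train`, and `finetune` are placeholders for the actual training routines; only the shuffling, parameter-cloning, and logits-averaging logic described above is shown.

```python
import copy
import torch

def mixup(T, S, F_, G, L, lam=0.4):
    """Eq. (9): blend each batch with a randomly permuted copy of itself.
    The same permutation is applied to every modality and to the labels."""
    idx = torch.randperm(T.size(0))
    mix = lambda X: lam * X + (1 - lam) * X[idx]
    return mix(T), mix(S), mix(F_), mix(G), mix(L)

def ten_fold_fusion(folds, make_model, train, finetune, x):
    """Ten-fold cross-fusion with warm starts: fold 0's model is trained from
    scratch with early stopping; each remaining model clones its parameters and
    is fine-tuned on a slightly different nine-fold training set. The logits of
    all ten models are averaged (Eq. 10)."""
    first = make_model()
    train(first, [f for i, f in enumerate(folds) if i != 0])
    models = [first]
    for k in range(1, len(folds)):
        m = copy.deepcopy(first)  # clone the first model's parameters
        finetune(m, [f for i, f in enumerate(folds) if i != k])
        models.append(m)
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)
```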
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.7.">Post-Processing: Threshold Top-K and Output Correction</head><p>Based on the official requirements, the final formula for calculating the score is as follows:</p><formula xml:id="formula_13">𝐹 1 = 1 𝑁 𝑁 ∑︁ 𝑖=1 TP 𝑖 TP 𝑖 + (FP 𝑖 + FN 𝑖 ) /2 , (<label>12</label></formula><formula xml:id="formula_14">)</formula><p>where</p><formula xml:id="formula_15">TP 𝑖 = number of predicted species truly present, i.e., ⃒ ⃒ ⃒ ̂︀ 𝑌 𝑖 ∩ 𝑌 𝑖 ⃒ ⃒ ⃒ , FP 𝑖 = number of species predicted but absent, i.e., ⃒ ⃒ ⃒ ̂︀ 𝑌 𝑖 ∖ 𝑌 𝑖 ⃒ ⃒ ⃒ , FN 𝑖 = number of species not predicted but present, i.e., ⃒ ⃒ ⃒ 𝑌 𝑖 ∖ ̂︀ 𝑌 𝑖 ⃒ ⃒ ⃒ . (<label>13</label></formula><formula xml:id="formula_16">)</formula><p>In the baseline, the classic multi-class method of Top-K is used to filter the model's output. In conventional Top-K, a value for K is either set manually, or a range is predefined within which K is enumerated to find the value that yields the best performance on the validation set; this K is then applied when inferring on the test set.</p><p>However, we observe drawbacks with this method. Some Survey IDs contain dozens or even hundreds of species; although these species may have high probabilities in the model's output, they are truncated if they do not rank within the top K. Conversely, some test set Survey IDs are forced to output the top K species by probability rank even when every species has a low probability.</p><p>Therefore, we improve the Top-K algorithm by setting a range of thresholds (0.1 to 0.5, in steps of 0.01) and, for each candidate K, filtering out species whose predicted probabilities fall below the threshold. By exhaustively combining K and threshold values and comparing the scores of the validation set outputs processed with them, we identify the optimal pair of K and threshold.</p><p>Let the probability of the model output be 𝑃 = {𝑝 1 , 𝑝 2 , . . . , 𝑝 11255 }, where 𝑝 𝑖 is the predicted probability of species 𝑖. We filter the output by setting the threshold 𝜃 and K values:</p><formula xml:id="formula_17">𝑆 = {𝑖|𝑝 𝑖 &gt; 𝜃} ∪ Top-K(𝑃 ), (<label>14</label></formula><formula xml:id="formula_18">)</formula><p>where 𝑆 is the set of filtered species. In the heatmap, darker regions correspond to higher validation scores for the corresponding parameter pair. In the end, we choose the optimal K of 44 and the optimal threshold of 0.23 (shown in Figure 11). We note that there are 11,255 species IDs, but only 5,016 appear in PA, with the remainder only in PO, prompting us to correct the model's output by mining the PO data. The specific approach involves merging and deduplicating the list of high-probability species for each test set node obtained in Section 3.2 with the model's prediction results to produce the final output. In addition, we find that generating the high-probability species list using only the PO data (auxiliary nodes) is more consistently beneficial than generating it from both PO and PA data for result correction.</p></div>
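The threshold Top-K filter (Eq. 14) and the sample-averaged F1 (Eq. 12) used to grid-search the (K, 𝜃) pair can be sketched as follows; the function names are ours, and the grid bounds follow the text.

```python
import numpy as np

def threshold_top_k(probs, k, theta):
    """Eq. (14): S = {i | p_i > theta} ∪ Top-K(P)."""
    top_k = set(np.argsort(probs)[::-1][:k])       # indices of the K largest probs
    above = set(np.flatnonzero(probs > theta))     # indices clearing the threshold
    return top_k | above

def mean_f1(pred_sets, true_sets):
    """Eq. (12): sample-averaged F1 = TP / (TP + (FP + FN) / 2)."""
    scores = []
    for pred, true in zip(pred_sets, true_sets):
        tp = len(pred & true)
        fp = len(pred - true)
        fn = len(true - pred)
        scores.append(tp / (tp + (fp + fn) / 2) if (tp + fp + fn) else 1.0)
    return float(np.mean(scores))

def grid_search(val_probs, val_sets, ks, thetas):
    """Exhaustively pair K and theta; return the best-scoring pair on validation."""
    return max(
        ((k, t) for k in ks for t in thetas),
        key=lambda kt: mean_f1(
            [threshold_top_k(p, kt[0], kt[1]) for p in val_probs], val_sets),
    )
```

In our setting the grid would be `ks = range(10, 60)` and `thetas = np.arange(0.1, 0.5, 0.01)`, with the heatmap of scores over this grid giving K = 44 and 𝜃 = 0.23.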
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>In this chapter, we present the experiments conducted to validate the superiority of the proposed model and analyze the results. We omit the hyperparameter tuning process because each backbone network corresponds to different model and training hyperparameters, and, given the limited time, it is challenging to prove the optimality of the selected hyperparameters through grid search. Instead, we judge whether the current parameters risk overfitting or underfitting by observing the gradient descent process, and whether the current parameter combination yields lower loss and higher scores than previous combinations. We perform exploratory tuning for each model involved in the comparative and ablation experiments to ensure that the current hyperparameter combination is the best among all attempts. We also employ an early stopping strategy, which minimizes the impact of hyperparameter changes on training when there is no significant overfitting or underfitting.</p><p>At the end of this chapter, we provide a table of the hyperparameters used for the final submission. We select the following metrics to analyze the model's performance, including public and private leaderboard scores on the official test set. Additionally, since we ultimately fuse the logits output by the models, we also report the loss on the validation and training sets. Because the BCE loss essentially equals the sum of entropy and KL divergence, it describes the difference between the probability distribution of the model output and the distribution of the true labels; the logits of models with lower validation and training losses therefore fuse better than those of models with higher validation scores but higher losses. 
Finally, we explore how to make the model more lightweight, using the training time per epoch as a metric.</p><p>Our experiments are all conducted on the Colab platform using an A100 instance, with 83.5 GB of RAM and 40 GB of GPU memory.</p><p>To facilitate the customization of model parameters, we use the timm library instead of torchvision to instantiate models. Surprisingly, the baseline rebuilt on timm achieves a better private leaderboard score than the official baseline (the official baseline's private leaderboard score is 0.31626, while the baseline improved by optimizing the Top-K strategy achieves 0.32359).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Comparative Experiments</head><p>For the satellite image feature extraction network, the comparative experiments are as follows. Bold indicates the best score for each metric, while blue, if present, indicates a score close to the best. Training loss is only used to help determine whether overfitting has occurred and does not represent model performance. Based on the scores, the best-performing model is ViT-Base, followed by EfficientNet-B0. However, our goal is to identify the models most suitable for logits fusion, so we also focus on the loss. We find that ViT-Base and Swin-T-Tiny have very similar validation losses, making ViT-Base the preferred model. In terms of training time, EfficientNet-B0 and Swin-T-Tiny are the most efficient. Considering the number of epochs to convergence, EfficientNet-B0 reaches the overfitting threshold in almost half the time of Swin-T-Tiny, but its test set loss is higher; EfficientNet-B0 can therefore be considered a successful attempt at model lightweighting. However, for the highest score after fusion, the focus should still be on ViT-Base and Swin-T-Tiny. Because the comparative experiments for the temporal feature extraction network and the image feature extraction backbone are conducted simultaneously, the image feature extraction network used in the temporal comparative experiments is still the Swin-T-Tiny from the baseline. The comparative experiments are as follows:</p><p>Based on the experimental results, the Swin-T backbone network we propose for extracting temporal features achieves the highest private leaderboard score and the lowest validation loss. It is also the only backbone network that scores higher than the baseline. In terms of training time and convergence epochs, it is comparable to the baseline. Therefore, we believe that the Swin-T backbone network can replace ResNet18 in the baseline.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Ablation Studies</head><p>Based on the conclusions drawn from the comparative experiments, we first replace the backbone networks for image and temporal feature extraction starting from the baseline model. We then attempt to use the Swin-T backbone for temporal feature extraction and ViT-Base for satellite image feature extraction, but encounter difficulties with gradient descent. We analyze this issue and find that the significant difference in the number of parameters between the two networks, along with the misalignment of their gradient descent speeds, leads to this problem. Since we cannot resolve it effectively, we opt for Swin-T-Tiny, which has a similar validation loss, as the satellite image feature extraction network. We also try replacing Swin-T-Tiny with EfficientNet-B0, but this only accelerates training without improving the scores. Therefore, we determine that the optimal backbone for temporal feature extraction is Swin-T, and for satellite image feature extraction Swin-T-Tiny.</p><p>Next, we introduce the Graph modality. Models incorporating it show significant improvement in validation scores and the lowest validation loss so far, but the public and private leaderboard scores decrease. Upon examining the output, we discover that including the Graph modality makes the model more aware of some often-overlooked minority classes, significantly increasing their logits. However, under the Top-K output rules, these minority classes still do not rank high enough, and not all boosted minority classes are actually present; those that are absent show up in the logits of some Survey IDs with generally low confidence. This results in better validation loss but lower private leaderboard scores. 
This phenomenon inspires us to propose the threshold Top-K as an improved post-processing algorithm.</p><p>We then try using HCAM (Hierarchical Cross-Attention Mechanism) for feature fusion. The validation loss of models fused with HCAM increases slightly, but the public and private leaderboard scores return to the levels before integrating the Graph modality. This demonstrates that HCAM effectively optimizes the logits distribution while improving model scores.</p><p>To verify whether the model incorporating GFV (Graph Feature Vector) + HCAM is indeed more suitable for ten-fold cross-validation fusion, we conduct ten-fold cross-validation training on the GFV + HCAM model and on the baseline model that uses Swin-T for temporal feature extraction, Swin-Transformer-Tiny for image feature extraction, and feature concatenation for modality fusion. We find that the model incorporating GFV + HCAM scores higher on the leaderboard and validation set than the control group.</p><p>It is important to note that the validation set mentioned in the ten-fold cross-validation method is the same as that used in the previous ablation experiments. However, since the complete training set is used in ten-fold cross-validation, the validation data is actually included in the training set. The validation scores here are therefore higher than in experiments without ten-fold cross-validation, and should only be compared between experiments using the same ten-fold cross-validation method.</p><p>In the official competition submission, we overlooked checking the output of the model incorporating GFV + HCAM and judged our GFV construction and HCAM design to have failed based solely on the public leaderboard score. Consequently, we chose the previous model with ten-fold cross-validation training and applied post-processing. However, while completing this paper, we realized that the model incorporating GFV + HCAM might have more potential for fusion. 
Subsequent experiments confirm that this approach can indeed further improve leaderboard scores.</p><p>We acknowledge that the current results lack persuasive power for the ablation experiments on HCAM and the Graph modality features. Besides the reasons analyzed above, HCAM and the MLP used for compressing the Graph modality features are the parts of our model structure most sensitive to parameter changes; our lack of time for sufficient hyperparameter analysis is another reason for the unsatisfactory ablation data. In future work, we will explore the optimal parameters and structures for these two sub-networks.</p><p>Finally, we apply our proposed post-processing methods to the model outputs. The ablation experiments show that both threshold Top-K and corrections based on PO data improve model scores. However, since the validation set cannot contain species present only in PO, the validation scores are inaccurate in this context. Furthermore, the post-processing operations do not affect the model's training and inference efficiency, so they need not be included in the time cost comparison.</p><p>The final results are summarized in the table below (* denotes that the expected improvement is not achieved, so the approach is abandoned). To confirm our analysis of the training losses of the different models discussed in this chapter more intuitively, we visualize the training processes of all models, which facilitates comparing the number of epochs each model requires to reach its optimal loss (shown in Figure 12). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Conclusion</head><p>Our comprehensive comparative experiments demonstrate that we select the most suitable feature extraction backbone for each modality's data. Through rigorous ablation studies, we show that each proposed improvement incrementally enhances the model's performance, ultimately achieving a score of 0.36242 on the private leaderboard and third place (the leaderboard showed 0.35292, but we continued to optimize some parameters while running experiments, finally reaching 0.36242). In addition to a high-scoring solution, we also introduce a lightweight model and an efficient training strategy. Without significantly sacrificing accuracy, we reduce the training time of the ten-fold cross-fusion model by more than 50%, and both the per-epoch training time and the total number of epochs by approximately 75%. Thus, we manage to train ten models for logits fusion in roughly the time it previously took to train one baseline model, achieving prediction results far exceeding the baseline score. It is worth mentioning that by averaging the outputs of all models that score higher than the baseline and applying post-processing, we ultimately achieve a private leaderboard score of 0.36501.</p><p>In summary, from a theoretical perspective, our research provides new insights into the extraction of temporal information for modality fusion tasks, namely folding it into a 2D matrix and extracting features through a visual backbone network. We also design a robust fusion method for features extracted from multiple modalities with varying information densities, using a hierarchical cross-attention mechanism that dilutes features progressively from high to low density. 
Additionally, we propose a graph-based feature construction method and an output-correction post-processing algorithm for the multi-label classification task of species prediction at a specific point in space and time, which often involves extremely unbalanced or sparse labels. From an application perspective, our research is of significant importance in fields such as ecology, agriculture, environmental protection, and climate change studies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Future Work</head><p>Despite our comprehensive comparative and ablation experiments, our work may be further explored in the following ways:</p><p>• We will further utilize the organized PO data, noting that there are some high-quality publishers within PO whose surveys of the same ID often contain many species. Additionally, the competition provides supplementary data about other modalities for Survey IDs in PO. We believe that these Survey IDs appearing in PO can be selected through certain logical criteria to serve as training samples for weakly supervised learning, incorporated into PA. This approach will better leverage crowdsourced data, reduce the workload of data collection, and enhance model accuracy. • Referring to past editions of the competition, highly ranked teams extracted the rasters around each Survey ID's geographic location from the TIFF files provided in EnvironmentalRasters and processed them as images. However, due to computational and time constraints, we ultimately forgo implementing this scheme; we plan to use this part of the data in subsequent work to achieve a leap in model performance. • Our current model establishes graph relationships solely for feature aggregation and result calibration. In future work, we plan to use graph neural networks to replace the current method of manually setting weights for adjacent nodes in feature aggregation. This will support the use of weakly supervised and semi-supervised learning strategies to progressively correct the labels of weakly supervised and semi-supervised nodes, thereby improving training outcomes. 
• We hope to introduce NAS technology to optimize the hyperparameters of our 2D time series feature extraction network based on Swin-Transformer and the hierarchical cross-attention mechanism, further enhancing the model's performance.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>(a) Time series of four climatic features for four samples (b) Time series of four climate features for four samples</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Flattened visualization of time series cubes</figDesc><graphic coords="3,88.81,443.30,415.16,275.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Visual comparison between NIR image and RGB image</figDesc><graphic coords="4,77.60,70.59,103.79,103.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Comparison of numerical feature heatmaps in different regions</figDesc><graphic coords="4,72.00,320.96,451.28,349.28" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Comparison of numerical feature boxplots in different regions</figDesc><graphic coords="5,72.00,65.60,451.27,360.57" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Visualization of survey occurrence locations in the PO, PA, and test sets</figDesc><graphic coords="6,72.00,65.60,451.27,386.99" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: The frequency of species occurrences and the number of species included in surveys</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Distribution of node degrees in the graph and their square root transformations.</figDesc><graphic coords="8,72.00,65.61,451.26,148.74" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Comparison of Swin-Transformer for temporal cubes processing and image processing</figDesc><graphic coords="13,72.00,65.61,451.28,498.83" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head></head><label></label><figDesc>However, using Swin-Transformer for temporal feature extraction and EfficientNet-B0 for image feature extraction reduced training time by approximately 75% without a significant decrease in accuracy, thanks to faster per-epoch training durations and overall faster convergence. The input image is 𝐼 ∈ R 𝐻×𝑊 ×𝐶 . 𝑆 = Swin-Transformer-Tiny(𝐼), (</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Schematic of the structure of the proposal model</figDesc><graphic coords="16,72.00,453.24,451.27,263.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Training loss and validation loss plots of 10 fold training</figDesc><graphic coords="17,72.00,565.98,451.26,149.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_14"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Heatmap of the threshold and K values correspond to the validation set score</figDesc><graphic coords="19,78.30,67.67,203.08,186.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_16"><head>Figure 12 :</head><label>12</label><figDesc>Figure 12: Training loss and validation loss plots of Ablation and comparative experiments</figDesc><graphic coords="22,72.00,325.26,451.27,300.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Algorithm 1 Graph Construction and Feature Vector Aggregation 1: Input: Nodes</head><label></label><figDesc>with attributes, 𝑁 ; maximum distance threshold, 𝑑 max = 10 km. 2: Output: Graph with edges and node feature vectors (GFV).</figDesc><table><row><cell>3: procedure CreateEdges(𝑁 )</cell></row><row><cell>4:</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>for each node 𝑛 𝑖 in 𝑁 do 5: for each</head><label></label><figDesc>node 𝑛 𝑗 in 𝑁 where 𝑖 ̸ = 𝑗 do 𝑛 𝑖 , 𝑛 𝑗 ) ≤ 𝑑 max and 𝑛 𝑖 .year = 𝑛 𝑗 .year and 𝑛 𝑖 .region = 𝑛 𝑗 .region then</figDesc><table><row><cell>6:</cell><cell>Calculate distance, 𝑑𝑖𝑠𝑡(𝑛 𝑖 , 𝑛 𝑗 )</cell></row><row><cell cols="2">7: if 𝑑𝑖𝑠𝑡(8: 𝑤𝑒𝑖𝑔ℎ𝑡 = 𝑑 max − 𝑑𝑖𝑠𝑡(𝑛 𝑖 , 𝑛 𝑗 )</cell></row><row><cell>9:</cell><cell>Add edge (𝑛 𝑖 , 𝑛 𝑗 ) with weight 𝑤𝑒𝑖𝑔ℎ𝑡</cell></row><row><cell>10:</cell><cell>end if</cell></row><row><cell>11:</cell><cell>end for</cell></row><row><cell>12:</cell><cell>end for</cell></row><row><cell cols="2">13: end procedure</cell></row><row><cell cols="2">14: procedure ComputeGFV(𝑁 )</cell></row><row><cell>15:</cell><cell>for each node 𝑛 𝑖 in 𝑁 do</cell></row><row><cell>16:</cell><cell>Initialize GFV 𝑉 𝑖 = 0</cell></row><row><cell>17:</cell><cell>Compute degree 𝑑𝑒𝑔(𝑛</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>𝑖 ) of node 𝑛 𝑖 18: for each adjacent node 𝑛 𝑗 do 19:</head><label></label><figDesc>𝑉 𝑖 += weight(𝑛 𝑖 , 𝑛 𝑗 ) • label(𝑛 𝑗 )</figDesc><table><row><cell>20:</cell><cell>end for</cell><cell></cell></row><row><cell>21:</cell><cell>Normalize 𝑉 𝑖 using</cell><cell>√︀ 𝑑𝑒𝑔(𝑛 𝑖 )</cell></row><row><cell>22:</cell><cell>end for</cell><cell></cell></row><row><cell>23:</cell><cell cols="2">Normalize all 𝑉 𝑖 across 𝑁</cell></row><row><cell cols="2">24: end procedure</cell><cell></cell></row><row><cell cols="2">25: Call CreateEdges(𝑁 )</cell><cell></cell></row><row><cell cols="2">26: Call ComputeGFV(𝑁 )</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>1 :</head><label>1</label><figDesc>Input: Graph 𝐺 with all attributes 2: Output: Enhanced species lists for test nodes 3: procedure CloneGraph(𝐺) 𝐺 𝑛𝑒𝑤 ← clone of 𝐺 with nodes, edges, weights, years, labels 5: Label nodes in 𝐺 𝑛𝑒𝑤 as training or test based on category labels 6: end procedure 7: procedure LabelAuxiliaryNodes(𝐺 𝑛𝑒𝑤 , PO) 𝐿 𝑡 ← species in 𝐶𝑜𝑢𝑛𝑡_𝐿 𝑡 where count &gt; 4 𝑀 𝑡 ← species in 𝐶𝑜𝑢𝑛𝑡_𝑀 𝑡 where count &gt; 8 𝑆 𝑡 ← merge and remove duplicates from 𝐿 𝑡 and 𝑀 𝑡</figDesc><table><row><cell>4:</cell><cell></cell></row><row><cell>8:</cell><cell>for each node in PO grouped by Survey ID do</cell></row><row><cell>9:</cell><cell>Apply edge creation conditions from step 1</cell></row><row><cell>10:</cell><cell>if node qualifies then</cell></row><row><cell>11:</cell><cell>Create edges to test nodes in 𝐺 𝑛𝑒𝑤</cell></row><row><cell>12:</cell><cell>end if</cell></row><row><cell>13:</cell><cell>end for</cell></row><row><cell cols="2">14: end procedure</cell></row><row><cell>24:</cell><cell>end for</cell></row><row><cell>25:</cell><cell>end for</cell></row><row><cell>26:</cell><cell></cell></row><row><cell>27:</cell><cell>◁ Sort and process auxiliary nodes</cell></row><row><cell>28:</cell><cell>Sort all eligible auxiliary nodes by weight in descending order</cell></row><row><cell>29:</cell><cell></cell></row><row><cell>32:</cell><cell>Increment 𝐶𝑜𝑢𝑛𝑡_𝑀 𝑡 [𝑠] by 1</cell></row><row><cell>33:</cell><cell>end for</cell></row><row><cell>34:</cell><cell>end for</cell></row><row><cell>35:</cell><cell></cell></row><row><cell>36:</cell><cell>◁ Merge and deduplicate lists</cell></row><row><cell>37:</cell><cell></cell></row><row><cell>38:</cell><cell>end for</cell></row><row><cell cols="2">39: end procedure</cell></row><row><cell cols="2">40: procedure PostProcessOutput(𝐺 𝑛𝑒𝑤 )</cell></row><row><cell>41:</cell><cell>for each test node 𝑡 in 𝐺 𝑛𝑒𝑤 do</cell></row><row><cell>42:</cell><cell>Use 𝑆 𝑡 to 
correct model output</cell></row><row><cell>43:</cell><cell>end for</cell></row><row><cell cols="2">44: end procedure</cell></row><row><cell cols="2">45: Call CloneGraph(𝐺)</cell></row></table><note>15: procedure GenerateSpeciesLists(𝐺 𝑛𝑒𝑤 ) 16: for each test node 𝑡 in 𝐺 𝑛𝑒𝑤 do 17: Initialize species count maps 𝐶𝑜𝑢𝑛𝑡_𝐿 𝑡 and 𝐶𝑜𝑢𝑛𝑡_𝑀 𝑡 18: ◁ Sort and process adjacent training nodes 19: Sort all adjacent nodes of 𝑡 by weight in descending order 20: 𝑁 𝑡 ← top 5 nodes from sorted list 21: for each node 𝑛 in 𝑁 𝑡 do 22: for each species 𝑠 in 𝑛.𝑙𝑎𝑏𝑒𝑙𝑠 do 23: Increment 𝐶𝑜𝑢𝑛𝑡_𝐿 𝑡 [𝑠] by 1 𝐴 𝑡 ← top 10 nodes from sorted list 30: for each node 𝑎 in 𝐴 𝑡 do 31: for each species 𝑠 in 𝑎.𝑙𝑎𝑏𝑒𝑙𝑠 do 46: Call LabelAuxiliaryNodes(𝐺 𝑛𝑒𝑤 , PO) 47: Call GenerateSpeciesLists(𝐺 𝑛𝑒𝑤 ) 48: Call PostProcessOutput(𝐺 𝑛𝑒𝑤 )</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 1</head><label>1</label><figDesc>Comparative experiment for satellite image feature extraction network</figDesc><table><row><cell>Module</cell><cell cols="6">Val score Public score Private score Val loss Training loss Time cost</cell></row><row><cell>Swin-T-Tiny</cell><cell>0.42110</cell><cell>0.32774</cell><cell>0.32816</cell><cell>0.00379</cell><cell>0.00353</cell><cell>271</cell></row><row><cell>ViT-Base</cell><cell>0.42645</cell><cell>0.33376</cell><cell>0.33280</cell><cell>0.00379</cell><cell>0.00349</cell><cell>550</cell></row><row><cell>ConvNeXt-Base</cell><cell>0.42066</cell><cell>0.33104</cell><cell>0.32792</cell><cell>0.00381</cell><cell>0.00361</cell><cell>705</cell></row><row><cell>EfficientNet-B0</cell><cell>0.41114</cell><cell>0.33316</cell><cell>0.33085</cell><cell>0.00393</cell><cell>0.00338</cell><cell>130</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 2</head><label>2</label><figDesc>Comparative experiment for time series feature extraction network</figDesc><table><row><cell>Module</cell><cell cols="6">Val score Public score Private score Val loss Training loss Time cost</cell></row><row><cell>ResNet18</cell><cell>0.42110</cell><cell>0.32774</cell><cell>0.32816</cell><cell>0.00379</cell><cell>0.00353</cell><cell>271</cell></row><row><cell>Swin-T</cell><cell>0.42319</cell><cell>0.33428</cell><cell>0.33476</cell><cell>0.00377</cell><cell>0.00362</cell><cell>279</cell></row><row><cell>ViT</cell><cell>0.41794</cell><cell>0.31680</cell><cell>0.31772</cell><cell>0.00390</cell><cell>0.00339</cell><cell>277</cell></row><row><cell>EfficientNet-B0</cell><cell>0.42083</cell><cell>0.31940</cell><cell>0.31996</cell><cell>0.00381</cell><cell>0.00344</cell><cell>292</cell></row><row><cell>MobileNet-V3</cell><cell>0.41479</cell><cell>0.31435</cell><cell>0.31649</cell><cell>0.00390</cell><cell>0.00339</cell><cell>284</cell></row><row><cell>Xception41</cell><cell>0.42737</cell><cell>0.32488</cell><cell>0.32216</cell><cell>0.00377</cell><cell>0.00331</cell><cell>302</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 3</head><label>3</label><figDesc>Ablation studies results</figDesc><table><row><cell>Module/Tricks</cell><cell cols="4">Val score Public score Private score Time cost</cell></row><row><cell>Baseline</cell><cell>0.42110</cell><cell>0.32774</cell><cell>0.32816</cell><cell>271</cell></row><row><cell>Swin-T replace ResNet</cell><cell>0.42319</cell><cell>0.33428</cell><cell>0.33476</cell><cell>279</cell></row><row><cell>EfficientNet replace Swin-Transformer-Tiny</cell><cell>0.41114</cell><cell>0.33316</cell><cell>0.33085</cell><cell>130</cell></row><row><cell>EfficientNet and Swin-T*</cell><cell>0.40824</cell><cell>0.30872</cell><cell>0.31349</cell><cell>137</cell></row><row><cell>Graph Modal Feature Vector (GFV)</cell><cell>0.43950</cell><cell>0.32251</cell><cell>0.32543</cell><cell>290</cell></row><row><cell>HCAM with GFV</cell><cell>0.42489</cell><cell>0.33186</cell><cell>0.33358</cell><cell>292</cell></row><row><cell>10 Fold Cross Fusion without HCAM and GFV*</cell><cell>0.49120</cell><cell>0.34790</cell><cell>0.34625</cell><cell>-</cell></row><row><cell>10 Fold Cross Fusion with HCAM and GFV</cell><cell>0.50455</cell><cell>0.35170</cell><cell>0.34994</cell><cell>-</cell></row><row><cell>Threshold Top-K</cell><cell>0.52655</cell><cell>0.36428</cell><cell>0.36121</cell><cell>-</cell></row><row><cell>Output Correction (Final Model)</cell><cell>-</cell><cell>0.36478</cell><cell>0.36242</cell><cell>-</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The data for this paper were organized and published by INRIA. We express our gratitude to all the institutions and individuals involved in data collection and processing, including but not limited to the Global Biodiversity Information Facility (GBIF, www.gbif.org), NASA, Soilgrids, and the Ecodatacube platform. Additionally, this project has received funding from the European Union's Horizon Research and Innovation program under grant agreements No. 101060639 (MAMBO project) and No. 101060693 (GUARDEN project) <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. All authors contributed helpful ideas during the course of the competition and participated in writing and revising the paper, so all authors are co-first authors. All authors, as corresponding authors, will reply to emails to provide readers with the relevant code and data of this work and to explain its details. Among them, Haixu Liu, as the first corresponding author, is responsible for the necessary communication for the publication of the article. Our final submission CSV download links are as follows: submission_036242.csv and submission_036501.csv. The link to the code we used to run and obtain the final submission is as follows: code.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Overview of GeoLifeCLEF 2024: Species presence prediction based on occurrence data and high-resolution remote sensing images</title>
		<author>
			<persName><forename type="first">L</forename><surname>Picek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Botella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Servajean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Deneu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">Marcos</forename><surname>Gonzalez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Palard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Larcher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Leblanc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Estopinan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bonnet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joly</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Overview of LifeCLEF 2024: Challenges on species distribution prediction and identification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Joly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Picek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Goëau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Espitalier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Botella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Deneu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Marcos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Estopinan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Leblanc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Larcher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Šulc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hrúz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Servajean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference of the Cross-Language Evaluation Forum for European Languages</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Species distribution modeling based on aerial images and environmental features with convolutional neural networks</title>
		<author>
			<persName><forename type="first">C</forename><surname>Leblanc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lorieul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Servajean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bonnet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF (Working Notes)</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="2123" to="2150" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="770" to="778" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Block label swap for species distribution modelling</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kellenberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tuia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF (Working Notes)</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="2103" to="2114" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Leverage samples with single positive labels to train CNN-based models for multi-label plant species prediction</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">Q</forename><surname>Ung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kojima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Working Notes of CLEF</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Overview of LifeCLEF 2023: Evaluation of AI models for the identification and prediction of birds, plants, snakes and fungi</title>
		<author>
			<persName><forename type="first">A</forename><surname>Joly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Botella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Picek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Goëau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Deneu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Marcos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Estopinan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Leblanc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Larcher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference of the Cross-Language Evaluation Forum for European Languages</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="416" to="439" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Landsat analysis ready data for global land cover and land cover change mapping</title>
		<author>
			<persName><forename type="first">P</forename><surname>Potapov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Hansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kommareddy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kommareddy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Turubanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pickens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Ying</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">426</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A spatiotemporal ensemble machine learning framework for generating land use/land cover time-series maps for Europe (2000-2019) based on LUCAS, CORINE and GLAD Landsat</title>
		<author>
			<persName><forename type="first">M</forename><surname>Witjes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Parente</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Van Diemen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hengl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Landa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Brodský</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Glušica</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PeerJ</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">e13573</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Ecodatacube.eu: Analysis-ready open environmental data cube for Europe</title>
		<author>
			<persName><forename type="first">M</forename><surname>Witjes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Parente</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Križan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hengl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Antonić</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PeerJ</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">e15478</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Climatologies at high resolution for the earth&apos;s land surface areas</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Karger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Conrad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Böhner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kawohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kreft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Soria-Auza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kessler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific data</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Karger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Conrad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Böhner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kawohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kreft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Soria-Auza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kessler</surname></persName>
		</author>
		<title level="m">Data from: Climatologies at high resolution for the earth&apos;s land surface areas</title>
				<imprint>
			<publisher>EnviDat</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ł</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">TimesNet: Temporal 2D-variation modeling for general time series analysis</title>
		<author>
			<persName><forename type="first">H</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Long</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The eleventh international conference on learning representations</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Swin transformer: Hierarchical vision transformer using shifted windows</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Guo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF international conference on computer vision</title>
				<meeting>the IEEE/CVF international conference on computer vision</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="10012" to="10022" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A ConvNet for the 2020s</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Feichtenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Darrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Xie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</title>
				<meeting>the IEEE/CVF conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="11976" to="11986" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Xception: Deep learning with depthwise separable convolutions</title>
		<author>
			<persName><forename type="first">F</forename><surname>Chollet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1251" to="1258" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">EfficientNet: Rethinking model scaling for convolutional neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Le</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="6105" to="6114" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Howard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kalenichenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Weyand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Adam</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1704.04861</idno>
		<title level="m">Mobilenets: Efficient convolutional neural networks for mobile vision applications</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Dosovitskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Beyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kolesnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weissenborn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Unterthiner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Houlsby</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2010.11929</idno>
		<title level="m">An image is worth 16x16 words: Transformers for image recognition at scale</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
