<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Analyzing Decades-Long Environmental Changes in Namibia Using Archival Aerial Photography and Deep Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Girmaw Abebe Tadesse</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Caleb Robinson</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gilles Quentin Hacheme</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Akram Zaytar</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rahul Dodhia</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tsering Wangyal Shawa</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan M. Lavista Ferres</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emmanuel H. Kreike</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Microsoft AI for Good Research Lab</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>1943</year>
      </pub-date>
      <abstract>
<p>This study explores object detection in historical aerial photographs of Namibia to identify long-term environmental changes. Specifically, we aim to identify key objects - Waterholes, Omuti homesteads, and Big trees - around Oshikango in Namibia using sub-meter gray-scale aerial imagery from 1943 and 1972. In this work, we propose a workflow for analyzing historical aerial imagery using a deep semantic segmentation model on sparse hand-labels. To this end, we employ a number of strategies, including class weighting, pseudo-labeling and empirical p-value-based filtering, to balance skewed and sparse representations of objects in the ground truth data. Results demonstrate the benefits of these different training strategies, resulting in an average F1 = 0.661 and F1 = 0.755 over the three objects of interest for the 1943 and 1972 imagery, respectively. We also identified that the average sizes of Waterholes and Big trees increased while the average size of Omutis decreased between 1943 and 1972, reflecting some of the local effects of the massive post-Second World War economic, agricultural, demographic, and environmental changes. This work also highlights the untapped potential of historical aerial photographs in understanding long-term environmental changes beyond Namibia (and Africa). Given the lack of adequate satellite technology in the past, archival aerial photography offers a great alternative for uncovering decades-long environmental changes.</p>
      </abstract>
      <kwd-group>
        <kwd>Aerial photos</kwd>
        <kwd>Geo-spatial machine learning</kwd>
        <kwd>Climate impact</kwd>
        <kwd>Sustainability</kwd>
        <kwd>Africa</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Satellite imagery is a valuable source of data that can shed light on the long-term impacts of climate
change [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, until the launch of IKONOS in 1999, commercial satellite imagery with a spatial
resolution of &lt; 1m/pixel was not available. The spatial resolution of older satellite images is insufficient
to uncover detailed and long-term changes for specific areas of interest. Moreover, the archive of satellite
imagery does not start early enough to analyze changes such as the massive post-Second World War
global transformation – the Landsat-1 satellite was the first that collected continuous imagery over the
Earth starting in 1972 [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In contrast, archival aerial photographs – widely available since the early 20th
century (for military observation, mapping and planning) – provide longer temporal coverage, bringing
the post-Second World War “Second Industrial Revolution” or “Great Acceleration” into focus at
sub-meter resolution to monitor subtle changes on the ground in local areas. Massive stocks of historical
aerial photos remain underutilized in archives across the globe. For example, the US National Archives
preserves 35 million historical aerial photos; tens of millions more are found in private and state archives,
store rooms and offices in other countries.
      </p>
      <p>In this work, we aim to utilize archival aerial photos from north-central Namibia, taken in 1943 and
1972, to uncover the decades-long changes on the ground predating the introduction of high-resolution
satellite imagery.</p>
      <p>[Figure 1: (A) Location of the study region near the Angola–Namibia border; (B) the train and test region; (C) examples of the annotated classes – Big Tree, Omuti and Waterhole – and the semantic segmentation framework.]</p>
      <p>
        The 1943 aerial photos are assumed to be the first instance where aerial photography
technology was systematically used in capturing the landscape of northern Namibia [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This region is
home to a significant portion of Namibia’s population, but it is highly vulnerable to climate change due
to its semi-arid environment. Individual aerial photos were first digitized, geo-referenced and joined into
a large orthomosaic for further machine learning (ML) driven analysis as described in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We particularly
focused on identifying Waterholes, Omuti homesteads and Big trees. Waterholes used to be the main
source of water for the population in the dry season, which resulted in a dispersed settlement pattern of
Omuti homesteads in the past. Big trees, e.g., marula and palm trees, were the main sources of nutrition [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        With the encouraging potential of machine learning (ML) algorithms to decipher large collections of
data and identify patterns, we employ a deep learning framework that takes the digitized aerial
photos as input and detects these objects of interest. Specifically, the framework contains a U-Net-based
segmentation model [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] with a backbone of a pre-trained ResNet [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] network. To validate the framework,
we utilized a sparsely annotated portion of a ≈ 45 km² area as our train and test region. Once the model
was trained, we scaled up the detection stage to identify Waterholes, Omuti homesteads and Big trees
in an area of ≈ 5000 km². In summary, this work offers the following contributions: i.) utilizing aerial
photos to identify long-term environmental changes, ii.) a class weighting strategy that jointly optimizes
both the sparsity of annotated objects (classes) and the inter-class imbalance, iii.) empirical p-value based
post-processing to plausibly select pseudo-labels from the previous prediction stage for a semi-supervised
learning strategy.
      </p>
      <p>The remainder of the paper is organized as follows. Section 2 presents the methodology, with details on
the main contributions. Section 3 describes the experimental setup, including the specifics of the datasets
used, the segmentation model employed and its settings, and the evaluation metrics. We present the notable results
and follow-up discussions in Section 4. Finally, Section 5 concludes the paper and outlines future
work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>The overview of our approach is shown in Fig. 1. Given aerial photos from the Oshikango region in
Namibia from 1943 and 1972, we aim to detect specific objects of interest: Big Trees, Omuti homesteads
and Waterholes at each of the time stamps to uncover long-term environmental changes.</p>
      <p>[Figure 2: The proposed framework: annotated data is split into train and test sets; training employs class weighting and pseudo-labeling; multi-level inference is followed by post-processing and evaluation with performance metrics.]</p>
      <p>
        To this end, we
employed a pre-processing step that aims to digitize and geo-reference each of the photos and merge
them into a large orthomosaic input following the steps in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The domain experts annotated sparse
examples of these objects in a subset of the input data (≈ 45 km²) as ground truth data for our
semantic segmentation framework (shown in Fig. 2). Next, we describe the details of the main steps in
the framework.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Problem Formulation</title>
        <p>Let Dt represent an orthomosaic of multiple aerial photos taken in a year, t, after each photo is digitized
and geo-referenced. The problem is related to evaluating the potential of these aerial photos to quantify
long-term environmental changes by detecting a set of objects of interest – b: Big tree, o: Omuti and w:
Waterhole – at t = 1943 and t = 1972. To this end, we employ a deep learning framework to detect these
objects at each Dt with a dedicated model, Θt. We assume a few examples of these objects are available
as polygons or mask data, Mt, annotated by an experienced expert in the region. Each pixel, x ∈ Mt, is
assumed to be one of the classes C = {w, o, b, u}, where u represents unknown or background pixels.
Due to the sparse nature of annotation performed in a smaller region, i.e., |Mt| &lt;&lt; |Dt|, where | · |
represents dimension, a key aspect of the framework involves effective usage of Mt, where the number
of labeled pixels is quite small compared to the number of unknown pixels, u. Furthermore, there is a large
degree of imbalance in annotated pixels among the classes w, o and b. This also poses a critical question of
how to utilize the larger number of u pixels in Mt, thereby assisting the training process and enhancing
detection performance. Mt is first split into train (Mtr) and test (Mte) splits with no overlap between
the two. The model, Θt, is trained using Mtr and evaluated on both Mtr and Mte. We further extend
Mtr by incorporating new masks derived from polygons predicted in previously unannotated regions
in Dt as pseudo-labels.</p>
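<p>To make the setup concrete, the sparse-mask data structure and the disjoint train/test split described above can be sketched as follows. The dictionary representation and helper name are illustrative assumptions, not the paper's implementation; in the paper the split is over annotated regions rather than individual pixels, so this only illustrates the disjointness requirement.</p>

```python
# Illustrative sketch of the sparse-label setup in Section 2.1: a mask Mt
# stores only the expert-annotated pixels; every other pixel is implicitly
# "u" (unknown/background). Helper name and representation are assumptions.
import random

def split_mask(mask, train_frac=0.7, seed=0):
    """mask: dict mapping (row, col) to a class label in {"w", "o", "b"}.
    Returns disjoint train (Mtr) and test (Mte) annotation dicts."""
    pixels = sorted(mask)                  # deterministic order first
    random.Random(seed).shuffle(pixels)    # then a reproducible shuffle
    n_train = int(train_frac * len(pixels))
    train = {p: mask[p] for p in pixels[:n_train]}
    test = {p: mask[p] for p in pixels[n_train:]}
    return train, test
```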
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Class weighting</title>
        <p>
          Class imbalance is a common challenge in geospatial machine learning as it is often resource-demanding
to do manual annotations, resulting in a sparse set of annotated regions. This is also partly due to
the different observation frequencies of objects of interest. For example, in the aerial photos of the
Oshikango region in this work, we observed a higher occurrence of Big trees compared to Omuti
homesteads. In addition, the coverage area of each class may vary (see Table 1), resulting in an imbalanced
number of pixels across classes. A variety of solutions has been employed to address class imbalance
challenges over the years that could be clustered under re-sampling [
          <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
          ] and re-weighting [
          <xref ref-type="bibr" rid="ref7 ref9">7, 9</xref>
          ].
Re-sampling includes over-sampling minority classes [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], which can lead to overfitting, or under-sampling majority
classes [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], potentially losing valuable data in cases of extreme imbalance. Data-augmentation also helps
to synthetically generate additional samples for minority classes [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. On the other hand, re-weighting
assigns adaptive weights to classes often inversely to the frequency of the class [
          <xref ref-type="bibr" rid="ref7 ref9">7, 9</xref>
          ]. Sample-based
re-weighting, such as Focal loss, adjusts weights based on individual sample characteristics, targeting
well-classified examples and outliers [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ].
        </p>
        <p>In this work, we propose a simple class weighting strategy that considers both the sparsity of annotated
regions (compared to unannotated regions) and the imbalance of pixels annotated across the classes of
interest. Our weighting strategy follows a re-weighting approach that aims to provide a class weight
that is inversely proportional to the ratio of pixels annotated per class (compared to the remaining
classes). Let n_w, n_o and n_b be the numbers of pixels annotated with the Waterhole, Omuti and Big tree
classes in the training set, Mtr, respectively. The number of unlabeled pixels is denoted by n_u. The total
number of pixels in Mtr is n_l + n_u, where n_l = n_w + n_o + n_b. Thus, the weight of each class is
formulated as follows: ω_u = n_l/(n_l + n_u), ω_w = n_l/n_w, ω_o = n_l/n_o, and ω_b = n_l/n_b.</p>
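<p>A minimal sketch of this weighting scheme follows; the pixel counts in the example call are illustrative, not taken from Table 1.</p>

```python
# Class-weighting sketch: labeled classes are weighted inversely to their
# pixel counts, and the unknown/background class is down-weighted by the
# labeled fraction, matching the formulas above.
def class_weights(n_w, n_o, n_b, n_u):
    """Weights for waterhole (w), omuti (o), big tree (b) and unknown (u)."""
    n_l = n_w + n_o + n_b                  # total labeled pixels
    return {
        "u": n_l / (n_l + n_u),
        "w": n_l / n_w,
        "o": n_l / n_o,
        "b": n_l / n_b,
    }

# Illustrative counts only: a rare class (w) gets a large weight, the vast
# unknown class gets a small one.
weights = class_weights(n_w=1_000, n_o=4_000, n_b=15_000, n_u=2_000_000)
```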
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Pseudo-labeling and Post-processing</title>
        <p>
          To further improve the efficiency of our training steps, we propose to incorporate the weak labels generated
from the inference of the model on the previously un-annotated pixels in training – i.e. a pseudo-labeling
based approach [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ]. We assume that we have large amounts of unlabeled imagery, however we
only have sparse labels (Section 2.1). Once the deep semantic segmentation model, Θt, is trained and
its parameters are obtained, inference is applied to the previously unannotated pixels, resulting in a class
prediction probability for each pixel. New instances for each class are then recruited from the resulting
pseudo-labels to further train the model in a semi-supervised fashion.
        </p>
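<p>The recursive pseudo-labeling procedure can be sketched schematically as follows; here train, infer and keep are hypothetical callables standing in for the segmentation model's training, inference and pseudo-label filtering stages.</p>

```python
# Schematic self-training loop: supervised training, then rounds of
# recruiting filtered predictions on unlabeled data as pseudo-labels.
def self_train(labeled, unlabeled, rounds, train, infer, keep):
    model = train(labeled)                 # initial supervised training
    for _ in range(rounds):
        # recruit predictions that pass the filter as pseudo-labels
        pseudo = [x for x in infer(model, unlabeled) if keep(x)]
        model = train(labeled + pseudo)    # retrain semi-supervised
    return model
```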
        <p>
          However, deep learning models are known to provide over-confident predictions even in cases where
the predicted classes are not correct [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], which may result in re-training our model with noisy labels.
To this end, we propose a post-processing approach that is based on an empirical p-value derived from
the features of the predicted polygons, such as area and perimeter. This approach is motivated by
recent studies on the robustness of deep learning frameworks, where similar empirical evaluations were
conducted to identify out-of-distribution samples coming from synthesized content [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] or adversarial
attacks [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. This approach also aligns with a growing interest in data-centric research [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] that aims to
improve model performance by focusing on the data rather than the model, e.g., by improving the quality of
data [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
        <p>In this work, we aim to utilize the area feature and discard predicted polygons with area values that are
out-of-distribution relative to the areas of training polygons for each class. Typical threshold-based filtering could
be applied directly on the histogram of the area values. However, we found that the distribution is
heavily skewed (see Fig. 3 (a)), and hence threshold-based filtering will be very sensitive to the threshold
value. On the other hand, the distributions of empirical p-values (see Fig. 3 (b)) are relatively less skewed
and hence more stable for threshold-based filtering. The pseudo-code of the proposed empirical p-value
based post-processing is shown in Algorithm 1. Assume that we are given the set of annotated
training polygons, Ac, and predicted polygons, Pc, for each class c of interest in Ĉ = {w, o, b}. Then
we compute the area of each annotated polygon in Ac and, similarly, the area of each
polygon in Pc. The empirical p-value of each predicted polygon is calculated as the fraction of
polygons in Ac whose area its own area meets or exceeds. Thus, a predicted polygon with an
out-of-distribution area will have an extreme empirical p-value, i.e., p ≈ 0 and p ≈ 1 for predicted
polygons with a very small area and a very large area, compared to the training set, respectively. Finally, the
predicted polygons that satisfy the empirical p-value-based threshold, th, are considered as pseudo-labels
to be used in the follow-up recursive training steps.</p>
        <p>[Figure 3: (a) Histograms of polygon areas (m²) and (b) histograms of empirical p-values for the Waterhole, Omuti and Big Tree classes.]</p>
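<p>Under this empirical-CDF reading of the p-value, Algorithm 1 can be sketched as follows; the helper names and the symmetric two-sided threshold rule are assumptions for illustration.</p>

```python
# Sketch of the empirical p-value filter: p is near 0 for unusually small
# predicted polygons and near 1 for unusually large ones, relative to the
# areas of the annotated training polygons of the same class.
def empirical_p(train_areas, area):
    """Fraction of training-polygon areas that `area` meets or exceeds."""
    n_not_larger = sum(1 for a in train_areas if area >= a)
    return n_not_larger / len(train_areas)

def filter_predictions(train_areas, pred_areas, th):
    """Keep predicted polygons whose area p-value avoids both extreme tails."""
    kept = []
    for area in pred_areas:
        p = empirical_p(train_areas, area)
        if p >= th and (1.0 - th) >= p:
            kept.append(area)
    return kept
```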
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Setup</title>
      <p>In this section, we describe the data sources used in this work, along with the distribution of annotations
across classes, the deep learning model architecture and hyper-parameters set for our experiments, and
performance evaluation metrics.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>We have used aerial photos taken in Northern Namibia in the years 1943 and 1972. See Fig. 4 for an
example of these photos and the types of classes annotated in these photos. Note the annotations were
done manually by a domain expert. Table 1 shows the distributions of annotations in pixels and polygons.
The aggregated percentage of annotated pixels is &lt; 1% in 1943 imagery and &lt; 4% in 1972 imagery
demonstrating the sparsity of annotated pixels (regions) compared to the unannotated regions - a typical
challenge in geospatial imagery. Furthermore, Table 1 demonstrates the nature of the imbalanced number
of annotated pixels or polygons across classes, e.g., ≈ 90% or more of these annotated polygons belong
to the Tree class whereas Waterholes constitute only &lt; 4% of the polygons in both the 1943 and 1972 images.</p>
        <p>[Algorithm 1: Empirical p-value based post-processing – for each class c in Ĉ, compute the areas of the annotated and predicted polygons, derive the empirical p-value of each predicted polygon, and filter the predicted polygons by the threshold th.]</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Model Selection and Set-up</title>
        <p>
          We have employed a U-Net-based [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] semantic segmentation deep learning framework, using a
pretrained ResNet-50 [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] architecture as the backbone for each of the 1943 and 1972 aerial photos.
We employed a 70%–30% train-test split of the annotated regions, a
cross-entropy loss, and a learning rate of 0.001. The batch size and maximum number of epochs were set to 64 and
50, respectively. Note that pixels with no annotation are treated as the background class, and weighted
accordingly so as not to affect the optimization significantly.
        </p>
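<p>For illustration, the class-weighted cross-entropy contribution of a single pixel can be written as below; this is a pure-Python sketch of the loss term, while in practice the framework's standard cross-entropy loss is used with the Section 2.2 class weights.</p>

```python
# Weighted cross-entropy for one pixel: the class weight scales the negative
# log-probability of the pixel's annotated class, so under-represented
# classes contribute more to the loss.
import math

def weighted_ce(probs, target, weights):
    """probs: dict class -> predicted probability for this pixel;
    target: the pixel's annotated class; weights: per-class weights."""
    return -weights[target] * math.log(probs[target])
```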
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Multi-level Inference and Performance Evaluation Metrics</title>
        <p>Inference is performed at each pixel level in the test set, which can also be aggregated to polygon-level
inference. Thus, the evaluation metrics are also computed corresponding to the level of inference.
Generally, we employed Accuracy, Precision, Recall, and 1 score to evaluate how well each class’s
annotated pixels (regions) were detected during inference. All four metrics represent detection
performance based on true positive (tp), true negative (tn), false positive (fp) and false negative (fn) values.
For pixel-level performance metrics, true positive is when the class pixel is correctly identified; true
negative is when the pixels associated with the remaining classes are correctly identified as negative; false
positive is the case when pixels corresponding to the remaining classes are incorrectly detected as the
class pixel, and false negative refers to the case when class pixels are incorrectly detected as the remaining
class pixels. For polygon-level performance metrics, tp, tn, fp, fn are computed from a threshold-based
overlapping of regions, e.g., 5%, between the predicted and ground truth polygons. Note that we have not
computed the evaluation metrics for the background class, u, as an unlabeled pixel could still be true background or
belong to any of the classes but have been left unlabeled during annotation.</p>
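<p>The pixel-level metrics described above can be sketched for a single class as follows; flat label lists are an illustrative simplification of the per-pixel masks, and the background class u is simply never scored.</p>

```python
# Pixel-level precision, recall and F1 for one class, computed from tp, fp
# and fn exactly as defined in the text.
def class_metrics(y_true, y_pred, cls):
    """Precision, recall and F1 for `cls` from paired pixel labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```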
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
      <sec id="sec-4-1">
        <title>4.1. Performance of different training strategies</title>
        <p>Table 2 shows the results derived from the different training strategies employed in our semantic segmentation
framework across two imagery timestamps: 1943 and 1972. Compared to F1 = 0.549 on the 1943 imagery, we
achieved a higher F1 = 0.706 on the 1972 imagery. This is partly due to the higher number of examples
from the 1972 imagery to train its model (see Table 1). Furthermore, our different training strategies, i.e.,
class weighting, pseudo-labeling and post-processed pseudo-labeling, outperformed the Baseline that does
not include any of these strategies. In particular, the class weighting strategy alone improved the Recall
values from 0.261 to 0.656 on the 1943 imagery and from 0.663 to 0.809 on the 1972 imagery by effectively
weighting the cross-entropy loss by the inverse of the observation frequency of each class. Pseudo-labeling, which aims
to utilize high-confidence predictions in a semi-supervised learning fashion, is also shown to further improve
the Precision (by reducing the false positives) and hence the F1 score for both imagery sources. An additional
difference between the 1943 and 1972 images involves the impact of using pseudo-labels after empirical
p-value based post-processing (i.e., Post-processed Pseudo-Labeling). Since the ground truth data of the
1943 imagery suffers from a very small number of training samples per class (i.e., &lt; 1% of the imagery
is annotated), discarding the predicted polygons based on a threshold did not result in improved
performance. On the other hand, filtering the pseudo-labels before recursive training improved all the
metrics on the 1972 imagery, resulting in the highest F1 = 0.706.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Impact of post-processing on evaluation metrics</title>
        <p>Table 3 demonstrates the need for filtering predicted polygons as part of our empirical area p-value
based post-processing, even for evaluating the performance metrics. The highest average F1 score across
the three classes on the 1943 imagery (F1 = 0.661) is achieved using a p-value threshold of th = 0.5 STD.
Similarly, the post-processing improved the average F1 score on the 1972 imagery from 0.706 to 0.755 using a p-value
threshold of th = 1.0 STD, which is partly due to a larger and more balanced set of ground truth data
that does not require a strong threshold that would discard more polygons.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Analyzing false positives</title>
        <p>Among the metrics employed to evaluate our detection performance, Recall values are found to be
consistently higher than Precision values for both imagery sources (see Table 2). This is partly due to
the higher occurrence of false positives compared to false negatives. Further analysis demonstrates that
not all false positives are actually falsely identified objects of interest; some are previously unlabeled objects.
Figure 5 shows such an instance, where two objects are unlabeled in the ground truth polygons in
Figure 5 (a) but are detected as Waterholes during inference in Figure 5 (b), suggesting that
the framework could also help to discover objects of interest that were not labeled during annotation,
though they might still be evaluated as false positives. This further motivates the need for pseudo-labeling
in our framework, which aimed to utilize such objects that were left unlabeled during annotation but later
identified as objects of interest with high confidence.</p>
        <p>Moreover, our analysis facilitates understanding of past events with less/limited data, and provides added
quantitative and qualitative details on historic reports (see Fig. 6). For example, the number and location
of Waterholes and homesteads confirm the sharing of Waterholes by neighbors, the low-yielding reports of
these waterholes, and the increased population density. Furthermore, class-based changes (e.g., in Waterholes)
in number, area, coverage, location and/or proximity to other objects of interest reveal further insights
into the changes that took place between 1943 and 1972.</p>
        <p>Furthermore, we have applied the trained semantic segmentation model to a larger area covering
≈ 5000 km². See Fig. 1(B) to visualize the scale of this region compared to the relatively smaller annotated
region (≈ 45 km²) used as ground truth. Such large-scale application of the framework benefits
domain experts by reducing the resources necessary for exhaustive annotation, and generates
large-scale insights.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>
        Understanding long-term environmental changes requires the use of old, remotely sensed images
such as satellite imagery. However, the resolution of satellite images was not at the sub-meter level a
few decades back. Old aerial photos, on the other hand, satisfy these requirements, though they are
often stored unused in archives and museums across the world. In this work, we aim to demonstrate the
capabilities for understanding long-term environmental changes in Namibia using aerial photos taken
in 1943 and 1972, applying deep learning to detect the following objects: Waterholes, Omuti homesteads
and Big trees. To this end, we employed a deep semantic segmentation framework that includes a U-Net [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
model with a pre-trained ResNet-50 [
        <xref ref-type="bibr" rid="ref6">6</xref>
          ] architecture as its backbone. To address the challenges associated
with the sparseness of annotated regions and the imbalance among classes, we proposed a class weighting
strategy followed by a pseudo-labeling step that aims to utilize predicted polygons. The pseudo-labels
were further filtered using an empirical p-value based post-processing step. The results demonstrate the
capabilities of aerial photos for understanding long-term environmental changes by detecting these classes
with encouraging performance. Thus, efforts to digitize and analyze archival aerial photos need to be accelerated to better
understand long-term environmental and socio-demographic changes. This work highlighted that aerial
photos provide a promising alternative for studying environmental changes prior to the 1990s, as there was no
adequate satellite technology to capture images with &lt; 1 m/pixel resolution.
      </p>
      <p>Future work aims to investigate further use cases where lower detection performance metrics, both in
precision and recall, were observed. In addition, we aim to scale up the validation of the proposed approach
beyond the use case in Namibia. Deploying under-utilized archival aerial photographs is a potentially
promising way to validate current understanding of the past and uncover new insights, which is
critical to ensuring sustainability in Africa, where climate change poses a significant and disproportionate
risk relative to the continent's emissions.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dickinson</surname>
          </string-name>
          ,
          <article-title>The role of satellite remote sensing in climate change studies</article-title>
          ,
          <source>Nature Climate Change</source>
          <volume>3</volume>
          (
          <year>2013</year>
          )
          <fpage>875</fpage>
          -
          <lpage>883</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Loveland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Dwyer</surname>
          </string-name>
          ,
          <article-title>Landsat: Building a strong future</article-title>
          ,
          <source>Remote Sensing of Environment</source>
          <volume>122</volume>
          (
          <year>2012</year>
          )
          <fpage>22</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T. W.</given-names>
            <surname>Shawa</surname>
          </string-name>
          ,
          <article-title>Creating orthomosaic images from historical aerial photographs</article-title>
          ,
          <source>e-Perimetron</source>
          <volume>18</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kreike</surname>
          </string-name>
          ,
          <article-title>Environmental Infrastructure in African History: examining the myth of natural resource management in Namibia</article-title>
          , Cambridge University Press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ronneberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brox</surname>
          </string-name>
          ,
          <article-title>U-net: Convolutional networks for biomedical image segmentation</article-title>
          ,
          <source>in: 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>241</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep residual learning for image recognition</article-title>
          ,
          <source>in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Belongie</surname>
          </string-name>
          ,
          <article-title>Class-balanced loss based on effective number of samples</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>9268</fpage>
          -
          <lpage>9277</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Buda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Mazurowski</surname>
          </string-name>
          ,
          <article-title>A systematic study of the class imbalance problem in convolutional neural networks</article-title>
          ,
          <source>Neural Networks</source>
          <volume>106</volume>
          (
          <year>2018</year>
          )
          <fpage>249</fpage>
          -
          <lpage>259</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. C.</given-names>
            <surname>Loy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>Deep imbalanced learning for face recognition and attribute prediction</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>42</volume>
          (
          <year>2019</year>
          )
          <fpage>2781</fpage>
          -
          <lpage>2794</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Unsupervised domain adaptation for semantic segmentation via class-balanced self-training</article-title>
          ,
          <source>in: Proceedings of the European Conference on Computer Vision (ECCV)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>289</fpage>
          -
          <lpage>305</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollár</surname>
          </string-name>
          ,
          <article-title>Focal loss for dense object detection</article-title>
          ,
          <source>in: Proceedings of the IEEE International Conference on Computer Vision</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>2980</fpage>
          -
          <lpage>2988</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Gradient harmonized single-stage detector</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>33</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>8577</fpage>
          -
          <lpage>8584</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Rizve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Duarte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. S.</given-names>
            <surname>Rawat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <article-title>Boosting semi-supervised learning by exploiting all unlabeled data</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>7548</fpage>
          -
          <lpage>7557</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>X.-Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-S.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-L.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A survey on learning to reject</article-title>
          ,
          <source>Proceedings of the IEEE</source>
          <volume>111</volume>
          (
          <year>2023</year>
          )
          <fpage>185</fpage>
          -
          <lpage>215</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Cintas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Speakman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Tadesse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Akinwande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>McFowland III</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Weldemariam</surname>
          </string-name>
          ,
          <article-title>Pattern detection in the activation space for identifying synthesized content</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>153</volume>
          (
          <year>2022</year>
          )
          <fpage>207</fpage>
          -
          <lpage>213</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cintas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Tadesse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Speakman</surname>
          </string-name>
          ,
          <article-title>Spatially constrained adversarial attack detection and localization in the representation space of optical flow networks</article-title>
          ,
          <source>in: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>965</fpage>
          -
          <lpage>973</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Oala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bat-Leah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Parrish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Gürel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-S.</given-names>
            <surname>Kuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dror</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Brajovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yao</surname>
          </string-name>
          , et al.,
          <article-title>DMLR: Data-centric machine learning research - past, present and future</article-title>
          ,
          <source>arXiv preprint arXiv:2311.13028</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>W.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Tadesse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fei-Fei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaharia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Advances, challenges and opportunities in creating data for trustworthy AI</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>4</volume>
          (
          <year>2022</year>
          )
          <fpage>669</fpage>
          -
          <lpage>677</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>