<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Multi-Feature Fusion Deep Convolutional Network based on A Coarse-Fine Structure for Cloud Detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dorcas Gicuku Mwigereri</string-name>
          <email>dorcausgicuku@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lawrence Nderu</string-name>
          <email>lawrence_nderu2@live.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tobias Mwalili</string-name>
          <email>mwalili@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computing and Information Technology, Jomo Kenyatta University of Agriculture and Technology</institution>
          ,
          <addr-line>Nairobi</addr-line>
          ,
          <country country="KE">Kenya</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>Accurate detection of clouds in remote sensing analytics is an essential task in remote sensing imagery with various spectral, temporal and spatial information. We propose a multi-feature fusion deep convolutional neural network for the analysis of remote sensing satellite images to detect clouds, which are the region of interest. To ensure the algorithm was trained with data acquired from multiple satellites, the Landsat 7 ETM+, Landsat 8 OLI/TIRS and Gaofen-1 wide field view datasets were used. The experimental results obtained showed that the proposed model gave accuracy, precision and recall measures of 95.2%, 89% and 89.9% respectively. The developed algorithm posted consistent and accurate results for cloud detection using satellite images that had clouds of different types and that were obtained over different land surfaces containing other objects.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep convolutional neural networks</kwd>
        <kwd>multi-feature fusion</kwd>
        <kwd>remote sensing satellite imagery</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>The presence of clouds poses a challenge to extracting surface and/or atmospheric information from remote sensing satellites, in addition to affecting the amount of radiation a surface can receive [1]. Accurate cloud detection in remote sensing analytics has proved to be a challenging task due to the various shapes clouds may take and the different ground objects captured in satellite images [2]. To identify clouds in a satellite image, three approaches have so far been evaluated: threshold-based approaches, which evaluate the reflectance and brightness across a given channel to detect the presence of clouds; machine learning algorithms such as support vector machines (SVM) [3], artificial neural networks (ANN) [4] and random forests [5], which learn from handcrafted features to detect the presence of clouds in an image; and deep learning techniques [6], which automatically learn highly complex features from a training dataset.</p>
      <p>[7] reviewed literature on cloud detection using remote sensing satellite imagery from 2014 to 2018. In their work, it was reported that most researchers explored a variety of cloud detection forms, such as Cloud/No Cloud, Thin Cloud/Thick Cloud and Snow/Cloud, using threshold-based techniques, deep learning algorithms and machine learning algorithms such as ANN, decision trees, random forests and Bayesian classification. Threshold-based techniques were observed to perform differently under different climatic conditions or over areas with different surface types, giving them poor universality, while machine learning algorithms that used handcrafted features were observed to depend on the feature selection process, as different people have different understandings of clouds and their features. As a result, deep learning algorithms became more appealing due to their capability of automatically extracting highly complex features based on the spectral, temporal and spatial information provided in the training dataset.</p>
      <p>Depending on the task being performed, deep learning techniques for cloud detection have been grouped into three major categories: patch-based approaches, which take an image patch as input and output a label indicating whether the patch is cloudy or not; region-based approaches, which segment an input image into regions that are labelled using a pretrained network; and pixel-level approaches, which take a fixed-size input image and train the model to output pixel-level labels of the same size as the input image.</p>
      <p>In this work, a multi-feature fusion deep convolutional neural network (MFF – DCNN) is proposed to predict the presence of cloud given a remote sensing satellite image as input. A single feature set, created by fusing the spectral, temporal and spatial features, is used to train the developed model for cloud detection.</p>
      <p>II. RELATED WORK
[8] developed a CNN algorithm for cloud detection based on a residual network (ResNet) architecture crafted from the U-Net architecture by adding a clipping layer and batch normalization and halving the depth of the feature maps. The dataset, obtained from the NASA Landsat 8 satellite, consisted of five spectral band combinations, including the red/green/blue/infrared (RGBI) bands, the red/green/blue (RGB) bands and the green band alone. In their work, they noted that the developed CNN model posted good results for semantic segmentation, in addition to improving performance and reducing training time by reducing the requirements of the preprocessing phase. To improve the performance of the CNN algorithm for cloud detection, they stated the need for a method that would fully incorporate the spectral, spatial and temporal dimensions.</p>
      <p>[9] proposed a technique for detecting clouds based on cloud segmentation by fusing multi-scale convolutional features (MSCF), with the aim of improving the accuracy of the convolutional neural network (CNN) for object detection, especially when using a multispectral image that contains only the visible and infrared bands. The proposed deep learning technique was based on a fully convolutional network (FCN) for pixel-to-pixel semantic segmentation and the SegNet architecture, which was built as a convolutional encoder-decoder for semantic segmentation of the pixel values. To train and evaluate the proposed model, images from the Gaofen-1 WFV satellite were used, and the performance posted by the developed model was compared with two other techniques: a multi-feature combined method (MFC) and a deep convolutional network (DCN). The obtained results show that the MSCF model performs better, with an accuracy of 97.85% compared to 96.80% for the MFC model. Additionally, the MSCF was seen to preserve details of the cloud boundaries on the produced cloud mask. For future studies, they recommended using cloud images obtained from different satellites to investigate whether their model generalizes across datasets.</p>
      <p>
        [10] proposed a two-step deep learning technique for cloud detection on remote sensing satellite imagery. The first step involved the use of a feature concatenation network to obtain the cloud probability map from the deep convolutional neural network, while the second involved extraction of multilevel structural features using multi-window guided filtering to refine the cloud mask. To validate the proposed model, 502 Gaofen-1 WFV cloud images collected from May 2013
        <xref ref-type="bibr" rid="ref1">to December 2016</xref>
        and obtained from different global regions were used. To evaluate the performance of the model, the accuracy, Intersection-over-Union (IOU), Hanssen-Kuipers discriminant (HK), false alarm ratio (FAR) and probability of detection (POD) metrics were used, and the results were compared with traditional cloud detection methods such as the multi-feature combined method (MFC) [11], scene learning for cloud detection on remote sensing images [12] and a progressive refinement scheme [13]. According to the quantitative results obtained, the proposed model posted better performance, with an accuracy of 95.45%, POD of 89.09%, FAR of 2.67%, HK of 93.07% and IOU of 85.38%. They further recommended improving the computational efficiency of the proposed model for cloud detection.
      </p>
    </sec>
    <sec id="sec-2">
      <title>III. METHODOLOGY</title>
      <sec id="sec-2-1">
        <title>A. Dataset Description</title>
        <p>To train and validate the model, satellite images obtained from different satellites and covering different land surfaces, as illustrated in fig 1, were used. A total of 90 Landsat 8 Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) satellite images with a resolution of 30 m, provided to the public by [14], 160 Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images with a resolution of 30 m [15] and 100 Gaofen-1 wide field view (WFV) images with a resolution of 16 m [16] were used. The Landsat-8 images are broadly categorized into eight biomes, including barren, forest, grass/crop, shrub land, urban and water, and are further divided into four classes: cloud, thin cloud, cloud shadow and clear. According to [17], the training dataset should be larger than the test dataset, with a preferred percentage split of 70/30, whereby 70% of the total dataset is used to train the model while the other 30% is used to evaluate its performance. In our study, percentage split was used to randomly divide each dataset, with 70% used for training the model and the remaining 30% used for testing the performance of the developed model for cloud detection.</p>
      </sec>
      <sec id="sec-2-2">
        <title>B. The Proposed Method</title>
        <p>The proposed MFF – DCNN for cloud detection is composed of a deep coarse network for extraction of high-level features and three deep fine extraction networks for separating cloud pixels from other objects present in an image, as illustrated in fig 2. A fully connected (FC) layer is used to flatten the outputs obtained from the coarse and fine modules, and the output of this layer is fed to a feature fusion layer that fuses the features obtained from the four network components. The output of the feature fusion layer is used for classification.</p>
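<p>A minimal sketch of the flatten-and-fuse step; the toy 2x2x2 feature maps and the plain concatenation are illustrative assumptions, not the paper's exact implementation:</p>
<p>
```python
def flatten(feature_map):
    # Collapse a nested [H][W][C] feature map into a 1-D list,
    # as the fully connected layer does before fusion.
    return [v for row in feature_map for cell in row for v in cell]

# Toy outputs: one coarse map and three fine maps (2x2 spatial, 2 channels).
coarse = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]
fine_maps = [coarse, coarse, coarse]  # stand-ins for the three fine networks

fused = flatten(coarse)
for fm in fine_maps:
    fused = fused + flatten(fm)  # fusion: concatenate into one feature set
print(len(fused))  # 32 features: 4 components x 8 values each
```
</p>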
        <p>1) Deep Coarse Network: The deep coarse network (DCN) consists of three convolutional layers for extracting high-level features. The first layer comprises 36 filters with a kernel size of 3*3. The rectified linear unit (ReLU) activation function is then applied to the convolved patches, and a 2*2 max-pool with stride 2 is applied to each response generated. The second and third convolutional layers are modelled with 64 filters each, the ReLU activation function, a 5*5 kernel and 2*2 max-pooling with stride 2.</p>
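<p>Assuming unpadded ('valid') convolutions and a hypothetical 128x128 input patch (the paper does not state the input size), the feature-map sizes through the three convolution/pooling stages can be traced with the standard output-size formula:</p>
<p>
```python
def conv_out(size, kernel, stride=1, padding=0):
    # floor((size - kernel + 2*padding) / stride) + 1
    return (size - kernel + 2 * padding) // stride + 1

size = 128                          # hypothetical input patch width
size = conv_out(size, 3)            # conv1: 36 filters, 3x3 kernel -> 126
size = conv_out(size, 2, stride=2)  # 2x2 max-pool, stride 2        -> 63
size = conv_out(size, 5)            # conv2: 64 filters, 5x5 kernel -> 59
size = conv_out(size, 2, stride=2)  # 2x2 max-pool, stride 2        -> 29
size = conv_out(size, 5)            # conv3: 64 filters, 5x5 kernel -> 25
size = conv_out(size, 2, stride=2)  # 2x2 max-pool, stride 2        -> 12
print(size)  # 12
```
</p>
<p>The same formula covers the pooling layers, since max-pooling reduces spatial size in the same way as a stride-2 convolution of the same kernel size.</p>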
        <p>Fig 1: Display of the Obtained Datasets on various land surfaces Source: [18]</p>
      </sec>
      <sec id="sec-2-3">
        <title>2) Deep Fine Extraction Network</title>
        <p>Most of the images acquired from remote sensing satellites are rarely annotated and also lack bounding boxes to represent the most likely regions of interest (ROI) [19]. As a result, this deep fine extraction network is developed to help identify the ROIs. The three deep fine extraction networks are built using the ResNet50 architecture, which has so far proved to have better performance for object detection than other convolutional architectures, in addition to helping to mitigate the vanishing gradient problem [20]. The areas marked by the bounding boxes are then extracted using an ROI pooling layer to produce a feature map, which is fed to a fully connected layer that computes the score of the input image for each class. Fig 3 illustrates the structure of the deep fine extraction module.</p>
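<p>A minimal sketch of ROI max-pooling, assuming a single-channel feature map and integer ROI coordinates; the 2x2 output grid and toy values are illustrative only:</p>
<p>
```python
def roi_max_pool(feature_map, roi, out_h=2, out_w=2):
    # Divide the ROI (y0, x0, y1, x1) into an out_h x out_w grid and
    # take the max in each cell, giving a fixed-size output feature map
    # regardless of the ROI's original size.
    y0, x0, y1, x1 = roi
    h, w = y1 - y0, x1 - x0
    pooled = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            ys, ye = y0 + i * h // out_h, y0 + (i + 1) * h // out_h
            xs, xe = x0 + j * w // out_w, x0 + (j + 1) * w // out_w
            cell = [feature_map[y][x]
                    for y in range(ys, max(ye, ys + 1))
                    for x in range(xs, max(xe, xs + 1))]
            row.append(max(cell))
        pooled.append(row)
    return pooled

# Toy 4x4 feature map; the ROI here covers the whole map.
fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(roi_max_pool(fmap, (0, 0, 4, 4)))  # [[6, 8], [14, 16]]
```
</p>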
        <p>3) Feature Fusion Layer: The feature fusion layer is introduced into the architecture to ensure that the features extracted from the deep-coarse and deep-fine networks are combined into a single feature set for classification. To avoid overfitting and improve the generalization of the model, the cross-entropy loss function [21] was used to regularize the feature fusion process, and the softmax regression function [22], defined in equation (1), was used for classification.</p>
        <p>f(x_i) = exp(x_i) / Σ_j exp(x_j)   (1)</p>
        <p>where x_i is the score of filter i obtained from the previous layer and f(x_i) is the corresponding output.</p>
        <p>Fig 2: The Proposed Multi-Feature Fusion Deep Convolutional Neural Network for cloud detection</p>
        <p>Fig 3: The Architecture of the Deep Fine Extraction Network</p>
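<p>The softmax of equation (1) can be checked numerically; the two filter scores below are hypothetical:</p>
<p>
```python
import math

def softmax(scores):
    # Subtract the max for numerical stability, then normalize the
    # exponentials so the outputs form a probability distribution.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the two classes (cloud vs. non-cloud).
probs = softmax([2.0, 0.5])
print(probs)       # roughly [0.818, 0.182]
print(sum(probs))  # sums to 1 (up to float rounding)
```
</p>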
      </sec>
    </sec>
    <sec id="sec-3">
      <title>IV. RESULTS</title>
      <sec id="sec-3-1">
        <title>A. Performance metrics</title>
        <p>The developed model was trained to detect clouds given remote sensing satellite images covering different land surfaces and containing spatial, temporal and spectral information. To evaluate the performance of the proposed model in extracting and fusing multiple features for cloud detection, the accuracy, precision and recall measures specified in equations (2), (3) and (4) respectively were used.</p>
        <p>Accuracy = (TP + TN) / (TP + TN + FP + FN)   (2)</p>
        <p>Precision = TP / (TP + FP)   (3)</p>
        <p>Recall = TP / (TP + FN)   (4)</p>
        <p>where TP is the true positives, TN is the true negatives, FP is the false positives and FN is the false negatives.</p>
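<p>A minimal sketch of equations (2)-(4); the confusion-matrix counts are hypothetical, for illustration only:</p>
<p>
```python
def classification_metrics(tp, tn, fp, fn):
    # Equations (2)-(4): accuracy, precision and recall computed
    # from the confusion-matrix counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Hypothetical pixel counts for a cloud/non-cloud mask.
acc, prec, rec = classification_metrics(tp=890, tn=8630, fp=110, fn=100)
print(round(acc, 3), round(prec, 3), round(rec, 3))  # 0.978 0.89 0.899
```
</p>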
      </sec>
      <sec id="sec-3-2">
        <title>B. Results and Discussion</title>
        <p>In our work, the performance of the FCN network proposed by [9], the multi-feature fusion of point and block features using an SVM classifier proposed by [2] and the multilevel feature cloud detection FCN proposed by [10] are compared with our proposed model. These models were evaluated using a test dataset containing a total of 57 Landsat-8 images that were different from the images used to train the model, and the accuracy results obtained are summarized in table 1. These results showed that our model performed best, with an accuracy of 99.06%, which was 1.21% higher than that of the fully convolutional network implemented using the SegNet architecture, the second-best model for cloud detection. Of the four compared models, the model composed of feature concatenation and window guided filtering gave the lowest accuracy, 92.92%.</p>
        <p>Furthermore, the recall and precision measures obtained when testing our model were 89.9% and 89% respectively. Increasing the number of epochs to about 25 was seen to increase both the training and validation accuracy of the proposed model, after which the accuracy stabilized. Further increases in the number of epochs led to unstable fluctuations in the training and validation accuracy, as illustrated in fig 4.</p>
        <p>The ability of the proposed MFF – DCNN to effectively detect both thick and thin clouds is mainly attributed to the use of ResNet50 skip connections in the DFCN, which enable the model to utilize all available features for cloud detection.</p>
        <p>Fig 4: Proposed model training and validation accuracy against the number of epochs</p>
        <p>According to [20], the residual block of the ResNet architecture, as illustrated in fig 5, enables a model to train deeper neural networks by optimally tuning the number of layers during training. Consequently, it is credited with a high capability of addressing the vanishing gradient problem, which is frequent especially when more layers are added to a deep learning model. Fusing the high-level features obtained from the deep-coarse network with the low-level features from the DFCN enables the model to take into consideration all features available in remote sensing satellite imagery, that is, the spectral, textural and spatial information, during the training process. The model is thus capable of learning more information, which improves its predictive capability.</p>
        <p>Fig 5: ResNet Residual Block</p>
        <p>V. CONCLUSION</p>
        <p>In this work, we propose a multi-feature fusion
extraction based on deep convolutional neural network
for cloud detection in remote sensing analytics given a
satellite image that consists of spectral, spatial and
textural information. The proposed MFF – DCNN
model architecture consisted of a deep-coarse network
for extraction of high level features and deep-fine
network for extraction of low level features. For
identification of the region of interest, feature fusion
layer made of a fully connected layer was then used to
fuse features identified in the deep-coarse and
deep-fine networks, and the result was fed to a softmax
regression function for classification. The cross
entropy loss function was then used to regularize the
outputs. The quantitative results obtained showed that
the proposed model was capable of performing well
given datasets that have different clouds types with
varying cloud size and density and as a result, it was
concluded that this model can also be replicated to
different scenarios for accurate and reliable cloud
detection. The proposed model is thus seen to be
useful in providing insights about clouds in remote
sensing analytics and, as a result, in tasks such as
prediction of the amount of
solar irradiance a given surface can receive given the
locations’ atmospheric and cloud conditions.</p>
        <p>For future works, we recommend evaluation of the
proposed model on images obtained from other
satellites such as Sentinel-2, SPOT-5 and the
Moderate Resolution Imaging Spectroradiometer
(MODIS) data so as to evaluate generalizability of the
proposed model on different datasets. Additionally,
more research should be done to improve the
computational efficiency of the proposed model.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>ACKNOWLEDGEMENT</title>
      <p>
        The authors of this paper would like to thank
AFRICA-ai JAPAN for funding this project under the
AFRICA-ai-JAPAN project innovation research
grants (2020/2021).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>T.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          , “
          <article-title>Cloud Detection for High-Resolution Satellite Imagery Using Machine Learning</article-title>
          and
          <string-name>
            <surname>Multi-Feature</surname>
            <given-names>Fusion</given-names>
          </string-name>
          ,” Remote Sens., vol.
          <volume>8</volume>
          , no.
          <issue>715</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          ,
          <year>2016</year>
          , doi: 10.3390/rs8090715.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Z. X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          <string-name>
            <surname>Bing</surname>
          </string-name>
          , Wu Wei, Shi Qin, Yuan Chenhzong, “
          <article-title>Cloud Detection of Remote Sensing Image Based on Multi Feature Fusion,”</article-title>
          <source>in 5th IEEE International Conference on Big Data Analytics Cloud</source>
          ,
          <year>2020</year>
          , pp.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Environ.</surname>
          </string-name>
          , vol.
          <volume>205</volume>
          , pp.
          <fpage>390</fpage>
          -
          <lpage>407</lpage>
          ,
          <year>2018</year>
          , doi: https://doi.org/10.1016/j.rse.
          <year>2017</year>
          .
          <volume>11</volume>
          .003.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Jang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Viau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Anctil</surname>
          </string-name>
          , and E. Bartholomé, “
          <article-title>Neural network application for cloud detection in SPOT VEGETATION images,”</article-title>
          <string-name>
            <given-names>Int. J. Remote</given-names>
            <surname>Sens</surname>
          </string-name>
          ., vol.
          <volume>27</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>719</fpage>
          -
          <lpage>736</lpage>
          ,
          <year>2006</year>
          , doi: 10.1080/01431160500106892.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Ghasemian</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Akhoondzadeh</surname>
          </string-name>
          , “
          <article-title>Introducing two Random Forest based methods for cloud detection in remote sensing images</article-title>
          ,
          <source>” Adv. Sp. Res.</source>
          , vol.
          <volume>62</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>288</fpage>
          -
          <lpage>303</lpage>
          ,
          <year>2018</year>
          , doi: https://doi.org/10.1016/j.asr.
          <year>2018</year>
          .
          <volume>04</volume>
          .030.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Environ.</surname>
          </string-name>
          , vol.
          <volume>229</volume>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>259</lpage>
          ,
          <year>2019</year>
          , doi: https://doi.org/10.1016/j.rse.
          <year>2019</year>
          .
          <volume>03</volume>
          .039.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Mahajan</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Fataniya</surname>
          </string-name>
          , “
          <article-title>Cloud detection methodologies: variants and development-a review,” Complex Intell</article-title>
          . Syst., p.
          <fpage>11</fpage>
          ,
          <year>2019</year>
          , doi: 10.1007/s40747- 019-00128-0.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Skjødeberg</surname>
          </string-name>
          , “
          <article-title>Remote Sensing of Environment A cloud detection algorithm for satellite imagery based on deep learning,” Remote Sens</article-title>
          . Environ., vol.
          <volume>229</volume>
          , no.
          <source>May</source>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>259</lpage>
          ,
          <year>2019</year>
          , doi: 10.1016/j.rse.
          <year>2019</year>
          .
          <volume>03</volume>
          .039.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Sci</surname>
          </string-name>
          ,
          <year>2018</year>
          , vol. IV, pp.
          <fpage>7</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Shi</surname>
          </string-name>
          , “
          <article-title>Utilizing Multilevel Features for Cloud Detection on Satellite Imagery,” Remote Sens</article-title>
          ., vol.
          <volume>10</volume>
          , p.
          <year>1853</year>
          ,
          <year>2018</year>
          , doi: 10.3390/rs10111853.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gamba</surname>
          </string-name>
          , and L. Zhang, “
          <article-title>Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery,” Remote Sens</article-title>
          . Environ., vol.
          <volume>191</volume>
          , pp.
          <fpage>342</fpage>
          -
          <lpage>358</lpage>
          ,
          <year>2017</year>
          , doi: https://doi.org/10.1016/j.rse.
          <year>2017</year>
          .
          <volume>01</volume>
          .026.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>Earth</given-names>
            <surname>Obs</surname>
          </string-name>
          . Remote Sens., vol.
          <volume>8</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>4206</fpage>
          -
          <lpage>4222</lpage>
          ,
          <year>2015</year>
          , doi: 10.1109/JSTARS.
          <year>2015</year>
          .
          <volume>2438015</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhang</surname>
          </string-name>
          and C. Xiao, “
          <article-title>Cloud Detection of RGB Color Aerial Photographs by Progressive Refinement Scheme,”</article-title>
          <source>IEEE Trans. Geosci</source>
          . Remote Sens., vol.
          <volume>52</volume>
          , no.
          <issue>11</issue>
          , pp.
          <fpage>7264</fpage>
          -
          <lpage>7275</lpage>
          , Nov.
          <year>2014</year>
          , doi: 10.1109/TGRS.
          <year>2014</year>
          .
          <volume>2310240</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Missions</surname>
          </string-name>
          , “
          <article-title>Landsat 8,” USGS Science for changing world</article-title>
          .,
          <year>2017</year>
          . https://www.usgs.gov/landresources/nli/landsat/landsat-8
          <article-title>?qtscience_support_page_related_con=0#qtscience_support_page_related_con.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>E. O.</given-names>
            <surname>System</surname>
          </string-name>
          , “Landsat 7
          <string-name>
            <given-names>Earth</given-names>
            <surname>Observing</surname>
          </string-name>
          <string-name>
            <surname>System</surname>
          </string-name>
          ,” Earth Observing System,
          <year>2017</year>
          . https://eos.com/landsat%0A7/ (accessed Jun.
          <volume>04</volume>
          ,
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Eoportal</surname>
          </string-name>
          , “Gaofen-1
          <string-name>
            <surname>Satellite</surname>
          </string-name>
          Missions - eoPortal
          <string-name>
            <surname>Directory</surname>
          </string-name>
          .,” Satellite Missions,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          https://directory.eoportal.org/web/eoportal/satellitemissions/g/gaofen-1#:~:text=Gaofen-1
          <string-name>
            <surname>- Satellite</surname>
          </string-name>
          Missions - eoPortal Directory&amp;
          <article-title>text=Gaofen-1 (gao fen %3D</article-title>
          ,Administration)%2C Beijing%2C China.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <source>(accessed Jul</source>
          .
          <volume>05</volume>
          ,
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Romano</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Diciccio</surname>
          </string-name>
          , “By Cyrus
          <source>DiCiccio Technical Report No . 2019-03</source>
          April 2019 Department of Statistics,” California, USA,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shen</surname>
          </string-name>
          , Q. Cheng, and Y. Liu, “
          <article-title>Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors</article-title>
          .” W. S. Yongjie Zhan, Jian Wang, Jianping Shi, Guangaliang Cheng, Lele Yao, “
          <article-title>Distinguishing Cloud and Snow in Satellite Images via Deep Convolutional Network,” IEEE Geosci</article-title>
          .
          <article-title>Remote Sens</article-title>
          . Lett., no.
          <issue>14</issue>
          , pp.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          1785-
          <fpage>1789</fpage>
          .,
          <year>2017</year>
          , doi: https://doi.org/10.1109/LGRS.
          <year>2017</year>
          .
          <volume>2735801</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <surname>K. M. Khabir</surname>
            ,
            <given-names>S. S.</given-names>
          </string-name>
          <string-name>
            <surname>Abir</surname>
          </string-name>
          ,
          <article-title>and</article-title>
          <string-name>
            <given-names>A.</given-names>
            <surname>Islam</surname>
          </string-name>
          , “
          <article-title>A newly proposed object detection method using Faster R-CNN inception with ResNet based on Tensorflow,” in Real-Time Image Processing</article-title>
          and
          <source>Deep Learning</source>
          <year>2019</year>
          ,
          <year>2019</year>
          , vol.
          <volume>10996</volume>
          , pp.
          <fpage>246</fpage>
          -
          <lpage>256</lpage>
          , doi: 10.1117/12.2523930.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>Y.</given-names>
            <surname>Upadhyay</surname>
          </string-name>
          , “
          <article-title>Regularization techniques for Neural Networks,” towards data science</article-title>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          https://towardsdatascience.com
          <article-title>/regularizationtechniques-for-neural-networks-e55f295f2866 (accessed Feb</article-title>
          .
          <volume>10</volume>
          ,
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <given-names>H.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          , “The Softmax Function, Simplified,”
          <source>towards data science</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          https://towardsdatascience.com/softmax-functionsimplified-714068bf8156
          <source>(accessed Jul</source>
          .
          <volume>12</volume>
          ,
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>