<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Detecting Wildfire-Damaged Areas From Satellite Images Using Deep Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Esmanur Alican</string-name>
          <email>esmnralicann@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Caner Ozcan</string-name>
          <email>canerozcan@karabuk.edu.tr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CMIS-2025: Eighth International Workshop on Computer Modeling and Intelligent Systems</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Software Engineering, University of Karabuk</institution>
          ,
          <addr-line>Karabuk, 78050</addr-line>
          ,
          <country country="TR">Turkiye</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Rapid detection of forest fires is crucial to reduce their devastating impact on ecosystems and human lives. In this paper, we present an AI-based deep learning solution for forest fire detection from satellite imagery using the ResNet50V2 convolutional neural network (CNN). The dataset used to train the model consists of 1,900 images (950 per class), carefully curated to reflect real-world scenarios of both active forest fires and undisturbed forests. Data preprocessing included image augmentation to reduce overfitting and enhance model performance. Transfer learning, model regularization, and reconstructed pooling layers were applied during training on this dataset, which was augmented with random horizontal flips, zooming, and cropping to improve model generalization. The model achieved 97.63% accuracy and 98.40% precision in detection. Forest fire detection from satellite images is very useful because CNN methods can detect and locate active fires more than once per hour, and the earlier a forest fire is detected, the more effectively its impact on people and the environment can be limited. This method can help to develop new strategies for real-time fire monitoring systems, in addition to greatly enhancing wildfire management and prevention efforts. Note that this study focuses not on early fire detection, but on identifying post-wildfire damage using deep learning techniques applied to satellite imagery.</p>
      </abstract>
      <kwd-group>
        <kwd>Forest</kwd>
        <kwd>Forest fire</kwd>
        <kwd>Detection</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>ResNet50V2</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        A forest is defined as a living system and community in which trees, other plant species, animals, and
unseen organisms interact with one another at a certain level of canopy closure [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The
world's forests cover a total area of 4.06 billion hectares, about 31% of
the planet's land area [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Climate change is expected to have a particularly significant impact on boreal
forests due to rapid and pronounced temperature increases in this region [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], as each additional degree of
warming could triple the area burned [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Therefore, the aim of this study is to determine
how much land is destroyed after fires are extinguished by using artificial intelligence (AI)-integrated
systems. By identifying affected regions after fire events, the proposed model can support
post-disaster assessment and resource planning.
      </p>
      <p>
        Deep learning, as a subset of AI, can improve the detection rate of fires and other natural
disasters by learning from large datasets [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. More specifically, image processing techniques have been
proposed that improve response time by making it easier to spot a forest wildfire in its
early stages [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Deep learning (DL) image classification models have also been shown to
successfully analyze visual cues such as smoke and flames in order to determine the presence
of fire [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. DL models such as ResNet50v2 have recently achieved high accuracy rates in forest fire
detection in remote sensing applications.
      </p>
      <p>
        ResNet50v2 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], a convolutional neural network model, effectively recognizes the key details underlying
images it is trained on thanks to its layered structure. This model is particularly useful for building
a forest fire detection system because it maintains its efficiency even with very large datasets [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Through the use of
“residual” connections, the content of an image passes through the network with less distortion, which enables
ResNet50v2 to speed up the learning process and thus improve
overall accuracy [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This characteristic, combined with the strength of its neuron structure,
makes ResNet50v2 useful in practical problems requiring high accuracy, such as fire detection.
      </p>
      <p>Wildfires can have a devastating effect on ecosystems, people, and economies, especially in areas
vulnerable to them. Current methods of fire remote sensing via satellite still struggle to offer
timeliness, precision, and flexibility. Most traditional approaches depend on systematic monitoring
by humans or on local sensor networks, which support localized detection but are not well suited to real-time
detection over extensive areas. An AI-based method that uses satellite images to quickly pinpoint the
exact location of wildfires and assist rapid aid is therefore highly essential.</p>
      <p>
        For this study, we used the Forest Fire Detection Dataset presented by Khan and Hassan [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and
available from Mendeley Data. This dataset contains a large number of images specifically selected for
the purpose of forest fire detection. It is a balanced dataset consisting of 1,900 images in total, with 950
images belonging to each class. The size of the dataset makes it suitable for the
effective development of a DL model that can contribute to the early detection and monitoring
of forest fires.
      </p>
      <p>The purpose of this paper is to recommend an AI-based approach for the fast and effective detection of
forest fires. To this end, the authors trained the ResNet50v2 model on a large
dataset prepared for forest fire detection and evaluated the model’s performance. The focus of this research is to
identify possible extensions to current fire detection systems and to emphasize the use of AI in the
management of environmental threats.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Over the past few years, the application of AI and DL to
detecting forest fires has received significant attention [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Researchers have attempted to build models that help
detect wildfires in real time through the use of computer vision and machine
learning. Satellite imaging, ground sensors, and unmanned aerial vehicles have been
integrated into wildfire monitoring systems that help detect, analyze, and respond to
these events in real time [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Wildfires have also been detected effectively with various
DL architectures.
      </p>
      <p>
        Harkat et al. and Yang et al. [
        <xref ref-type="bibr" rid="ref14 ref15">14,15</xref>
        ] have shown that DL alone does not perform adequately because of limited
data, poor generalization, limited interpretability, and missing features, but that integrating DL with other methods
can improve efficiency. Sathishkumar et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] used a DL-based learning-without-forgetting technique for forest
fire and smoke detection. VGG16, InceptionV3, and Xception models were trained with fine-tuning and
their performances were compared, highlighting the potential of AI in early fire detection systems. In another study, Best et al.
[17] compared a frozen VGG, a 4-layer CNN, and a fully trainable VGG for UML diagram classification and
showed that the frozen VGG achieved higher accuracy with a reduced sample size and required less
computation time than the fully trainable VGG.
      </p>
      <p>Peng et al. [18] proposed a fire detection algorithm that achieves efficient and fast operation on endpoint
devices. An effective balance between accuracy and speed is
achieved by using quantization-compatible activation functions, a QARep component, and image size
optimization with a YOLOv8 algorithm. While image transfer has a positive impact on accuracy,
INT8 quantization results in some loss of accuracy.
The study by Ginkal et al. [19] explores the use of AI methods for forest fire detection and
provides an AI-based framework for early detection of forest fires. The framework uses machine
learning techniques to perform fire detection by combining color, motion, and shape features. Features
such as color probabilities, color histograms, and image moments are used for fire region
segmentation, classification, and verification. Experiments show that the proposed framework achieves
high accuracy and real-time processing.</p>
      <p>Another research article by Titu et al. [20] explores the integration of lightweight DL models for
real-time fire detection using drones and edge computing. Using knowledge distillation techniques,
the study develops DL models such as Detection Transformer (DETR), Detectron2, and YOLOv8.
Using this approach, the YOLOv8n model achieved the highest accuracy (95.21%). In another study in
a similar area, Anh et al. [21] offer a different approach to detecting forest fires with UAVs, using
different color spaces in combination with correlation coefficients to determine the actual fire area.</p>
      <p>Liu et al. [22] propose two AI agents, supported by large numerical databases, that autonomously control
fiber optic temperature monitoring systems and DL algorithms to detect fires in large commercial
spaces. The research examines the effectiveness and reliability of this combined approach and
discusses how it could transform fire safety measures when applied to large commercial spaces. In
their work, Dampage et al. [23] propose the use of wireless sensor networks in conjunction with
machine learning to detect wildfires at very early stages. Machine learning models are used
to evaluate data gathered by the sensor networks in order to estimate the likelihood of a wildfire. In addition,
rechargeable batteries and a solar-powered supply are used to keep the system
energy efficient.</p>
      <p>To summarize, previous research has shown that image-based wildfire detection can be performed
with deep learning models such as VGG, Inception, and YOLO variants. However, most of these works
emphasize detection and monitoring using UAVs or ground sensors. Relatively few have attempted
post-wildfire damage identification from satellite images with high-accuracy CNN architectures
such as ResNet50V2. Our study seeks to fill this gap by utilizing a powerful transfer learning technique
for detecting wildfire damage from satellite imagery, providing a valuable resource for post-disaster
evaluation and recovery planning.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <sec id="sec-3-1">
        <title>3.1. Data Collection and Data Pre-Processing</title>
        <p>The dataset for forest fire detection is a comprehensive and carefully selected resource specifically
designed to assist the development of algorithms for tasks such as forest fire detection and object
detection. The images are the result of a search for different keywords in different search engines. As
depicted in Fig. 1, designed for the binary problem, the dataset of 1,900 images (950 images per class) is
divided into two main categories: The first category contains images documenting active forest fires,
while the second category contains images of undisturbed, fire-free forest areas.</p>
        <p>To improve the performance of machine learning and DL models, all images in the dataset are
three-channel with a spatial resolution of 250 × 250 pixels and consistent formatting. Each image in the
dataset was carefully reviewed and pre-processed to remove irrelevant elements, such as human activity
or firefighting equipment, so that only fire and non-fire regions remain. This is important for the model
used in training, as it reduces false positives when the model is asked to distinguish areas
of the forest that have burned from those that have not.</p>
        <p>This balanced division of the dataset is critical for the trained model to accurately
distinguish between fire-affected (burned) forest areas and unaffected areas. As shown in
Table 1, the dataset is divided into three subsets; this separation allows the model to be effectively
trained on a variety of samples while achieving higher accuracy rates on the test data.</p>
        <p>Augmentation was performed because the number of images in the dataset is not sufficient for the
ResNet architecture and would lead to overfitting of the model. Initially, 20% of the training data was
reserved for validation. As shown in Fig. 2, augmentation was then applied at each step of training:
horizontal flips, random zooming, and certain cropping operations were applied
separately to each sample in the dataset. The test data was only rescaled by a factor of 1/255; applying
augmentation to the test data might not reflect the real performance of the model and could lead to
misleading results.</p>
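        <p>For illustration, the preprocessing pipeline described above can be expressed as a minimal Keras/TensorFlow sketch. The directory names, batch size, and exact augmentation parameter values below are assumptions, and the cropping step is omitted; only the 1/255 rescaling, the horizontal flips, the zooming, and the 20% validation split are taken from the text.</p>
        <preformat>
# Hypothetical data pipeline sketch (TensorFlow/Keras); paths and parameter values are assumptions.
import tensorflow as tf

# Training generator: rescaling plus the augmentations described in the text.
train_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    horizontal_flip=True,     # random horizontal flips
    zoom_range=0.2,           # random zooming (illustrative value)
    validation_split=0.2,     # reserve 20% of training data for validation
)

# Test generator: rescaling only, no augmentation.
test_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)

train_data = train_gen.flow_from_directory(
    "dataset/train", target_size=(250, 250), batch_size=32,
    class_mode="binary", subset="training")
val_data = train_gen.flow_from_directory(
    "dataset/train", target_size=(250, 250), batch_size=32,
    class_mode="binary", subset="validation")
test_data = test_gen.flow_from_directory(
    "dataset/test", target_size=(250, 250), batch_size=32,
    class_mode="binary", shuffle=False)
        </preformat>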
      </sec>
      <sec id="sec-3-2">
        <title>3.2. ResNet50v2 Model Architecture</title>
        <p>Convolutional Neural Networks (CNNs) are among the fundamental building blocks of DL methods,
and they are widely used in tasks such as image classification and computer vision [24]. CNNs are based on extracting
local features from high-order inputs and passing them to subsequent layers, which learn more complex features
[25]. This process allows the model to learn and achieve more accurate results. However, CNNs
frequently encounter training difficulties as networks become deeper. This is where deep
network architectures like ResNet50v2 can provide a solution. ResNet50v2 is a member of the Residual
Networks family and adds an important innovation to the traditional structure of CNNs: residual, or
skip, connections [26]. These structures help solve the problem of vanishing gradients as the depth of the
network increases.</p>
        <p>Using the ResNet50V2 architecture involves opting for residual blocks, which help to bypass the
vanishing and exploding gradient problems during deep representation learning. The behavior of
a residual block is captured in an equation that involves the input to be processed, the
weights of the stacked layers, and the skip connection. This method is superior at producing
results when there are variations in dimension [27]. Furthermore, a solution to the degradation problem is
provided within the DL framework, where the mapping learned by the stack of non-linear layers is treated as a
residual with respect to the original input.</p>
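        <p>For reference, the standard residual formulation underlying such a block is
$$y = \mathcal{F}(x, \{W_i\}) + x,$$
where $x$ and $y$ are the input and output of the block, $\mathcal{F}(x, \{W_i\})$ is the residual mapping learned by the stacked layers with weights $\{W_i\}$, and the identity term $x$ is carried by the skip connection.</p>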
        <p>The application of ResNet to computer vision has shown outstanding performance [28]. ResNet18,
ResNet50, and ResNet101 are the most widely used ResNet variants. Among these,
ResNet50 has achieved better identification accuracy and real-time performance [29]. The
number 50 in its name refers to the 50 layers that make up the ResNet50v2 architecture, which
include convolutional layers, batch normalization layers, and ReLU activation functions [30].</p>
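        <p>As an illustration of how this backbone is typically used for transfer learning, the sketch below loads ResNet50V2 with pre-trained ImageNet weights and freezes it as a feature extractor; the input size matches the 250 × 250 three-channel images described in Section 3.1. This is a minimal sketch under those assumptions, not the authors' exact configuration.</p>
        <preformat>
# Hypothetical backbone setup (TensorFlow/Keras); an illustrative transfer learning sketch.
import tensorflow as tf

base_model = tf.keras.applications.ResNet50V2(
    weights="imagenet",         # start from pre-trained ImageNet weights
    include_top=False,          # drop the original 1000-class classifier head
    input_shape=(250, 250, 3),  # three-channel images at 250 x 250 resolution
)
base_model.trainable = False    # freeze the backbone so only the new head is trained
        </preformat>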
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Results</title>
      <p>In this work, the ResNet50v2 model was used as a transfer learning model and trained
on the forest fire detection dataset. The data was balanced, with fire and non-fire
images in equal proportion. To increase variability and prevent overfitting, random horizontal
flips, zooming, and cropping were applied to the training set during the data
augmentation stage. The validation and
test sets were not augmented; the test set was only rescaled by a factor of 1/255.</p>
      <p>The model was then extended with a global average pooling layer, a fully connected layer
of 128 neurons with an L2 regularization term (λ = 0.01), and a dropout rate of 50%. The binary
classification output layer used a sigmoid activation function reflecting the probability of fire.
Training was performed with the Adam optimizer, a learning rate of 0.0001, and the binary cross-entropy
loss. Early stopping was used to monitor the loss, and training was
terminated after 10 consecutive epochs without improvement.</p>
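      <p>Continuing the backbone sketch from Section 3.2, the classification head and training setup described above can be expressed roughly as follows. The hidden-layer activation, the quantity monitored by early stopping, the epoch budget, and the reuse of the generators from Section 3.1 are assumptions; the layer sizes, regularization strength, dropout rate, optimizer, learning rate, loss, and patience come from the text.</p>
      <preformat>
# Hypothetical head and training setup (TensorFlow/Keras); a sketch, not the authors' exact code.
import tensorflow as tf

model = tf.keras.Sequential([
    base_model,                                              # frozen ResNet50V2 backbone (Section 3.2 sketch)
    tf.keras.layers.GlobalAveragePooling2D(),                # global average pooling
    tf.keras.layers.Dense(
        128, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(0.01)),  # 128 neurons with L2 (lambda = 0.01)
    tf.keras.layers.Dropout(0.5),                            # 50% dropout
    tf.keras.layers.Dense(1, activation="sigmoid"),          # probability of fire
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # Adam, learning rate 0.0001
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)  # stop after 10 epochs without improvement

history = model.fit(train_data, validation_data=val_data,
                    epochs=100, callbacks=[early_stop])
      </preformat>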
      <p>Most academic research evaluates model performance not by a single measure, but rather by a
combination of measures, including accuracy (1), recall (2), precision (3), and F1-score (4). These
metrics allow for fair and comprehensive comparisons across tasks, in addition to providing a
quantitative measure of model performance [32].</p>
      <sec id="sec-4-1">
        <title>True Positive (TP )+True Negative ( TN )</title>
        <p>True Positive (TP )+True Negative (TN )+ False Positive ( FP )+ False Negativ(1e)( FN )
,
Recall=
Precision=</p>
      </sec>
      <sec id="sec-4-2">
        <title>True Positives ( TP )</title>
      </sec>
      <sec id="sec-4-3">
        <title>True Positives (TP )+ False Negatives ( FN )</title>
      </sec>
      <sec id="sec-4-4">
        <title>True Positives ( TP )</title>
      </sec>
      <sec id="sec-4-5">
        <title>True Positives (TP )+ False Positives ( FP )</title>
        <p>,</p>
      </sec>
      <sec id="sec-4-6">
        <title>2 x Precision x Recall</title>
        <p>F 1 Score=</p>
      </sec>
      <sec id="sec-4-7">
        <title>Precision+ Recall</title>
        <p>(2)
(3)
(4)</p>
        <p>There is a need to analyze the working of the model in a detailed manner, which can be done by
analyzing certain different parameters. One of these metrics can be a confusion matrix as shown in
Fig. 3. The confusion matrix protocol allows the user to more accurately determine the type of classes
that have been solved compared to the others. In this way, it can be determined which fire class was
correctly recognized and which was more prone to errors.</p>
      </sec>
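      <p>For illustration, these metrics can be computed from the model's predictions as in the brief sketch below; the 0.5 decision threshold and the use of scikit-learn are assumptions.</p>
      <preformat>
# Hypothetical evaluation sketch; threshold and library choice are assumptions.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_prob = model.predict(test_data).ravel()   # sigmoid outputs in [0, 1]
y_pred = (y_prob >= 0.5).astype(int)        # threshold at 0.5
y_true = test_data.classes                  # ground-truth labels from the (unshuffled) generator

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("Confusion matrix:")
print(confusion_matrix(y_true, y_pred))     # rows: true class, columns: predicted class
      </preformat>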
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>This article demonstrates a useful application of DL to an important ecological problem:
detecting wildfires. Applying the ResNet50V2 architecture together with a small but carefully
augmented dataset results in 97.63% accuracy and 98.40% precision in locating wildfire-affected
regions from satellite images. With regard to transfer learning, the use of pre-trained ImageNet
weights is one of the most notable advantages, as it allows for better results and faster convergence
even when there is not sufficient training data available. Data augmentation methods such as horizontal
flips, zooms, and crops help control overfitting, which is a serious problem when working with small
datasets. Regularization techniques also improve the model's generalizability.</p>
      <p>Still, the error analysis would be improved by providing more detail beyond the confusion
matrix. Determining whether the misclassifications are false positives (areas without fire
marked as fire) or false negatives (burned areas that should have been marked but were not) is the most
important part of misclassification analysis. Examining those misclassified images could reveal the flaws
in the model and the biases it holds.</p>
      <p>Although many researchers use the ResNet50V2 architecture with transfer learning for image
classification, our research is different. We customize the model to detect post-wildfire land damage
using satellite images, an area not previously investigated. Unlike most prior work on real-time fire or
smoke detection, our focus is on identifying areas of fire damage in forests. Moreover, unlike
more sophisticated solutions such as the dual-agent detection system described in [22], our model is
less complicated while achieving comparably high accuracy, and is therefore better suited to environments with
constrained resources. To address the problem of limited datasets, we applied specific augmentation
strategies, dropout, and L2 regularization. These methods help ensure robustness and generalization.
The modular architecture and training pipeline make both centralized and edge-based fire monitoring
more practical.</p>
      <p>Furthermore, examining the practical aspects of the proposed methods would greatly increase the
impact of the study. For example, in what ways could this model be integrated with current wildfire
monitoring systems? What are the implications for safeguarding the environment, saving money, and
improving response time? Addressing these questions would reiterate the importance of the
topic while enriching the discussion.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Using the ResNet50V2 architecture, the model achieved a remarkable accuracy of 97.63% when
classifying satellite images into wildfire and non-wildfire categories. The model's accuracy, coupled
with its capacity to spot forest fires, makes it an invaluable asset for prompt fire detection and
prevention. Adopting an approach based on DL and satellite imagery provides the opportunity to
enhance the detection of fires at their earliest stages, thus enabling quicker, more efficient action. In
addition, such near-real-time assessments can aid firefighting efforts at both the individual and community level.
The use of satellite information for rapid evaluation can help responders
assess the location of fire activity. This is important in deciding where to dispatch
firefighting teams, hence optimizing resource use and reducing damage.</p>
      <p>In addition, this study helps refine the general approach to managing forest fires. As the current
version of the model improves, further steps could include testing other augmentation strategies to
increase the model's robustness, new fine-tuning adjustments to improve performance, or even other
neural network designs better suited to particular satellite imagery or regional geography.
Such changes would greatly broaden the range of applications of the model so that it can be
configured to work in varying environmental and geographical regions, including those that are
currently unmonitored. This development may enhance the capacity to monitor and prevent wildfires in different
ecosystems around the world.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Grammarly for grammar and spelling
checking. After using this tool, the authors reviewed and edited the content as needed and take full
responsibility for the publication's content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>General</given-names>
            <surname>Directorate</surname>
          </string-name>
          of Forestry. (
          <year>2021</year>
          ).
          <year>2020</year>
          <article-title>Turkey's forest assets</article-title>
          .
          <source>Republic of Turkey Ministry of Agriculture and Forestry</source>
          , General Directorate of Forestry.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>FAO</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>The state of the world's forests 2022</article-title>
          .
          <article-title>Forest pathways for green recovery and building inclusive, resilient and sustainable economies</article-title>
          . Rome: FAO.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Price</surname>
            ,
            <given-names>D. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alfaro</surname>
            ,
            <given-names>R. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brown</surname>
          </string-name>
          , K. J.,
          <string-name>
            <surname>Flannigan</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fleming</surname>
            ,
            <given-names>R. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hogg</surname>
            ,
            <given-names>E. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Venier</surname>
            ,
            <given-names>L. A.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Anticipating the consequences of climate change for Canada's boreal forest ecosystems</article-title>
          .
          <source>Environmental Reviews</source>
          ,
          <volume>21</volume>
          (
          <issue>4</issue>
          ),
          <fpage>322</fpage>
          -
          <lpage>365</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Ali</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blarquez</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Girardin</surname>
            ,
            <given-names>M. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hély</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tinquaut</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>El Guellab</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergeron</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Control of the multimillennial wildfire size in boreal North America by spring climatic conditions</article-title>
          .
          <source>Proceedings of the National Academy of Sciences</source>
          ,
          <volume>109</volume>
          (
          <issue>51</issue>
          ),
          <fpage>20966</fpage>
          -
          <lpage>20970</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Akhyar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zulkifley</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Song</surname>
            , T., Han,
            <given-names>J</given-names>
          </string-name>
          .,
          <string-name>
            <surname>Cho</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>B. W.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Deep artificial intelligence applications for natural disaster management systems: A methodological review</article-title>
          .
          <source>Ecological Indicators</source>
          ,
          <volume>163</volume>
          ,
          <fpage>112067</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Barmpoutis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papaioannou</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dimitropoulos</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Grammalidis</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>A review on early forest fire detection systems using optical remote sensing</article-title>
          .
          <source>Sensors</source>
          ,
          <volume>20</volume>
          (
          <issue>22</issue>
          ),
          <fpage>6442</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Özel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alam</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>M. U.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Review of Modern Forest Fire Detection Techniques: Innovations in Image Processing and Deep Learning</article-title>
          . Information,
          <volume>15</volume>
          (
          <issue>9</issue>
          ),
          <fpage>538</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Riyadi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abidin</surname>
            ,
            <given-names>F. A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Audita</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Comparison of ResNet50V2 and MobileNetV2 models in building architectural style classification</article-title>
          .
          <source>In Proceedings of the 2024 International Conference on Intelligent Systems and Computer Vision</source>
          (ISCV) (pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Yandouzi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grari</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Idrissi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boukabous</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moussaoui</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Azizi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elmiad</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Forest fires detection using deep transfer learning</article-title>
          .
          <source>Forest</source>
          ,
          <volume>13</volume>
          (
          <issue>8</issue>
          ),
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>RS</surname>
            ,
            <given-names>V. K.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>CoC-ResNet-classification of colorectal cancer on histopathologic images using residual networks</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          ,
          <volume>83</volume>
          (
          <issue>19</issue>
          ),
          <fpage>56965</fpage>
          -
          <lpage>56989</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hassan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Dataset for forest fire detection</article-title>
          .
          <source>Mendeley Data</source>
          ,
          <volume>1</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>R. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bajwa</surname>
            ,
            <given-names>U. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raza</surname>
            ,
            <given-names>R. H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Anwar</surname>
            ,
            <given-names>M. W.</given-names>
          </string-name>
          (
          <year>2025</year>
          ).
          <article-title>Beyond boundaries: Advancements in fire and smoke detection for indoor and outdoor surveillance feeds</article-title>
          .
          <source>Engineering Applications of Artificial Intelligence</source>
          ,
          <volume>142</volume>
          ,
          <fpage>109855</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Yuan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques</article-title>
          .
          <source>Canadian journal of forest research</source>
          ,
          <volume>45</volume>
          (
          <issue>7</issue>
          ),
          <fpage>783</fpage>
          -
          <lpage>792</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Harkat</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nascimento</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernardino</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ahmed</surname>
            ,
            <given-names>H. F. T.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Fire images classification based on a handcraft approach</article-title>
          .
          <source>Expert Systems with Applications</source>
          ,
          <volume>212</volume>
          ,
          <fpage>118594</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hua</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fan</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ye</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Preferred vector machine for forest fire detection</article-title>
          .
          <source>Pattern Recognition</source>
          ,
          <volume>143</volume>
          ,
          <fpage>109722</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Sathishkumar</surname>
            ,
            <given-names>V. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cho</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subramanian</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Naren</surname>
            ,
            <given-names>O. S.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Forest fire and smoke detection using deep learning-based learning without forgetting</article-title>
          .
          <source>Fire ecology</source>
          ,
          <volume>19</volume>
          (
          <issue>1</issue>
          ),
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Best, N., Ott, J., &amp; Linstead, E. J. (2020). Exploring the efficacy of transfer learning in mining image-based software artifacts. Journal of Big Data, 7, 1–10.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] Peng, R., Cui, C., &amp; Wu, Y. (2025). Real-time fire detection algorithm on low-power endpoint device. Journal of Real-Time Image Processing, 22(1), 29.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] Ginkal, P. M., &amp; Kalaiselvi, K. (2024). Forest fire detection using AI. Grenze International Journal of Engineering &amp; Technology (GIJET), 10.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] Titu, M. F. S., Pavel, M. A., Michael, G. K. O., Babar, H., Aman, U., &amp; Khan, R. (2024). Real-time fire detection: Integrating lightweight deep learning models on drones with edge computing. Drones, 8(9), 483.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] Anh, N. D., Van Thanh, P., Lap, D. T., Khai, N. T., Van An, T., Tan, T. D., ... &amp; Dinh, D. N. (2022). Efficient forest fire detection using rule-based multi-color space and correlation coefficient for application in unmanned aerial vehicles. KSII Transactions on Internet and Information Systems (TIIS), 16(2), 381–404.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] Liu, G., Liu, Z., Qu, G., Ren, L., Wang, L., &amp; Yan, M. (2024). Dual-agent intelligent fire detection method for large commercial spaces based on numerical databases and artificial intelligence. Process Safety and Environmental Protection, 191, 2485–2499.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] Dampage, U., Bandaranayake, L., Wanasinghe, R., Kottahachchi, K., &amp; Jayasanka, B. (2022). Forest fire detection system using wireless sensor networks and machine learning. Scientific Reports, 12(1), 46.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] Huang, C., &amp; Yang, Y. (2024, April). Gaussian noise image recognition based on convolutional neural networks. In 2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL) (pp. 98–101). IEEE.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] Shinde, P. P., &amp; Shah, S. (2018, August). A review of machine learning and deep learning applications. In 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA) (pp. 1–6). IEEE.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] Rassil, A., Chougrad, H., &amp; Zouaki, H. (2022). Augmented graph neural network with hierarchical global-based residual connections. Neural Networks, 150, 149–166.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] Riyadi, S., Abidin, F. A., &amp; Audita, N. (2024, May). Comparison of ResNet50V2 and MobileNetV2 models in building architectural style classification. In 2024 International Conference on Intelligent Systems and Computer Vision (ISCV) (pp. 1–8). IEEE.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] Li, B., &amp; Lima, D. (2021). Facial expression recognition via ResNet-50. International Journal of Cognitive Computing in Engineering, 2, 57–64.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] Shafiq, M., &amp; Gu, Z. (2022). Deep residual learning for image recognition: A survey. Applied Sciences, 12(18), 8972.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] Hindarto, D. (2023). Use ResNet50V2 deep learning model to classify five animal species. Jurnal JTIK (Jurnal Teknologi Informasi dan Komunikasi), 7(4), 758–768.</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[31] Mungoli, N. (2023). Adaptive ensemble learning: Boosting model performance through intelligent feature fusion in deep neural networks. arXiv preprint arXiv:2304.02653.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>