<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Bozen, Italy</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Deep Learning Framework for Real-time Oil Spill Detection and Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pranshu Patel</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kedar Desai</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rinkal Jain</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chintan Bhatt</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Steve Vanlanduit</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Bruno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pier Luigi Mazzeo</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Business, Law, Economics and Consumer Behaviour “Carlo A. Ricciardi”, IULM AI Laboratory, IULM University</institution>
          ,
          <addr-line>20143 Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science and Engineering, School of Technology, Pandit Deendayal Energy University</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>ISASI Institute of Applied Sciences Intelligent Systems-CNR</institution>
          ,
          <addr-line>73100 Lecce</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>InViLab Research Group, Universiteit Antwerpen</institution>
          ,
          <addr-line>Groenenborgerlaan 171 2020 Antwerpen</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Efficient detection of oil spills is critical for minimizing environmental damage. This study introduces a novel approach utilizing deep learning, specifically the YOLOv8 architecture, augmented with advanced computer vision techniques for oil spill detection. Through meticulous dataset curation and model training, the YOLOv8 model achieved an overall accuracy (R-score) of 0.531 and a Mean Average Precision (mAP) of 0.549. Performance varied across different spill types, with the model demonstrating notable accuracy in distinguishing between oil spills and natural features, achieving precision and recall rates of up to 0.75 and 0.68, respectively, for sheen detection. Visualizations such as box loss, class loss, and confusion matrices provide insights into the model's performance dynamics, revealing a steady decrease in losses and an improvement in accuracy over epochs. The dataset comprises drone measurements performed by the Port of Antwerp Bruges. Furthermore, practical applications showcase the model's versatility in detecting various oil spill types in both image and video data, affirming its potential for real-world deployment in environmental monitoring and disaster response scenarios. This research represents a significant stride towards more effective oil spill detection, contributing to environmental sustainability and resilience efforts.</p>
      </abstract>
      <kwd-group>
        <kwd>Oil spills</kwd>
        <kwd>Environmental risk</kwd>
        <kwd>Oil exploration</kwd>
        <kwd>YOLOv8</kwd>
        <kwd>Object Detection</kwd>
        <kwd>Computer Vision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
The global energy demand heavily relies on the exploration, extraction, and transportation of oil.
However, alongside meeting energy needs, these activities also pose significant environmental risks,
with oil spills emerging as a primary concern. The detrimental impacts of oil spills on ecosystems,
aquatic life, and human communities underscore the critical necessity for effective and prompt detection
methods. Recent advancements in deep learning and computer vision offer promising avenues for
enhancing the efficiency of oil spill detection processes. This study introduces an innovative approach
to oil spill detection, harnessing the power of deep learning, specifically the YOLOv8 architecture [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Renowned for its real-time object detection capabilities, YOLOv8 is adept at addressing the dynamic and
time-sensitive nature of oil spill incidents. Our proposed model, augmented with advanced computer
vision techniques, aims not only to accurately identify oil spills but also to distinguish them from natural
environmental features, thereby minimizing false positives. The motivation for this research stems from
the urgent need for proactive measures to manage and mitigate the environmental impact of oil spills.
Conventional methods, such as manual interpretation of satellite imagery, are not only time-consuming
but also prone to errors. The integration of deep learning and computer vision technologies into this
domain presents an opportunity to revolutionize the speed and accuracy of oil spill identification,
facilitating faster response times and more effective containment efforts [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>In the subsequent sections, we delve into the methodology, elucidating how YOLOv8 was tailored to
the unique challenges of oil spill detection. We provide insights into the construction and curation of a
diverse dataset of drone images to train the model, ensuring its robustness across various environmental
conditions. Additionally, we present the results of our model’s performance evaluation against private
port datasets, showcasing its superiority over existing methods. This research aims to contribute to the
realm of environmental monitoring and disaster response, with a specific focus on mitigating the impact
of oil spills. Beyond environmental conservation, the implications of an efficient oil spill detection
system extend to economic and social dimensions. The scalability and integration potential of our
proposed model into existing monitoring systems underscore its applicability as a valuable tool for
bolstering environmental sustainability and resilience.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>
        The application of the YOLO model for real-time marine radar-based oil spill monitoring is thoroughly
explored. The research effectively addresses the need for timely detection capabilities in dynamic
marine environments by employing deep learning techniques to enhance the efficiency and accuracy of
oil spill identification using marine radar data. The study contributes valuable insights into the potential
of the YOLO architecture in addressing challenges posed by oil spills in marine ecosystems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        The study investigates the integration of deep learning methods with Sentinel-1 Synthetic Aperture
Radar (SAR) imagery for oil spill detection. The study carefully examines how deep learning enhances
accuracy and efficiency in identifying oil spills in SAR data, which is crucial for timely response and
mitigation. The research provides valuable insights into using advanced remote sensing technology and
deep learning algorithms, highlighting their potential to improve the reliability of oil spill detection [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        The authors present a study on deep-water oil spill monitoring in Brazilian territory, utilizing Sentinel-1 time series data
and deep learning techniques. The research aims to understand the recurrence patterns of oil spills
in deep-water environments. By combining remote sensing data and advanced analysis methods, the
authors contribute to the knowledge of oil spill dynamics in challenging marine settings [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The authors present a conference paper on a fully automated Synthetic Aperture Radar (SAR) based
oil spill detection method using the YOLOv4 architecture. The emphasis is on real-time and automated
identification of oil spills, addressing the urgent need for rapid response in emergencies. The study
contributes significantly by demonstrating the applicability of YOLOv4 in SAR imagery for efficient
and accurate oil spill detection [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        The research study investigates marine oil spill detection over the Indian Ocean using synthetic
aperture radar (SAR). The study emphasizes radar technology’s application in monitoring oil spills in
large water bodies, providing valuable insights into the spatial and temporal dynamics of incidents.
This research establishes a foundation for effective monitoring and response strategies in oceanic
environments [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        This research introduces a Synthetic Aperture Radar (SAR) oil spill detection system using random
forest classifiers. The study integrates machine learning techniques with SAR data to develop effective
algorithms for oil spill detection. Leveraging random forest classifiers, the authors enhance the accuracy
and reliability of SAR-based oil spill identification, contributing significantly to the development of
robust detection systems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        The authors present work on the automated detection and classification of spilled loads on freeways using
an improved YOLO network. Although not directly related to oil spills, this research provides valuable
insights into the broader applicability of YOLO networks for object detection. The study highlights
advancements in the YOLO network, suggesting potential implications for adapting similar techniques
to oil spill detection scenarios [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        The research proposes an advanced convolutional neural network for oil spill detection in
quad-polarimetric Synthetic Aperture Radar (SAR) images. The study focuses on using complex SAR data
and advanced neural network architectures for accurate oil spill identification. This study contributes
significantly to understanding the potential of convolutional neural networks in extracting intricate
features from SAR data, enhancing oil spill detection [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>
        In the initial phase of our study, meticulous attention was dedicated to the collection and preprocessing
of datasets, ensuring uniformity and data quality. This involved resizing both images and videos to a
consistent resolution, normalizing pixel values to a standardized scale, and addressing any artifacts
or anomalies present in the dataset. To facilitate model training and evaluation, we implemented a
structured Train-Validation-Test split, with 70% of the data allocated for training, 20% for validation,
and 10% for testing. Random shuffling was employed during the split to maintain a representative
distribution across the sets. The subsequent step involved annotation, a crucial aspect in training object
detection models. Utilizing specific tools like labelImg [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] or RectLabel [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], annotators underwent
training to ensure annotation consistency. Bounding boxes delineating oil spills were applied to images
and videos, and a meticulous validation process was executed to verify the accuracy and consistency of
annotations. For the object detection model, the choice was YOLOv8 [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], selected for its well-established
effectiveness in object detection tasks. Justification for this selection, however, necessitates clarity on
how our proposed method differs from the state-of-the-art and why a new method is essential. To
address this, we have focused on enhancing the YOLOv8 configuration, incorporating data augmentation
techniques to boost the model's robustness, enabling it to generalize effectively across diverse scenarios.
The training phase encompassed the specification of the optimization algorithm, such as Adam, along
with a defined learning rate. Determination of batch size and the number of epochs was achieved
through experimentation or adherence to best practices. Monitoring of progress throughout the training
procedure involved key metrics like loss, and we implemented early stopping mechanisms when deemed
necessary. During the evaluation phase, key metrics, including precision, recall, F1 score, and mean
average precision (mAP), were employed to assess the model’s performance comprehensively. Analysis
of the confusion matrix provided deeper insights into the model's effectiveness on the test set. These
meticulous steps, spanning from dataset preparation to model evaluation, collectively contribute to the
robustness and reliability of our object detection system for identifying oil spills while clarifying the
unique contributions of our proposed method compared to existing state-of-the-art approaches.
      </p>
    </sec>
    <sec id="sec-3-1">
      <title>3.1. Model Architecture</title>
      <p>
The YOLOv8 architecture, specifically tailored for object detection applications like identifying oil spills,
represents an evolution within the renowned You Only Look Once (YOLO) series. This architecture
integrates various layers, encompassing Rectified Linear Unit (ReLU), Convolutional 2D (Conv2d),
and Max Pooling 2D (MaxPool2D), to achieve its functionality. At its core, the architecture comprises
CSPDarknet53, serving as the backbone responsible for feature extraction from input images. Consisting
of multiple Conv2d layers and ReLU activation functions, CSPDarknet53 plays a pivotal role in discerning
hierarchical features from the input.
      </p>
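      <p>The Train-Validation-Test split described in the methodology above (70% training, 20% validation, 10% testing, with random shuffling) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; <monospace>image_paths</monospace> is a hypothetical list of dataset file names.</p>

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.2, seed=42):
    """Shuffle and split items into train/validation/test partitions."""
    items = list(items)
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(items)         # random shuffling for a representative split
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],                    # 70% training
            items[n_train:n_train + n_val],     # 20% validation
            items[n_train + n_val:])            # remaining ~10% testing

# hypothetical file names standing in for the drone images
image_paths = [f"img_{i:04d}.jpg" for i in range(100)]
train_set, val_set, test_set = split_dataset(image_paths)
```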
      <p>The PANet (Path Aggregation Network) functions as the neck of the architecture, enhancing object
detection across different scales through the aggregation of features. This is accomplished by employing
Conv2d layers and ReLU activation functions. The YOLO head, the final segment of the architecture, is
responsible for predicting bounding boxes, objectness scores, and class probabilities. It incorporates
YOLO layers, each dedicated to predicting information for objects at distinct scales. The YOLO layers
typically employ a combination of Conv2d layers, ReLU activation functions, and, at times, MaxPool2D
layers. SPP (Spatial Pyramid Pooling) is incorporated to capture contextual information at multiple
scales, enhancing the model's capacity to comprehend intricate patterns. CSPNet (Cross-Stage Partial
Network) contributes to improved feature fusion across different stages of the network, facilitating more
effective information flow. The output layer, comprising Conv2d layers and ReLU activation functions,
finalizes the architecture, generating predictions including bounding boxes and class probabilities.
Collectively, these interconnected layers form a cohesive architecture designed for real-time object
detection, making YOLOv8 particularly apt for tasks such as oil spill detection in drone images and
videos.</p>
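      <p>Predicted bounding boxes from the YOLO head are conventionally matched against annotated ground-truth boxes using Intersection over Union (IoU). The sketch below is a generic implementation of that standard computation, not code from this study; the (x1, y1, x2, y2) corner-coordinate convention is an assumption.</p>

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # corners of the overlap rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# identical boxes give IoU 1.0; disjoint boxes give IoU 0.0
```

<p>A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 for mAP50; a sweep of thresholds from 0.5 to 0.95 for mAP50-95).</p>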
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
      <p>The YOLOv8 model, upon training, achieved an overall accuracy (R-score) of 0.531 and a Mean Average
Precision (mAP) of 0.549. Notably, the performance for sheen detection exhibited a lower value of 0.4,
which, regrettably, impacted the overall performance and accuracy of the model.</p>
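      <p>For reference, the precision, recall, and F1 values reported in this evaluation follow their standard definitions in terms of true-positive, false-positive, and false-negative counts. The following is a generic sketch of those formulas, not the study's evaluation code; the example counts are illustrative only.</p>

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# illustrative counts: 75 correct detections, 25 false alarms, 25 misses
p, r, f1 = precision_recall_f1(tp=75, fp=25, fn=25)  # -> (0.75, 0.75, 0.75)
```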
      <p>[Table: per-class detection results with columns Images, Instances, mAP50, and mAP50-95; table body not recovered from the source.]</p>
      <p>To delve into the intricacies of the model’s performance, we present visualizations in the form of
graphs depicting Box loss, Class loss, and Distribution Focal loss for both training and validation data.
Accompanying these visualizations are performance graphs that underscore the evolution of the model’s
efficacy across epochs. These graphs collectively illuminate a consistent trend of decreasing losses with
increasing epochs, indicating a significant enhancement in the model’s performance.</p>
      <p>In an effort to provide a comprehensive understanding of the model's accuracy, we utilized a confusion
matrix visually represented through a heatmap. This matrix offers a clear depiction of the object detection
rate, aiding in the nuanced evaluation of the model’s strengths and areas for improvement.</p>
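      <p>A confusion matrix such as the one visualized can be accumulated from paired ground-truth and predicted labels. The sketch below is a minimal, generic illustration; the class names are hypothetical stand-ins for the spill categories, not the labels used in the study.</p>

```python
from collections import Counter

def confusion_matrix(true_labels, pred_labels, classes):
    """Rows are ground-truth classes, columns are predicted classes."""
    counts = Counter(zip(true_labels, pred_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

# hypothetical classes and labels for illustration
classes = ["oil", "sheen", "background"]
y_true = ["oil", "oil", "sheen", "background", "sheen"]
y_pred = ["oil", "sheen", "sheen", "background", "background"]
matrix = confusion_matrix(y_true, y_pred, classes)
# matrix[i][i] counts class i detected correctly; off-diagonal cells
# count confusions, which the heatmap renders as colour intensity
```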
      <p>To highlight the versatility of our model, we randomly selected examples of each colour shade of
oil spills, presenting the detected objects alongside their corresponding accuracy values in the top left
corner. This provides a practical insight into the model's capability to discern different types of oil spills
with varying degrees of accuracy.</p>
      <p>Additionally, the model’s robustness was tested on drone video data, and the results are showcased
in Figure 6. The accuracy values are prominently displayed in the top left corner, offering a concise
overview of the model’s performance in a real-world scenario.</p>
      <p>Addressing the aspect of novelty, it is essential to emphasize that our choice of the YOLOv8 model
is not merely a replication of existing methodologies. Instead, our innovation lies in the meticulous
configuration, augmentation techniques, and strategic interventions applied to the YOLOv8 framework.
This distinctive approach is geared towards addressing the limitations observed in current
state-of-the-art methods, especially concerning generalization across diverse environmental conditions.</p>
      <p>Lastly, our results and discussions are framed with a focus on experiments that compare favourably
to the state-of-the-art. The visualizations, metrics, and practical applications collectively underscore
the effectiveness of our approach, reinforcing the nuanced improvements introduced to the YOLOv8
model for enhanced oil spill detection in dynamic settings.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In conclusion, our investigation introduces a robust framework for detecting oil spills by integrating
YOLOv8 with advanced computer vision methodologies. By capitalizing on YOLOv8’s real-time object
detection capabilities and the discriminative features of computer vision, our model demonstrates
outstanding accuracy in pinpointing oil spills within satellite imagery. Rigorous benchmark assessments
validate its superior performance when compared to existing methodologies, suggesting its potential for
efficient utilization in monitoring and addressing oil spill incidents. The model's versatility in handling
diverse environmental conditions, scalability, and seamless integration with prevailing monitoring
systems underscore its practical utility. This research represents a notable advancement in optimizing
the effectiveness of oil spill detection systems, aligning with broader objectives related to environmental
preservation and prompt disaster response.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Scope for Future Work</title>
      <p>A critical aspect of advancing the model's efficacy involves delving into regional optimization and
tailoring the deep learning framework to accommodate the distinctive environmental characteristics of
specific geographical regions. This adaptation process is key to enhancing the model’s performance
and increasing its applicability across diverse ecosystems. Additionally, a strategic exploration of
multisensor fusion is imperative. Integrating data from various sensors, including Synthetic Aperture Radar
(SAR) and optical sensors, presents an opportunity to create a more comprehensive and resilient oil
spill detection system. The synergistic utilization of different sensing modalities holds the potential to
elevate detection accuracy and reliability, particularly in the face of fluctuating environmental conditions.
To further elevate the practical utility of the model, a concentrated effort on real-time deployment
optimization is paramount. This entails refining the model's speed and efficiency in the detection
algorithm, ensuring its responsiveness to emerging oil spill incidents in operational scenarios. In
essence, these refinements contribute to the model's adaptability, robustness, and practical effectiveness
across a spectrum of real-world applications.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Emna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Alexandre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bolon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Véronique</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bruno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Georges</surname>
          </string-name>
          ,
          <article-title>Ofshore oil slicks detection from sar images through the mask-rcnn deep learning model</article-title>
          , in: 2020
          <source>International Joint Conference on Neural Networks (IJCNN)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ghorbani</surname>
          </string-name>
          ,
          <article-title>Oil spill detection using deep neural networks</article-title>
          ,
          <source>Ph.D. thesis</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lian</surname>
          </string-name>
          , et al.,
          <article-title>Preliminary investigation on marine radar oil spill monitoring method using yolo model</article-title>
          ,
          <source>Journal of Marine Science and Engineering</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <fpage>670</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mayerle</surname>
          </string-name>
          ,
          <article-title>A deep learning based oil spill detector using sentinel-1 sar imagery</article-title>
          ,
          <source>International Journal of Remote Sensing</source>
          <volume>43</volume>
          (
          <year>2022</year>
          )
          <fpage>4287</fpage>
          -
          <lpage>4314</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N. V. A.</given-names>
            <surname>de Moura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. L. F.</given-names>
            <surname>de Carvalho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A. T.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Guimarães</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. A.</given-names>
            <surname>de Carvalho Júnior</surname>
          </string-name>
          ,
          <article-title>Deep-water oil-spill monitoring and recurrence analysis in the brazilian territory using sentinel-1 time series and deep learning</article-title>
          ,
          <source>International Journal of Applied Earth Observation and Geoinformation</source>
          <volume>107</volume>
          (
          <year>2022</year>
          )
          <fpage>102695</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mayerle</surname>
          </string-name>
          ,
          <article-title>Fully automated sar based oil spill detection using yolov4</article-title>
          , in: 2021
          <source>IEEE International Geoscience and Remote Sensing Symposium IGARSS</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>5303</fpage>
          -
          <lpage>5306</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Naz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Iqbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Allam</surname>
          </string-name>
          ,
          <article-title>Marine oil spill detection using synthetic aperture radar over indian ocean</article-title>
          ,
          <source>Marine Pollution Bulletin</source>
          <volume>162</volume>
          (
          <year>2021</year>
          )
          <fpage>111921</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. R. A.</given-names>
            <surname>Conceição</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. F.</given-names>
            <surname>F. de Mendonça</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A. D.</given-names>
            <surname>Lentini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. T.</given-names>
            <surname>da Cunha Lima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Lopes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. N.</given-names>
            <surname>de Vasconcelos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Gouveia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Porsani</surname>
          </string-name>
          ,
          <article-title>Sar oil spill detection system through random forest classifiers</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>13</volume>
          (
          <year>2021</year>
          )
          <fpage>2044</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <article-title>Automated detection and classification of spilled loads on freeways based on improved yolo network</article-title>
          ,
          <source>Machine Vision and Applications</source>
          <volume>32</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Oil spill detection in quad-polarimetric sar images using an advanced convolutional neural network based on superpixel model</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>12</volume>
          (
          <year>2020</year>
          )
          <fpage>944</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Boesch</surname>
          </string-name>
          , LabelImg for image annotation, https://viso.ai/computer-vision/labelimg-for-image-annotation/,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          RectLabel, https://rectlabel.com/.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          Ultralytics, YOLOv8, https://docs.ultralytics.com/models/yolov8/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>