<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Precision slicing for enhanced defect detection in high-resolution wind turbine blade imagery⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Svystun</string-name>
          <email>svystuns@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Melnychenko</string-name>
          <email>melnychenko@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo Radiuk</string-name>
          <email>radiukp@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleg Savenko</string-name>
          <email>savenko_oleg_st@ukr.net</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anatoliy Sachenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kazimierz Pulaski University of Technology and Humanities, Department of Informatics</institution>
          ,
          <addr-line>Radom</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Khmelnytskyi National University</institution>
          ,
          <addr-line>11, Institutes str., Khmelnytskyi, 29016</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Research Institute for Intelligent Computer Systems, West Ukrainian National University</institution>
          ,
          <addr-line>Ternopil</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>The analysis of high-resolution aerial imagery captured by unmanned aerial vehicles (UAVs) presents significant analytical challenges, primarily due to the minuscule size of observable objects and the variability in object scale influenced by UAV altitude and positioning. These factors often lead to diminished data fidelity and complicate the detection of smaller objects, which are critical in applications such as infrastructure monitoring. Traditional image processing techniques, which typically segment images into smaller, randomly cropped sections before analysis, often fail to sufficiently address these challenges. In this work, we propose a novel defect detection framework for identifying minor to medium-sized damages on wind turbine blades (WTBs), a critical component in renewable energy production. The proposed framework, termed 'slice-aided inference,' enhances the existing methodologies by incorporating both traditional patch division and a novel, more advanced technique known as slice-aided hyper-inference. These techniques are rigorously assessed with various advanced deep learning models, emphasizing their efficiency in identifying surface defects. The empirical testing conducted as part of this study demonstrates significant enhancements in detection capabilities, leveraging a dataset of high-resolution UAV images to highlight the practical applications and effectiveness of the proposed framework in real-world scenarios.</p>
      </abstract>
      <kwd-group>
        <kwd>Aerial imagery</kwd>
        <kwd>drone imaging</kwd>
        <kwd>defect detection</kwd>
        <kwd>WTBs</kwd>
        <kwd>slice-aided inference</kwd>
        <kwd>hyper-inference</kwd>
        <kwd>deep neural networks</kwd>
        <kwd>image segmentation</kwd>
        <kwd>high-resolution imaging</kwd>
        <kwd>object detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Renewable energy sources are increasingly recognized for their substantial benefits and are not
just solutions for individual nations but global imperatives in the face of climate change. Unlike
fossil fuels, they significantly mitigate CO2 emissions, which is crucial in the international
strategy to curb anthropogenic global warming [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Deploying renewable energy bolsters
national energy security and diminishes dependency on imported fossil fuels [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The localized
availability of renewable resources minimizes the risks associated with supply disruptions and
price volatility, promoting energy independence [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As a result, there is an escalating trend
among countries to invest in renewable energy infrastructures, including wind turbines and
solar panels.
      </p>
      <p>
        Wind turbines convert kinetic wind energy into electrical power, reducing greenhouse gas
emissions and enhancing energy supply diversity, security, and sustainability. These turbines
operate by harnessing the motion of wind to generate electricity without the need for
combustion processes [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The efficiency and output of wind turbines are significantly
influenced by the condition of their blades, which are integral to maximizing energy capture
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Research suggests that the blades account for up to 25% of a turbine’s total energy
production. Maintaining these blades in optimal condition is imperative for maximizing energy
generation and minimizing operational downtime and associated costs [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Historically, the
inspection of wind turbines, especially offshore ones, has been predominantly manual and
labor-intensive, contributing to elevated maintenance costs and extended operational delays.
Recent advancements in sensor technologies, including those for acoustic, vibration, ultrasonic,
and strain measurements, have substantially improved wind turbines' maintenance and
condition monitoring [
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7–9</xref>
        ]. The integration of visual sensors, which facilitate detailed
examinations of turbine surfaces, is expected to yield significant benefits in the maintenance
regimes of these critical energy assets.
      </p>
      <p>
        There is a growing need for safer and more efficient methods of inspecting WTBs to enhance
cost-effectiveness and efficiency [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The ideal solution would balance sensors' reliability,
accuracy, and affordability. By implementing an effective and economical approach, wind farm
operators can optimize energy production while cutting maintenance expenses. Technological
advancements, especially in UAVs, are at the forefront in developed countries for various
applications [11, 12].
      </p>
      <p>UAVs have proven to be highly effective tools for aerial inspections, particularly in assessing
WTBs [13]. By utilizing UAV-based inspection methodologies, practitioners have achieved an
impressive throughput of 10–12 daily turbines, which could increase to 15–20 with complete
automation [13]. These advancements surpass conventional inspection methods and hold
significant promise for improving inspection efficacy, reducing expenses, and enhancing energy
generation with minimal operational disruption, all of which are crucial in the energy sector.</p>
      <p>The automated scrutiny of energy infrastructure, notably WTBs, can gain substantially from
the progression in drone technology and remote image surveillance, engendering cost
efficiencies and fostering climate change mitigation endeavors and safety protocols. Despite
strides made in deep learning paradigms, persistent challenges in object detection persist,
stemming from factors such as diminished image resolution, occlusions, intricate backgrounds,
and the diminutive dimensions of target objects. The essence of deep learning methodologies
lies in the iterative processes of training and inference, whereby models are honed to discern
anomalies through optimization methodologies applied to datasets replete with instances of
defects [14]. Subsequently, once trained, these models can identify deviations in new imagery
by extrapolating learned patterns. The overarching significance of meticulous dataset curation,
judicious selection of architectural frameworks, and optimization of these facets underscores
the imperative of achieving precise outcomes.</p>
      <p>Introducing high-resolution technology in aerial imaging has significantly enhanced our
ability to capture intricate details from a high vantage point [15, 16]. However, this
advancement also brings its own set of challenges. Depending on the UAV's proximity to the
subject, the varying scales of objects within these images can make it difficult to discern small
entities as the UAV moves away from its focal point. Additionally, the large amount of
background information in high-resolution imagery can challenge effective computational
processing.</p>
      <p>Within the purview of deep learning detection methodologies, exemplified by convolutional
neural networks (CNNs), these challenges manifest as impediments to optimal classifier
training, frequently culminating in compromised detection accuracy [17]. The emergence of
high-resolution data streams, typified by high-definition (HD) 4K, necessitates the innovation
of novel analytical techniques to navigate these intricacies.</p>
      <p>Figure 1 visually represents the multifaceted challenges encountered in inspecting surface
defects on WTBs using drone technology.</p>
      <p>(Figure 1 panels: Training, Inference, and Inference Zoomed.)</p>
      <p>The scheme in Figure 1 underscores the intricacies inherent in analyzing high-resolution
images in this domain.</p>
      <p>The rest of the paper is structured as follows. Related works review existing methodologies
and similar approaches. The suggested architecture section details the proposed defect detection
framework, covers the DTU-Drones dataset and annotation process, and outlines training and
inference strategies, emphasizing slice-aided hyper-inference. The results section presents
experimental outcomes and performance analysis. The discussion highlights comparative
analysis and practical implications. Finally, the conclusion summarizes findings, improvements
in defect detection, and future research directions, followed by references.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>The analysis of defects on WTBs encompasses a spectrum of methodologies, ranging from
conventional techniques rooted in image processing to contemporary approaches leveraging
hand-crafted features. Wang and Zhang, for instance, deployed Haar-like features in tandem
with cascaded classifiers to discern surface cracks on WTBs, with a principal emphasis on
distinguishing cracked regions from non-cracked ones [18]. Similarly, Huang and Wang
extended the utility of Haar-like features, integrating them with the parallel Jaya K-means
algorithm to achieve enhanced precision in surface crack detection [19]. Deng and Guo devised
a novel strategy by amalgamating an optimized Lévy flight strategy combined with the
logGabor filter for defect identification [20].</p>
      <p>To efficiently detect large-scale cracks on WTBs, Peng et al. [21] proposed an analytical
framework harnessing UAV-captured imagery. Alternatively, Ruiz and Magda [22] adopted a
distinct approach by converting operational signals from wind turbines into grayscale
representations, subsequently leveraging multichannel texture features for pattern recognition.</p>
      <p>The advent of deep learning methodologies has catalyzed a significant paradigm shift within
the realm of WTB defect detection. Shihavuddin et al. [23] spearheaded the introduction of a
feature pyramid network augmented with offline data augmentation techniques tailored
specifically for processing higher-resolution imagery. Their methodology involved training
diverse Faster-RCNN detectors on heterogeneous private datasets, yielding promising
outcomes. Subsequent explorations delved into the efficacy of YOLO models and EfficientDet,
further substantiating the potential of deep learning frameworks in this domain.</p>
      <p>The superiority of CNNs over traditional descriptors has been underscored, particularly with
the added advantage of ensemble classifiers. Foster et al. [17] contributed to this discourse by
categorizing WTB defects through the utilization of image patches for both training and
inference tasks. Yu et al. [24] addressed challenges associated with blurry imagery by deploying
a super-resolution CNN model complemented by Laplacian variance pre-processing techniques.
Remarkable advancements in detection performance for WTB defects have also been ascribed
to deploying other deep learning architectures.</p>
      <p>Collectively, these studies illuminate the pivotal role of deep learning methodologies in
augmenting defect detection capabilities in WTB inspection.</p>
      <p>The processing of high-resolution drone-captured images poses substantial challenges due
to the variability in object scales and the requisite computational resources [25]. To mitigate
these challenges, a prevalent approach involves partitioning these images into smaller patches,
a practice endorsed by several studies in the field [13, 19]. This strategy alleviates computational
burdens, enhances object clarity, and augments dataset dimensions, fortifying model
performance.</p>
      <p>Despite the substantial body of literature dedicated to detecting WTB defects, a notable
discrepancy endures in the methodologies and classification schemes employed across these
studies, a gap that our research aims to address. Benchmarking the performance of WTB surface
defect detectors proves particularly challenging due to the confidentiality of data [24] and the
absence of annotations, even in cases where data accessibility is feasible [17, 23]. Our work
strives to establish a unified approach and set a benchmark for future studies in this area.</p>
      <p>Utilizing drones for detecting surface defects on wind turbine blades (WTB) has proven to
be a cost-effective and efficient method, supported by prior research findings. However, this
inspection technique comes with its own set of challenges, such as processing high-resolution
images, detecting small objects, and adjusting for changes in object scale due to variations in
the drone's position.</p>
      <p>This study aims to improve the accuracy of defect detection in renewable energy assets using
imagery captured by UAVs. To accomplish the stated objective, this paper makes the
following distinctive contributions:</p>
      <p>• We present a defect detection framework that integrates a realistic slice-based inference
strategy for object detection in high-resolution images.</p>
      <p>• We conduct a benchmark comparison of our framework against several state-of-the-art
deep learning detection baselines and slicing strategies, tailored specifically for
inspecting wind turbine blades.</p>
      <p>• We perform an extensive evaluation using a high-resolution drone image dataset,
showcasing significant improvements in detecting minor and medium-sized defects on
wind turbine blades.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Suggested architecture</title>
      <p>Figure 2 depicts our proposed framework for WTB surface defect detection. During the
preprocessing step, high-resolution images undergo partitioning into patches, which are
subsequently incorporated into the training process. This framework aims to optimize defect
detection performance by leveraging patch-based and full-resolution inference strategies,
thereby addressing the challenges associated with high-resolution imagery in WTB inspection.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>This study uses the DTU-Drones inspection dataset of WTB images sourced from the Technical
University of Denmark (DTU). This publicly available dataset, accessible at [26], comprises 589
high-resolution images captured between 2017 and 2018 across diverse environmental
conditions. Notably, while high-resolution images typically possess dimensions of 1920 × 1080
pixels, the images within this dataset boast 5280 × 2890 pixels, unequivocally classifying them
as high resolution.</p>
        <p>Given the absence of surface defect bounding boxes in the DTU dataset, we embarked on an
annotation process after an exhaustive analysis of the various WTB surface defects prevalent in the
dataset. Several recent studies have utilized this dataset for similar
purposes [17, 23]; however, their value would have been greater had they made their
annotations publicly accessible for broader utilization and provided more consistent
interpretations of surface defect types.</p>
      </sec>
      <sec id="sec-3-1-1">
        <title>3.2. Surface defects on wind turbine blades</title>
        <p>Identifying defects required a comprehensive review of the existing literature concerning the
detection of surface defects in wind turbine blades (WTBs). Utilizing publicly available datasets
(Section 3.1) and conducting a comprehensive literature search, we systematically categorized
various surface defects for our study.</p>
        <p>We carefully selected a total of 314 images, each representing one of five distinct types of
surface defects, as detailed in this investigation. These images collectively encompass 879
instances, with multiple defects observed per image. Specifically, we identified the following
five categories of defects, as illustrated on the left side of Figure 2.</p>
        <p>Missing Teeth. This defect category involves missing teeth in the vertex-generating
panel, a critical component of WTBs. Identifying the presence or absence of these teeth
is essential for ensuring optimal blade performance.</p>
        <p>Paint-Off. Paint-off describes the loss or peeling of the protective paint layer on the
surface of WTBs. Although not directly harmful, paint-off signals the necessity for
maintenance to preserve the blade's structural integrity and extend its lifespan.</p>
        <p>Erosion. Erosion is a form of surface degradation in which WTBs experience
gradual wear and tear caused by environmental conditions or extended exposure to
natural elements. While erosion may not pose immediate threats, it necessitates regular
maintenance to mitigate potential issues.</p>
        <p>Crack. Surface cracks in WTBs are considered critical defects owing to their potential
to induce structural instability, ultimately leading to catastrophic failure. Detecting and
accurately localizing surface cracks are paramount for facilitating prompt maintenance
and averting further structural deterioration.</p>
        <p>Damage Lightning Receptor. The lightning receptor plays a crucial role in protecting
the wind turbine blade from lightning strikes. Identifying any surface damage to the
lightning receptor is essential for evaluating its effectiveness and ensuring it provides
sufficient protection against lightning-related damage.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Data annotation</title>
        <p>The dataset annotation process was conducted with meticulous attention to detail, explicitly
targeting the defective segments of WTBs. Our annotation methodology entailed precisely
localizing regions of interest corresponding to each defect type.</p>
        <p>During the annotation process, significant focus was placed on precisely identifying and
delineating the specific areas of the WTBs that exhibited defects such as
missing teeth in the vertex-generating panel, erosion, damage to the lightning receptor, cracks,
and paint-off. These surface defects were carefully localized within their respective regions,
providing a thorough and detailed annotation of the defective segments.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Pre-processing</title>
        <p>Before embarking on the learning phase, a series of pre-processing steps were diligently
executed to ensure the manageability of processing and model training for high-resolution
images, all while preserving the intricate details inherent in the imagery. Building upon the
insights gleaned from previous research, notably [17], we undertook an empirical analysis to
systematically assess the impact of various patch sizes on our study. The chosen patch sizes
proved compatible with our object detection models and demonstrated a discernible
performance advantage over alternative patch sizes.</p>
        <p>An automated approach was employed to select image patches containing at least one defect,
excluding patches exhibiting only background or devoid of defects from the dataset. Following
the patching process, the dataset was partitioned into distinct subsets for training, testing, and
validation purposes, facilitating subsequent experimental inquiries. Notably, both online
(onthe-fly) and offline augmentation techniques were applied exclusively after utilizing data
samples during the model training and inference processes.</p>
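        <p>The automated patch-selection step above can be sketched as follows (a minimal illustration under our own assumptions: square non-overlapping 1024-pixel patches, axis-aligned bounding boxes in full-image coordinates, and a hypothetical helper name):</p>

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in full-image pixels

def patches_with_defects(width: int, height: int, boxes: List[Box],
                         patch: int = 1024) -> List[Tuple[int, int]]:
    """Return top-left corners of non-overlapping patches that contain at
    least one defect bounding box; background-only patches are discarded."""
    keep = []
    for y0 in range(0, height, patch):
        for x0 in range(0, width, patch):
            x1, y1 = x0 + patch, y0 + patch
            # Keep the patch if any annotated box intersects it.
            if any(bx0 < x1 and bx1 > x0 and by0 < y1 and by1 > y0
                   for bx0, by0, bx1, by1 in boxes):
                keep.append((x0, y0))
    return keep
```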
      </sec>
      <sec id="sec-3-4">
        <title>3.5. Detection system framework</title>
        <p>This section provides a comprehensive exposition of the detection framework tailored to
address the nuances of high-resolution images. Illustrated in Figure 2 is the entirety of the
proposed training and inference pipeline devised for the detection of WTB surface defects
from high-resolution images. The proposed detection framework unfolds across two distinct
phases, namely, training and inference.</p>
        <p>Let x ∈ D represent a high-resolution image in the training partition database D. Preprocessing
produces non-overlapping image patches
P = {(p, b) ∣ p ⊆ x, p ∈ ℝ^(K × K × 3), b ≠ ∅}, (1)
where b is the set of bounding boxes associated with each image patch p and K = 1024, as
established in Section 4.1.</p>
        <p>Training minimizes the standard multi-class cross-entropy loss
ℒ_CE = −Σ_i t_i log(s_i). (2)
In this formula, t_i and s_i represent the ground truth label and the SoftMax probability for
the i-th class of C total classes, respectively, so the loss measures the difference between
the true distribution t and the predicted distribution s.</p>
        <p>A model ℳ is trained using the image patches that make up the training set, such that
ℳ ← train(P). (3)</p>
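        <p>The cross-entropy objective used during training can be illustrated numerically (a minimal sketch; the function name and example values are ours, not taken from the paper):</p>

```python
import math

def cross_entropy(t, s):
    """Multi-class cross-entropy: L = -sum_i t_i * log(s_i), with t the
    one-hot ground-truth vector and s the SoftMax probability vector."""
    return -sum(ti * math.log(si) for ti, si in zip(t, s) if ti > 0)

# A confident, correct prediction incurs a small loss; a uniform
# (uninformative) prediction incurs a larger one.
loss_good = cross_entropy([0, 1, 0], [0.05, 0.9, 0.05])
loss_flat = cross_entropy([0, 1, 0], [1 / 3, 1 / 3, 1 / 3])
```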
        <p>We trained three baseline neural network architectures for evaluation and benchmarking
purposes: YOLOv5 and RetinaNet, both known for their efficiency, and Faster-RCNN, renowned
for its accuracy but requiring additional computational resources. YOLOv5 employs a
compact yet practical architecture, featuring a deep CNN backbone with 21 convolutional layers
(CSPDarknet21), supplemented with a feature pyramid network (PANet) and multiple detection
heads for efficient object detection. Anchor boxes and a composite loss
function play pivotal roles in its training process, with non-maximum suppression refining
results during inference. On the other hand, RetinaNet adopts a one-stage design that
emphasizes efficiency and employs anchor boxes for region proposals. It utilizes a backbone
CNN (ResNet50) and a Feature Pyramid Network (FPN) to capture multi-scale features crucial
for detecting objects of various sizes. Lastly, Faster-RCNN operates as a two-stage model,
featuring a Region Proposal Network (RPN) for generating proposals and RoI Align layers for
feature extraction from these proposals, utilizing ResNet50 as its backbone. Despite its accuracy,
Faster-RCNN may demand more computational resources. These neural network architectures
were employed to train surface defect detection models, with images pre-processed as outlined
in Section 3.4. All models underwent training using the standard multi-class cross-entropy loss
function.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.6. Inference strategies</title>
        <p>In the subsequent phase, the inference process unfolds through two distinct strategies: Scenario
I and Scenario II. Each strategy is crafted to assess the proposed method's performance
under unique conditions. Figure 3 visually delineates both scenarios, offering a graphical
representation of their respective methodologies.</p>
        <p>Evaluating the method's performance under these contrasting conditions yields valuable
insights into the proposed framework’s practical viability and efficacy.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6.1. Scenario I: Patch-based inference</title>
        <p>Scenario I revolves around constructing the test set by leveraging image patches, as expounded
in Section 3.4. Within this framework, individual patches are fed into the trained model for
inference. Essentially, Scenario I illustrates a configuration where the model is trained with
image patches and evaluated on test patches that, while not identical, maintain the same patch
size as those used in training. This setup adheres to the conventional paradigm of machine
learning model training. However, it's crucial to acknowledge that Scenario I may segment
extended defects, creating separate bounding boxes within distinct image patches, which may
not align with practical feasibility. While the patch-based inference process facilitates swift
processing, it necessitates additional postprocessing steps for consolidating and identifying
corresponding image patches.</p>
      </sec>
      <sec id="sec-3-7">
        <title>3.6.2. Scenario II: Slice-aided inference</title>
        <p>In Scenario II, heightened realism is achieved by employing unprocessed high-resolution
images for the test set. This circumvents the need for manual pre-processing, instead employing
an internal pre-processing mechanism intrinsic to the testing process, as demonstrated in
Equation (4). This configuration offers notable advantages, notably enabling the direct
utilization of high-resolution images for prediction and amalgamating multiple detected defects
in the original image rather than treating them individually, thereby better reflecting the
challenges encountered in real-world scenarios.</p>
        <p>To effectively manage the processing of a high-resolution image during inference,
particularly within Scenario II, standard resizing or fixed cropping methods are suboptimal for
two primary reasons: (1) small objects may become nearly invisible following such
transformations, potentially eluding detection; and (2) the precision between overlapping
objects may be significantly compromised after resizing the original image.</p>
        <p>We implement a computational technique called "slice-aided hyper-inference" for
processing high-resolution images, particularly for detecting small and
medium-sized objects. Each image x′ from the test partition of the dataset is
segmented into q × r patches, denoted p′. This approach is designed to facilitate the
analysis of each segment individually, improving the inference accuracy and efficiency in
handling high-resolution data.</p>
        <p>To effectively manage the detection of surface defects, mainly to avoid issues with disjointed
defects across the boundaries of patches, the method incorporates overlapping sampling of
patches using a sliding window technique. This approach ensures that each window samples
patches with a defined overlap percentage v between adjacent windows. This overlapping
strategy aids in maintaining continuity in the areas of interest across patches, thereby reducing
the risk of missing or misidentifying defects that span multiple patches.</p>
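        <p>The overlapping sliding-window sampling described above can be sketched as follows (an illustrative implementation under assumed parameters: an 800-pixel window, matching Section 4.1, and a 20% overlap chosen purely for illustration):</p>

```python
def sliding_windows(width: int, height: int, win: int = 800,
                    overlap: float = 0.2):
    """Yield (x0, y0, x1, y1) windows covering the image, with a fractional
    overlap v = `overlap` between adjacent windows (stride = win * (1 - v))."""
    stride = max(1, int(win * (1 - overlap)))
    xs = list(range(0, max(width - win, 0) + 1, stride))
    ys = list(range(0, max(height - win, 0) + 1, stride))
    # Add a final window flush with the right/bottom edge if uncovered.
    if xs[-1] + win < width:
        xs.append(width - win)
    if ys[-1] + win < height:
        ys.append(height - win)
    for y0 in ys:
        for x0 in xs:
            yield (x0, y0, min(x0 + win, width), min(y0 + win, height))
```

        <p>On a 5280 × 2890 image this yields an 8 × 5 grid of 800-pixel windows whose overlap ensures defects spanning a patch boundary appear whole in at least one window.</p>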
        <p>After the patches are extracted and overlapped, they are resized to a uniform size of W × W
pixels. This standardization is crucial for consistent processing and analysis, ensuring that all
patches are subjected to the same scale regardless of their original dimensions. Finally, the
resized patches undergo patch-level model inference. This step involves analyzing each patch
using a trained model to predict or identify defects. The inference process on each patch is done
independently, allowing for detailed and localized defect detection within the high-resolution
image. The inference is carried out as follows:</p>
        <p>← inf   ( ′ ,  ) ∀ ∈ [1,  ],  ∈ [1,  ]. (4)</p>
        <p>The process described involves the detection of defects within images through a series of
computational steps. Initially, B_ij represents the set of output bounding boxes determined by
the analysis. The function F(⋅) is a resizing module that standardizes the size of each image
patch p′_ij to a uniform width W, which is crucial for consistent processing across different
patches.</p>
        <p>Once the patches are processed, the bounding boxes with confidence scores exceeding the
detection threshold are retained for further analysis. These selected boxes are then
subjected to Non-Maximum Suppression (NMS), a method used to eliminate redundant
bounding boxes. NMS is applied based on the Intersection over Union (IoU) metric, where boxes
overlapping more than a predefined IoU threshold are consolidated. This step ensures that the
final set of bounding boxes is non-redundant, optimizing the clarity and accuracy of the defect
detection process.</p>
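        <p>The confidence filtering and NMS consolidation step can be sketched as follows (a minimal greedy NMS over (x0, y0, x1, y1) boxes; the threshold values are illustrative, not the paper's):</p>

```python
def iou(a, b):
    """Intersection over Union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, conf_thr=0.25, iou_thr=0.5):
    """Greedy NMS: drop boxes below the confidence threshold, then keep
    only the highest-scoring box among detections whose IoU exceeds iou_thr."""
    order = sorted((i for i, s in enumerate(scores) if s >= conf_thr),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in kept):
            kept.append(i)
    return kept
```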
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>Experiments utilized the DTU Images dataset for evaluation. Subsequent sections detail the
outcomes corresponding to the two scenarios outlined in Section 3.6. These results include the
class-wise mean average precision and comparisons among small, medium, and large objects
across both scenarios.</p>
      <sec id="sec-4-1">
        <title>4.1. Evaluation details</title>
        <p>To assess the efficacy of two distinct methodologies, the dataset was divided into three parts:
training, validation, and testing, with proportions of 70%, 15%, and 15%. In the first scenario
(Scenario I), we divided the original high-resolution images into patches measuring 1024 pixels
on each side, as detailed in Table 1.</p>
        <p>This patch size was chosen to facilitate manageable training under resource constraints
while preserving the image's intricate details, as discussed in Section 3.4. In the second scenario
(Scenario II), through detailed experimentation, it was found that a patch width of 800 pixels
(W) yielded the best results, as indicated in Table 1.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Performance metric</title>
        <p>To assess the effectiveness of our models, we evaluated their performance on the test partition
by calculating the mean average precision (mAP). This metric was examined at the commonly
used 0.5 IoU threshold (expressed as mAP@.50) typical in object detection analyses and through
a more detailed measure spanning a range from 0.5 to 0.95 IoU thresholds, in increments of 0.05
(expressed as mAP@.5-.95). In alignment with the criteria set by the COCO challenge, we
analyzed performance specifically for small, medium, and large objects. For this purpose,
mAP@.5-.95s was calculated for small objects (area &lt; 32<sup>2</sup>), mAP@.5-.95m for medium objects
(32<sup>2</sup> &lt; area &lt; 96<sup>2</sup>), and mAP@.5-.95l for large objects (area &gt; 96<sup>2</sup>).</p>
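        <p>The COCO-style size buckets and the averaged metric can be made concrete with a short sketch (helper names are illustrative):</p>

```python
def coco_size_bucket(area):
    """COCO-style object-size buckets used for mAP@.5-.95{s,m,l}."""
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"

# mAP@.5-.95 averages AP over ten IoU thresholds: 0.50, 0.55, ..., 0.95.
iou_thresholds = [round(0.5 + 0.05 * k, 2) for k in range(10)]

def map_50_95(ap_at):
    """Mean of per-threshold APs, given a mapping {iou_threshold: AP}."""
    return sum(ap_at[t] for t in iou_thresholds) / len(iou_thresholds)
```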
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Training configurations</title>
        <p>In our experiments, training was performed with a batch size of 8, using the Stochastic Gradient
Descent (SGD) optimizer. Learning rates were set according to standard practice, with one
value shared by Faster-RCNN and RetinaNet and another for YOLOv5. Baseline models were sourced from the Detectron2 and
Ultralytics repositories. All experiments were conducted on a system equipped with an Intel i5
processor and a single NVIDIA RTX 4060 GPU.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Experimental results</title>
      </sec>
      <sec id="sec-4-5">
        <title>4.4.1. Overall results</title>
        <p>In Table 2, the YOLOv5 model demonstrates stable performance, with a maximum variation of
1.7 points observed in the "large" object size category.</p>
        <p>In contrast, Faster-RCNN displays larger fluctuations: it achieves notable improvements
for "small" and "medium" objects, by 14.2 and 7.2 points, respectively, but also shows a marked
decrease of 14.3 points for "large" objects, indicating its sensitivity to scenario changes and
significant variability across object sizes. RetinaNet likewise improves for "small" and "medium"
objects, with gains of 5.8 and 7.8 points, but experiences a drop of 15.5 points for "large"
objects, mirroring the trend seen in Faster-RCNN. Overall, the comparison of detection
performance across small, medium, and large objects in Scenarios I and II makes clear that the
proposed framework in Scenario II significantly improves the detection of smaller objects with
the Faster-RCNN and RetinaNet models.</p>
        <p>Table 3 delineates the comparative effectiveness of baseline models across Scenarios I and II
using two key metrics.</p>
        <p>In Scenario I, YOLOv5 achieves a mAP@.50 of 81.3, which increases to 85.1 in Scenario II.
Similarly, Faster-RCNN exhibits a rise from 73.2 to 83.4 when moving from Scenario I to
Scenario II. In contrast, RetinaNet records a slight decrease in mAP@.50 from 70.6 to 70.4. For
the mAP@.5-.95 metric, all models show enhancements; YOLOv5 improves from 41.7 to 44.2,
Faster-RCNN from 37.8 to 43.1, and RetinaNet from 32.9 to 37.9 when transitioning from
Scenario I to Scenario II. The results indicate that RetinaNet, along with the other models,
benefits significantly under the optimized conditions of Scenario II, particularly in detecting
small and medium objects within high-resolution imagery captured by drones. This suggests
that Scenario II surpasses the typical configurations in previous studies, enhancing overall
detection capabilities.</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.4.2. Class-wise results</title>
        <p>From Table 4, in the case of YOLOv5 under our proposed framework (Scenario II), there is a
twofold improvement in the performance for the CR class, although decreases are observed in
the MT and DA classes.</p>
        <p>For Faster-RCNN, by contrast, significant enhancements are noted in the ER, DA, and CR classes
with the adoption of our framework. Conversely, RetinaNet shows performance gains in all classes
except for the DA class. It is worth noting that samples in the DA class usually represent minor
defects on the WTB surface, which generally perform well under the slice-aided setup.</p>
        <p>Figure 4 provides a graphical comparison that highlights class-specific performance
differences between Scenario I and Scenario II for the evaluated models.</p>
        <p>Notably, YOLOv5 consistently improves, particularly in the CR class within Scenario II. In
contrast, Faster-RCNN demonstrates enhanced performance across three distinct classes under
Scenario II conditions, indicating increased reliability. Furthermore, RetinaNet exhibits
exceptional results in Scenario II for all classes, which can largely be attributed to its
focal loss function, a loss designed to counter class imbalance within the dataset.</p>
        <p>In addition, Figure 5 presents the precision-recall curve for the DTU test set, highlighting
the effectiveness of our method in detecting surface defects on WTBs.</p>
        <p>The curves in Figure 5 evaluate two key performance metrics: precision, which indicates the
accuracy of the detections, and recall, which measures the method's ability to identify all
relevant defects in the images.</p>
        <p>The curves illustrate the method's performance across various conditions by employing
different Intersection over Union (IoU) thresholds, namely C75 (IoU threshold of 0.75), C65
(IoU threshold of 0.65), C50 (IoU threshold of 0.5), and C30 (IoU threshold of 0.3). These IoU
thresholds are instrumental in assessing the robustness of the method and the trade-offs
between precision and recall at various levels. Such insights are vital for refining and optimizing
defect detection methods to enhance accuracy and efficiency in real-world applications.</p>
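        <p>The two quantities traced by each curve reduce to simple counts of matched detections; a minimal sketch, where a detection counts as a true positive when its IoU with a ground-truth box meets the curve's threshold (helper names are assumptions):</p>

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def match_is_tp(iou_value, iou_thr):
    """True positive at a given curve threshold, e.g. 0.75 for C75, 0.3 for C30."""
    return iou_value >= iou_thr
```

        <p>Lowering the threshold (C30 versus C75) turns loosely localized detections into true positives, which is why the looser curves dominate the stricter ones.</p>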
      </sec>
      <sec id="sec-4-7">
        <title>4.4.3. Visual comparisons</title>
        <p>Close analysis reveals that our proposed framework significantly enhances the ability to
detect defects, particularly in cases where Scenario I might miss or inadequately detect them.
This improvement is notably illustrated in Figure 6, particularly in the second row, the second
column, where the model under Scenario I completely misses a DA defect. Conversely, this
defect is successfully detected in Scenario II, shown in the second row, third column; it remains
unnoticed in a 1024-pixel setting but becomes apparent in an 800-pixel context, as used in
Scenario II. These findings underscore the efficacy of a multi-scale image processing strategy.</p>
        <p>However, it is essential to recognize that both scenarios still encounter challenges, especially
with certain defect classes that are difficult to localize. This issue is highlighted in Figure 6,
where a PO defect at the edge of the image in the second row, third column, poses localization
challenges for the model in Scenario II. This example demonstrates the complex challenges
present in analyzing drone-captured imagery.</p>
      </sec>
      <sec id="sec-4-8">
        <title>4.4.4. Efficiency</title>
        <p>The added computation involved in slice-aided inference naturally leads to longer inference
times when processing a full-size image. In Scenario II, we observed an average inference time
of 0.418 seconds per patch using the YOLOv5 model, equating to approximately 27.6 seconds
for an entire full-size image. However, it is worth noting that the inference duration could be
shortened by selectively processing only specific patches.</p>
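        <p>The per-image figure is consistent with the per-patch time: dividing the reported numbers implies roughly 66 patches per full-size image (the patch count itself is inferred here, not stated above):</p>

```python
per_patch_s = 0.418   # reported average inference time per patch
full_image_s = 27.6   # reported average time per full-size image

# Implied number of patches processed per image.
patches_per_image = full_image_s / per_patch_s
print(round(patches_per_image))  # prints 66
```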
        <p>A comparative analysis of inference speeds between the two scenarios shows that Scenario I
is faster: it operates on pre-cut patches only, whereas Scenario II additionally merges
patch-level predictions back onto the original high-resolution images, which extends its
processing time.</p>
        <p>The complexity of object detection models can be gauged by examining the number of
parameters they incorporate. YOLOv5, known for its simplicity, utilizes approximately 7.2
million parameters. In contrast, RetinaNet features around 32 million parameters, and
Faster-RCNN is even more complex, with about 38 million parameters.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>This study explored two distinct methodologies for evaluating our defect detection technique
tailored for WTB inspections. Scenario I utilized segmented patches from images for both
training and testing. This method proved fast but had the drawback of occasionally missing
defects that span multiple patches, resulting in fragmented detections of a single defect.
Additionally, detecting small objects is challenging due to the high-resolution aerial imagery
and the variable scale of objects caused by the drone's varying distance from the target.</p>
      <p>Common strategies to address this issue in high-resolution images involve randomly
cropping or rescaling images before they are introduced to the model for training and testing.
Nevertheless, these tactics may still result in poor representation of objects during the training
phase. Alternatively, we considered segmenting the images into smaller patches for direct
application in both the training and testing phases.</p>
      <p>On the other hand, Scenario II evaluated our method on raw, high-resolution images. This
approach successfully identified defects overlooked in Scenario I, especially those that were
small or spanned multiple patches. Performance-wise, YOLOv5 demonstrated consistent results
in both scenarios, with slight improvements for medium and large objects in Scenario II. Faster
R-CNN showed substantial enhancements in detecting small and medium objects in Scenario II,
though its efficiency declined for larger objects. Likewise, RetinaNet improved its detection of
small and medium-sized objects in Scenario II but struggled with larger objects.</p>
      <p>The comparative analysis summarized in Table 3 underscores that the proposed method in
Scenario II consistently elevates the performance of YOLOv5, Faster-RCNN, and RetinaNet
across various metrics. Our approach could significantly boost defect detection in practical
applications, especially for smaller objects. This technique is versatile for both on-shore and
offshore operations, requiring only an image of the turbine blade.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>This paper presents a robust framework specifically designed for detecting surface defects on
WTBs using high-resolution imagery. Our proposed slice-aided inference method significantly
improves the detection accuracy of small and medium-sized defects in high-resolution
UAV-captured images. The experimental results show that our framework, particularly under
Scenario II conditions, enhances the performance of deep learning models such as YOLOv5,
Faster-RCNN, and RetinaNet. For instance, YOLOv5 achieved a mAP of 45.2% in Scenario II,
compared to 42.7% in Scenario I. Similarly, Faster-RCNN improved from 38.8% in Scenario I to
44.1% in Scenario II, and RetinaNet showed an increase from 33.9% to 38.9%.</p>
      <p>Despite these significant improvements, the proposed framework has some limitations. One
of the main challenges is the increased computational cost associated with slice-aided inference,
which leads to longer processing times. For example, the average inference time in Scenario II
was 27.6 seconds per full-size image, which is substantially higher than in Scenario I.
Additionally, the method still faces difficulties in detecting certain defect types, such as
paint-off defects at the image edges, which can affect localization accuracy.</p>
      <p>Future research should focus on addressing these limitations by optimizing the inference
process to reduce computational overhead and improving the detection algorithms to better
handle edge cases and complex backgrounds.</p>
    </sec>
  </body>
  <back>
  </back>
</article>