<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Journal of Industrial Information Integration</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Automated quality control in automotive manufacturing: a comparative analysis of YOLOv8, YOLOv9, and YOLOv10 for vehicle damage detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Dolhopolov</string-name>
          <email>dolhopolov@icloud.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetyana Honcharenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Denys Chernyshev</string-name>
          <email>chernyshev.do@knuba.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Panina</string-name>
          <email>panina.ov@knuba.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonina Makhynia</string-name>
          <email>makhynia.aa@knuba.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kyiv National University of Construction and Architecture</institution>
          ,
          <addr-line>31, Air Force Avenue, Kyiv, 03037</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1254</volume>
      <issue>5</issue>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>This study introduces a novel application of state-of-the-art object detection models for automating quality control in automotive manufacturing, presenting the first comprehensive comparative analysis of YOLOv8, YOLOv9, and YOLOv10 architectures for vehicle damage detection. Utilizing a custom-curated dataset of 7,258 images, we employ transfer learning techniques to optimize model performance, a pioneering approach in this domain. Our results demonstrate the unprecedented superiority of YOLOv10 across key metrics, achieving a mean Average Precision (mAP50) of 0.65077 and an F1-score of 0.64934. We uniquely quantify the effectiveness of transfer learning, showing substantial performance gains with pre-trained weights initialization. Notably, we establish YOLOv10's viability for real-time quality control applications despite marginally increased computational requirements, a finding not previously reported. This research contributes novel insights into AI-driven solutions for automotive quality control, advancing the digital transformation of manufacturing processes and paving the way for future industrial AI innovations.</p>
      </abstract>
      <kwd-group>
        <kwd>Computer Vision</kwd>
        <kwd>Automated Quality Control</kwd>
        <kwd>Transfer Learning</kwd>
        <kwd>YOLO</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The integration of computer vision systems in manufacturing processes has become crucial for
automated quality control, especially in the automotive industry. Traditional manual inspection
methods are time-consuming, subjective, and error-prone, necessitating more robust and efficient
quality control mechanisms [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Recent advancements in deep learning, particularly the YOLO (You Only Look Once) family of
models, have shown promise in real-time object detection [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This study explores the latest
iterations – YOLOv8, YOLOv9, and YOLOv10 – for automated vehicle damage detection.
      </p>
      <p>
        Transfer learning has emerged as a powerful technique, allowing pre-trained models to be
fine-tuned for specific tasks with limited domain-specific data [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This approach is particularly valuable
in industrial settings where large, labeled datasets may be unavailable or costly to produce.
      </p>
      <p>
        Previous studies have demonstrated the potential of deep learning in manufacturing quality
control [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
        ]. Our research extends these foundations by:
1. Providing a comprehensive comparison of the latest YOLO models for vehicle damage
detection.
2. Exploring the application of transfer learning to optimize performance with limited data.
3. Evaluating practical implementation challenges in real-world manufacturing settings.
      </p>
      <p>
        The implementation of an effective automated quality control system has far-reaching
implications for the automotive industry, potentially increasing throughput, improving
consistency, and reducing waste [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Our study explores how advanced object detection models and transfer learning can enhance
automated quality control for vehicle damage detection. This research contributes to computer
vision and manufacturing technology, offering insights for industry practitioners adopting
AI-driven quality control solutions.</p>
      <p>
        Our research builds upon these foundations and extends them in several key ways. Firstly, we
provide a comprehensive comparison of the latest YOLO models, building on the work of Redmon
and Farhadi who introduced YOLOv3 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], offering insights into their relative strengths and
weaknesses for vehicle damage detection. YOLOv9, released subsequently, further refined the architecture,
introducing novel features that promised even better performance in complex detection scenarios
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The most recent iteration, YOLOv10, represents the cutting edge in object detection
technology, and its potential for manufacturing applications is yet to be fully explored in the
literature [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        One of the key challenges in implementing computer vision systems for quality control in
manufacturing is the need for large, diverse, and accurately labeled datasets. This is particularly
true in the automotive industry, where the variety of vehicle models, colors, and potential defect
types can be vast. Transfer learning offers a promising solution to this challenge by allowing
models pre-trained on large, general datasets to be fine-tuned for specific tasks with relatively
small amounts of domain-specific data [
        <xref ref-type="bibr" rid="ref11 ref12">11-12</xref>
        ]. This approach has shown success in various fields,
from medical imaging to satellite imagery analysis, and its application to vehicle damage detection
represents a novel contribution of our study [
        <xref ref-type="bibr" rid="ref13 ref14">13-15</xref>
        ].
      </p>
      <p>The effectiveness of transfer learning demonstrated in this study aligns with the comprehensive
survey by Tan et al., who provided an in-depth overview of deep transfer learning techniques and
their applications [16]. Their work highlights the various approaches to transfer learning in deep
neural networks, which is particularly relevant to our application of pre-trained YOLO models for
vehicle damage detection. This understanding of different transfer learning strategies is crucial in
the rapidly evolving automotive industry, where the ability to efficiently adapt models for new
types of defects or different vehicle models is essential.</p>
      <p>This research addresses real-world implementation challenges [17], aligning with Industry 4.0
principles [18-19], and extends beyond defect detection to broader manufacturing process
optimization [20-22].</p>
    </sec>
    <sec id="sec-2">
      <title>2. Main research</title>
      <p>The research process is structured into several key components: dataset preparation and
preprocessing, model architecture and implementation, training methodology, and comprehensive
performance evaluation. Through this systematic approach, we aim to provide insights into the
effectiveness of these advanced computer vision techniques for enhancing quality control processes
in the automotive industry [23-26].</p>
      <sec id="sec-2-1">
        <title>2.1. Dataset and preprocessing</title>
        <p>We utilized the Car Dents Computer Vision Project dataset, comprising 7,258 images (6,855
training, 377 validation, 26 test) of various vehicle damage types. Preprocessing steps included:
1. Resizing images to 640x640 pixels.
2. Data augmentation: 90° rotation, ±15° shear transformation, and ±15% brightness
adjustment.</p>
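        <p>The augmentation settings above can be expressed as a short pipeline. The following is a minimal sketch assuming the Albumentations library (the source does not name the augmentation tooling, and the file name is hypothetical):</p>
        <preformat>
# Illustrative preprocessing pipeline; Albumentations is an assumption,
# not the library named in the paper.
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.Resize(640, 640),                  # resize to 640x640 pixels
        A.RandomRotate90(p=0.5),             # 90-degree rotation
        A.Affine(shear=(-15, 15), p=0.5),    # +/-15 degree shear
        A.RandomBrightnessContrast(
            brightness_limit=0.15,           # +/-15% brightness adjustment
            contrast_limit=0.0,
            p=0.5,
        ),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("car_dent_sample.jpg")    # hypothetical sample image
augmented = transform(image=image, bboxes=[], class_labels=[])
        </preformat>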
        <p>Dataset analysis revealed class imbalance (Dent: 3391, Accident: 1927, Scratch: 2072) and
bounding box characteristics, informing model optimization strategies.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Model architecture and transfer learning</title>
        <p>We implemented YOLOv8, YOLOv9, and YOLOv10, leveraging their respective architectural advancements:
1. YOLOv8. Anchor-free detection, new backbone network.
2. YOLOv9. Efficient neck structure, advanced loss functions.
3. YOLOv10. Dynamic attention mechanism, hybrid backbone (convolutional and transformer layers).</p>
        <p>Transfer learning was applied using pre-trained weights from the COCO dataset, represented as
θ_new = θ_pre + Δθ, (1)
where θ_new are the new model parameters after fine-tuning; θ_pre are the pre-trained parameters; Δθ represents the parameter updates during fine-tuning.</p>
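        <p>As an illustration of this initialization, the sketch below fine-tunes from COCO pre-trained weights with the Ultralytics framework used in this study; the dataset YAML name and the epoch count are illustrative assumptions:</p>
        <preformat>
# Fine-tuning from COCO pre-trained weights (a sketch; dataset path and
# hyperparameters are assumed, not taken from the paper).
from ultralytics import YOLO

# Loading the checkpoint sets the pre-trained parameters (theta_pre).
model = YOLO("yolov8n.pt")

# Training applies the fine-tuning updates (delta-theta), giving
# theta_new = theta_pre + delta-theta as in Eq. (1).
model.train(
    data="car_dents.yaml",   # hypothetical dataset config
    imgsz=640,               # matches the 640x640 preprocessing
    epochs=100,              # assumed value
)
        </preformat>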
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Training methodology</title>
        <p>We employed a consistent training methodology across all three models to ensure a fair
comparison. The key aspects of our training process were:
1. Optimizer. We used the Adam optimizer with a cosine learning rate schedule (see the sketch after this list). The learning rate can be described by the equation:
lr(t) = lr_min + 0.5 · (lr_max − lr_min) · (1 + cos(t · π / T)), (2)
where t is the current epoch; T is the total number of epochs; lr_min is the minimum learning rate; lr_max is the maximum learning rate.
2. Loss Function. We utilized a combination of losses typical for object detection tasks:
L_total = λ1 · L_box + λ2 · L_obj + λ3 · L_cls, (3)
where L_box is the bounding box regression loss; L_obj is the objectness loss; L_cls is the classification loss; λ1, λ2, and λ3 are weighting factors.
3. Early Stopping. We implemented early stopping with a patience of 20 epochs to prevent overfitting.</p>
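        <p>The schedule in Eq. (2) is straightforward to reproduce; below is a minimal sketch with assumed lr_min and lr_max values:</p>
        <preformat>
import math

def cosine_lr(t: int, T: int, lr_min: float, lr_max: float) -> float:
    """Cosine learning rate schedule, Eq. (2)."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(t * math.pi / T))

# Example with assumed bounds: decay from 1e-3 at epoch 0 to 1e-5 at epoch T.
for epoch in (0, 50, 100):
    print(epoch, cosine_lr(epoch, T=100, lr_min=1e-5, lr_max=1e-3))
        </preformat>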
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Evaluation metrics</title>
        <p>Performance assessment utilized: Mean Average Precision (mAP@0.5 and mAP@0.5:0.95), Precision and Recall, F1-Score, Confusion Matrix, and Inference Time.</p>
        <p>The precision and recall can be calculated using the following equations:
Precision = TP / (TP + FP), Recall = TP / (TP + FN), (4)
where TP is True Positives; FP is False Positives; FN is False Negatives.</p>
        <p>The F1-score is then calculated as:
F1 = 2 · Precision · Recall / (Precision + Recall), (5)
where Precision represents the proportion of true positive predictions among all positive predictions made by the model; Recall (also known as sensitivity or true positive rate) indicates the proportion of true positive predictions among all actual positives in the dataset.</p>
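        <p>For completeness, Eqs. (4)-(5) reduce to a few lines of code; the counts in the example call are invented for illustration and are not results from this study:</p>
        <preformat>
def detection_metrics(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall and F1-score from Eqs. (4)-(5)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented counts, for illustration only.
print(detection_metrics(tp=120, fp=30, fn=45))  # (0.8, 0.727..., 0.761...)
        </preformat>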
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Experimental setup</title>
        <p>Experiments were conducted using PyTorch on NVIDIA GeForce RTX 4080 Super GPUs,
implementing models via the Ultralytics YOLO framework. The procedure included model
initialization, training, validation, testing, and performance analysis.</p>
        <p>To ensure reproducibility, a fixed random seed was used across all experiments, allowing fair
comparisons between YOLO versions while controlling for neural network training stochasticity.</p>
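        <p>The paper does not state the seed value or the exact seeding procedure; a typical PyTorch setup consistent with this description might look like the following sketch:</p>
        <preformat>
import os
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix all relevant RNGs for reproducible runs (seed value assumed)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
        </preformat>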
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and analysis</title>
      <p>After conducting our experiments with YOLOv8, YOLOv9, and YOLOv10 on the Car Dents dataset,
we obtained comprehensive results that provide insights into the performance of each model. In
this section, we will present and analyze these results in detail.</p>
      <sec id="sec-3-1">
        <title>3.1. Training performance</title>
        <p>All models showed consistent improvement during training, with YOLOv10 exhibiting the fastest
convergence. YOLOv10 achieved the lowest final box loss (1.1842) and classification loss (0.80735),
significantly outperforming YOLOv8 and YOLOv9. YOLOv10 demonstrated the highest mAP50(B)
throughout training, peaking at 0.65077.</p>
        <p>Models successfully detected various damage types with high confidence (0.3-0.9) and
demonstrated robustness to diverse scenarios. Multiple damages on single vehicles were effectively
identified. Some misclassifications were observed, particularly between dents and scratches.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Model performance comparison</title>
        <p>YOLOv10 consistently outperformed other models across all metrics, with a 5.7% improvement
in F1-score over YOLOv8 and 3.1% over YOLOv9.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Class-wise performance analysis</title>
        <p>To gain deeper insights into model performance across different types of vehicle damage, we
analyzed class-wise metrics. Figure 1 presents the F1-Confidence curves for each class (Accident,
Dent, Scratch) across all three models.</p>
        <p>YOLOv10 achieved the highest F1-scores for all damage types: dents (0.719), scratches (0.689),
and accidents (0.637). Accident detection proved most challenging across all models.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Precision-recall curve analysis</title>
        <p>Figure 2 illustrates the Precision-Recall curves for each model, providing a comprehensive view of
their performance across different confidence thresholds.</p>
        <p>YOLOv10 demonstrated the largest Area Under the Curve (AUC) and maintained higher recall
in high precision regions (&gt;0.8) compared to YOLOv8 and YOLOv9.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Transfer learning effectiveness</title>
        <p>To evaluate the effectiveness of transfer learning, we compared the performance of each model
when trained from scratch versus when initialized with pre-trained weights. Table 2 presents this
comparison:</p>
        <p>These results demonstrate the significant benefits of transfer learning across all models.
Interestingly, while YOLOv10 showed the highest overall performance, it had the smallest relative
improvement from transfer learning. This suggests that its architectural improvements allow it to
learn more effectively even from limited data.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Computational efficiency</title>
        <p>While YOLOv10 demonstrated superior detection performance, it's crucial to consider the
computational requirements for practical implementation. Table 3 compares the model sizes and
average inference times:</p>
        <p>The marginal increase in model size and inference time for YOLOv10 is relatively small
compared to the performance gains, suggesting that it remains a viable option for real-time
applications in manufacturing settings.</p>
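        <p>Inference latencies of the kind compared above can be measured with a simple timing loop. The sketch below uses the Ultralytics prediction API; the checkpoint and image names are assumptions:</p>
        <preformat>
import time

from ultralytics import YOLO

model = YOLO("yolov10n.pt")          # assumed checkpoint name

# Warm-up so GPU initialization does not skew the measurement.
for _ in range(10):
    model.predict("sample_car.jpg", verbose=False)   # hypothetical image

# Average latency over repeated predictions.
n = 100
start = time.perf_counter()
for _ in range(n):
    model.predict("sample_car.jpg", verbose=False)
print(f"avg inference: {(time.perf_counter() - start) / n * 1000:.1f} ms")
        </preformat>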
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>Our comprehensive study on implementing computer vision systems for automated quality control
in automotive manufacturing, focusing on vehicle damage detection, has yielded significant
insights into the capabilities of state-of-the-art object detection models.</p>
      <p>YOLOv10 consistently outperformed its predecessors, achieving a mAP50 of 0.65077 and an
F1-score of 0.64934, representing improvements of 5.7% and 3.1% over YOLOv8 and YOLOv9,
respectively. The application of transfer learning proved highly beneficial, with YOLOv8, YOLOv9,
and YOLOv10 showing mAP50 improvements of 22.5%, 20.4%, and 16.9% respectively when
initialized with pre-trained weights.</p>
      <p>Despite its superior performance, YOLOv10 required only marginally more computational
resources, with a negligible increase in inference time (14.1ms compared to 12.5ms for YOLOv8),
making it viable for real-time applications in manufacturing settings.</p>
      <p>Key implications for the automotive manufacturing industry include:
1. Enhanced Quality Control. Automation of damage detection can reduce human error and
increase consistency.
2. Increased Efficiency. Real-time defect detection enables inspection without production
bottlenecks.
3. Cost Reduction. Minimizing manual inspection and early defect detection can lead to
significant cost savings.
4. Adaptability. Transfer learning enables quick adaptation to new defect types or vehicle
models.
5. Data-Driven Insights. Deployment of these systems can generate valuable data on defect
patterns and trends.</p>
      <p>This study demonstrates the significant potential of advanced object detection models,
particularly YOLOv10, in revolutionizing quality control processes in automotive manufacturing.
The success of transfer learning techniques paves the way for widespread adoption of AI-driven
solutions in industrial quality control, contributing to enhanced product quality and manufacturing
efficiency.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          , L. Zhang,
          <string-name>
            <given-names>R. X.</given-names>
            <surname>Gao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Wu</surname>
          </string-name>
          , “
          <article-title>Deep learning for smart manufacturing: Methods and applications</article-title>
          ,”
          <source>The Journal of Manufacturing Systems</source>
          , vol.
          <volume>48</volume>
          , pp.
          <fpage>144</fpage>
          -
          <lpage>156</lpage>
          ,
          <year>July 2018</year>
          . https://doi.org/10.1016/j.jmsy.2018.01.003.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bochkovskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. Y. M.</given-names>
            <surname>Liao</surname>
          </string-name>
          , “
          <article-title>YOLOv4: Optimal Speed and Accuracy of Object Detection</article-title>
          ,” arXiv preprint,
          <year>April 2020</year>
          . URL: https://arxiv.org/pdf/2004.10934.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Pan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          , “
          <article-title>A Survey on Transfer Learning</article-title>
          ,”
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>10</issue>
          , pp.
          <fpage>1345</fpage>
          -
          <lpage>1359</lpage>
          ,
          <year>October 2010</year>
          . https://doi.org/10.1109/TKDE.2009.191.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Tabernik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Šela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Skvarč</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Skočaj</surname>
          </string-name>
          , “
          <article-title>Segmentation-based deep-learning approach for surface-defect detection</article-title>
          ,”
          <source>Journal of Intelligent Manufacturing</source>
          , vol.
          <volume>31</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>759</fpage>
          -
          <lpage>776</lpage>
          , May
          <year>2019</year>
          . https://doi.org/10.1007/s10845-019-01476-x.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bronin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuchansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Biloshchytskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Zinyuk</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Kyselov</surname>
          </string-name>
          , “
          <article-title>Concept of Digital Competences in Service Training Systems</article-title>
          ,”
          <source>Advances in Intelligent Systems and Computing</source>
          ,
          vol.
          <volume>1192</volume>
          , pp.
          <fpage>379</fpage>
          -
          <lpage>388</lpage>
          ,
          <year>November 2019</year>
          . https://doi.org/10.1007/978-3-030-49932-7_37.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Biloshchytskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuchansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Andrashko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neftissov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vatskel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yedilkhan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Herych</surname>
          </string-name>
          , “
          <article-title>Building a model for choosing a strategy for reducing air pollution based on data predictive analysis</article-title>
          ,”
          <source>Eastern-European Journal of Enterprise Technologies</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>4-117</issue>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2022</year>
          . https://doi.org/10.15587/1729-4061.2022.259323.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Qiao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Snoussi</surname>
          </string-name>
          , “
          <article-title>A fast and robust convolutional neural network-based defect detection model in product quality control</article-title>
          ,”
          <source>The International Journal of Advanced Manufacturing Technology</source>
          , vol.
          <volume>94</volume>
          , pp.
          <fpage>3465</fpage>
          -
          <lpage>3471</lpage>
          ,
          <year>August 2017</year>
          . https://doi.org/10.1007/s00170-017-0882-0.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Redmon</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          , “
          <article-title>YOLOv3: An Incremental Improvement</article-title>
          ,” arXiv preprint,
          <year>April 2018</year>
          . URL: https://arxiv.org/pdf/1804.02767.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Yeh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Liao</surname>
          </string-name>
          , “
          <article-title>YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information</article-title>
          ,” arXiv preprint,
          <year>February 2024</year>
          . URL: https://arxiv.org/pdf/2402.13616.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          , L. Liu,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lin</surname>
          </string-name>
          , J. Han, and G. Ding, “
          <article-title>YOLOv10: Real-Time End-to-End Object Detection</article-title>
          ,” arXiv preprint, May
          <year>2024</year>
          . URL: https://arxiv.org/pdf/2405.14458.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dolhopolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Honcharenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Savenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Balina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bezklubenko</surname>
          </string-name>
          , and T. Liashchenko, “
          <article-title>Construction Site Modeling Objects Using Artificial Intelligence and BIM Technology: A Multi-Stage Approach</article-title>
          ,”
          <source>2023 IEEE International Conference on Smart Information Systems and Technologies (SIST)</source>
          , pp.
          <fpage>174</fpage>
          -
          <lpage>179</lpage>
          , May
          <year>2023</year>
          . https://doi.org/10.1109/SIST58284.2023.10223543.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. R.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Nogues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Mollura</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Summers</surname>
          </string-name>
          , “
          <article-title>Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning</article-title>
          ,”
          <source>IEEE Transactions on Medical Imaging</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>1285</fpage>
          -
          <lpage>1298</lpage>
          , May
          <year>2016</year>
          . https://doi.org/10.1109/TMI.2016.2528162.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>X. X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tuia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Fraundorfer</surname>
          </string-name>
          ,
          <article-title>"Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources,"</article-title>
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>36</lpage>
          ,
          <year>December 2017</year>
          . https://doi.org/10.1109/MGRS.2017.2762307.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dolhopolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Honcharenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Terentyev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Predun</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosynskyi</surname>
          </string-name>
          , “
          <article-title>Information system of multi-stage analysis of the building of object models on a construction site</article-title>
          ,” IOP
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>