<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Masses through Image Processing and Artificial Intelligence</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elognissè Erasme Guérin AGOSSADOU</string-name>
          <email>agossadourin@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mahugnon Géraud AZEHOUN PAZOU</string-name>
          <email>geraud.pazou@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Régis Donald HONTINFINDE</string-name>
          <email>hontinfinde7@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ahmed Dooguy Kora</string-name>
          <email>ahmed.kora@esmt.sn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>EDMI, Cheikh Anta Diop University.</institution>
          <addr-line>Dakar</addr-line>
          ,
          <country country="SN">Senegal</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>The prototype, which incorporates a webcam</institution>
          ,
          <addr-line>Arduino</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Université nationale des sciences</institution>
          ,
          <addr-line>Technologie, Ingénierie et Mathématiques (UNSTIM), POBox 486, SOGBO ALIHO, Abomey</addr-line>
          ,
          <country country="BJ">Benin</country>
        </aff>
      </contrib-group>
      <abstract>
        <p />
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Agriculture faces major challenges in sustainably feeding
a growing global population, making accurate crop yield
estimation essential for informed decision-making by
farmers. While traditional methods such as field surveys can be
helpful, they are often limited by issues of accuracy, cost,
and time efficiency.</p>
      <p>
        Tomato (Solanum lycopersicum) is a crucial vegetable
crop globally, with production reaching 183 million tonnes in 2018 [
        <xref ref-type="bibr" rid="ref6">1</xref>
        ]. Native
to Central and South America, the tomato was introduced to
Europe in the 16th century, quickly gaining popularity for
its delicious, nutrient-rich fruits loaded with vitamins,
minerals, and antioxidants [
        <xref ref-type="bibr" rid="ref7">2</xref>
        ]. Major producers include China,
India, the United States, and Turkey, with significant
cultivation also occurring in African nations such as Nigeria, Egypt,
Morocco, and Algeria, primarily for local consumption [
        <xref ref-type="bibr" rid="ref8">3</xref>
        ].
Tomatoes are generally classified into two main varieties:
determinate, which have limited growth, and
indeterminate, which continue growing throughout their lifecycle.
Whether cultivated in open fields or under protective covers
like greenhouses, tomato farming requires careful
irrigation due to the plant’s deep taproot system. Furthermore,
challenges such as downy mildew and Botrytis
infections necessitate the use of appropriate cultivation
practices and phytosanitary measures to ensure optimal
yields.
      </p>
      <p>
        Several approaches have been investigated in the
literature to address the challenge of fruit weight estimation.
For instance, Yamamoto et al. [
        <xref ref-type="bibr" rid="ref9">4</xref>
        ] developed a method to
accurately count individual tomato fruits from images of
plants grown in a laboratory setting. This method employed
decision trees to analyze pixel color characteristics,
achieving precise pixel-level segmentation. Post-processing was
then applied to group pixels corresponding to fruits.
      </p>
      <p>enet de la Communication de l’ANSALB, June 27–28, 2024, Cotonou, BENIN. ∗Corresponding author. †These authors contributed equally.</p>
      <p>
        Van Daalen et al. [
        <xref ref-type="bibr" rid="ref12">7</xref>
        ] examined the application of
augmented reality (AR) in agriculture, focusing on detecting
tomato ripeness using the 3D scanning capabilities of the
HoloLens [
        <xref ref-type="bibr" rid="ref13">8</xref>
        ]. Their experimental setup, which included
various tomato varieties, highlighted both the opportunities
and challenges of using AR for hands-free tasks like training
and harvesting in greenhouse environments.
      </p>
      <p>
        Similarly, Lee et al. [
        <xref ref-type="bibr" rid="ref1">9</xref>
        ] proposed an artificial
intelligence-based system for tomato detection and mass estimation,
utilizing multi-class detection and instance-wise
segmentation. By analyzing a tomato image dataset with a calibrated
vision system, the study demonstrated a high correlation
between fruit dimensions and mass. Their method achieved
a mean absolute percentage error of 7.09%, showcasing the
effectiveness of computer vision and machine learning for
automating tasks such as yield monitoring and fruit sizing.
      </p>
      <p>
        In another study, Nyalala et al. [
        <xref ref-type="bibr" rid="ref2">10</xref>
        ] developed seven
regression models, including Support Vector Regression (SVR)
[
        <xref ref-type="bibr" rid="ref3">11</xref>
        ] and artificial neural networks (ANNs) [
        <xref ref-type="bibr" rid="ref4">12</xref>
        ] with
different training algorithms. These models effectively estimated
fruit weight and volume, offering significant potential for
improvements in fruit sorting and grading processes.
      </p>
      <p>
        Basak et al. [
        <xref ref-type="bibr" rid="ref5">13</xref>
        ] introduced a non-destructive method
for estimating strawberry fruit weight using machine
learning models. By analyzing 900 samples from three different
strawberry cultivars, they used image processing to
calculate pixel numbers. Linear regression (LR) and non-linear
SVR models were applied, resulting in training and testing
accuracies of 96.3% and 89.6%, respectively.</p>
      <p>This study focuses on applying recent advancements in
computer vision, particularly object detection, and machine
learning algorithms to estimate tomato weight from
real-world images. The subsequent sections describe the
equipment used, the structure and composition of the dataset,
and the methodology employed to generate accurate
quantitative measures such as projected surface area and total
weight for detected fruits. Our findings demonstrate the
effectiveness of this approach. Additionally, we discuss the
challenges faced and propose recommendations for future
research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Material and Methods</title>
      <sec id="sec-2-1">
        <title>2.1. Dataset</title>
        <p>The data used in this study consists of tomato fruit images
collected both online and in the field under real-world
conditions. The dataset includes a total of 180 images obtained
online and 100 images taken in the field, containing a
total of 1143 tomato fruit instances. Table 1 illustrates the
composition of our dataset.</p>
        <p>Images captured in the field helped to collect additional
information such as actual fruit area and actual fruit weight,
which enriches the dataset by providing accurate and
relevant measurements for tomato fruit weight estimation.
Table 2 presents additional insights concerning field-captured
images. Upon analysis of the table, the average fruit weight
is 35.30 g, with a standard deviation of 14.56 g. The
average true area is 2673.48 mm², with a standard deviation of
873.68 mm². Quartile values provide insights into the
distribution of the data. Thus, 25% of the fruits weigh
less than 25.21 g, 50% less than 37.00 g, and
75% less than 43.49 g. For the actual surface
area, 25% of fruits have an area less than 2024.93 mm², 50%
less than 2779.53 mm², and 75% less than 3219.12 mm².</p>
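The quartile analysis above can be reproduced with Python's standard statistics module. A minimal sketch follows; the weights listed are hypothetical stand-ins on the scale of the field measurements, not values from the actual dataset:

```python
import statistics

# Hypothetical fruit weights in grams (placeholders, not the study's field data)
weights = [22.4, 31.0, 25.2, 37.0, 43.5, 48.9, 18.7, 35.3, 41.2, 29.8]

mean_w = statistics.mean(weights)
std_w = statistics.stdev(weights)                # sample standard deviation
q1, q2, q3 = statistics.quantiles(weights, n=4)  # quartile cut points

print(f"mean={mean_w:.2f} g, std={std_w:.2f} g")
print(f"Q1={q1:.2f} g, median={q2:.2f} g, Q3={q3:.2f} g")
```

The same calls applied to the 100 field-captured fruits would yield the mean, standard deviation, and quartiles reported in Table 2.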
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Methods</title>
        <p>To estimate tomato fruit weights, we developed a four-step
approach (see Figure 1).</p>
        <sec id="sec-2-2-1">
          <title>2.2.1. Detection, segmentation and extraction of tomato fruit masks</title>
          <p>To train our segmentation model, we prepared a dataset
of tomato images, labeled in the COCO format. The
dataset consisted of 180 images containing 1043 instances of
tomatoes, sourced from both the internet and field
photography, and annotated using the Roboflow platform.
We employed the Mask R-CNN instance segmentation
model through the Detectron2 framework, selecting the
mask_rcnn_R_50_FPN_3x configuration developed by
Facebook AI Research. This model, pre-trained on the COCO
dataset, combines the Mask R-CNN architecture with a
ResNet-50 backbone and Feature Pyramid Network (FPN)
for high-performance, multi-scale object detection.
To evaluate the projected area of each tomato from images,
a dataset was constructed, including individual images of
tomatoes, their actual weight in grams, the total number
of pixels in the image, the number of pixels corresponding
to the tomato (obtained by semantic segmentation), and the total
area of the image in square meters, obtained by camera
calibration.</p>
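As an illustration of the COCO annotation format mentioned above, the following sketch counts labeled instances in a minimal, hypothetical annotation dictionary (the file names, ids, and polygons are invented for illustration, not taken from the study's dataset):

```python
# Minimal, hypothetical COCO-format annotation structure (all values invented)
coco = {
    "images": [
        {"id": 1, "file_name": "tomato_001.jpg", "width": 640, "height": 480},
        {"id": 2, "file_name": "tomato_002.jpg", "width": 640, "height": 480},
    ],
    "categories": [{"id": 1, "name": "tomato"}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 1,
         "segmentation": [[120, 80, 200, 80, 200, 160, 120, 160]]},
        {"id": 11, "image_id": 1, "category_id": 1,
         "segmentation": [[300, 90, 360, 90, 360, 150, 300, 150]]},
        {"id": 12, "image_id": 2, "category_id": 1,
         "segmentation": [[50, 40, 110, 40, 110, 100, 50, 100]]},
    ],
}

def count_instances(coco_dict, category_name):
    """Count annotated instances of one category across all images."""
    cat_ids = {c["id"] for c in coco_dict["categories"] if c["name"] == category_name}
    return sum(1 for a in coco_dict["annotations"] if a["category_id"] in cat_ids)

print(count_instances(coco, "tomato"))  # 3 instances in this toy example
```

A sanity check of this kind (instances per category, per image) is a common first step before feeding COCO annotations to a Detectron2 training run.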
          <p>The estimation of the projected area took place in two
steps: first, the segmentation mask allowed us to calculate
the area in pixels occupied by the tomato in the image.
Then, a camera calibration converted this pixel area into
an actual metric area, using a coin as a reference object.
By photographing the tomatoes under the same conditions
as the reference coin, the resulting conversion factor was
used to convert the pixel area of each fruit into a measure
of its actual projected area in metric units. This method
uses a rule of three, where the actual surface area of the
tomato (S) is estimated based on the number of pixels
corresponding to the tomato in the image (N), using
the conversion factor established during calibration with the
reference coin of known real area S_ref and pixel area N_ref:</p>
          <p>S = N × (S_ref / N_ref) (1)</p>
          <p>With this method, we were able to estimate the real
surface area of each tomato in physical space from
segmentation in image space, thanks to precise calibration using a
reference object.</p>
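The calibration-based conversion described above can be sketched in a few lines; the coin dimensions and pixel counts below are hypothetical examples, not the study's actual calibration values:

```python
import math

def calibration_factor(ref_area_mm2: float, ref_pixels: int) -> float:
    """mm² per pixel, from a reference object photographed at the same distance."""
    return ref_area_mm2 / ref_pixels

def projected_area_mm2(mask_pixels: int, factor: float) -> float:
    """Rule of three: real area = pixel count x (mm² per pixel)."""
    return mask_pixels * factor

# Hypothetical numbers: a coin of 20 mm diameter covering 5000 pixels
coin_area = math.pi * (20 / 2) ** 2      # ≈ 314.16 mm²
k = calibration_factor(coin_area, 5000)  # ≈ 0.0628 mm² per pixel
print(projected_area_mm2(42_000, k))     # ≈ 2638.9 mm² for a 42 000-pixel mask
```

Because the factor is a simple ratio, the method only holds if the fruit and the reference coin are photographed at the same distance and focal length, as the text specifies.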
        </sec>
        <sec id="sec-2-2-2">
          <title>2.2.2. Tomato Mass Estimation</title>
          <p>To estimate the weight of the tomatoes based on their
projected surface area, we tested several regression models,
including Simple Linear Regression (SLR), Multiple Linear
Regression (MLR), and Partial Least Squares Regression
(PLSR). These models aimed to establish a mathematical
relationship between the surface area (independent
variable) and the weight (dependent variable) of the tomatoes.
The performance of each model was evaluated on a
validation set consisting of 20% of the total dataset, collected
under real-world conditions. Standard metrics, such as Root
Mean Square Error (RMSE) and the Coefficient of
Determination (R²), were employed to assess model accuracy.
We also applied 10-fold cross-validation to each model to
reduce the likelihood of overfitting.</p>
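The evaluation metrics named above (RMSE and R²) can be computed for a simple linear regression with a short pure-Python sketch; the (area, weight) pairs below are hypothetical values on the scale of Table 2, not the study's data:

```python
import math

def fit_slr(xs, ys):
    """Ordinary least-squares fit y = a*x + b (simple linear regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def rmse(ys, preds):
    """Root Mean Square Error between observed and predicted values."""
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys))

def r2(ys, preds):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical (projected area mm², weight g) pairs for illustration only
areas   = [2024.9, 2400.0, 2779.5, 3000.0, 3219.1]
weights = [25.2, 31.0, 37.0, 40.1, 43.5]
a, b = fit_slr(areas, weights)
preds = [a * x + b for x in areas]
print(f"RMSE={rmse(weights, preds):.2f} g, R²={r2(weights, preds):.3f}")
```

The same metric functions apply unchanged to predictions from MLR or PLSR models, which is what makes RMSE and R² convenient for the cross-model comparison described above.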
          <p>Figure 1 depicts the summary of the methodology adopted
in this study.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and Discussion</title>
      <sec id="sec-3-1">
        <title>3.1. Results</title>
        <p>to enhance performance.</p>
        <p>The projected surface area of each fruit was derived from
the segmented mask by calculating the pixel area, then
converting it to real-world units using camera calibration
information as defined in Equation 1. This method achieved a
precision of approximately 95%.</p>
        <p>For tomato weight estimation, a subset of the dataset
containing real-world images was used, which included precise
data on both the actual weight of each tomato and their
projected surface area. A mathematical relationship between
the weight and projected area was established through the
evaluation of several regression methods. The algorithms
tested included Least Squares Regression (LSR), Multiple
Linear Regression (MLR), and Support Vector Machines (SVM),
and their performance was compared using cross-validation
and Mean Square Error (MSE) as the evaluation metric.</p>
        <p>Table 4 highlights the performance metrics of the tested
models.</p>
        <p>Among the evaluated models, Lasso Regression achieved
the best performance, with an MAE of 5.776 and an MSE of
62.99.</p>
        <p>The corresponding model equation is:</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Discussion</title>
        <p>
          The study employed a multi-step methodology to estimate
tomato fruit weights from images. First, a Mask R-CNN
model, using the mask_rcnn_R_50_FPN_3x configuration,
was trained on a dataset of 180 images containing 1043
tomato instances. After detection and segmentation, the
projected surface area of each tomato was estimated
using a calibrated conversion from pixel area to metric units,
achieving approximately 95% accuracy. For weight
estimation, several regression models were evaluated on a subset
of real-world images with known weights and projected
areas. Among the regression models evaluated, the Lasso
Regression algorithm demonstrated superior performance
in estimating tomato weights. This model achieved a Mean
Absolute Error (MAE) of 5.776 grams and a Mean Squared
Error (MSE) of 62.99 g². Our model outperformed the
approach described by Lee et al. [
          <xref ref-type="bibr" rid="ref1">9</xref>
          ], which reported a
mean absolute percentage error of 7.09% for a similar tomato
weight estimation task.
        </p>
        <p>When applied to the test dataset, this model achieved
a relative error of 7.09% in estimating the total weight of
tomatoes. These results demonstrate the potential of this
combined approach for automated tomato yield estimation,
although the ideal conditions of the study (fully visible fruits)
suggest that further research is needed to address real-world
challenges such as occlusion.</p>
        <p>While this study yielded promising results, it’s
important to acknowledge its primary limitation: the experiments
were conducted under idealized conditions that do not fully
represent real-world agricultural environments. All
tomatoes in the study were fully visible and unobstructed, which
rarely occurs in actual fields where fruits are often partially
hidden by leaves, branches, or other fruits. This idealization
may lead to overly optimistic performance estimates.</p>
        <p>To bridge this gap and enhance the model’s practical
applicability, future research will focus on developing
robust occlusion handling techniques, such as implementing
advanced image processing algorithms for reconstructing
partially obscured fruits or using ellipse fitting methods to
estimate the full shape of partially visible tomatoes.</p>
        <p>Additionally, creating more representative datasets that
reflect the challenging conditions found in real agricultural
settings, including various levels of occlusion and diverse
growth stages, will be crucial. By addressing these
limitations and training on more diverse and challenging datasets,
future iterations of this system could significantly improve
in accuracy and robustness, making it a more reliable tool
for automated agricultural yield estimation in real-world
scenarios.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>This study successfully introduced an innovative approach
for accurately assessing tomato crop yields through the
use of advanced image processing, computer vision, and
artificial intelligence techniques. The results align closely
with the objectives of estimating both the quantity and total
weight of fruits, highlighting the practical benefits of this
methodology for farmers.</p>
      <p>Looking ahead, future enhancements will focus on
refining the approach by integrating multispectral imaging
to improve data acquisition. Additionally, algorithmic
advancements, including image generation and ellipse fitting
techniques, will be employed to tackle challenges related to
occlusion. These developments will enhance the model’s
scalability and robustness, facilitating large-scale
deployment in real-world agricultural settings. The anticipated
implementation of this approach in automated systems that
utilize drones and ground-based robots presents exciting
opportunities for digital agriculture, paving the way for
precise, efficient, and automated yield estimation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[9] J.-S. Lee, H. Nazki, J. Baek, Y. Hong, M. hun Lee, <article-title>Artificial intelligence approach for tomato detection and mass estimation in precision agriculture</article-title>, <source>Sustainability</source> (<year>2020</year>). URL: https://api.semanticscholar.org/CorpusID:228852288.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[10] I. Nyalala, C. Okinda, Q. Chao, P. Mecha, T. Korohou, Z. Yi, S. Nyalala, Z. Jiayu, L. Chao, C. Kunjie, <article-title>Weight and volume estimation of single and occluded tomatoes using machine vision</article-title>, <source>International Journal of Food Properties</source> 24 (<year>2021</year>) 818-832. doi:10.1080/10942912.2021.1933024.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[11] F. Zhang, L. J. O'Donnell, Chapter 7 - <article-title>Support vector regression</article-title>, in: A. Mechelli, S. Vieira (Eds.), <source>Machine Learning</source>, Academic Press, <year>2020</year>, pp. 123-140. doi:10.1016/B978-0-12-815739-8.00007-9.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[12] K. O'Shea, R. Nash, <article-title>An introduction to convolutional neural networks</article-title>, <year>2015</year>. URL: https://arxiv.org/abs/1511.08458. arXiv:1511.08458.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[13] J. K. Basak, B. Paudel, N. E. Kim, N. C. Deb, B. G. Kaushalya Madhavi, H. T. Kim, <article-title>Non-destructive estimation of fruit weight of strawberry using machine learning models</article-title>, <source>Agronomy</source> 12 (<year>2022</year>). URL: https://www.mdpi.com/2073-4395/12/10/2487. doi:10.3390/agronomy12102487.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[1] Food and Agriculture Organization, Agricultural production statistics, n.d. Retrieved from https://www.fao.org/3/cc3751en/cc3751en.pdf.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[2] M. Dorais, D. Ehret, A. Papadopoulos, <article-title>Tomato (Solanum lycopersicum) health components: From the seed to the consumer</article-title>, <source>Phytochemistry Reviews</source> 7 (<year>2008</year>) 231-250. doi:10.1007/s11101-007-9085-x.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <article-title>[3] WordAtlas, The world's leading tomato producing countries</article-title>
          , n.d. Retrieved from https://www.worldatlas.com/articles/ which
          <article-title>-are-the-world-s-leading-tomato-producing-countries</article-title>
          . html.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Yamamoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yoshioka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ninomiya</surname>
          </string-name>
          ,
          <article-title>On plant detection of intact tomato fruits using image analysis and machine learning methods</article-title>
          ,
          <source>Sensors</source>
          <volume>14</volume>
          (
          <year>2014</year>
          )
          <fpage>12191</fpage>
          -
          <lpage>12206</lpage>
          . URL: https://www.mdpi.com/ 1424-8220/14/7/12191. doi:
          <volume>10</volume>
          .3390/s140712191.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [5]
          <string-name>
            <surname>M. I. Sari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fajar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gunawan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Handayani</surname>
          </string-name>
          ,
          <article-title>The use of image processing and sensor in tomato sorting machine by color, size, and weight</article-title>
          , JOIV :
          <source>International Journal on Informatics Visualization</source>
          (
          <year>2022</year>
          ). URL: https: //api.semanticscholar.org/CorpusID:250542375.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Opencv</surname>
          </string-name>
          :
          <article-title>Open source computer vision library</article-title>
          , https: //opencv.org/, n.d. Accessed:
          <fpage>2024</fpage>
          -10-02.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [7]
          <string-name>
            <surname>T. van Daalen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Peller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Balendonck</surname>
          </string-name>
          ,
          <article-title>Determining fresh tomato weight using depth images from an ar headset</article-title>
          ,
          <source>IFAC-PapersOnLine</source>
          <volume>55</volume>
          (
          <year>2022</year>
          )
          <fpage>119</fpage>
          -
          <lpage>123</lpage>
          . URL: https://www.sciencedirect.com/ science/article/pii/S2405896322027586. doi:
          <volume>10</volume>
          .1016/ j.ifacol.
          <year>2022</year>
          .
          <volume>11</volume>
          .125.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Microsoft</surname>
            <given-names>hololens</given-names>
          </string-name>
          , https://www. microsoft.com/fr-fr/hololens?msockid= 1255574f41cb6082275f4248408c611d, n.d. Accessed:
          <fpage>2024</fpage>
          -10-02.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>