<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Conference on Digital Technologies in Education, Science and
Industry, December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Flower Detection and Counting Using CNN for Thinning Decisions in Apple Trees</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nikolay Kiktev</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexey Kutyrev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Federal Scientific Agroengineering Center VIM, Department of Technologies and Machines for Horticulture</institution>
          ,
          <addr-line>Viticulture and Nursery, Moscow, 109428</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University of Life and Environmental Sciences of Ukraine, Department of Automation and Robotic Systems</institution>
          ,
          <addr-line>Kyiv, 03041</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Taras Shevchenko National University of Kyiv, Department of Intelligent Technologies</institution>
          ,
          <addr-line>Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>6</fpage>
      <lpage>07</lpage>
      <abstract>
<p>The article presents a system for monitoring apple trees in orchards during the flowering period. To control the technological operation of thinning trees during the flowering period, monitoring of this operation is necessary to select the most effective method and assess the quality of its implementation. The purpose of the study is to develop a method for monitoring apple blossoms on the tree crown based on machine learning algorithms to control the quality of the technological operation of thinning apple blossoms. A deep-learning neural network has been developed to recognize apple flowers and buds in the received camera frames. The analysis of the results indicated that the YOLOv8 model categorized the "rosebud" class with an mAP (mean Average Precision) metric of 0.448, and the "flowering" class with an mAP metric of 0.691.</p>
      </abstract>
      <kwd-group>
<kwd>Monitoring</kwd>
        <kwd>deep learning</kwd>
        <kwd>apple trees</kwd>
        <kwd>flowering period</kwd>
        <kwd>neural network</kwd>
        <kwd>classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The process of flower bud formation involves careful planning of measures to thin out flowers
and stimulate the formation of ovaries [1]. Many years of research have proven that flower thinning
is an important stage in the cultivation of intensive apple orchards: it balances the number and
size of fruits and ensures a sufficient number of flower buds for the next year without damaging
the current year's harvest. Timely flower thinning avoids excessive fruiting and the subsequent
depletion of the tree's carbohydrate reserves, which negatively affects the overall health of the
tree [2]. Existing methods, including manual, chemical and mechanical ones, show high efficiency
in thinning operations. However, the main difficulty in their use remains the significant
unpredictability of the final result. Typically, flower thinning is based on a visual assessment of
flower load by expert agronomists, which may be inaccurate and error-prone. In work [3], American
researchers W. Yuan et al. developed an algorithm for mapping apple flower density using point
clouds reconstructed by photogrammetry from RGB images obtained from unmanned aerial vehicles
(UAVs).</p>
      <p>In intensive gardening, there are many methods for calculating the optimal number of fruits
and flowers per tree. The number of fruits on the tree (in pieces) after it enters the fruiting period
should be equal to the distance between the trees in the row (in centimeters), with about 30
leaves per fruit. There should be no more than 4-6 fruits per 1 centimeter of shoot circumference
four weeks after flowering, while the distance between the fruits on the branch should be about
15-20 centimeters (5-6 kg of fruits per 1 cm of the cross-section of the tree trunk). The optimal
norm of crop load (number of fruits per tree), depending on the age of the plantings in intensive
gardens, is 5-6 fruits in the first year, 20-30 in the second year, 40-50 in the third year, 60-80
in the fourth year, and 80-100 or more in the fifth year. In this case, the
number of fruits ranges from 4% to 8% of the number of flowers on the tree. In this regard, it is
necessary to monitor the thinning operation to select the optimal method and assess the quality
of the technological operation.</p>
      <p>The purpose of the study is to develop a method for monitoring apple blossoms on the tree
crown based on machine learning algorithms to control the quality of the technological operation
of thinning apple blossoms.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and methods</title>
      <sec id="sec-2-1">
        <title>2.1. Selecting a neural network architecture</title>
        <p>Most developed computer vision systems for apple flower and fruit recognition focus on
cluster detection and segmentation. Before the widespread use of deep learning methods, cluster
segmentation was performed by automatically extracting color and texture features from RGB
images using techniques such as intensity thresholding and morphological image processing
(Krikeb et al., 2017 [4]; Aggelopoulou et al., 2011 [5]; Hocevar et al., 2014 [6]). An analysis of these
studies has shown that cluster segmentation algorithms have low performance due to variability
in lighting conditions, biological variability of crowns, mutual overlap of objects (for example,
branches, leaves, flowers and ovaries) and changes in the shape, size, color and other parameters
of objects.</p>
        <p>The use of machine learning methods and convolutional neural networks (CNN) to detect and
segment apple flowers and fruits has significantly improved the performance of their
classification and recognition. For semantic segmentation, high probability cluster regions are
first extracted using various CNN-based architectures such as Inception (GoogLeNet), ResNet
(Residual Networks), VGG (Visual Geometry Group) and DenseNet, followed by classification or
region detection methods such as support vector machines (SVM), region growing refinement
(RGR) and shape-constrained level generation (Dias, P. A. et al., 2018) [7,8]. In addition
to semantic segmentation, methods for detecting and segmenting individual flower clusters have
been studied using widely accepted approaches such as Faster R-CNN (Farjon et al., 2019) [9],
Mask R-CNN (Bhattarai et al., 2020) [10], channel-pruning-based YOLO v4 (Wu et al., 2020) [11]
and Mask Scoring R-CNN (Tian et al., 2020) [12]. An analysis of published research has shown
that the YOLO (You Only Look Once) algorithm provides high speed and accuracy when detecting
objects in images and videos in real time. The YOLO algorithm splits an image into a grid of
cells and uses multiple anchors (predefined boxes) to predict the objects in each cell, which
makes it possible to detect several objects at once and determine the coordinates, class and
confidence of each detected object. The method is actively used in computer vision for tasks
such as road object detection for autonomous cars, medical image analysis, video surveillance
and other applications where accurate and fast object detection is required. The YOLO model has many different architectures and
versions, each of which improves detection performance and accuracy.</p>
        <p>To develop models for apple blossom recognition and classification, the modern YOLOv8
model was used, which uses new features to improve performance and optimize training
hyperparameters. Compared to previous versions (YOLOv2-YOLOv7), the YOLOv8 model offers
higher speed and accuracy.</p>
        <p>YOLOv8 provides support for various artificial intelligence tasks, including detection,
segmentation, pose estimation, tracking, and classification. Thanks to this versatility, users can
leverage the capabilities of YOLOv8 in various application domains, including science and
technology [13]. The visualization of the YOLOv8 network architecture in relation to the problem
of recognizing flowers and buds of an apple tree is presented in Figure 1. The YOLOv8 model can
be divided into two main components: the backbone and the head. The backbone is a modified
version of the CSPDarknet53 architecture, comprising 53 convolutional layers and employing
partial inter-stage connections to enhance information flow between layers. The YOLOv8 head
includes multiple convolutional layers followed by fully connected layers. These layers are
responsible for predicting bounding boxes, objectness scores, and class probabilities for objects
detected in an image.</p>
        <p>A key feature of YOLOv8 is the incorporation of a self-control mechanism in the head network.
This mechanism allows the model to focus on different areas of an image and assign importance
to different features based on their relevance to the task at hand. Another important
characteristic of YOLOv8 is its ability to perform multi-scale object detection. The model utilizes
a feature pyramid network for detecting objects of different sizes and scales in an image,
consisting of multiple layers capable of detecting objects at various scales, enabling the model to
efficiently detect small objects such as flowers.</p>
        <p>YOLOv8 is an anchor-free model, meaning it predicts the center of an object directly rather
than the offset from a known anchor box. This approach reduces the number of predicted
bounding boxes, thereby accelerating the Non-Maximum Suppression (NMS) process, a complex
post-processing step that filters candidate detections after inference. In comparison to previous
versions, YOLOv8 incorporates mosaic augmentation, involving the fusion of four images. This
allows the model to learn objects in new spatial contexts with partial overlap and in the presence
of surrounding pixels.</p>
        <p>The YOLOv8 model delivers state-of-the-art performance, striking a balance between accuracy
and speed, making it well-suited for real-time object detection tasks.</p>
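        <p>By way of illustration, the following minimal Python sketch shows how a YOLOv8 detector can be run on a single orchard image with the Ultralytics API. The weights file, image name and confidence value are illustrative assumptions, not the exact configuration used in this study.</p>
        <preformat>
# Minimal sketch: running a YOLOv8 detector on one orchard image.
# "yolov8n.pt", "apple_tree.jpg" and conf=0.5 are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # COCO-pretrained weights; a custom model loads the same way
results = model.predict("apple_tree.jpg", conf=0.5)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]   # class label, e.g. "rosebud"/"flowering" after fine-tuning
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding-box corners in pixels
        print(f"{cls_name}: conf={float(box.conf):.2f}, box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
</preformat>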
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Preparing data for training a neural network</title>
        <p>To collect a set of data, images of flowers and apple fruits, to train the neural network model,
a Sony Alpha ILCE-7M3 digital camera was used using a Sony FE 24-240mm lens. Shooting
parameters included an aperture of f/7.1, a lens focal length of 24mm, and an image resolution of
4000x2672 pixels. The total number of images was 3000 pieces. These images were evenly
divided into two categories, each containing 1500 images, respectively representing flowers in
the pink bud stage and flowers in the active bloom stage. The data set was collected in the
research and production testing department of the Federal State Budgetary Institution Federal
Scientific Center for Horticulture (Moscow region, Mikhnevo settlement).</p>
        <p>To train the convolutional neural network model, a data set (images) was prepared. The
RoboFlow web service was used to perform the image tagging process. The following classes
were specified for the classification and recognition of objects: the “rosebud” class (flowers in the
pink bud stage) and the “flowering” class (flowers in the blooming stage) (Fig. 2).</p>
        <p>Marking of objects is carried out using rectangles - this is the process of outlining the objects
of interest to us in the image with rectangles indicating the corresponding class. A JSON markup
file is used to store class information, storing attributes such as "class" (object class), "x_center"
and "y_center" (center coordinates), and "width" and "height" (width and object height). To
expand the data set and improve the performance of the model, the augmentation method was
used. The tools used are Flip, Rotation ±15º, Hue ±25º, Noise 5%, Blur 2.5 px, Brightness ±25%.
Flip and Rotation help deal with the problem of non-invariance under rotations and reflections.
Changing Hue helps the model better generalize color information. Adding noise and blur helps
the model become more robust to artifacts in the data (Figure 3).</p>
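        <p>As a sketch of this step, the snippet below first shows what one annotation record of the kind described might look like, and then assembles a comparable augmentation pipeline with the Albumentations library. The study used the RoboFlow service, so the library choice, probabilities and record values here are assumptions that only mirror the parameters listed above.</p>
        <preformat>
# One annotation record of the kind described (values are illustrative;
# coordinates are normalized to the image size, as in the YOLO convention):
# {"class": "flowering", "x_center": 0.512, "y_center": 0.347, "width": 0.082, "height": 0.091}

# Comparable augmentation pipeline sketched with Albumentations (an assumption;
# the study used RoboFlow's built-in tools with the parameters listed above).
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),                                   # Flip
        A.Rotate(limit=15, p=0.5),                                 # Rotation +/-15 deg
        A.HueSaturationValue(hue_shift_limit=25, p=0.5),           # Hue +/-25
        A.GaussNoise(p=0.5),                                       # Noise
        A.Blur(blur_limit=3, p=0.5),                               # Blur (~2.5 px)
        A.RandomBrightnessContrast(brightness_limit=0.25, p=0.5),  # Brightness +/-25%
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
# Usage: augmented = transform(image=image, bboxes=bboxes, class_labels=labels)
</preformat>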
        <p>As a result, the dataset was expanded to 6000 images. To objectively evaluate the performance
of the model, the data set is divided into training, test and validation sets in the ratio of 70% (4200
images), 30% (1200 images) and 10% (600 images), respectively.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Convolutional neural network training</title>
        <p>
          To train a convolutional neural network, the Transfer Learning method is used, in which a
pretrained model is used to solve a new problem. A configuration file has been created that contains
information about the paths to the training and test images, as well as the paths to the markup
files. In the model configuration file, learning parameters are defined, such as the number of
epochs (Epoch) of training, data batch size (batch size), learning rate (Learning Rate). These
parameters play a key role in shaping the model's training process. The optimal values of these
parameters are determined empirically as a result of testing and tuning through a series of
experiments. This approach involves an iterative process in which multiple training runs are
conducted with varying values for each parameter. We used pre-trained weights on the COCO
(Common Objects in Context) data set, which allows us to speed up the process of training the
neural network on its own data. The COCO dataset is a rich set of images with labels containing a
variety of objects in different scenes. The training process included minimizing the cumulative
loss function, as well as optimizing model weights using gradient descent and weighting
coefficients. To update the weighting coefficients during model training using gradient descent,
formula (1) is used:
ωnew = ωold − α∇L(ωold)
(1)
where ωnew – new weighting factor,
ωold – old weighting factor,
α – learning rate,
∇L(ωold) – the gradient of the loss function L with respect to ωold, a vector that points in the
direction of the fastest increase of the loss.
        </p>
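        <p>A hedged sketch of such a transfer-learning run with the Ultralytics API is given below; the dataset configuration file name is hypothetical, while the hyperparameter values follow those reported in the Conclusions.</p>
        <preformat>
# Sketch of transfer learning from COCO-pretrained weights with Ultralytics;
# "apple_flowers.yaml" is a hypothetical dataset config (paths + class names).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")         # start from COCO-pretrained weights
model.train(
    data="apple_flowers.yaml",     # train/val image paths and the two classes
    epochs=130,                    # number of training epochs
    batch=16,                      # mini-batch size
    lr0=0.01,                      # initial learning rate
)
</preformat>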
<p>The Box Loss formula includes weights to balance the importance of different loss components
in model training. After completion of training, the performance of the models was assessed using
a test sample (data not used in the training process). To quantitatively assess the performance
of the developed models in recognizing and classifying objects (apple flowers and fruits), the
well-known metrics Precision, Recall, F1-score and mean Average Precision (mAP), found using
formulas (3)-(6), were used.</p>
<p>
          The learning rate (α) determines the step size in the direction of the gradient. This process is
repeated over several iterations (epochs) until convergence or a specified stopping criterion for
model training is achieved. As a result of this process, the trained convolutional neural network
model is gradually tuned to the training data, improving its ability to detect objects. To assess the
accuracy of predicting the coordinates of the bounding box for objects in the image, the
YOLOv8 model training algorithm uses the Box Loss function, found using formula (2):
Box Loss = λcoord · Σ_{i=0..S²} Σ_{j=0..B} 1obj_ij · [(xi − x̂i)² + (yi − ŷi)²]
(2)
where S – grid size,
B – number of anchor boxes,
λcoord – weighting coefficient for the coordinate loss,
1obj_ij – indicator showing whether an object is assigned to anchor box j in cell i,
xi, yi – coordinates of the predicted box center,
x̂i, ŷi – the corresponding coordinates of the true box.
        </p>
        <p>
Precision = (1/C) · Σ_{i=1..C} TPi / (TPi + FPi)
(3)
Recall = (1/C) · Σ_{i=1..C} TPi / (TPi + FNi)
(4)
F1 (score) = (1/C) · Σ_{i=1..C} 2 · Precisioni · Recalli / (Precisioni + Recalli)
(5)
Average accuracy (mAP) = (1/C) · Σ_{i=1..C} APi
(6)
where C – total number of classes,
TPi – number of correctly classified positive examples for class i,
FPi – number of false positive examples for class i,
FNi – number of false negative examples for class i,
APi – area under the precision-recall curve for class i.
        </p>
        <p>
          In these formulas, TPi, FPi and FNi refer to the true positive, false positive and false negative
examples for class i, respectively. The area under the precision-recall curve, APi, for class i is
calculated using the Precision-Recall curve at different classification thresholds. The
classification threshold determines the probability value above which an object is considered to
belong to one of the given classes (“rosebud”, “flowering”). The Confidence indicator is defined as
the maximum probability of an object belonging to one of the classes according to formula (7):
Confidence = maxi P(ci | x)
(7)
where P(ci | x) – the model's predicted probability that object x belongs to class ci,
maxi – the operation of selecting the maximum value among all class probabilities.
        </p>
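        <p>As an illustration of formulas (3)-(5), the short sketch below macro-averages per-class precision, recall and F1 from hypothetical TP/FP/FN counts; computing APi, and hence mAP from formula (6), additionally requires the full precision-recall curve.</p>
        <preformat>
# Macro-averaged metrics (3)-(5) from per-class counts; all numbers are
# made-up illustrations, not results from this study.
counts = {
    "rosebud":   {"TP": 45, "FP": 30, "FN": 40},
    "flowering": {"TP": 70, "FP": 20, "FN": 15},
}

precisions, recalls, f1s = [], [], []
for c, n in counts.items():
    p = n["TP"] / (n["TP"] + n["FP"])   # per-class precision, formula (3)
    r = n["TP"] / (n["TP"] + n["FN"])   # per-class recall, formula (4)
    precisions.append(p)
    recalls.append(r)
    f1s.append(2 * p * r / (p + r))     # per-class F1, formula (5)

C = len(counts)  # total number of classes
print(f"Precision={sum(precisions)/C:.3f}, Recall={sum(recalls)/C:.3f}, F1={sum(f1s)/C:.3f}")
</preformat>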
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Hardware of the training system</title>
        <p>To conduct the research, we used a computer system equipped with an Intel Core i9-10900X
processor with twenty virtual threads. The model was trained on two NVIDIA GeForce RTX 2080 Ti
GPUs. The motherboard used is a GIGABYTE X299 UD4 Pro. A 1 TB Intel
PCI-E SSD was used for data storage. The system's RAM capacity was 32 GB using Kingston DDR4
DIMMs.</p>
      </sec>
    </sec>
    <sec id="sec-3">
<title>3. Research results</title>
<p>The Box Loss-Epoch graph, constructed during the model training process, made it possible to
determine the optimal number of training epochs at which the model achieves the best quality in
predicting the coordinates of object bounding boxes; this number amounted to 124 (Fig. 4).</p>
      <p>To assess changes in precision and recall indicators depending on the epoch during the model
training process, Precision-Epoch and Recall-Epoch curves were constructed. To assess the
change in the average accuracy of the model depending on the number of epochs during the
model training process, the mAP-Epoch curve was constructed. Analysis of the Precision-Epoch,
Recall-Epoch and mAP-Epoch curves made it possible to determine the number of epochs at which
the model achieves the best combination of precision and recall, and to select the
hyperparameters that give the best model performance and maximum accuracy in detecting the
“rosebud” and “flowering” classes; this number amounted to 94. The total training time for the
YOLOv8 model was 8 hours 35 minutes 25 seconds.</p>
<p>Examples of recognition of the “rosebud” and “flowering” classes in images of the test
dataset using the trained model, with the detected objects highlighted in the frame, are presented
in Figure 5.</p>
      <p>To assess the obtained values of accuracy and completeness when changing the threshold for
making a decision in the classification problem being solved, the F1-score – Confidence, Precision
– Confidence, Precision – Recall and Recall-Confidence curves were constructed (Fig. 6).</p>
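      <p>The way an optimal threshold can be read off such a curve is sketched below; the precision and recall arrays are synthetic stand-ins for the evaluation output behind Fig. 6, not measured values.</p>
      <preformat>
# Reading an optimal decision threshold off an F1-Confidence curve.
# The precision/recall arrays are synthetic stand-ins for real evaluation output.
import numpy as np

conf = np.linspace(0.01, 0.99, 99)   # candidate confidence thresholds
precision = 0.40 + 0.55 * conf       # synthetic: precision rises with the threshold
recall = 0.95 - 0.60 * conf          # synthetic: recall falls with the threshold
f1 = 2 * precision * recall / (precision + recall)

best = conf[np.argmax(f1)]           # threshold that maximizes F1
print(f"optimal classification threshold ~ {best:.2f}")
</preformat>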
      <p>Analysis of the Precision-Recall plot resulted in a classification threshold of 0.56, which
provides the best trade-off between precision and recall. The Precision-Confidence and
Recall-Confidence curves reflect the dependence of the precision and recall of model predictions
on the level of confidence used to decide about the presence of an object in the image. Analysis of
the curves made it possible to estimate the optimal confidence level for the model, which was
0.52. The indicator provides optimal accuracy and completeness of class predictions with a
minimum number of false positives from the neural network model. The resulting F1-Confidence
graph allowed us to evaluate how changing the model’s confidence level affects the combined
precision-recall metric and the ability to correctly classify objects, and to choose the optimal
threshold for the classification decision, which was 0.48. The F1-Confidence graph also shows
how the model responds to different levels of noise or the presence of outliers in the data. To
evaluate the performance of the machine learning model, Confusion Matrix was built (Fig. 7). The
matrix contains the main categories that reflect how the model classified the objects. The matrix
allowed us to evaluate model errors and tune model parameters to improve its performance for
recognizing the classes “flowering” and “rosebud”. Table 1 presents the coefficients of the three
main metrics for each individual class and for the overall data set.</p>
<p>Analysis of the results showed that the YOLOv8 model classified the “rosebud” class with an
mAP metric of 0.448 and the “flowering” class with 0.691. The research results also showed
that timely monitoring of apple flowers on an industrial plantation, carried out with a wheeled
robotic platform using the YOLOv8 convolutional neural network to process the acquired data,
will allow apple flowers to be recognized and classified with an accuracy of up to 92.5%.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>Neural networks are a powerful modern tool used in robotic platforms for various gardening
operations [14,15,16]. Research on the use of machine learning to identify apples during harvest
is described in [17]. Neural networks and other artificial intelligence systems are also used for
other operations in agriculture, for example, in greenhouses when managing energy flows [18],
monitoring the condition of agricultural crops [19], including nitrogen in wheat crops [20].</p>
<p>In further research, we set the task of counting the number of flowers in each row of plantings
and eliminating duplicate counting.</p>
      <p>For this purpose, it is planned to develop a method for tracking (tracking process) objects or
several objects on a video/set of frames by assigning a unique identifier (ID) to each recognized
object. Tracking will allow you to determine the movement and changes in the position of flowers
in images over time. Further expansion of the dataset and training of the model on new data,
including images of fruits and ovaries, will allow monitoring the full cycle of the orchard
production process in order to accurately predict yields.</p>
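      <p>A possible sketch of this planned tracking step, using the tracker support built into the Ultralytics package, is shown below; the weights file, video source and the choice of ByteTrack are assumptions rather than the authors' final design.</p>
      <preformat>
# Possible sketch of flower tracking and duplicate-free counting with the
# Ultralytics tracker API; "best.pt" and the video source are assumptions.
from ultralytics import YOLO

model = YOLO("best.pt")   # weights fine-tuned on the flower dataset
results = model.track(source="orchard_row.mp4", tracker="bytetrack.yaml", persist=True)

unique_ids = set()
for r in results:
    if r.boxes.id is not None:                   # the tracker assigns an ID per object
        unique_ids.update(int(i) for i in r.boxes.id)
print(f"flowers counted without duplicates: {len(unique_ids)}")
</preformat>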
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Analysis of the obtained graphs made it possible to establish the optimal settings for the YOLOv8
convolutional neural network and select the confidence threshold at which the model shows
optimal accuracy and completeness balanced with the number of detected objects. The
configuration of the machine learning algorithm of the YOLOv8 model for recognizing apple
flowers in the pink bud stage and apple flowers in the flowering stage has been determined:
learning rate (LR) – 0.01, number of epochs – 130, mini-batch size (batch size) – 16.</p>
      <p>The research used the method of training a convolutional neural network under conditions of
a limited training sample volume obtained in the field using an RGB camera. The results showed
that artificially enlarging the training sample (images of apple blossoms) with tools such as Flip,
Rotation ±15°, Hue ±25°, Noise 5%, Blur 2.5 px and Brightness ±25% can significantly improve
the quality of neural network training, helps adapt the system to real conditions, and increases
the accuracy of class feature detection by 16% compared to the dataset without augmentation.
The conducted research shows the prospects for using the
YOLOv8 convolutional neural network as part of a decision support system for monitoring and
planning the thinning of flowers and the formation of ovaries.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>
        [10] Bhattarai, U., Bhusal, S., Majeed, Y., and Karkee, M. (2020). Automatic blossom detection in
apple trees using deep learning. IFAC-PapersOnLine, 53(2):15810-15815.
[11] Wu, D., Lv, S., Jiang, M., and Song, H. (2020). Using channel pruning-based YOLO v4 deep
learning algorithm for the real-time and accurate detection of apple flowers in natural
environments. Computers and Electronics in Agriculture, 178:105742.
[12] Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Instance segmentation of apple flowers using the
improved mask R–CNN model. Biosyst. Eng. 2020, 193, 264–278.
[13] YOLOv8 model [Electronic resource]. Ultralytics YOLOv8 Docs. – URL:
https://docs.ultralytics.com/models/yolov8/ (access date 25.11.2023).
[14] A. Kutyrev, D. Khort, I. Smirnov, N. Kiktev, O. Opryshko and D. Komarchuk, "Robotic Device
for Identifying and Picking Apples," 2022 IEEE 9th International Conference on Problems of
Infocommunications, Science and Technology (PIC S&amp;T), Kharkiv, Ukraine, 2022, pp. 415-420,
doi: 10.1109/PICST57299.2022.10238646.
[15] D. Khort, A. Kutyrev, R. Filippov, N. Kiktev and D. Komarchuk, "Robotized Platform for Picking
of Strawberry Berries," 2019 IEEE International Scientific-Practical Conference Problems of
Infocommunications, Science and Technology (PIC S&amp;T), Kyiv, Ukraine, 2019, pp. 869-872, doi:
10.1109/PICST47496.2019.9061448.
[16] Kutyrev, A.; Kiktev, N.; Jewiarz, M.; Khort, D.; Smirnov, I.; Zubina, V.; Hutsol, T.; Tomasik, M.;
Biliuk, M. Robotic Platform for Horticulture: Assessment Methodology and Increasing the
Level of Autonomy (2022). Sensors, 22, 8901. https://doi.org/10.3390/s2222890.
[17] Kutyrev, A., Kiktev, N., Kalivoshko, O., Rakhmedov, R. (2022). Recognition and Classification
Apple Fruits Based on a Convolutional Neural Network Model. "Information Technology and
Implementation" (IT&amp;I-2022). Conference Proceedings. Kyiv, Ukraine, November 30 - December
02, 2022. CEUR Workshop Proceedings, 3347, 90–101.
[18] V. Lysenko, T. Lendiel, I. Bolbot and I. Nakonechnyy, "Neural Network Structures for
Energy-efficient Control of Energy Flows in Greenhouse Facilities," 2022 IEEE 9th International
Conference on Problems of Infocommunications, Science and Technology (PIC S&amp;T), Kharkiv,
Ukraine, 2022, pp. 21-26, doi: 10.1109/PICST57299.2022.10238512.
[19] Hnatiienko H.; Domrachev V.; Saiko V. Monitoring the condition of agricultural crops based
on the use of clustering methods, (2021) 15th International Conference Monitoring of
Geological Processes and Ecological Condition of the Environment, Monitoring 2021, doi:
10.3997/2214-4609.20215K2049.
[20] V. Lysenko, O. Opryshko, D. Komarchuk, N. Pasichnyk, N. Zaets and A. Dudnyk, "Usage of
flying robots for monitoring nitrogen in wheat crops," 2017 9th IEEE International Conference
on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications
(IDAACS), Bucharest, Romania, 2017, pp. 30-34, doi: 10.1109/IDAACS.2017.8095044.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
<string-name>
            <surname>Gudkovsky</surname>
            <given-names>V. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kozhina</surname>
            <given-names>L. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nazarov</surname>
            <given-names>Yu. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Balakirev</surname>
            <given-names>A. E.</given-names>
          </string-name>
          <article-title>Physiological damage to leaves and fruits of apple, pear and their mineral composition: collection of articles</article-title>
          .
<source>Scientific Foundations of Effective Gardening. Trudy VNIIS im. I. V. Michurina</source>
          . Voronezh, Kvarta,
          <year>2006</year>
          .
          <fpage>47</fpage>
          -
          <lpage>64</lpage>
          . (In Russ.).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Robinson</surname>
            ,
            <given-names>T. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lakso</surname>
            ,
            <given-names>A. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greene</surname>
            ,
            <given-names>D. W.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hoying</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Precision crop load management</article-title>
          .
<source>NY Fruit Quarterly</source>
          ,
          <volume>21</volume>
          (
          <issue>2</issue>
          ):
          <fpage>3</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
<string-name>
            <surname>Yuan</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hua</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Heinemann</surname>
            ,
            <given-names>P. H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>He</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <article-title>UAV Photogrammetry-Based Apple Orchard Blossom Density Estimation and Mapping</article-title>
          .
          <source>Horticulturae</source>
          <year>2023</year>
          ,
          <volume>9</volume>
          , 266. https://doi.org/10.3390/horticulturae9020266.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Krikeb</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alchanatis</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crane</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Naor</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Evaluation of apple flowering intensity using color image processing for tree-specific chemical thinning</article-title>
          .
          <source>Advances in Animal Biosciences</source>
          ,
          <volume>8</volume>
          (
          <issue>2</issue>
          ):
          <fpage>466</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Aggelopoulou</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bochtis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fountas</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Swain</surname>
            ,
            <given-names>K. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gemtos</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Nanos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Yield prediction in apple orchards based on image processing</article-title>
          .
          <source>Precision Agriculture</source>
          ,
          <volume>12</volume>
          (
          <issue>3</issue>
          ):
<fpage>448</fpage>
          -
          <lpage>456</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Hocevar</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sirok</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Godesa</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Stopar</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Flowering estimation in apple orchards by image analysis</article-title>
          .
          <source>Precision Agriculture</source>
          ,
          <volume>15</volume>
          (
          <issue>4</issue>
          ):
<fpage>466</fpage>
          -
          <lpage>478</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Dias</surname>
            ,
            <given-names>P. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tabb</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Medeiros</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2018a</year>
          ).
<article-title>Apple flower detection using deep convolutional networks</article-title>
          .
          <source>Computers in Industry</source>
          ,
          <volume>99</volume>
          :
<fpage>17</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Dias</surname>
            ,
            <given-names>P. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tabb</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Medeiros</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2018b</year>
          ).
<article-title>Multispecies fruit flower detection using a refined semantic segmentation network</article-title>
          .
<source>IEEE Robotics and Automation Letters</source>
          ,
          <volume>3</volume>
          (
          <issue>4</issue>
          ):
<fpage>3003</fpage>
          -
          <lpage>3010</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Farjon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krikeb</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hillel</surname>
            ,
            <given-names>A. B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Alchanatis</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
<article-title>Detection and counting of flowers on apple trees for better chemical thinning decisions</article-title>
          .
          <source>Precision Agriculture</source>
, pages
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>