<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Weighted 2-Means Split Algorithm For Under-segmentation Reduction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kornelija Magylaitė</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lukas Arlauskas</string-name>
          <email>lukas.arlauskas@ktu.lt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Karolis Ryselis</string-name>
          <email>karolis.ryselis@ktu.lt</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Information Systems Department, Kaunas University of Technology</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Software Engineering Department, Kaunas University of Technology</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Human body segmentation is utilized in various applications as an intermediate step. This problem is best solved using supervised machine learning; however, such solutions require annotated data for training. Unfortunately, annotating data for segmentation is an extremely labor-intensive task. It can be accelerated with semi-automatic segmentation algorithms; however, their accuracy tends to be low for complex scenes. The errors made by such algorithms can be corrected manually, and this process becomes more efficient when automatic correction tools are added to the data processing pipeline. This research aims to improve the final semi-automatic segmentation accuracy by improving an existing random forest classifier that corrects point cloud segmentation based on metrics of a recursive 2-Means split. We replace the K-Means clustering with a weighted K-Means clustering and optimize the weights. Experiments revealed that a segmentation accuracy of 62.4% improves to 66.1% with a weight ratio of 0.4. Since higher accuracy means less manual labor, this is a sought-after improvement that reduces the time needed to prepare datasets for human body segmentation.</p>
      </abstract>
      <kwd-group>
        <kwd>weighted K-Means</kwd>
        <kwd>human body segmentation</kwd>
        <kwd>random forest</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Depth data processing has been an active research subject in recent times because of its plentiful
applications. Spatial data is used in various fields, from depth cameras to lidars.
However, the machine learning models used for data processing require a lot of depth data. Depth data
can be extracted from depth-sensing devices; however, manual segmentation and annotation is a
repetitive process that involves a huge amount of manual labour. For example, the “Kinect” sensor produces
30 depth frames per second, so it is not feasible to segment the data effectively using manual
methods. To alleviate this problem, research has aimed to automate the process as much as
possible. The research conducted by one of the authors of this article [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] includes
implementation and experimentation with the Point Cloud Library [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The experiment was previously conducted
using the K-Means algorithm and a random forest classifier. While the classifier achieved a
high success rate, the accuracy of bounding-box-based segmentation was significantly lower.
Implementation of the 1:1 cuts methodology resulted in a subsequent improvement. Nevertheless, the
algorithm exhibited poor efficacy on complex and combined datasets, as it failed to
accurately trim the edges of the image. As such, the goal of this research is to improve the existing
solution’s accuracy in recognizing complex poses while maintaining non-worsening accuracy in
detecting simple poses. The paper is structured as follows. Section II discusses the principles of the
weighted K-Means algorithm. Section III describes the problem and research methodology as well as the
algorithm modifications in detail. Section IV provides the accuracy results and discusses the
evaluation of the algorithm. Finally, Section V concludes the article.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
    </sec>
    <sec id="sec-3">
      <title>2.1. Random Forest Classifier</title>
      <p>
        Random forest is a supervised machine learning technique [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. It is a collection of decision trees that
vote on the final decision and is meant for classification problems. Random forests can be applied to
segmentation tasks; however, in this case, features for training are required. These features are extracted
in different ways and can be hand-crafted or computed by other machine-learning systems.
Hand-crafted features vary widely in the state of the art. They are mostly domain-specific features such as
elevation data [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], coordinates in space [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] or pixel intensity [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, they always differ
and depend on the task at hand.
      </p>
    </sec>
    <sec id="sec-4">
      <title>2.2. Weighted K-Means Algorithm</title>
      <p>
        The K-Means algorithm is an unsupervised learning technique for clustering [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. It has a hyperparameter
k which defines the number of clusters in the output. However, the original algorithm uses equal
weights for all centroids. There are many variations of the algorithm that introduce different
weights for the clusters. They have been successfully applied to RGB image
segmentation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] or image clustering [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This algorithm is useful in cases where different classes
of objects should have different sensitivities, which translates to assigning weights to the centroids. On the
other hand, this is only possible when the area of application is known in advance, as the weights are
derived from it.
      </p>
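      <p>As a concrete illustration (our sketch, not a specific published variant; function and variable names are illustrative), a weighted assignment step can scale the distance to each centroid by that centroid’s weight, so a lower weight attracts more points:</p>

```python
import numpy as np

def weighted_assign(points, centroids, weights):
    """Assign each point to the centroid with the smallest weighted distance.

    points: (n, d) array, centroids: (k, d) array, weights: (k,) array.
    A lower weight shrinks the effective distance to that centroid, so it
    attracts more points (i.e., becomes more sensitive).
    """
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dists * weights, axis=1)
```

      <p>For example, a point at distance 4 from the first centroid and 2 from the second is assigned to the second under equal weights, but to the first once the first centroid’s weight drops to 0.4, since 4 × 0.4 = 1.6 is smaller than 2.</p>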
      <p>
        There are several challenges in applying the weighted K-Means algorithm. First, the centroids
must be given different weights based on the area of application. This is addressed very differently in the state of
the art – different features may be pre-selected [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], variable relative relevance may be estimated [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ],
the weights may even be acquired using machine learning solutions [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Unfortunately, there is no
single best way to solve this problem.
      </p>
    </sec>
    <sec id="sec-5">
      <title>2.3. Random Forest Classifier for Correcting Point Cloud Segmentation Based on Metrics of Recursive 2-Means Split</title>
      <p>
        Semi-automatic segmentation algorithms tend to make under-segmentation errors. One way to
solve this problem is to cut parts of the resulting point cloud. This can be achieved by using the K-Means
algorithm adaptation suggested in the state of the art [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The research proposes the following workflow:
1. The segmented output is split into two clusters using the K-Means algorithm with hyperparameter
k = 2;
2. Eight metrics are computed for the resulting split:
a) all mean distances between the combinations of both centroids and the three clusters
(the initial cluster and the two new sub-clusters) (six distances);
b) the sizes of the two new clusters (two sizes);
3. A random-forest-based classifier predicts the quality of the split based on the computed metrics
and outputs the probability that rejecting one of the clusters improves the segmentation quality;
4. If the quality is improved, the second cluster is rejected and the process is recursively repeated
from step 1.
      </p>
      <p>The first step involves a modification of the K-Means algorithm. The initial points are selected by a
human, so they are known to be correct. These provided
points are fixed centroids that never move. The point farthest from the first centroid is
chosen as the second centroid. Once the second centroid is positioned, it is updated until
it converges, but the number of updates is limited to 10 to prevent long runtimes.</p>
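      <p>This modified split can be sketched as follows (an illustrative reconstruction under the stated constraints, not the original Java implementation; the convergence test and data layout are our assumptions):</p>

```python
import numpy as np

def two_means_split(points, fixed_centroid, max_iter=10):
    """2-Means split with a fixed first centroid.

    The first centroid is human-provided and never moves; the second is
    initialized at the point farthest from it and updated until convergence,
    capped at 10 iterations to prevent long runtimes.
    """
    points = np.asarray(points, dtype=float)
    c1 = np.asarray(fixed_centroid, dtype=float)
    c2 = points[np.argmax(np.linalg.norm(points - c1, axis=1))].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(max_iter):
        d1 = np.linalg.norm(points - c1, axis=1)
        d2 = np.linalg.norm(points - c2, axis=1)
        labels = (d2 < d1).astype(int)  # 1 -> assigned to the second cluster
        if not np.any(labels == 1):
            break
        new_c2 = points[labels == 1].mean(axis=0)
        if np.allclose(new_c2, c2):
            break  # converged
        c2 = new_c2
    return labels, c1, c2
```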
      <p>The metrics used in the second step were chosen based on the hypothesis that their combined values
can potentially indicate different cluster types. For instance, if the original segmentation identifies a
single object, the distances between both point clouds and centroids will be significantly lower than
if the two clusters represent two distinct objects. Similarly, if the entire background is captured, the
false cluster will be much larger than the true cluster. If two very distinct objects are in the cluster, the
average distances between subclusters and their centroids will be considerably smaller than the average
distance between the sub-clusters and the other centroid. The advantage of these metrics is that they can
be computed relatively quickly.</p>
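      <p>Under the pairing described above (each of the three clusters against each of the two centroids, plus the two cluster sizes), the eight metrics can be sketched as follows; the exact feature ordering is our assumption:</p>

```python
import numpy as np

def split_metrics(initial, sub1, sub2, c1, c2):
    """Eight features for the split classifier: mean distances from each of
    the three clusters (initial cluster and the two sub-clusters) to each of
    the two centroids (six values), plus the sizes of the two sub-clusters."""
    def mean_dist(cloud, centroid):
        return float(np.mean(np.linalg.norm(np.asarray(cloud) - centroid, axis=1)))
    features = [mean_dist(cloud, c)
                for cloud in (initial, sub1, sub2)
                for c in (np.asarray(c1), np.asarray(c2))]
    features += [len(sub1), len(sub2)]
    return features
```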
      <p>The classifier of the third step is trained on the computed metrics. The data consists of the metrics
and a flag that indicates whether the split improves the accuracy or not. The original research utilizes a
self-acquired and self-annotated dataset to generate the training data. This dataset was also available
during the experiments presented in this article.</p>
    </sec>
    <sec id="sec-7">
      <title>3. Methodology</title>
    </sec>
    <sec id="sec-8">
      <title>3.1. The Problem</title>
      <p>While the presented K-Means approach achieves 95% classifier accuracy, the primary
objective is to enhance the accuracy of the segmentation process. Currently, the bounding-box-based
segmentation approach achieves a 33% accuracy rate. By implementing the 1:1 cuts methodology, the
accuracy rate improved to 55%. However, further enhancements are needed beyond this level, as the
method is currently incapable of trimming smaller edges of the image. In the worst-case
scenarios, the entire scene is enclosed in a single segment, while the human comprises
roughly 10% of the pixels. If cuts are made using a 1:1 ratio, the first cut leaves behind 50% of the
scene and the second cut 25%, and the third cut would only succeed in an ideal scenario where the entire
human fits into a single cluster. In instances where the human is positioned closer to the center,
only the edges need to be trimmed, and there is no single split with a 1:1 sensitivity that can accomplish
this. Fig. 1 shows two example outputs of the primary algorithm. Red depicts
under-segmentation errors, green over-segmentation errors, and yellow correct segmentation output.</p>
    </sec>
    <sec id="sec-9">
      <title>3.2. Application of Weighted K-Means</title>
      <p>The purpose of assigning different weights to the centroids is to improve the partitioning of data
points. Decreasing the weight of a centroid makes that centroid more sensitive: consequently,
more data points are assigned to the centroid with the lower weight, which makes the algorithm more
efficient and adaptable to different datasets. For instance, in cases where the human is positioned closer
to the center, only the edges require trimming, and using different sensitivity ratios may produce better
results. However, a lower sensitivity ratio implies that more splits are necessary, which demands higher
classification accuracy, as more classifications will be needed. The main challenge is to determine
the sensitivity ratio that achieves the best accuracy.</p>
    </sec>
    <sec id="sec-10">
      <title>3.3. Finding a Proper Weight</title>
      <p>To determine the required sensitivity ratio, an optimal proportion of weights had to be identified.
The objective was to find a proper ratio of centroid sensitivities that achieves higher accuracy.
The ratio of sensitivities can be described as variable relative relevance, as the variables are the
centroid selected by a human and the potential background centroid. The examination of variable
relevance is crucial, given the previous algorithm’s inadequate performance on specific datasets.</p>
      <p>The ratio of weights was chosen from the range (0; 1) with a step of 0.1. To increase the sensitivity
of a centroid, a lower weight was assigned to it: the calculated distance is
multiplied by the fixed weight, which yields a smaller effective distance for that centroid. As a result, it is
more probable that a point will be assigned to the centroid with the lower weight. For instance, if the ratio
of the weights is 1:2, a point must be twice as close to the second centroid to be assigned
to it; otherwise, it is assigned to the first one.</p>
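      <p>The 1:2 example can be checked numerically (the distances below are hypothetical):</p>

```python
def goes_to_second(d_first, d_second, w_first=0.5, w_second=1.0):
    """With a 1:2 weight ratio, a point is assigned to the second centroid
    only if it is at least twice as close to it as to the first."""
    return d_second * w_second < d_first * w_first

print(goes_to_second(3.0, 2.0))  # False: 2.0 is not below 3.0 * 0.5 = 1.5
print(goes_to_second(3.0, 1.4))  # True: 1.4 is below 1.5
```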
      <p>However, the presented example is purely theoretical. The weight ratio may vary, as may the
selection of which centroid receives the lower weight. To identify the optimal ratio that achieves the
desired accuracy, an experiment was performed: the algorithm was tested with
several ratios within a predetermined range.</p>
    </sec>
    <sec id="sec-11">
      <title>3.4. Workflow of Testing</title>
      <p>The methodology presented in Fig. 2 illustrates the procedural steps involved in testing
the modified algorithm. The process comprises four primary actions aimed at effectively testing the
algorithm’s performance.</p>
      <p>Initially, the desired weights were set to determine the centroid sensitivities. This was done by
considering previous weights and their corresponding outcomes. Additionally, the goal was to test all
possible combinations to identify the optimal solution.</p>
      <p>
        Subsequently, centroid sensitivities were set and data was generated for the classifier. The data used
for generation consists of self-acquired datasets containing depth images of people. The datasets
were acquired using the “Kinect” sensor and labelled semi-automatically with the help
of the solution without the random forest accuracy improvements [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. After generation, a dataset was created
to train and evaluate the performance of a random forest classifier. The
data generation was implemented in the Java programming language (OpenJDK 14).
      </p>
      <p>The generated data was then used for training. The random forest classifier
was selected as the algorithm of choice since it typically produces better outcomes than a
single tree. The classifier was implemented in the Python
programming language using the sklearn library.</p>
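      <p>A minimal sketch of this training step (the synthetic data and hyperparameters below are ours; the paper does not specify them):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 8))                    # the 8 split metrics per candidate cut
y = (X[:, 0] + X[:, 7] > 1.0).astype(int)   # synthetic "split improves accuracy" flag

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
p_improve = clf.predict_proba(X[:1])[0, 1]  # probability that rejecting a cluster helps
```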
      <p>The segmentation accuracy was finally assessed by obtaining mean accuracies for the simple, complex and
combined datasets.</p>
    </sec>
    <sec id="sec-12">
      <title>4. Experimental Evaluation of the Algorithm</title>
      <p>The algorithm was evaluated experimentally using the techniques described in the related work
section. The results constitute mean segmentation accuracy, which is measured
by calculating a cross-set intersection coefficient (see (1)):</p>
      <p>acc = (|A ⋂ G| / |A|) × (|A ⋂ G| / |G|) (1)
where A is the set of points selected during the segmentation and G is the set of ground truth points.</p>
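      <p>Assuming (1) is the product of the two intersection fractions (the fraction of selected points that are correct times the fraction of ground-truth points recovered), the coefficient can be computed as follows:</p>

```python
def cross_set_intersection(selected, ground_truth):
    """Mean segmentation accuracy per the assumed form of (1):
    (|A ∩ G| / |A|) * (|A ∩ G| / |G|)."""
    A, G = set(selected), set(ground_truth)
    if not A or not G:
        return 0.0
    inter = len(A & G)
    return (inter / len(A)) * (inter / len(G))
```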
      <p>For the hypothesis to be confirmed, it was important that, when using weighted centroids, the results
on the simple dataset would not worsen, and the results on the complex and combined datasets would
improve over the baseline (i.e., the variant with a second centroid weight of 1). The experiment involved
assigning the lower weight to each centroid in turn. The results indicated that better outcomes were obtained
when assigning the lower weight to the second centroid. When comparing the results obtained using
different second centroid weights (see Table I), the best results were a mean accuracy of 82.94% (0.5%
improvement over baseline) on the simple dataset, a mean accuracy of 59.62% (7.8% improvement
over baseline) on the complex dataset, and a mean accuracy of 66.05% (5.75%
improvement over baseline) on the combined dataset. The best result for the simple dataset was achieved with a second centroid
weight of 0.5, while the best accuracies for the complex and combined datasets were
obtained with a second centroid weight of 0.4. These solutions use a desensitized first centroid, which
means more points get attributed to the second centroid. Interestingly, more than half of the results
obtained with the simple dataset were rather close to the best solution (weights of 0.1 through 0.4),
something that cannot be said about the complex and combined dataset results.</p>
      <p>These findings are further reinforced when comparing the random forest classification
reports for different second centroid weights (see Table II). The best precision, recall and F1-score for
correct cuts are 0.96, 0.97 and 0.96 respectively, while the best results for
incorrect cuts are 0.98, 0.97 and 0.98 respectively. The best accuracy achieved was 0.97. The
consistently best variant was the one with a second centroid weight of 0.4, although the variant
with 0.5 is also a close call.
[Table II compares correct cuts, incorrect cuts and accuracy for second centroid weights of 1, 0.4 and 0.5; the individual cell values were not recoverable from the extracted text.]</p>
      <p>
        Examining the box plots, it becomes apparent that with 0.4 as the second centroid
weight, the median accuracy of the algorithm nears 100% in specific cases, which further solidifies the
finding that algorithm accuracy did not worsen with weighted centroids on the simple
dataset (see Fig. 3 and Fig. 4). The accuracy box plot for the complex dataset reveals the following
comparative differences: accuracy improved, as the data in the second and
third quartiles was pushed up and the upper half of the data is contained within less space. Front pose
accuracy also improved, as adding weights removed all outliers at the cost of more sparsely
spaced predictions. Accuracy improvements can also be observed for the side parts (see Fig. 5
and Fig. 6).
Figure 7 demonstrates several mistakes made by the proposed algorithm. Red depicts
under-segmentation errors, green over-segmentation errors, and yellow correct segmentation output. In all cases the
under-segmented area is positioned around the human. The last three examples show that the cluster
weights would have to be very different to cut such small pieces on either side of the human body.
Unfortunately, this is limited by the classifier accuracy – a very low weight ratio means many splits and
many predictions. Since the classifier accuracy is currently 97%, making 20 splits would mean that the
probability that all of them are correct is only about 54%. Despite that, the manual work required to
remove the red areas of the images is lower than presented in the original research [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
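      <p>The 54% figure follows directly from assuming the 20 split predictions are independent:</p>

```python
classifier_accuracy = 0.97
n_splits = 20
p_all_correct = classifier_accuracy ** n_splits
print(round(p_all_correct, 2))  # 0.54, i.e., about 54%
```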
      <p>The second image shows an error that occurs due to centroid positioning. The algorithm decided
that cutting the legs of the human adds accuracy compared to leaving too many pixels in the output,
which is true – this leads to a different but smaller mistake.</p>
    </sec>
    <sec id="sec-13">
      <title>5. Conclusions</title>
      <p>In the experiments performed in this paper, several weighted K-Means variants consistently
outperformed the unweighted baseline. Using 0.5 as the second centroid weight, a simple dataset
accuracy of 82.94% was achieved. The complex and combined datasets benefited most from a
second centroid weight of 0.4. Examination of the classification report revealed that precision,
recall and F1-score were tied between the 0.4 and 0.5 variants on correct cuts; however, on
incorrect cuts and overall accuracy, the variant with the 0.4 weight prevailed in all metrics.</p>
      <p>Our research demonstrated that for human body segmentation it is better to use the weighted K-Means
algorithm, as it seemingly gets rid of outlier data, as revealed by the box plots, and the weighted variant
can be applied without sacrificing any important qualities of the baseline algorithm. Further
research could optimize the algorithm for the complex and combined
datasets with the aim of improving segmentation accuracy.</p>
    </sec>
    <sec id="sec-14">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Ryselis</surname>
          </string-name>
          , “
          <article-title>Random Forest classifier for correcting point cloud segmentation based on metrics of recursive 2-means splits</article-title>
          ,
          <source>” in Information and Software Technologies: 28th International Conference, ICIST</source>
          <year>2022</year>
          , Kaunas, Lithuania,
          <source>October 13-15</source>
          ,
          <year>2022</year>
          , Proceedings. Springer,
          <year>2022</year>
          , pp.
          <fpage>90</fpage>
          -
          <lpage>101</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Rusu</surname>
          </string-name>
          , “
          <article-title>Semantic 3d object maps for everyday manipulation in human living environments,”</article-title>
          <source>Ph.D. dissertation</source>
          , Computer Science department, Technische Universitaet Muenchen, Germany, 10
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Breiman</surname>
          </string-name>
          , “Random forests,”
          <article-title>Machine learning</article-title>
          , vol.
          <volume>45</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>5</fpage>
          -
          <lpage>32</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , “
          <article-title>Classification of als point cloud with improved point cloud segmentation and random forests,” Remote Sensing</article-title>
          , vol.
          <volume>9</volume>
          , no.
          <issue>3</issue>
          , p.
          <fpage>288</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pinto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Mendrik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Correia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Silva</surname>
          </string-name>
          , “
          <article-title>Automatic brain tissue segmentation in MR images using random forests and conditional random fields</article-title>
          ,
          <source>”Journal of neuroscience methods</source>
          , vol.
          <volume>270</volume>
          , pp.
          <fpage>111</fpage>
          -
          <lpage>123</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Misra</surname>
          </string-name>
          , “
          <article-title>Intelligent image segmentation for organic-richshales using random forest, wavelet transform</article-title>
          , and hessian matrix,
          <source>” IEEE Geoscience and Remote Sensing Letters</source>
          , vol.
          <volume>17</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>1144</fpage>
          -
          <lpage>1147</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>MacQueen</surname>
          </string-name>
          , “
          <article-title>Classification and analysis of multivariate observations,” in 5th Berkeley Symp</article-title>
          . Math. Statist. Probability. University of California Los Angeles LA USA,
          <year>1967</year>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>297</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shmmala</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Ashour</surname>
          </string-name>
          , “
          <article-title>Color based image segmentation using different versions of k-means in two spaces</article-title>
          ,
          <source>” Global Advanced Research Journal of Engineering, Technology and Innovation</source>
          , vol.
          <volume>1</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>030</fpage>
          -
          <lpage>041</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Velastin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Yin</surname>
          </string-name>
          , “
          <article-title>Automatic grading of apples based on multi-features and weighted k-means clustering algorithm</article-title>
          ,” Information Processing in Agriculture, vol.
          <volume>7</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>556</fpage>
          -
          <lpage>565</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Ayech</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Ziou</surname>
          </string-name>
          , “
          <article-title>Terahertz image segmentation using k-means clustering based on weighted feature learning and random pixel sampling</article-title>
          ,
          <source>” Neurocomputing</source>
          , vol.
          <volume>175</volume>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>264</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Seal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Khanna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Krejcar</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Yazidi</surname>
          </string-name>
          , “
          <article-title>Aw k s: adaptive, weighted kmeans-based superpixels for improved saliency detection,” Pattern Analysis and Applications</article-title>
          , vol.
          <volume>24</volume>
          , pp.
          <fpage>625</fpage>
          -
          <lpage>639</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ohana-Levi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bahat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Peeters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shtein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Netzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Ben-Gal</surname>
          </string-name>
          ,
          <article-title>“A weighted multivariate spatial clustering model to determine irrigation management zones,” Computers and Electronics in Agriculture</article-title>
          , vol.
          <volume>162</volume>
          , pp.
          <fpage>719</fpage>
          -
          <lpage>731</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>