<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>DBSCAN parallelization by spatial dataset division and hyperparameter adjustment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vadim Romanuke</string-name>
          <email>romanukevadimv@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergii Pavlov</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Svitlana Yaremko</string-name>
          <email>svitlana_yaremko@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Artem Guralnyk</string-name>
          <email>artem.guralnyk@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Smetaniuk</string-name>
          <email>smetaniuk@vntu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Vinnytsia Institute of Trade and Economics of State University of Trade and Economics</institution>
          ,
          <addr-line>Soborna str., 87, 21050, Vinnytsia</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vinnytsia National Technical University</institution>
          ,
          <addr-line>Khmelnytske sh., 95, 21021, Vinnytsia</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>An approach to parallelizing the DBSCAN algorithm without losing accuracy is presented to further improve the quality of arbitrary-shape clustering. The dataset is divided into a number of subsets, one of which should be labeled to adjust the DBSCAN hyperparameters: the neighborhood radius and the minimum number of neighbors. The neighborhood radius, set to a minimum, is adjusted first by applying the original DBSCAN to a subset of points with known ground truth labels. While the clustering accuracy can still be improved, the neighborhood radius is increased by an increment step. The original DBSCAN algorithm is then applied with the best neighborhood radius to the remaining subsets. The minimum number of neighbors, initially set to 3, is adjusted second, using the best neighborhood radius. Compared to the performance of the original DBSCAN algorithm, the parallelized DBSCAN performs more accurately while being extremely fast. As the dataset size and the number of clusters increase, separately or together, the parallelized DBSCAN outperforms the original DBSCAN by a wider margin. However, the DBSCAN parallelization by spatial dataset division and hyperparameter adjustment is less efficient on datasets with more complex structure. Nevertheless, the average speedup exceeds 80 times for planar datasets and 300 times for three-dimensional datasets.</p>
      </abstract>
      <kwd-group>
        <kwd>arbitrary-shape clustering</kwd>
        <kwd>DBSCAN</kwd>
        <kwd>parallelization</kwd>
        <kwd>hyperparameter adjustment</kwd>
        <kwd>neighborhood radius</kwd>
        <kwd>spatial dataset division</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>DBSCAN is one of the most commonly used clustering algorithms, whose purpose is to reveal clusters of arbitrary shape with minimal domain knowledge [1, 2]. DBSCAN groups points based on the notion of the ε-neighborhood of a point and a specified minimum number minPts of neighboring points, which are further supplemented with the notions of core points, directly reachable points, non-directly reachable points, and outliers [3, 4]. Besides, to rectify the non-symmetric relation of reachability [5, 6], a symmetric notion of density-connectedness is added [7, 8]. The density is defined by the neighborhood radius ε and number minPts, where a cluster contains at least minPts points. The ε-neighborhood is defined as the set of points falling within radius ε by a distance function [5, 9, 10]. Apart from the DBSCAN hyperparameters ε and minPts, the distance function is seen as an additional parameter of the DBSCAN algorithm [7, 11, 12].</p>
      <p>For n points in a dataset, the DBSCAN average runtime complexity is O(n log n), although the worst case of O(n²) is not excluded [3, 4, 7], especially when there is probable incompleteness in data [9, 10, 13] or the neighborhood radius can vary in a wide range [5, 12, 14]. Whereas the overall performance of DBSCAN is quite satisfactory, and it successfully reveals outliers and identifies nonglobular shapes (contrary to other commonly used clustering algorithms like, e. g., k-means [9, 15, 16]), it fails to handle datasets whose clusters are of widely varying densities [17, 18]. The second drawback is the neighborhood radius selection [8, 11, 14]. While datasets of planar points or points in three-dimensional space can be visualized, so that the neighborhood radius is well understood and assessed, in higher dimensions it is problematic to “see” a meaningful distance [19, 20]. Moreover, the quality of features may differ considerably and may change through the domain being clustered [8]. This hinders obtaining accurately partitioned datasets by coarsely applying DBSCAN [17]. There are domains whose factual clusters have multiple meaningful neighborhood radii [11, 13, 20].</p>
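      <p>As a minimal illustration of the notions above, scikit-learn’s DBSCAN separates two nonglobular (crescent-shaped) clusters that centroid-based methods split incorrectly; the dataset and the eps and min_samples values below are illustrative assumptions, not the ones used in this paper’s experiments.</p>

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two crescent-shaped clusters: a simple nonglobular case (illustrative data).
X, _ = make_moons(n_samples=600, noise=0.05, random_state=0)

# eps is the neighborhood radius; min_samples is the minimum number of
# neighbors required for a core point (both values are illustrative guesses).
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

# DBSCAN marks outliers with the label -1; exclude it when counting clusters.
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)
```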
      <p>Both of these DBSCAN weaknesses are addressed by OPTICS [21, 22], another algorithm whose purpose is to detect meaningful clusters in data of varying density [23, 24]. OPTICS also requires the two hyperparameters ε and minPts, although the neighborhood radius is not necessary: the radius can be set to the maximum possible value, but this results in runtime close to O(n²) [25]. Besides, OPTICS is mainly intended for hierarchical clustering, and it is reported to have an actual constant slowdown factor of 1.6 compared to DBSCAN [21, 26].</p>
      <p>Another DBSCAN drawback, reported less frequently, is its poorer performance on large datasets [27]. This especially concerns high-dimensional data, where it is difficult to find an appropriate value for ε, because adjusting this hyperparameter takes significant amounts of time and computational resources. Overall, the tunability of DBSCAN hyperparameters (as in other clustering algorithms) worsens as the dataset enlarges [28, 29].</p>
      <p>Nevertheless, when a large dataset is assumed to have tightly connected or not sparsely scattered clusters, it is possible to divide the dataset into multiple subdatasets in order to apply the DBSCAN algorithm to every subdataset separately. This resembles the divide-and-conquer approach successfully applied to solving large combinatorial optimization problems like the traveling salesman problem with thousands of nodes and more [30]. In particular, the traveling salesman problem nodes are grouped into a definite number of clusters, each of which corresponds to an open-tour traveling salesman subproblem subsequently solved independently of the other subproblems [31, 32].</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statement</title>
      <p>In cluster analysis, spatial dataset division is effective when possible clusters are not sparsely scattered. Otherwise, the problem of clustering tends to be almost trivial: distinctly distanced groups of objects can be more efficiently separated by the support vector machine (linearly inseparable data are projected onto a higher-dimensional space to become linearly separable [33, 34]) or other similar approaches, including k-means with the optimal number of clusters evaluated by the silhouette clustering criterion [15, 35, 36]. By contrast, potential clusters closely packed together are totally indistinguishable without considering spatial densities if, for instance, the clusters have an at least partially hierarchical structure [2, 37, 38]. An extreme example of such a structure is a dataset consisting of non-overlapping enclosed hyperspheres or hyperellipsoids (for planar objects, non-overlapping enclosed circles or ellipses), or objects of similar form being irregularly distorted or nonconvex [29, 39]. Another major example, frequently occurring in practice, is a dataset consisting of cluster bands, where serpentine-like clusters have a resembling structure [40].</p>
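      <p>The k-means alternative mentioned above, with the number of clusters evaluated by the silhouette criterion, can be sketched as follows; the synthetic blobs and the candidate range of cluster counts are illustrative assumptions.</p>

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Four well-separated globular clusters (illustrative data).
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.6, random_state=1)

# Score each candidate cluster count by the silhouette criterion
# and keep the count with the highest score.
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k)
```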
      <p>Naturally, the DBSCAN algorithm could easily have been sped up if it were generally parallelizable [1, 2, 7]. Unfortunately, the algorithm consists of two loops, and the nested loop cannot be made independent of the outer loop [1, 4]. The outer loop, working with each point in a given dataset, is based on the range query, which can be implemented using database indexing for better performance rather than a slow linear scan. Although some efforts have been made in this direction to parallelize the outer loop, the gain does not seem to have a significant impact [2, 17, 19].</p>
      <p>A follow-up DBSCAN algorithm named HDBSCAN [2, 17] was an attempt to rectify the DBSCAN flat-result drawbacks rather than its poor parallelizability. Nor did GDBSCAN advance parallelizability [5, 7]; it only moved the original DBSCAN algorithm parameters ε and minPts to predicates.</p>
      <p>Given the unsatisfactory runtime complexity of the DBSCAN algorithm for large datasets, the goal is to suggest an approach to DBSCAN parallelization without losing much in accuracy. The approach should be algorithmized by automatically adjusting the neighborhood radius to the most efficient value. A series of computational experiments is to be carried out in order to see the performance of the parallelized DBSCAN algorithm with the hyperparameter adjustment compared to the performance of the original DBSCAN algorithm. The comparative results are then to be discussed by underlining limitations and drawbacks of the suggested DBSCAN parallelization.</p>
    </sec>
    <sec id="sec-3">
      <title>3. DBSCAN parallelization</title>
      <p>To adjust the DBSCAN hyperparameters automatically, consider a repeated attempt to improve the accuracy. As there are more possible versions of the neighborhood radius than of the minimum number of neighboring points, the neighborhood radius is adjusted first. For this, a maximum number of accuracy improvement fails is fixed for the radius adjustment; while the minimum number of neighboring points is adjusted, its own maximum number of accuracy improvement fails is used. These hyperparameters are adjusted as follows:</p>
      <p>1. Divide the dataset into subsets.</p>
      <p>2. Set ε to a minimum and apply the DBSCAN algorithm to a subset of points with known ground truth labels.</p>
      <p>3. While the current number of accuracy improvement fails is fewer than the maximum, increase ε by an increment step. Once the number of outliers and redundant labels becomes fewer, set the current number of accuracy improvement fails to 0, and store the current value of ε as the best one; otherwise, increase the current number of accuracy improvement fails by 1.</p>
      <p>4. Fix the best neighborhood radius found.</p>
      <p>5. Apply the DBSCAN algorithm to the remaining subsets.</p>
      <p>6. Set minPts to 3 and apply the DBSCAN algorithm to the subset with the best neighborhood radius.</p>
      <p>7. While the current number of accuracy improvement fails is fewer than the maximum, increase minPts by 1. Once the number of outliers and redundant labels becomes fewer, set the current number of accuracy improvement fails to 0, and store the current value of minPts as the best one; otherwise, increase the current number of accuracy improvement fails by 1.</p>
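      <p>Steps 2 through 4 and 6 through 7 above can be sketched as follows, assuming scikit-learn’s DBSCAN; the miss-counting rule, the initial radius, the increment step, and the fail limits are assumptions, since the paper states them only symbolically.</p>

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

def count_misses(pred, truth):
    # One simple reading of "outliers and redundant labels": points marked as
    # outliers (-1) plus points put into more clusters than the ground truth has.
    k_true = len(set(truth.tolist()))
    outliers = int(np.sum(pred == -1))
    redundant = int(np.sum(pred >= k_true))
    return outliers + redundant

def adjust_eps(X, truth, eps_min=0.05, step=0.05, max_fails=10, min_pts=3):
    # Step 3: grow eps from a minimum until the accuracy stops improving
    # for max_fails consecutive attempts (all numeric defaults are assumed).
    best_eps, best_misses, fails, eps = eps_min, float("inf"), 0, eps_min
    while fails < max_fails:
        pred = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
        misses = count_misses(pred, truth)
        if misses < best_misses:           # accuracy improved: reset the fails
            best_eps, best_misses, fails = eps, misses, 0
        else:                              # an accuracy improvement fail
            fails += 1
        eps += step
    return best_eps

def adjust_min_pts(X, truth, best_eps, max_fails=5):
    # Steps 6 - 7: with the best eps fixed, grow min_samples starting from 3.
    best_mp, best_misses, fails, mp = 3, float("inf"), 0, 3
    while fails < max_fails:
        pred = DBSCAN(eps=best_eps, min_samples=mp).fit_predict(X)
        misses = count_misses(pred, truth)
        if misses < best_misses:
            best_mp, best_misses, fails = mp, misses, 0
        else:
            fails += 1
        mp += 1
    return best_mp

# Tiny demo on labeled synthetic blobs (illustrative data).
X_demo, y_demo = make_blobs(n_samples=200, centers=3, cluster_std=0.5,
                            random_state=0)
eps_best = adjust_eps(X_demo, y_demo)
mp_best = adjust_min_pts(X_demo, y_demo, eps_best)
```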
      <p>Obviously, running the DBSCAN algorithm on the remaining subsets can be executed in parallel. Moreover, as the hyperparameters are adjusted on a labeled subset, the size of the labeled subset can be set to a few different versions, and the adjustment can be executed in parallel on these differently sized subsets [16, 37, 38]. Differently sized subsets are obtained by changing the number of subsets.</p>
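      <p>The parallel clustering of the remaining subsets (step 5) can be sketched as follows; the even chunking and the thread pool are illustrative assumptions (a process pool may suit fully CPU-bound workloads better).</p>

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_subset(subset, eps, min_pts):
    # Each remaining subset is clustered independently with the
    # hyperparameters adjusted on the labeled subset.
    return DBSCAN(eps=eps, min_samples=min_pts).fit_predict(subset)

def parallel_dbscan(X, eps, min_pts, n_subsets=4):
    # Even chunking stands in for the paper's spatial division scheme.
    subsets = np.array_split(X, n_subsets)
    with ThreadPoolExecutor(max_workers=n_subsets) as pool:
        futures = [pool.submit(cluster_subset, s, eps, min_pts)
                   for s in subsets]
        return [f.result() for f in futures]

# Tiny demo on random planar points (illustrative data).
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(400, 2))
parts = parallel_dbscan(X_demo, eps=0.3, min_pts=5)
```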
      <p>First of all, the DBSCAN parallelization is expected to significantly reduce the computation time. Secondly, the clustering accuracy is believed to remain comparable to that of the DBSCAN algorithm itself, even if the adjusted hyperparameters are used in it. The correctness of these hypotheses is to be ascertained or falsified by computational experiments. A specificity of such experiments is that the dataset can be visualized, with a better or worse extent of cluster appearance and distinguishability.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Computational experiments</title>
      <p>The performance of the parallelized DBSCAN algorithm with the hyperparameter adjustment is estimated by running the above-stated algorithm on planar datasets whose potential clusters are closely packed together. There are two types of such datasets: developing and enveloping. Serpentine-like clusters form a developing dataset (Figure 1). Non-overlapping enclosed circles or ellipses modulated by sinusoidal functions form an enveloping dataset (Figure 2). For brevity, these clusters (datasets) will be called serpentine and circular clusters (datasets), respectively.</p>
      <p>A serpentine dataset point is randomly generated using values of normally distributed random variables with zero mean and unit variance for a point belonging to a cluster, where the dataset size is the number of clusters multiplied by the cluster size. Values of two additional normally distributed random variables with zero mean and unit variance are also used for this dataset. The horizontal component is given by (<xref ref-type="bibr" rid="ref1">1</xref>), and the vertical component by (<xref ref-type="bibr" rid="ref2">2</xref>), with auxiliary parameters defined by (<xref ref-type="bibr" rid="ref3">3</xref>) — (<xref ref-type="bibr" rid="ref5">5</xref>). One parameter is chosen randomly between 0.1 and 1; two cluster parameters are chosen independently between 3 and 4; another parameter is chosen independently between 0.01 and 0.1; two further parameters are each chosen randomly between 0.0005 and 0.005 with a step, whereupon each is set negative under a sign condition; the noise magnitude is varied between 0.1 and 0.5 with a step of 0.1.</p>
      <p>The serpentine dataset by (<xref ref-type="bibr" rid="ref1">1</xref>) — (<xref ref-type="bibr" rid="ref5">5</xref>) is generated with three to 18 clusters, for five versions of noise magnitude, where each cluster contains 1000 to 10000 points with a step of 1000 points.</p>
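      <p>Since equations (1) through (5) are not reproduced here, a generic stand-in generator of serpentine-like (sinusoidal band) clusters may look as follows; the band shapes and parameters are illustrative assumptions, not the paper’s formulas.</p>

```python
import numpy as np

def make_serpentine(n_clusters=3, cluster_size=1000, noise=0.1, seed=0):
    # Parallel sinusoidal bands with Gaussian noise: a generic stand-in for
    # the paper's serpentine dataset, whose true generator uses (1) - (5).
    rng = np.random.default_rng(seed)
    points, labels = [], []
    for c in range(n_clusters):
        t = rng.uniform(0.0, 4.0 * np.pi, cluster_size)
        x = t + noise * rng.standard_normal(cluster_size)
        y = np.sin(t) + 2.0 * c + noise * rng.standard_normal(cluster_size)
        points.append(np.column_stack([x, y]))
        labels.append(np.full(cluster_size, c))
    return np.vstack(points), np.concatenate(labels)

X, y = make_serpentine()
```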
      <p>Clusters contain the same number of points. The dataset is divided into subsets so that each cluster contains the same number of points within every subset, and one subset is clustered with known labels. Therefore, for a single generation of the serpentine dataset, there are 10 versions of the dataset size, 16 versions of the dataset cluster number, five versions of noise magnitude, and five versions of the dataset division. Applied to a dataset, the parallelized DBSCAN algorithm returns the best neighborhood radius, the best minimum number of neighboring points, the number of outliers, the number of points incorrectly labeled as non-existing clusters (their labels are greater than the number of clusters, as if there were one or a few other clusters), and the number of points incorrectly labeled as existing clusters (their labels, lying between 1 and the number of clusters, are factually incorrect or confused). The total number of missed (incorrectly labeled) points is the sum of these counts, whose percentage is taken with respect to the dataset size.</p>
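      <p>Given ground-truth and predicted labels, the counts described above can be computed roughly as follows; the direct label alignment (0-based predicted labels matching the ground truth) is a simplifying assumption, since matching predicted labels to true ones is itself nontrivial.</p>

```python
import numpy as np

def miss_statistics(pred, truth):
    # Outliers (label -1), points assigned to non-existing clusters (labels
    # beyond the true cluster count), points confused among existing clusters,
    # and the percentage of all missed points. Assumes 0-based predicted
    # labels already aligned with the ground truth (a simplification).
    k = len(set(truth.tolist()))             # true number of clusters
    outliers = int(np.sum(pred == -1))
    non_existing = int(np.sum(pred >= k))
    existing = (pred >= 0) & (pred < k)
    confused = int(np.sum(existing & (pred != truth)))
    missed = outliers + non_existing + confused
    return outliers, non_existing, confused, 100.0 * missed / len(truth)

stats = miss_statistics(np.array([0, -1, 2, 0]), np.array([0, 0, 1, 1]))
```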
      <p>The serpentine dataset generation is repeated 100 times, whereupon the results are averaged over the 100 repetitions. The averages are presented as an array whose entry is a set of the quantities (<xref ref-type="bibr" rid="ref6">6</xref>) — (<xref ref-type="bibr" rid="ref9">9</xref>).</p>
      <p>Another array presents averages of the computation time. To compare results (<xref ref-type="bibr" rid="ref9">9</xref>) with those produced by the DBSCAN algorithm itself, the parallelized DBSCAN algorithm with the hyperparameter adjustment is also used, but configured so that the original algorithm is reproduced. The original DBSCAN algorithm results are stored as an array with (<xref ref-type="bibr" rid="ref9">9</xref>). Another array presents averages of the DBSCAN computation time.</p>
      <p>Comparative computation time statistics for the planar serpentine dataset are presented in Table 1, where every entry is an averaged ratio of the original DBSCAN algorithm computation time to the parallelized DBSCAN algorithm computation time. The ratio is a function of the dataset size and the number of dataset clusters: one matrix is preliminarily averaged over noise, another over the dataset size, and another over the number of dataset clusters. The advantage of the parallelized DBSCAN algorithm is quite obvious: it is much faster, while, prudently speaking, the speedup minimum is above 10 times and the speedup maximum exceeds 100 times. On overall average, the parallelized DBSCAN is 97.21 times faster.</p>
      <sec id="sec-4-1">
        <title>Table 1 ratio minima: 13.78549, 52.49892, 22.08231</title>
      </sec>
      <sec id="sec-4-2">
        <title>Table 1 ratio maxima: 157.3137, 117.2529, 139.2775</title>
        <p>Comparative performance statistics for the planar serpentine dataset are presented in Table 2, where rows contain percentages, “Five intervals” stands for the parallelized DBSCAN with five versions of the dataset division, and “Best interval” stands for the parallelized DBSCAN with the version at which percentage (<xref ref-type="bibr" rid="ref8">8</xref>) of missed (incorrectly labeled) points is minimal. Herein and below, the DBSCAN accuracy is decomposed into rates for (<xref ref-type="bibr" rid="ref10">10</xref>) and (<xref ref-type="bibr" rid="ref8">8</xref>). The parallelized DBSCAN misses (loses) points at a rate of about 7 %, whereas the rate is above 26 % for the original DBSCAN. Points misidentified as outliers are lost at almost equal rates, although the parallelized DBSCAN has a slight advantage: its rate is below 1.95 %, while the original DBSCAN loses above 2.1 % of points misidentified as outliers. The original DBSCAN incorrectly assigns above 3.1 % of points to non-existing clusters, but this rate is about 2.5 times lower for the parallelized DBSCAN. The original DBSCAN confuses clusters even more, exceeding 21 % of points incorrectly assigned to existing clusters, while this rate for the parallelized DBSCAN is slightly above 3.8 %. When the best interval is selected, the mentioned rates for (<xref ref-type="bibr" rid="ref10">10</xref>) and (<xref ref-type="bibr" rid="ref8">8</xref>) drop below 0.21 %, 0.26 %, 0.73 %, 1.19 %, respectively. However, the worst cases of the rates for (<xref ref-type="bibr" rid="ref10">10</xref>) and (<xref ref-type="bibr" rid="ref8">8</xref>) are not excluded even for the parallelized DBSCAN with the best interval (Figure 3).</p>
      </sec>
      <sec id="sec-4-3">
        <title>Circular and three-dimensional datasets</title>
        <p>A circular dataset point is randomly generated using values of normally distributed random variables with zero mean and unit variance by (<xref ref-type="bibr" rid="ref11">11</xref>) — (<xref ref-type="bibr" rid="ref13">13</xref>), and values of two additional normally distributed random variables with zero mean and unit variance, wherein one parameter is chosen randomly between 0.1 and 3, another parameter is chosen randomly between 2 and 18, each cluster contains 1000 to 10000 points with a step of 1000 points, and the noise magnitude is varied between 0.2 and 1 with a step of 0.2.</p>
        <p>The planar circular dataset by (<xref ref-type="bibr" rid="ref11">11</xref>), (<xref ref-type="bibr" rid="ref12">12</xref>), (<xref ref-type="bibr" rid="ref1">1</xref>), (<xref ref-type="bibr" rid="ref13">13</xref>) is generated with three to eight clusters, for five versions of noise magnitude.</p>
        <p>The remaining parameters of the circular dataset and its processing, including (<xref ref-type="bibr" rid="ref6">6</xref>) — (<xref ref-type="bibr" rid="ref9">9</xref>), are the same as for the serpentine dataset, except for the array size: the second dimension now has six entries, which influences the subsequent arrays upon averaging, minimizing, and maximizing.</p>
        <p>Presented in Table 3, the comparative computation time statistics for the planar circular dataset resemble those in Table 1 for the planar serpentine dataset. Thus, the parallelized DBSCAN speedup minimum is, prudently speaking, above 15 times and the speedup maximum exceeds 125 times, with roughly the same standard deviations. On overall average, the parallelized DBSCAN is 99.67 times faster (in particular, because the maximum number of clusters here is smaller than that for the planar serpentine dataset).</p>
        <p>Presented in Table 4, the comparative performance statistics for the planar circular dataset confirm the advantage of the parallelized DBSCAN algorithm. The parallelized DBSCAN loses points at a rate of about 7.12 % (slightly higher than that for the planar serpentine dataset), whereas the rate is 11.85 % for the original DBSCAN, 2.25 times lower than its rate on the planar serpentine dataset. Points misidentified as outliers are lost at rates of 1 % and 1.42 % by the original and parallelized DBSCAN algorithms, respectively. So, the parallelized DBSCAN may lose more circularly located points than the original DBSCAN. The original DBSCAN incorrectly assigns above 5.59 % of points to non-existing clusters, but this rate is about 4.26 times lower for the parallelized DBSCAN. The cluster confusion is at similar rates: 5.25 % and 4.39 % by the original and parallelized DBSCAN algorithms, respectively. When the best interval is selected, the mentioned rates for (<xref ref-type="bibr" rid="ref10">10</xref>) and (<xref ref-type="bibr" rid="ref8">8</xref>) drop below 0.7 %, 0.3 %, 1.97 %, 2.96 %, respectively. These rates are clearly higher than those for the planar serpentine dataset. Nevertheless, the planar circular dataset worst case (Figure 4) is not a complete fail like the planar serpentine dataset worst case is.</p>
        <p>It is worth noting that the dataset worst case in Figure 4 has far more variety in its structure than the worst case in Figure 3. Despite this, the best-interval parallelized DBSCAN performs much better than the original DBSCAN. The latter has 73.7 % of missed points versus 44.9 % of points missed by the parallelized DBSCAN (Figure 5). The percentage of outliers, however, is 13.1 % for the parallelized DBSCAN, whereas the original DBSCAN leaves 7.225 % of points as outliers here. Meanwhile, the original DBSCAN assigns 59.925 % of points to non-existing clusters, and this rate is just 8.3 % for the parallelized DBSCAN (the black-square marked points are clearly seen in Figure 5). Due to this, the cluster confusion is lower for the original DBSCAN: 6.55 % versus 23.5 % for the parallelized DBSCAN.</p>
        <p>In this particular example, the parallelized DBSCAN performance significantly worsens as the dataset is divided into fewer subsets. Thus, with a larger number of subsets the rates for (<xref ref-type="bibr" rid="ref10">10</xref>) are 6.7625 %, 13.1625 %, 28.15 %, but with fewer subsets the rates for (<xref ref-type="bibr" rid="ref10">10</xref>) are 5.9875 %, 26.9125 %, 30.7625 % (Figure 6). Furthermore, with still fewer subsets these rates are 52.9625 %, 1.1875 %, 3.2625 %, 48.5125 % and 69.25 %, 0.7875 %, 1.0125 %, 67.45 %, respectively (Figure 7), i. e. the rate of cluster confusion grows while the rate of outliers drops. The rate of points assigned to non-existing clusters becomes lower as the subsets are made larger (or, in other words, the number of subsets is made smaller).</p>
        <p>For creating three-dimensional serpentine-like clusters, a dataset point is randomly generated using values of normally distributed random variables with zero mean and unit variance and values of uniformly distributed random variables on an interval, by (<xref ref-type="bibr" rid="ref1">1</xref>), (<xref ref-type="bibr" rid="ref13">13</xref>) and (<xref ref-type="bibr" rid="ref14">14</xref>) — (18), together with values of three additional normally distributed random variables with zero mean and unit variance. Herein, one parameter is chosen randomly between 0.1 and 3; another is chosen randomly between 2 and 18 with a step of 1; a further parameter is chosen randomly between 0 and an upper bound with a step; sign conditions apply as for the planar datasets; the noise magnitude is varied between 0.1 and 0.5 with a step of 0.1. Three-dimensional serpentine-like clusters form a developing dataset whose visualization is far more complicated than that of the planar circular dataset (Figure 8).</p>
        <p>The three-dimensional serpentine dataset by (<xref ref-type="bibr" rid="ref14">14</xref>) — (16), (<xref ref-type="bibr" rid="ref1">1</xref>), (<xref ref-type="bibr" rid="ref13">13</xref>), (17), (18) is generated with three to eight clusters, for five versions of noise magnitude, where each cluster contains 1000 to 10000 points with a step of 1000 points.</p>
        <p>
          The remaining parameters of the three-dimensional serpentine dataset and its processing
including (
          <xref ref-type="bibr" rid="ref6">6</xref>
          ) — (
          <xref ref-type="bibr" rid="ref9">9</xref>
          ) are the same as for the planar circular dataset.
        </p>
        <p>Presented in Table 5, the comparative computation time statistics for the three-dimensional serpentine dataset do not resemble those in Tables 1 and 3 for the planar datasets. Here, the parallelized DBSCAN speedup minimum is, prudently speaking, above 40 times and the speedup maximum exceeds 600 times, although the standard deviations are higher than those for the planar datasets. On overall average, the parallelized DBSCAN over three-dimensional serpentine datasets is 342 times faster.</p>
        <p>Finally, presented in Table 6, the comparative performance statistics for the three-dimensional serpentine dataset again confirm the advantage of the parallelized DBSCAN algorithm, but only if the best subset size (best interval) is selected. The parallelized DBSCAN loses points at a higher rate (roughly 12.5 %) than the original DBSCAN does (roughly 8.6 %), but the best-interval parallelized DBSCAN misses points at a rate of below 4.8 %. The original DBSCAN rarely “sees” outliers, with an average rate below 0.1 % and a maximum below 12.5 %, but it “sees” non-existing clusters at a rate of 2.97 % and confuses clusters at a rate of 5.57 %. The respective rates of non-existing clusters and cluster confusion for the parallelized DBSCAN are 5.37 % and 4.39 %, whereas they drop to 2.086 % and 2.065 % with the best-interval parallelized DBSCAN.</p>
        <p>The worst three-dimensional case is in Figure 8, where 76.725 % of points are missed by the best-interval parallelized DBSCAN with the dataset division into 100 subsets. Meanwhile, the original DBSCAN performs at a rate of 24.25 % on this dataset. It is worth noting that in this case the noise magnitude is minimal. It appears paradoxical, but at greater noise magnitudes the best-interval parallelized DBSCAN performs even worse on some instances of the three-dimensional dataset with the maximum number of clusters. The particular dataset in Figure 8 turns out to be the worst case because the original DBSCAN performs far better on it (the result could even be acceptable in some practical situations), and thus the difference between the original and best-interval parallelized DBSCAN algorithms is paradoxically huge.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion and conclusion</title>
      <p>Compared to the performance of the original DBSCAN algorithm, the parallelized DBSCAN performs more accurately while being extremely fast. The latter is straightforwardly inferred from Tables 1, 3, 5, where the average speedup exceeds 80 times for planar datasets and 300 times for three-dimensional datasets. The gain in accuracy is not that unambiguous. While the parallelized DBSCAN outperforms the original DBSCAN on planar datasets (Tables 2 and 4), even without selecting the best size of the subset (called the best interval), it underperforms on three-dimensional datasets without selecting the best-interval result. Only the best-interval parallelized DBSCAN outperforms the original DBSCAN on three-dimensional datasets. This is likely due to a higher complexity of the structure of such datasets (see Figure 8) compared to the structure of planar datasets (see Figures 1 — 4). In other words, the DBSCAN parallelization by spatial dataset division and hyperparameter adjustment is less efficient on datasets with more complex structure. However, the series of computational experiments has exposed a possibility of parallelized DBSCAN underperformance for simple-structured datasets as well (see Figure 3 with the worst case of the planar serpentine dataset, although its structure is far simpler than the structure of the planar circular dataset in Figure 4).</p>
      <p>The growing number of clusters may additionally affect the parallelized DBSCAN
performance. Nevertheless, as the dataset size and the number of clusters increase,
separately or together, the parallelized DBSCAN outperforms the original DBSCAN by a wider margin.
The gain in accuracy ranges from 4.9072 to 276.8096 times for the planar serpentine dataset,
when the noise-averaged ratio of the original DBSCAN accuracy to the parallelized
DBSCAN accuracy is considered as a function of the dataset size and the number of clusters.
The accuracy gain drops drastically for the planar circular dataset, ranging from 1.4636 to
5.5163 times. For the three-dimensional serpentine dataset, the accuracy gain ranges from
0.0004 to 10.9631, i. e. it is below 1 for three and four clusters (just 0.1007 and 0.3179 for
three and four clusters, respectively). For five clusters and more, the parallelized DBSCAN
outperforms the original DBSCAN as the dataset size increases. The poorer
performance on the fewest clusters is an obvious limitation of the parallelized
DBSCAN.</p>
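The gain figures above can be reproduced from per-run statistics. A minimal sketch, assuming accuracy is tabulated as a missed-point rate over a grid of (dataset size, number of clusters, noise magnitude) and assuming the noise averaging is done before taking the ratio (the exact averaging order is not stated in the text):

```python
import numpy as np

def accuracy_gain(miss_original, miss_parallel):
    """Gain of the parallelized DBSCAN over the original one: the
    noise-averaged missed-point rate of the original DBSCAN divided by
    that of the parallelized DBSCAN.  Inputs are arrays indexed by
    (dataset size, number of clusters, noise magnitude); values above 1
    mean the parallelized version misses fewer points on average."""
    miss_original = np.asarray(miss_original, dtype=float)
    miss_parallel = np.asarray(miss_parallel, dtype=float)
    # average over the trailing noise axis first, then take the ratio
    return miss_original.mean(axis=-1) / miss_parallel.mean(axis=-1)
```

The result is a gain surface over dataset size and cluster count, from which the minimum and maximum quoted in the text would be read off.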
      <p>Another limitation is the poorer performance on datasets with circularity. Division into
intervals is unnatural for circular datasets, whose natural division would be into circular sectors,
especially for three-dimensional datasets. Moreover, it is hard to divide such datasets so that every
subset contains (approximately) the same number of points of each cluster. In addition,
the DBSCAN parallelization efficiency cannot be estimated without known ground-truth
labels.</p>
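The circular-sector division mentioned above is not implemented in this work; a minimal sketch of what it could look like for a planar dataset (the centering on the centroid and the equal-angle cut are assumptions):

```python
import numpy as np

def sector_split(points, n_sectors):
    """Divide a planar dataset into equal angular sectors around its
    centroid, a natural alternative to interval division for circular
    datasets.  Returns a list of n_sectors point subsets."""
    points = np.asarray(points, dtype=float)
    centered = points - points.mean(axis=0)
    angles = np.arctan2(centered[:, 1], centered[:, 0])   # in (-pi, pi]
    edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
    # index of the sector each point falls into
    idx = np.clip(np.searchsorted(edges, angles, side='right') - 1,
                  0, n_sectors - 1)
    return [points[idx == s] for s in range(n_sectors)]
```

Note that equal angles do not imply equal point counts per sector, which is exactly the balancing difficulty the text points out.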
      <p>The DBSCAN parallelization efficiency expectedly deteriorates when the density is changeable.
The dataset in Figure 3 is seemingly such a case, where each serpentine has significantly thinner
and thicker intervals observable via zoom-in. This is the most common drawback of
density-based clustering methods, although they all, and the DBSCAN algorithm in particular, are
intended to handle varying densities of points. However, the datasets used for the
computational experiments have density-modulated regions (i. e., regions whose point density
itself varies), which have served as a close-to-the-worst-case scenario in
order to reveal weaknesses of the suggested DBSCAN parallelization.</p>
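A rough illustration of the interval-division scheme discussed throughout this section, with the per-subset hyperparameter adjustment reduced to a gradual enlargement of the neighborhood radius. Everything here is a sketch under assumptions: cluster_fn stands in for one DBSCAN run on a subset (in practice, e.g., a library DBSCAN returning -1 for noise), the parameter names are hypothetical, and a real implementation would dispatch the subsets to parallel workers and adjust the minimum-neighbors number as well.

```python
import numpy as np

def parallelized_dbscan(points, n_subsets, cluster_fn,
                        eps0=0.05, eps_step=0.05, max_tries=10):
    """Cut the range of the first coordinate into n_subsets equal
    intervals, cluster each subset independently (sequentially here,
    in parallel in a real setting), enlarging the neighborhood radius
    until the subset is no longer labeled as pure noise, and offset
    labels so they stay unique across subsets.
    cluster_fn(subset_points, eps) must return one label per point,
    with -1 meaning noise."""
    points = np.asarray(points, dtype=float)
    x = points[:, 0]
    span = x.max() - x.min() or 1.0
    # index of the equal-width interval each point falls into
    idx = np.clip(((x - x.min()) / span * n_subsets).astype(int),
                  0, n_subsets - 1)
    labels = np.full(points.shape[0], -1, dtype=int)
    offset = 0
    for s in range(n_subsets):
        mask = idx == s
        if not mask.any():
            continue
        eps = eps0
        sub = np.full(int(mask.sum()), -1, dtype=int)
        for _ in range(max_tries):        # gradual hyperparameter adjustment
            sub = np.asarray(cluster_fn(points[mask], eps))
            if (sub != -1).any():         # some points got clustered
                break
            eps += eps_step
        keep = sub != -1
        out = sub.copy()
        out[keep] = sub[keep] + offset    # keep labels unique across subsets
        labels[np.flatnonzero(mask)] = out
        if keep.any():
            offset += sub.max() + 1
    return labels
```

Since the subsets never exchange points, a cluster straddling an interval boundary is split into two labels; this is precisely the source of the missed points discussed for the serpentine and circular datasets.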
      <p>
        Overall, this is a promising approach to speeding up arbitrary-shape clustering without losing
accuracy. Posed as local optimization versus global optimization, the parallelized DBSCAN
performs well on serpentine datasets, but is not limited to them. The hyperparameters, the
neighborhood radius and the minimum number of neighbors, are adjusted gradually with
certain steps, which are themselves tunable (primarily the neighborhood radius increment step).
Apart from parallelizing over the subsets of the divided dataset, the suggested modification of
the DBSCAN algorithm can be run in parallel for multiple labeled subset sizes, as well as for a
few versions of the neighborhood radius increment step.
      </p>
      <p>
        Pedrycz, R. Tadeusiewicz, J. M. Zurada (Eds.), Artificial Intelligence and Soft Computing, ICAISC 2021, Lecture Notes in Computer Science, vol. 12854, Springer, Cham, 2021. doi:10.1007/978-3-030-87986-0_32.
[15] J. A. Hartigan, M. A. Wong, Algorithm AS 136: A k-means clustering algorithm, Journal of the Royal Statistical Society, Ser. C 28 (1) (1979) 100–108. doi:10.2307/2346830.
[16] M. E. Celebi, H. A. Kingravi, P. A. Vela, A comparative study of efficient initialization methods for the k-means clustering algorithm, Expert Systems with Applications 40 (1) (2013) 200–210. doi:10.1016/j.eswa.2012.07.021.
[17] R. J. G. B. Campello, D. Moulavi, A. Zimek, J. Sander, Hierarchical density estimates for data clustering, visualization, and outlier detection, ACM Transactions on Knowledge Discovery from Data 10 (1) (2015) 1–51. doi:10.1145/2733381.
[18] H.-P. Kriegel, P. Kröger, J. Sander, A. Zimek, Density-based clustering, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 1 (3) (2011) 231–240. doi:10.1002/widm.30.
[19] Y. Zuo, Z. Hu, S. Yuan et al., Identification of convective and stratiform clouds based on the improved DBSCAN clustering algorithm, Advances in Atmospheric Sciences 39 (2022) 2203–2212. doi:10.1007/s00376-021-1223-7.
[20] G. Habib, S. Qureshi, Convolutional neural networks (CNN) and DBSCAN clustering for SARs-CoV challenges: complete deep learning solution, in: D. Gupta, A. Khanna, S. Bhattacharyya, A. E. Hassanien, S. Anand, A. Jaiswal (Eds.), International Conference on Innovative Computing and Communications, Lecture Notes in Networks and Systems, vol. 471, Springer, Singapore, 2023. doi:10.1007/978-981-19-2535-1_35.
[21] M. Ankerst, M. M. Breunig, H.-P. Kriegel, J. Sander, OPTICS: Ordering points to identify the clustering structure, in: ACM SIGMOD International Conference on Management of Data, ACM Press, 1999, pp. 49–60.
[22] R. J. G. B. Campello, D. Moulavi, A. Zimek, J. Sander, A framework for semi-supervised and unsupervised optimal extraction of clusters from hierarchies, Data Mining and Knowledge Discovery 27 (3) (2013) 344. doi:10.1007/s10618-013-0311-4.
[23] Z. F. Wang, P. Y. Yuan, Z. Y. Cao et al., Feature reduction of unbalanced data classification based on density clustering, Computing 106 (2024) 29–55. doi:10.1007/s00607-023-01206-5.
[24] Z. Qi, H. Wang, Z. Dong, Density-Based Clustering for Incomplete Data, in: Dirty Data Processing for Machine Learning, Springer, Singapore, 2024. doi:10.1007/978-981-99-7657-7_5.
[25] H. Zhang, B. Liu, P. Cui, Y. Sun, Y. Yang, S. Guo, An outlier detection algorithm for electric power data based on DBSCAN and LOF, in: Q. Liu, X. Liu, L. Li, H. Zhou, H. H. Zhao (Eds.), Proceedings of the 9th International Conference on Computer Engineering and Networks, Advances in Intelligent Systems and Computing, vol. 1143, Springer, Singapore, 2021. doi:10.1007/978-981-15-3753-0_110.
[26] D. Rangaprakash, T. Odemuyiwa, D. Narayana Dutt et al., Density-based clustering of static and dynamic functional MRI connectivity features obtained from subjects with cognitive impairment, Brain Informatics 7 (2020) 19. doi:10.1186/s40708-020-00120-2.
[27] H. T. Nguyen, T. H. Phan, L. T. T. Pham et al., Clustering-based visualizations for diagnosing diseases on metagenomic data, Signal, Image and Video Processing 18 (2024) 5685–5699. doi:10.1007/s11760-024-03264-4.
[28] P. Sarang, Density-Based Clustering, in: Thinking Data Science, The Springer Series in Applied Machine Learning, Springer, Cham, 2023. doi:10.1007/978-3-031-02363-7_12.
[29] Y. R. Pan, Y. H. Xia, L. J. Long et al., Power-line extraction and modelling from 3D point clouds data based on K-D tree DBSCAN algorithm, Journal of Electrical Engineering &amp; Technology 19 (2024) 3587–3597. doi:10.1007/s42835-023-01641-6.
[30] C. L. Valenzuela, A. J. Jones, Evolutionary divide and conquer (I): A novel genetic approach to the TSP, Evolutionary Computation 1 (4) (1993) 313–333. doi:10.1162/evco.1993.1.4.313.
[31] V. V. Romanuke, Traveling salesman problem parallelization by solving clustered subproblems, Foundations of Computing and Decision Sciences 48 (4) (2023) 453–481. doi:10.2478/fcds-2023-0020.
[32] V. V. Romanuke, Deep clustering of the traveling salesman problem to parallelize its solution, Computers &amp; Operations Research 165 (2024) 106548. doi:10.1016/j.cor.2024.106548.
[33] C. Cortes, V. Vapnik, Support-vector networks, Machine Learning 20 (3) (1995) 273–297. doi:10.1007/BF00994018.
[34] A. Ben-Hur, D. Horn, H. Siegelmann, V. Vapnik, Support vector clustering, Journal of Machine Learning Research 2 (2001) 125–137.
[35] V. V. Romanuke, Optimization of a dataset for a machine learning task by clustering and selecting closest-to-the-centroid objects, Herald of Khmelnytskyi national university. Technical sciences 6 (1) (2018) 263–265.
[36] V. V. Romanuke, Optimal partitioning of an initial dataset into subdatasets to be clustered for getting rid off the dataset superfluities for a machine learning task, Herald of Khmelnytskyi national university. Technical sciences 6 (2) (2018) 213–215.
[37] V. V. Romanuke, Random centroid initialization for improving centroid-based clustering, Decision Making: Applications in Management and Engineering 6 (2) (2023) 734–746. doi:10.31181/dmame622023742.
[38] V. V. Romanuke, Parallelization of the traveling salesman problem by clustering its nodes and finding the best route passing through the centroids, Applied Computer Systems 28 (2) (2023) 189–202. doi:10.2478/acss-2023-0019.
[39] K. C. Bhupathi, H. Ferdowsi, Sharp curve detection of autonomous vehicles using DBSCAN and augmented sliding window techniques, International Journal of Intelligent Transportation Systems Research 20 (2022) 651–671. doi:10.1007/s13177-022-00317-1.
[40] Y. Zack, Cluster analysis for multidimensional objects in fuzzy data conditions, System Research and Information Technologies 2 (2021) 18–34. doi:10.20535/SRIT.2308-8893.2021.2.02.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-P.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sander</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>A density-based algorithm for discovering clusters in large spatial databases with noise</article-title>
          ,
          <source>in: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96)</source>
          , AAAI Press,
          <year>1996</year>
          , pp.
          <fpage>226</fpage>
          -
          <lpage>231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. J. G. B.</given-names>
            <surname>Campello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Moulavi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sander</surname>
          </string-name>
          ,
          <article-title>Density-based clustering based on hierarchical density estimates</article-title>
          , in:
          <string-name>
            <given-names>J.</given-names>
            <surname>Pei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Tseng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cao</surname>
          </string-name>
          , H. Motoda (Eds.),
          <source>Advances in Knowledge Discovery and Data Mining</source>
          , Vol.
          <volume>7819</volume>
          , Springer Berlin Heidelberg,
          <year>2013</year>
          , pp.
          <fpage>160</fpage>
          -
          <lpage>172</lpage>
          . doi:10.1007/978-3-642-37456-2_14.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gou</surname>
          </string-name>
          ,
          <article-title>A fast DBSCAN algorithm using a bi-directional HNSW index structure for big data</article-title>
          ,
          <source>International Journal of Machine Learning and Cybernetics</source>
          <volume>15</volume>
          (
          <year>2024</year>
          )
          <fpage>3471</fpage>
          -
          <lpage>3494</lpage>
          . doi:10.1007/s13042-024-02104-8.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Schubert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sander</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN</article-title>
          ,
          <source>ACM Transactions on Database Systems 42 (3)</source>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          . doi:10.1145/3068335.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sander</surname>
          </string-name>
          ,
          <article-title>Generalized Density-Based Clustering for Spatial Data Mining</article-title>
          , Herbert Utz Verlag, München,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wan</surname>
          </string-name>
          et al.,
          <article-title>Load spectra extrapolation by bandwidth-optimized kernel density estimation based on DBSCAN algorithm</article-title>
          ,
          <source>Journal of Vibration Engineering &amp; Technologies</source>
          <volume>12</volume>
          (
          <year>2024</year>
          )
          <fpage>1445</fpage>
          -
          <lpage>1456</lpage>
          . doi:10.1007/s42417-023-00919-3.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sander</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-P.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Density-based clustering in spatial databases: The algorithm GDBSCAN and its applications</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          <volume>2</volume>
          (2) (
          <year>1998</year>
          )
          <fpage>169</fpage>
          -
          <lpage>194</lpage>
          . doi:10.1023/A:1009745219419.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>An improved DBSCAN algorithm for hazard recognition of obstacles in unmanned scenes</article-title>
          ,
          <source>Soft Computing</source>
          <volume>27</volume>
          (
          <year>2023</year>
          )
          <fpage>18585</fpage>
          -
          <lpage>18604</lpage>
          . doi:10.1007/s00500-023-09319-x.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Romanuke</surname>
          </string-name>
          ,
          <article-title>Speedup of the k-means algorithm for partitioning large datasets of flat points by a preliminary partition and selecting initial centroids</article-title>
          ,
          <source>Applied Computer Systems 28 (1)</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . doi:10.2478/acss-2023-0001.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Romanuke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Merinova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Yehoshyna</surname>
          </string-name>
          ,
          <article-title>Optimized centroid-based clustering of dense nearly-square point clouds by the hexagonal pattern</article-title>
          ,
          <source>Electrical, Control and Communication Engineering</source>
          <volume>19</volume>
           (1) (
          <year>2023</year>
          )
          <fpage>29</fpage>
          -
          <lpage>39</lpage>
          . doi:10.2478/ecce-2023-0005.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <article-title>Bus station location selection method based on DBSCAN-DPC clustering algorithm</article-title>
          , in: Y. Qu,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Niu</surname>
          </string-name>
          , W. Fu (Eds.),
          <source>Proceedings of 3rd 2023 International Conference on Autonomous Unmanned Systems (ICAUS 2023)</source>
          ,
          <source>Lecture Notes in Electrical Engineering</source>
          , vol.
          <volume>1177</volume>
          , Springer, Singapore,
          <year>2024</year>
          . doi:10.1007/978-981-97-1103-1_13.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Mwakapesa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          et al.,
          <article-title>Assessment of landslide susceptibility using DBSCAN-AHD and LD-EV methods</article-title>
          ,
          <source>Journal of Mountain Science</source>
          <volume>19</volume>
          (
          <year>2022</year>
          )
          <fpage>184</fpage>
          -
          <lpage>197</lpage>
          . doi:10.1007/s11629-020-6491-7.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <article-title>Density-Based Clustering for Incomplete Data</article-title>
          ,
          <source>in: Dirty Data Processing for Machine Learning</source>
          , Springer, Singapore,
          <year>2024</year>
          . doi:10.1007/978-981-99-7657-7_5.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Starczewski</surname>
          </string-name>
          ,
          <article-title>A Novel Approach to Determining the Radius of the Neighborhood Required for the DBSCAN Algorithm</article-title>
          , in:
          <string-name>
            <given-names>L.</given-names>
            <surname>Rutkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Scherer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Korytkowski</surname>
          </string-name>
          , W.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>