<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Light Invariant Lane Detection Method Using Advanced Clustering Techniques</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksandr Karavaev</string-name>
          <email>alexkaravaev@protonmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rami Al-Naim</string-name>
          <email>rami.naim2010@yandex.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITMO University</institution>
          ,
          <addr-line>Saint-Petersburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we propose a novel approach to detecting road lanes from a video stream in bad-light road scenarios. The main focus of this article is the introduction of a new image binarization method in an uncommon color space, followed by an improved density-based hierarchical clustering algorithm called HDBSCAN. These techniques allow detecting lane boundaries even in low-light scenarios with a robust and almost parameter-free setup.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Keywords—Lane detection, Computer vision, HDBSCAN, DBSCAN, Image Thresholding</p>
      <p>The aim of this paper is to design a pipeline that is low-cost in terms of processing time for robust road marking detection in low-light road scenarios.</p>
      <p>Self-driving cars are actively being introduced into people's lives. Their number, and the complexity of the software of their on-board computers, are increasing [1]. The cars of the Waymo company alone have managed to drive twenty million miles in self-driving mode¹. Nowadays most companies rely on deep neural network solutions for the perception module of the car [2]. Moreover, most of them use radars and lidars, which allow perceiving the environment even in dim and dark road scenarios, unlike a usual camera [3]. Despite the fact that these approaches are dominant in the field, they have major limitations that restrict their full adoption in the industry and interfere with scalability to more users [4].</p>
      <p>Training deep neural networks requires a powerful machine equipped with many video cards in order to achieve good precision and recall of the output detections. Furthermore, even after successfully training and deploying such a big model, developers need to install high-performance computers in the car, because real-time execution is essential in the case of a self-driving car. This solution is more difficult to scale, not to mention the trend towards small-size components and reducing their cost. Interest in single-board computers is gradually rising because of their best qualities: compactness and price [5].</p>
      <p>A lot of disputes are ongoing in the self-driving car community right now. Some researchers weigh the advantages and disadvantages of using cameras or lidars [6]. The experience of the Tesla company shows that a camera-only solution is possible². The main disadvantage of lidar is its price, which in some cases is comparable to the price of the car itself. Moreover, most solutions require a lot more than one lidar on the car (up to 6 small and big lidars, mounted on various sides of the car). Some researchers and forecasters insist that the price of lidars will eventually fall³. However, others compare the camera with the human visual cortex, which can reliably identify various objects and detect the distance to them⁴.</p>
      <p>¹K. Wiggers, "Waymo's autonomous cars have driven 20 million miles on public roads", 6 Jan 2020, https://venturebeat.com/2020/01/06/waymos-autonomous-cars-have-driven-20-million-miles-on-public-roads/</p>
      <p>This is our motivation for focusing on the development of a system that detects road lane markings with cameras. We believe that such a solution should meet the following requirements:</p>
      <p>The system should be cheap to maintain and produce, for the purpose of high scalability.</p>
      <p>The system should be robust to a rapidly changing light environment.</p>
      <p>The system should be fast on low-power computers and shouldn't require a lot of computational power.</p>
    </sec>
    <sec id="sec-2">
      <title>Main proposals of our paper:</title>
      <p>Image processing in the CIE L*a*b color space, which decreases the impact of lighting on the scene and image.</p>
    </sec>
    <sec id="sec-3">
      <title>New formula for image binarization.</title>
      <p>Using HDBSCAN, a more recent density clustering method, instead of DBSCAN.</p>
    </sec>
    <sec id="sec-4">
      <title>Using color information in clustering.</title>
      <p>Combined together, these methods provide a robust pipeline with few parameters that need to be configured. Thus, the time for configuration is reduced, which is very useful. By the term pipeline, here and afterwards, we mean a set of data processing elements connected in series, where the output of one element is the input of the next one⁵.</p>
      <p>The paper is organized as follows. Section 2 gives a brief review of other papers in the field. Section 3 describes the proposed approach in detail. Section 4 compares the proposed approach with other ones. Finally, Section 5 gives the conclusion.</p>
      <p>II. RELATED WORK</p>
      <sec id="sec-4-1">
        <title>A. Neural Network approach</title>
        <p>In recent years, neural networks have become a common tool in image processing tasks. In the field of driving an autonomous car, neural networks are often used to detect road markings, obstacles, and road signs. The article [7] describes a method for detecting road marking lines based on neural networks. The authors provide experimental data indicating a high accuracy of road marking line detection and a high image processing speed. However, the experiments were conducted on equipment whose cost is approximately $2,000, and such a price may not be acceptable. The article [8] presents experimental data for several algorithms for detecting lanes using neural networks. All algorithms presented in that article use either expensive or specialized equipment for image processing. This can lead to a significant increase in cost, or narrow the scope of the possible applications of the system.</p>
      </sec>
      <sec id="sec-4-2">
        <title>B. Binarization and Processing in a Different Color Spaces</title>
        <p>
          Materials used for carriageway marking lines often have bright colours which differ a lot from the rest of the road's surface. Consequently, some algorithms for road marking line detection utilize this quality and use prior knowledge of colours for thresholding [9], [
          <xref ref-type="bibr" rid="ref15">10</xref>
          ]. For binarization based on beforehand-estimated colour thresholds, the HSV color space is widely used. It shows good results; however, under different illumination of a scene the colors detected by the camera are different [11]. Thus, with changes in the illumination the chromatic values of the pixels in the image may also differ, which leads to a significant amount of noise in the binarized image (Fig. 1). As a result, further applied algorithms for lane detection, such as the Hough transformation [12], may not give the expected result and may fail to find the lanes. Such behavior can be handled by tuning the algorithms' parameters, but this often decreases robustness and reduces the number of scenarios in which the algorithm can be used.
        </p>
        <p>
          Some road lane detection methods use density clustering for finding the lane in the road image. For example, the authors of paper [
          <xref ref-type="bibr" rid="ref18">13</xref>
          ] use a similar approach for clustering points belonging to lane markings. However, they use the rather old and outdated clustering algorithm DBSCAN. Besides that, the authors use a different threshold operation for the initial steps of processing the image, namely Otsu binarization [14], which can fail or give a bad result in some road scenarios.
        </p>
        <p>Aside from this paper, clustering in the lane marking scenario is used in the work [15]. In that paper the authors use simple hierarchical clustering and a new method of post-processing the clustering results. For every calculated cluster they calculate its slope line, after which they count the intersections of these slopes with other clusters. The clusters with the most intersections are used later. This improvement filters out clusters that are elongated more along the y axis than along the x axis, and utilizes the geometrical feature of almost every road marking: its segments are located mostly one on top of another along one line. This improvement works well in straight road scenarios; nonetheless, it can fail on turns or in curved road environments.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>III. ROAD MARKING DETECTION</title>
      <p>In this section we examine our algorithm in detail. The full block diagram of the approach can be found in Fig. 2.</p>
    </sec>
    <sec id="sec-6">
      <title>Data: Input video stream</title>
      <p>Result: Clustered image points</p>
      <p>initialization;
while not end of video do
  Read current color frame;
  Convert the image into CIE Lab color space;
  Normalize the L-channel using the MIN-MAX method;
  Calculate the mean and standard deviation of the L-image;
  Calculate the threshold value;
  Binarize the L-channel;
  Image ← pixels selected from the initial color frame where the thresholded image != 0;
  Initialize HDBSCAN with the selected parameters;
  Downscale the image;
  ClusterLabels, LabelProbabilities ← cluster the image with the HDBSCAN algorithm;
  Discard labels with low probability (&lt; 0.75);
  Mask labels onto the image;
  Resize the image with labels to the initial size;
end</p>
    </sec>
    <sec id="sec-7">
      <title>Algorithm 1: Full proposed algorithm</title>
      <sec id="sec-7-1">
        <title>A. Binarization in CIELab Color Space</title>
        <p>It is necessary to define two assumptions in order to choose a threshold value for the binary mask with road markings from an image.</p>
        <p>Assumption 1: Road marking lines take up about 5% of an image.</p>
        <p>Assumption 2: Road marking lines are brighter than the majority of objects in a road image (road surface, roadside, cars, etc.).</p>
        <sec id="sec-7-1-1">
          <title>Input video stream</title>
          <p>[Fig. 2 block diagram: Input video stream → Converting to CIE Lab colorspace → Adaptive thresholding → Combining thresholded binary mask and color mask → HDBSCAN hierarchical clustering → Result]</p>
        </sec>
        <sec id="sec-7-1-2">
          <title>Converting to CIE lab colorspace</title>
          <p>The image with the road segment is captured by the camera in RGB color space. In order to reduce noise in the image we preprocess it by applying a Gaussian blur [16]. In the experiments, a kernel of size 15 was chosen to filter high-frequency noise in the image.</p>
          <p>The next step is converting the image from RGB to the desired color space. We propose to use one of the perceptually uniform color spaces, CIELab [17]. In this color space each pixel is encoded with three values: L, a and b. L describes the brightness of a pixel, in other words, its luminance characteristic. The a and b values describe chromatic characteristics, from green to red and from blue to yellow, respectively.</p>
          <p>Further, we propose to calculate the histogram of the L channel of the image in CIELab color space and normalize it. Considering the aforementioned assumptions, pixels of road marking lines are located in the upper right part of the distribution. For the histogram equalization a max-RGB-like method is used [18]. This approach utilizes the fact that humans perceive color relative to the contrast of the full image, i.e. the difference between the brightest and darkest points of the image. As a result, the histogram is normalized in such a way that its lower boundary is the minimum non-zero brightness value of the image in the L channel, and its upper boundary is the maximum brightness value of the image in the L channel.</p>
          <p>Pixels of road marking lines in the image are located at the rightmost side of the normalized histogram (5% of the whole distribution). In order to choose an appropriate threshold value, the properties of the Gaussian normal distribution and the 3σ rule are used [19]. We calculate the mean μ and standard deviation σ for the normalized histogram of the image and use the Gaussian distribution to estimate the threshold value. This value bounds the 5% of the distribution with the brightest pixels, which represent road marking lines. An example of the approximation of a normalized histogram can be seen in Fig. 3, where the red line represents the normal distribution. The following equation is used for threshold value estimation:</p>
          <p>t = μ + σ(k + σ / (2σu)),   (1)</p>
          <p>where t is the threshold value, μ is the mean of the normalized histogram, σ is the standard deviation of the normalized histogram, σu is the standard deviation of a uniform distribution, and k is a scaling coefficient.</p>
          <p>
            From (1) it is clear that the threshold value lies in the half-interval (2σ; 3σ] and the exact value depends on the standard deviation. The standard deviation of the uniform distribution is used to normalize the standard deviation of the histogram and to accurately determine where within the mentioned interval the threshold value lies. The scaling parameter k is chosen depending on how much of the image area is covered by road marking lines. For real road applications we propose to use k = 2. To illustrate the interval in which the threshold value lies, an image from the Duckietown Project is used (Fig. 4) [
            <xref ref-type="bibr" rid="ref25">20</xref>
            ]. Since road marking lines in this image cover a larger area compared to real road images, the scaling coefficient is set to 1. The group of pixels on the right side of the normalized histogram represents road marking lines. They are inside the threshold marked by a rectangle on the plot.
          </p>
          <p>If the scene of an image is well-illuminated, its contrast is high. Thus, values on the histogram that correspond to the pixels of the road will be located near each other at the left side of the distribution, while values with the intensities of road marking lines will be located at the rightmost part of the histogram. In that case the standard deviation will be rather small, and consequently the ratio σ/σu will also be small. Because of this, the threshold value will be close to 2σ, which guarantees that it will be less than the values of the pixels of bright road marking lines. Contrariwise, when the scene of an image is dark, the histogram might not have distinct groups of pixels of the road or road marking lines. The standard deviation of such a distribution will be greater than in the case described before, so the ratio σ/σu will be relatively large. As a result, the calculated threshold value will be close to 3σ, so it might be greater than the values of the road marking lines, but it reduces the number of fake pixels marked as road marking lines (pixels of the background or the road itself).</p>
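          <p>The contrast behaviour described above can be checked numerically. The following sketch uses synthetic brightness samples (not data from the paper) to compute the Eq. (1) threshold for a high-contrast and a low-contrast scene:</p>

```python
import numpy as np

def eq1_threshold(values, k=2.0):
    """Threshold from Eq. (1), applied to min-max-normalized brightness values."""
    v = (values - values.min()) / max(float(values.max() - values.min()), 1e-9)
    mu, sigma = v.mean(), v.std()
    sigma_u = 1.0 / np.sqrt(12.0)  # standard deviation of the uniform distribution on [0, 1]
    t = mu + sigma * (k + sigma / (2.0 * sigma_u))
    return t, v

rng = np.random.default_rng(0)
# High-contrast scene: road pixels tightly grouped, a few bright marking pixels
bright_scene = np.concatenate([rng.normal(0.10, 0.03, 9500),
                               rng.normal(0.90, 0.03, 500)])
# Dark, low-contrast scene: brightness spread out with no distinct groups
dark_scene = rng.uniform(0.0, 1.0, 10000)

t_bright, v_bright = eq1_threshold(bright_scene)
t_dark, v_dark = eq1_threshold(dark_scene)
# In the high-contrast case sigma is small, so t stays near mu + 2*sigma and the
# marking pixels survive; in the low-contrast case sigma approaches sigma_u and
# t moves toward mu + 3*sigma, cutting off more fake pixels.
```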
          <p>The results of image binarization using the described algorithm can be seen in Fig. 5. Otsu binarization was chosen as the method for comparison because it is one of the most popular methods of threshold value calculation. Our approach to threshold value calculation shows better results: the binary mask contains less noise and registers even the yellow lines of the road marking. This is possible because of the two assumptions at the beginning of this subsection.</p>
        </sec>
      </sec>
      <sec id="sec-7-2">
        <title>B. Hierarchical Clustering</title>
        <p>After the initial processing stage (binarization) it is necessary to determine, from the black-and-white binary image, which pixels belong to road markings and which are just road, miscellaneous noise, or even errors from the binarization algorithm used before. The proposed algorithm is as follows.</p>
        <p>Given that we have the black-and-white binary mask and the original color image, we combine them into one resulting image. Pixels that were previously white receive their color from the original image, and pixels that were plain black remain black.</p>
        <p>This procedure is aimed at improving the subsequent clustering step. It increases the useful information available to the algorithm, because we have not only the intensity of the pixels but also their color, and therefore it is easier to divide the points into meaningful clusters. For example, we can divide into separate groups the road lane and the asphalt, which surrounds the lane on all sides but has a completely different RGB color.</p>
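        <p>This combination step can be sketched as follows (a sketch; the function and variable names are our own, not the paper's):</p>

```python
import numpy as np

def mask_to_features(binary_mask, color_frame):
    """Combine the binary mask with the original color frame: pixels that are
    white in the mask keep their original color, everything else stays black;
    each surviving pixel also becomes an (x, y, color) feature vector."""
    combined = np.zeros_like(color_frame)
    combined[binary_mask] = color_frame[binary_mask]  # white pixels get their color back
    ys, xs = np.nonzero(binary_mask)
    features = np.column_stack([xs, ys, color_frame[ys, xs]])
    return combined, features
```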
        <p>
          In this paper we selected HDBSCAN [
          <xref ref-type="bibr" rid="ref10">21</xref>
          ] as the main clustering algorithm, which is presented as an improved hierarchical version of the older algorithm DBSCAN [22]. The main improvements over DBSCAN are as follows:
        </p>
        <p>The algorithm has far fewer parameters that need to be configured, because during data processing HDBSCAN selects the best parameters (for example, the epsilon parameter) according to its own indicators.</p>
        <p>The algorithm can find clusters with densities varying within the cluster area.</p>
        <p>The clustering procedure not only assigns a cluster number to each input point, but also calculates a vector of probabilities, where each probability reflects how likely it is that the point belongs to each cluster or to noise.</p>
        <p>Now it is worth emphasizing why the main advantages of the clustering algorithm are critically important in the lane finding scenario. Even on one route a car might encounter different light conditions, from completely dim (e.g. a tunnel) to sunny and bright; therefore, there is a need for a robust pipeline with only a few hyper-parameters to configure.</p>
        <p>The quality of road markings in reality might be poor and the markings might be partially erased; therefore, it is necessary to be able to detect road markings with holes inside and at the same time to detect them as a single cluster.</p>
        <p>In the task of autonomous driving safety comes first, so we select only the points with the lowest probability of being noise. The result of this step of the pipeline is shown in Fig. 6.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>IV. RESULTS</title>
      <sec id="sec-8-1">
        <title>A. Comparison with Other Approaches</title>
        <p>Fig. 7 represents the visual part of the comparison of the clustering algorithms. The parameters for the clustering algorithms were chosen as follows:</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>For K-Means:</title>
      <p>n clusters = 6,
random state = 0,</p>
    </sec>
    <sec id="sec-10">
      <title>For DBSCAN:</title>
      <p>eps = 10,
min samples = 200,</p>
    </sec>
    <sec id="sec-11">
      <title>For HDBSCAN:</title>
      <p>min cluster size = 500,
min samples = 200.</p>
      <p>The interpretation of the visual result is as follows. The K-Means algorithm completely misses the actual clusters of the road and segments them only by y-value (Fig. 8b).</p>
      <p>The result of DBSCAN is actually not so poor, since it clustered almost all the lanes into separate groups, with the exception of the two lanes located at the extreme right and left (Fig. 8c).</p>
      <p>HDBSCAN showed better results than the other algorithms, finding all the lanes in the image, although it found only part of the lane on the right (Fig. 8d). Clearly, we can further fit these points with a spline or parabola and obtain the actual lane.</p>
      <p>Implementations of the algorithms in the Python language were chosen. For HDBSCAN we used the authors' native implementation; DBSCAN and K-Means were taken from the scikit-learn library [23].</p>
      <p>The benchmark was conducted on a Raspberry Pi 4 Model B with a 64-bit ARM Cortex-A72 CPU @ 1.5 GHz and 4 GB of RAM. The procedure of the experiment was the following. A video of a road was loaded, and every clustering algorithm was consecutively run on each frame. Each frame had the shape (120, 1200, 3): a height of 120 px, a width of 1200 px, and 3 RGB colors. The full results of the comparison can be found in Tables I and II.</p>
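      <p>The timing procedure can be sketched as follows (a sketch; the frame source and the clustering callback are placeholders, not the actual benchmark code):</p>

```python
import time
import numpy as np

def measure_fps(frames, cluster_fn):
    """Run cluster_fn on every frame and report the average frames per second."""
    start = time.perf_counter()
    for frame in frames:
        cluster_fn(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / max(elapsed, 1e-9)

# Trivial stand-in for the clustering step, on frames of the shape used in the paper
frames = [np.zeros((120, 1200, 3), dtype=np.uint8) for _ in range(10)]
fps = measure_fps(frames, lambda f: f.mean())
```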
      <sec id="sec-11-1">
        <title>B. Experiments</title>
        <p>In order to test measurable accuracy, a proper dataset for testing had to be chosen. We have chosen Unsupervised Llamas (the unsupervised labeled lane markers dataset) [24].</p>
        <p>The results showed that the second worst result in terms of performance was shown by HDBSCAN, with 12.5 FPS (frames per second). However, it is worth noting that the implementations of the algorithms are taken from different libraries, and because of this the comparison is not entirely fair.</p>
        <p>By using a downscaled image there is a significant trade-off between time and precision, which is shown in the following section. The dataset consists of over 100,000 annotated images, so we took only 536 images from various parts of the dataset in order to test the proposed algorithm.</p>
        <p>Overall, we achieve a precision of 49% and an AUC of 36%. As can be seen from Fig. 8, the proposed solution sometimes segments a white car as road (c), and some artifacts are still present (d). Nonetheless, most of the images give good results, such as (a) or (b). The decrease in precision and AUC can be explained by two things:</p>
        <p>Firstly, we must specify the ROI (Region Of Interest) on our own, and farther parts of the road are removed from the ROI and therefore are not detected. This is acceptable, because there is no need to detect road markings 200 meters away from the car. But these markings are labeled in the dataset, so in the testing phase they are counted as non-detected.</p>
        <p>Secondly, the algorithm does not find the lanes on the sides, because they are harder to detect and have low color intensity.</p>
        <p>Future work can be focused on eliminating bad results such as (c) or (d) in Fig. 8 by designing post-processing filter steps such as curve fitting and discarding curves that are inclined more horizontally than vertically. This step will be especially effective in eliminating the noise that comes from white car detection, because the curve points are grouped in the form of a car, hence grouped more horizontally.</p>
        <p>In order to understand how downscaling images beforehand affects precision, two tests were conducted. The test results with the original-size image and with the image downscaled by a factor of 0.3 are presented in Table III. As we can see, the decline in precision is not linearly proportional to the decline in image size, and this can be exploited to process images faster.</p>
        <p>Fig. 7. Comparison between the most popular clustering methods: (a) initial cropped road image; (b) result of K-Means; (c) result of DBSCAN; (d) result of HDBSCAN.</p>
      </sec>
    </sec>
    <sec id="sec-12">
      <title>V. CONCLUSION</title>
      <p>In this article we have proposed a novel approach for detecting lanes on roads in dim and low-light scenarios. The proposed method was successfully tested on real road images taken from a highway. The proposed method of clustering binarized images gave better results than the others. As a topic for future research we would like to study the possibility of using illumination correction and histogram processing methods for more accurate threshold value calculation.</p>
      <p>Fig. 8. (a) Good result, scale 0.3; (b) good result, scale 1; (c) bad result, car detection, scale 0.3; (d) bad result, road artifacts, scale 0.3.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>P.</given-names>
            <surname>Coppola</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Silvestri</surname>
          </string-name>
          ,
          <article-title>Autonomous vehicles and future mobility solutions</article-title>
          , P. Coppola and
          <string-name>
            <given-names>D.</given-names>
            <surname>Esztergár-Kiss</surname>
          </string-name>
          , Eds. Elsevier,
          <year>2019</year>
          . [Online]. Available: http://www.sciencedirect.com/science/article/ pii/B9780128176962000019
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jana</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Ray</surname>
          </string-name>
          , “Deeptest:
          <article-title>Automated testing of deep-neural-network-driven autonomous cars</article-title>
          ,”
          <source>in Proceedings of the 40th international conference on software engineering</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>303</fpage>
          -
          <lpage>314</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>H.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kwon</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          , “
          <article-title>Illusion and dazzle: Adversarial optical channel exploits against lidars for automotive applications</article-title>
          ,” in
          <source>International Conference on Cryptographic Hardware and Embedded Systems</source>
          . Springer,
          <year>2017</year>
          , pp.
          <fpage>445</fpage>
          -
          <lpage>467</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>D.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rosenbaum</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Dietmayer</surname>
          </string-name>
          , “
          <article-title>Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection</article-title>
          ,
          <source>” in 2018 21st International Conference on Intelligent Transportation Systems (ITSC)</source>
          . IEEE,
          <year>2018</year>
          , pp.
          <fpage>3266</fpage>
          -
          <lpage>3273</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Shiddieqy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. I.</given-names>
            <surname>Hariadi</surname>
          </string-name>
          , and T. Adiono, “
          <article-title>Implementation of deeplearning based image classification on single board computer</article-title>
          ,” in
          <source>2017 International Symposium on Electronics and Smart Devices (ISESD).</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          IEEE,
          <year>2017</year>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>137</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Lazar</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Shyam</surname>
          </string-name>
          , “
          <article-title>Agile development of automated driving system: A study on process and technology</article-title>
          ,” Master's thesis, Chalmers University of Technology,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          , “
          <article-title>Robust lane detection from continuous driving scenes using deep neural networks</article-title>
          ,
          <source>” IEEE Transactions on Vehicular Technology</source>
          , vol.
          <volume>69</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>54</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Yusuf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Karim</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. F. M. S.</given-names>
            <surname>Saif</surname>
          </string-name>
          , “
          <article-title>A robust method for lane detection under adverse weather and illumination conditions using convolutional neural network,”</article-title>
          <source>in Proceedings of the International Conference on Computing Advancements, ser. ICCA</source>
          <year>2020</year>
          . New York, NY, USA: Association for Computing Machinery,
          <year>2020</year>
          . [Online]. Available: https://doi.org/10.1145/3377049.3377105
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>K.-B.</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Song</surname>
          </string-name>
          , “
          <article-title>Real time road lane detection with ransac and hsv color transformation</article-title>
          ,”
          <source>J. Inform. and Commun. Convergence Engineering</source>
          , vol.
          <volume>15</volume>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>Robotics in the Makers Era</source>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Alimisis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Moro</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          , Eds. Cham: Springer International Publishing,
          <year>2017</year>
          , pp.
          <fpage>104</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>McInnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Healy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Astels</surname>
          </string-name>
          , “
          <article-title>hdbscan: Hierarchical density based clustering</article-title>
          ,”
          <source>The Journal of Open Source Software</source>
          , vol.
          <volume>2</volume>
          ,
          <issue>03</issue>
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Ester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-P.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sander</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Xu</surname>
          </string-name>
          , “
          <article-title>A density-based algorithm for discovering clusters in large spatial databases with noise</article-title>
          ,” in
          <source>KDD'96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining</source>
          . AAAI Press,
          <year>1996</year>
          , pp.
          <fpage>226</fpage>
          -
          <lpage>231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderplas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Passos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cournapeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brucher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Perrot</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duchesnay</surname>
          </string-name>
          , “
          <article-title>Scikit-learn: Machine learning in Python</article-title>
          ,”
          <source>Journal of Machine Learning Research</source>
          , vol.
          <volume>12</volume>
          , pp.
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>K.</given-names>
            <surname>Behrendt</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Soussan</surname>
          </string-name>
          , “
          <article-title>Unsupervised labeled lane marker dataset generation using maps</article-title>
          ,” in
          <source>Proceedings of the IEEE International Conference on Computer Vision</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Lim</surname>
          </string-name>
          , “
          <article-title>Lane recognition algorithm using lane shape and color features for vehicle black box</article-title>
          ,” in
          <source>2018 International Conference on Electronics, Information, and Communication (ICEIC)</source>
          ,
          <year>Jan 2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Drew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.-N.</given-names>
            <surname>Li</surname>
          </string-name>
          , “
          <article-title>Illumination-invariant image retrieval and video segmentation</article-title>
          ,”
          <source>Pattern Recognition</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>1369</fpage>
          -
          <lpage>1388</lpage>
          ,
          <year>1999</year>
          . [Online]. Available: http://www.sciencedirect.com/science/article/pii/S003132039800168X
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Assidiq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. O.</given-names>
            <surname>Khalifa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Islam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Khan</surname>
          </string-name>
          , “
          <article-title>Real time lane detection for autonomous vehicles</article-title>
          ,” in
          <source>2008 International Conference on Computer and Communication Engineering</source>
          , May
          <year>2008</year>
          , pp.
          <fpage>82</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hong</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Gong</surname>
          </string-name>
          , “
          <article-title>Lane detection algorithm based on density clustering and ransac</article-title>
          ,” in
          <source>2018 Chinese Control and Decision Conference (CCDC)</source>
          ,
          <year>June 2018</year>
          , pp.
          <fpage>919</fpage>
          -
          <lpage>924</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Otsu</surname>
          </string-name>
          , “
          <article-title>A threshold selection method from gray-level histograms</article-title>
          ,”
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>66</lpage>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Hota</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Syed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bandyopadhyay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Krishna</surname>
          </string-name>
          , “
          <article-title>A simple and efficient lane detection using clustering and weighted regression</article-title>
          ,” in
          <source>Proceedings of the 15th International Conference on Management of Data</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Eswar</surname>
          </string-name>
          , “
          <article-title>Noise reduction and image smoothing using gaussian blur</article-title>
          ,”
          <source>Ph.D. dissertation</source>
          , California State University, Northridge,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <source>Electrical Engineering &amp; Applied Signal Processing Series</source>
          . CRC Press,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>E. H.</given-names>
            <surname>Land</surname>
          </string-name>
          , “
          <article-title>The retinex theory of color vision</article-title>
          ,”
          <source>Scientific American</source>
          , vol.
          <volume>237</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>129</lpage>
          ,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>Bovik</surname>
          </string-name>
          ,
          <source>The Essential Guide to Image Processing</source>
          . Elsevier Science
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Tani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Paull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Zuber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>How</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leonard</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Censi</surname>
          </string-name>
          , “
          <article-title>Duckietown: An innovative way to teach autonomy</article-title>
          ,” in
          <source>Educational Robotics in the Makers Era</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>