A new real-time method for finding temporary and
permanent road marking and its applications

                R V Dosaev1, K I Kiy1


                1 Keldysh Institute of Applied Mathematics of RAS, Miusskaya square 4, Moscow, Russia, 145047


                e-mail: konst.i.kiy@gmail.com


                Abstract. In this paper, a new real-time method for finding temporary and permanent road
                marking is proposed. The method is based on the geometrized histograms method for
                segmenting and describing color images. This method is able to deal with both rectilinear and
                curvilinear marking, as well as with colored temporary and permanent road marking. It also
                makes it possible to distinguish temporary road marking from white permanent road marking.
                The developed method is stable under varying illumination and is able to work even for partially
                disappearing road marking, which is typical of late winter and early spring. In contrast to many other
                methods, this method does not require any information about camera parameters and
                calibration and is able to find road marking in images taken under unknown conditions. The
                proposed method has been implemented by a program written in C++, operating under
                Windows and Linux. The program operation has been tested on video records shot on typical
                Russian roads during different seasons and under diverse weather and illumination conditions.
                The processing speed is about 20 fps on a standard modern computer. Using parallel
                computing, the processing time can be reduced considerably. The results of program operation are presented
                and discussed. The developed program is a part of the computer vision component of the
                control system of the AvtoNiva pilotless vehicle.


1. Introduction
The problem of finding road markings is very important due to applications in advanced driver
assistance systems (ADAS) and vision systems of autonomous vehicles. Efficient algorithms for
finding road marking implemented in ADAS can prevent many road accidents and save many lives.
The first efficient methods for real-time lane (road marking) detection were proposed and
implemented by E.D. Dickmanns and his colleagues [1, 2]. These methods date back to the late
1980s and employed the so-called 4D approach proposed by E.D. Dickmanns [1]. Many methods
based on Hough transform, RANSAC algorithm, and learning approaches, including deep learning of
convolutional networks, have been proposed (see [3–7] and the references in these papers) since then.
The most recent review of the results obtained in this area can be found in [6]. In spite of the serious
progress in investigations on this topic, difficult problems remain: lane markings that partially disappear
in late winter and early spring, wet places, and occlusions. It deserves special mention
that temporary lane markings may have extremely high curvature, which may pose additional
problems for existing methods. Despite the impressive results achieved by the application of
convolutional networks, certain problems arise in using them, such as overfitting, the large
amount of training data required, and the high computational cost, which leads to the use of expensive SIMD
architectures. In addition, as has been stressed in recent publications [6], there are problems in
finding a colored temporary lane marking in real time and distinguishing it from a permanent white
lane marking. In [7], the histograms of two components of the color coordinate system (Y, Cb, Cr) were
used for finding white and yellow road markings in real time. It was mentioned that previous
attempts at applying color suffered from high computational complexity [8, 9]. However, the approach
proposed in [7] is sensitive to color blobs caused by dirty places and other objects in the frame. It
becomes especially clear when the road marking starts rather far from the bottom of the frame on its
left or right boundary (this may occur in sharp turns). In this connection, it seems to be reasonable to
apply the real-time geometrized histograms method for segmentation and description of color images
[10–13] for finding and distinguishing road markings. This method makes it possible to find both
white road marking of arbitrary shape and colored road marking (for example, a temporary one) and
provides a technique for distinguishing them even under difficult illumination conditions. The
geometrized histograms method is based on a method for approximate description of the value
distribution of a scalar (intensity) or vector function (color) giving the image. This description is called
a geometrized histogram. Geometrized histograms make it possible to find even very small connected
objects in the image plane (with a diameter of as few as three pixels) and to construct from them continuous
objects like lane markings. Geometrized histograms can distinguish objects with very small local
contrasts and automatically adapt themselves to very small local differences in intensity and color of
connected objects.
    In this paper, we propose an image understanding system that can find road markings based on the
real-time geometrized histograms method for complete image segmentation. This method allows us to
use only standard computational facilities and provides adequate results even for images taken by
cheap cameras like those employed in camcorders. In section 2, we briefly recall the main points of
the geometrized histograms method and explain the main ideas needed to apply them to finding road
markings. In section 3, we present the algorithms for finding road markings. Section 4 is devoted to
the comparison of the proposed method with methods proposed earlier. Several examples of finding
road markings, demonstrating the capabilities of the method, are presented and discussed in section 5.

2. A brief description of the geometrized histograms method
A detailed description of the geometrized histograms method can be found in [10–13]. To construct
the geometrized histogram for a color or grayscale image, this image is divided into strips Sti, i = 1, …,
n, of the same small width W with sides parallel to the horizontal (or vertical) axis Os of the image
plane. Suppose that we deal with horizontal strips. The case of vertical strips is considered in a similar
way. To describe approximately the image in a chosen narrow image strip Sti, it is necessary to
describe approximately the distribution of values of the vector function specifying it. The vector
functions (R, G, B), (H, S, I), or (G/(G+B), G/(G+R), I), introduced by the author, can be examples of
this vector function. This approximate description will be called the geometrized histogram of the
image in Sti. Let us explain first how to construct the geometrized histogram for a scalar function f(x,
y), giving a grayscale image. The geometrized histogram describes approximately the level sets Lz of
f(x, y), i.e., the sets of points (x, y) of the strip Sti, where f(x, y) = z. Since we deal with a discrete
representation of the image, the projection of Lz onto Os is a union of intervals (segments) Ikz on this
axis: Pr(Lz) = ∪k Ikz. For each segment Ikz, its cardinality is the number of points of the level set Lz
in the strip Sti that are projected onto this interval. It is clear that the set of cardinalities of the intervals
Ikz for all possible z determines the classical histogram of f(x, y) in the strip Sti. The collection of
intervals Ikz approximately describes Lz, since the level set Lz belongs to the preimage of ∪k Ikz, Lz ⊂
Pr–1(∪k Ikz), and the strip is narrow. The union of Ikz for all z determines the space of intervals on Os
with the scalar function of cardinality on them. Note that intervals Ikz for different z may have a
nonempty intersection on Os. This occurs when the intervals correspond to different objects in the
strip and one object lies over another in it. The space of intervals ∪kz Ikz is called the local geometrized
histogram (HGi) of f(x, y) in Sti.
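
To make the construction above concrete, the following minimal C++ sketch builds the local geometrized histogram of one horizontal strip from a quantized grayscale image. The names (Interval, buildStripHistogram) and the quantization into a fixed number of intensity grades are illustrative assumptions, not the authors' implementation.

```cpp
#include <vector>
#include <cstdint>

struct Interval {
    int z;          // quantized intensity level of the level set L_z
    int beg, end;   // projection [beg, end] of L_z onto the Os axis (column range)
    int card;       // cardinality: number of strip pixels of level z projected onto [beg, end]
};

// Build the local geometrized histogram HG_i of one horizontal strip.
// img points to a row-major grayscale image with row stride dimX; the strip
// occupies rows [rowStart, rowStart + W); intensities are quantized into `levels` grades.
std::vector<Interval> buildStripHistogram(const uint8_t* img, int dimX,
                                          int rowStart, int W, int levels) {
    // count[z][x] = number of pixels of level z in column x of the strip
    std::vector<std::vector<int>> count(levels, std::vector<int>(dimX, 0));
    for (int y = rowStart; y < rowStart + W; ++y)
        for (int x = 0; x < dimX; ++x)
            ++count[img[y * dimX + x] * levels / 256][x];

    std::vector<Interval> hg;
    for (int z = 0; z < levels; ++z) {   // maximal runs of columns containing level z
        int x = 0;
        while (x < dimX) {
            if (count[z][x] == 0) { ++x; continue; }
            Interval iv{z, x, x, 0};
            while (x < dimX && count[z][x] > 0) { iv.card += count[z][x]; iv.end = x; ++x; }
            hg.push_back(iv);
        }
    }
    return hg;
}
```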







    An example of the geometrized histogram of the grayscale component of a strip in a color image of
a road scene can be found in Fig. 1. The ox axis in the figure presents the value of the grayscale
function, which varies from 0 to 63. The oy axis presents the coordinates of pixels along Os. The coordinate
on this axis varies from 0 to DimX – 1, where DimX is the horizontal dimension of the image (we
consider a horizontal strip). The geometrized histogram of this strip in the grayscale version describes
the geometry of intensity distribution in the strip. Where the strip intersects the lane
marking, one can see a local burst of intensity. This burst is marked by a red circle. This means that
the geometrized histogram of the grayscale component can be employed as a road marking detector.
However, not only white road marking is important for automatic driving vehicles. Yellow and orange
road markings are also of great interest. To detect yellow (orange) road marking, it is necessary to
segment and describe color images in real time. For this purpose, the construction of the geometrized
histogram is generalized for vector functions representing color images. We deal with the
representation of the color image by the function (G/(G+B), G/(G+R), I) [10]. Note that the
component I means here the grayscale intensity (it is equal to Y in the (Y, Cb, Cr) coordinates [6]). Let
us introduce a characteristic function CF. If the hue of the point belongs to the yellow part of the color
triangle, then CF coincides with G/(G+B). When passing to the next range (green, blue, red), the value
of G/(G+B) is shifted by M, where M is the number of grades of the function G/(G+B). The
geometrized histogram of CF, supplemented for each interval Ikz by the classical histogram of the
other color component G/(G+R), is called the geometrized histogram of the color image in Sti.
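
The following hedged sketch illustrates one possible implementation of the characteristic function CF described above. The paper only specifies that CF equals the quantized G/(G+B) in the yellow hue range and is shifted by M for each subsequent range; the particular partition of the color triangle into yellow, green, blue, and red ranges used below, as well as the value M = 64, are assumptions made for illustration.

```cpp
#include <cstdint>

constexpr int M = 64;  // number of grades of G/(G+B) (assumed value)

// Quantize a ratio in [0, 1] to M grades.
inline int quantize(double r) { return static_cast<int>(r * (M - 1) + 0.5); }

int characteristicFunction(uint8_t R, uint8_t G, uint8_t B) {
    double gb = (G + B) > 0 ? double(G) / (G + B) : 0.0;  // the component G/(G+B)
    // Hypothetical hue-range classifier: 0 = yellow, 1 = green, 2 = blue, 3 = red.
    int range;
    if (G >= B && R >= B)      range = 0;  // yellowish: red and green dominate blue
    else if (G >= R && G >= B) range = 1;  // greenish
    else if (B >= R && B >= G) range = 2;  // bluish
    else                       range = 3;  // reddish
    return range * M + quantize(gb);       // shift by M per hue range, as in the text
}
```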




      Figure 1. A road scene and the geometrized histogram of the grayscale component of the i-th
                                           horizontal strip.

    Using these data, for each member Ikz of the geometrized histogram, we determine the localization interval Intkz = [begkz,
endkz] on Os, the range and the mean value of hue ∆Hkz = [Hminkz, Hmaxkz] and Hmean kz, the range and the
mean value of saturation ∆Skz = [Sminkz, Smaxkz] and Smeankz, and the range and the mean value of
grayscale intensity ∆Ikz = [Iminkz, Imaxkz] and Imeankz [7]. In addition, each interval of the geometrized
histogram has the cardinality Cardkz. Figure 2 presents the geometrized histogram of a strip of a color
image of a road scene. The ox axis in the figure presents the value of the characteristic function CF,
which varies within its range of values. As in Fig. 1, the oy axis presents the coordinates of pixels
along Os. The coordinate on this axis varies from 0 to DimX – 1, where DimX is the horizontal
dimension of the image (we consider a horizontal strip). The geometrized histogram in the colored
version of this strip describes the geometry of color distribution in the strip.
    The red closed curve in the right part of the figure demonstrates the intervals of the geometrized
histogram corresponding to the part of the yellow marking in the chosen strip. Two yellow squares in
the upper left-hand corner of the right part of the figure show the color of this group of intervals.
Usually, the geometrized histogram of a grayscale or color image contains too many intervals Ikz to
solve complex real problems. To reduce their number, a clustering procedure is introduced [10,
12], which joins intervals Intkz that are close as intervals on Os and have close intensity (for the
geometrized histogram of the grayscale image) and intensity-color characteristics (for the geometrized
histogram of a color image). The joined intervals are called grayscale or color bunches. Each strip Sti
is described by the set of grayscale or color bunches Bi. Each grayscale bunch b∈Bi is characterized by
the following parameters:
   1. the localization interval intb = [begb, endb], belonging to Os;
   2. ∆Ib = [Iminb, Imaxb] and Imeanb – the range and the mean value of the grayscale intensity;
   3. the cardinality Cardb (approximately, the number of points in the strip Sti whose coordinate x
belongs to the localization interval [begb, endb] and whose grayscale intensity belongs to the
range ∆Ib of the grayscale bunch).




 Figure 2. A road scene and the geometrized histogram of the color image of the 7-th horizontal strip.

    Each color bunch b∈Bi is characterized by the following parameters:
    1. the localization interval intb = [begb, endb], belonging to Os;
    2. ∆Hb = [Hminb, Hmaxb] and Hmean b – the range and the mean value of the hue of b;
    3. ∆Sb = [Sminb, Smaxb] and Smeanb – the range and mean value of saturation;
    4. ∆Ib = [Iminb, Imaxb] and Imeanb – the range and the mean value of the grayscale intensity;
    5. the cardinality Cardb (approximately, the number of points in the strip Sti whose coordinate x
belongs to the localization interval [begb, endb] and whose color characteristics belong to the ranges
∆Hb, ∆Sb, and ∆Ib of the color bunch).
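
As an illustration, the parameters listed above can be gathered into a simple data structure; a grayscale bunch would keep only the localization interval, the intensity fields, and the cardinality. The field names mirror the notation of the text and are not the authors' actual code.

```cpp
// Illustrative container for the parameters of a color bunch.
struct ColorBunch {
    int beg, end;            // localization interval int_b = [beg_b, end_b] on Os
    int Hmin, Hmax, Hmean;   // hue range and mean value
    int Smin, Smax, Smean;   // saturation range and mean value
    int Imin, Imax, Imean;   // grayscale intensity range and mean value
    int card;                // cardinality Card_b
    bool dominating;         // whether the bunch dominates somewhere on int_b
};
```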
    Here, the I component is equal to Y in (Y, Cb, Cr). Using the methods developed in [10–13], we
can attach to each color image the graph of color bunches STG (STructural Graph). Suppose that we
deal with horizontal strips. Each strip Sti is described by the set of color bunches Bi. B = ∪ Bi is the set
of nodes of STG. Two color (grayscale) bunches b1∈Bi and b2∈Bi+1 in the adjacent strips are connected
by an edge if their localization intervals have a nonempty intersection. A similar construction can be
produced for grayscale bunches. Color bunches b1 and b2 lying in the same strip are called adjacent if
their localization intervals intb1 and intb2 are adjacent. Color bunches lying in the adjacent strips are
called adjacent if their localization intervals have a nonempty intersection. Edges of STG join all
adjacent color bunches. Informally, each bunch describes a certain part of a real object in the strip by its
projection onto Os and the numerical characteristics of this part of the object. The
graph STG can be interpreted geometrically by overlaying localization intervals of its bunches ([begb,
endb]) in the vicinity of the middle lines of the corresponding strips. Figure 3 demonstrates the
representation of an image by the STG graph. Color bunches of each strip are superimposed on or near
(slightly below) the middle lines of the corresponding strips. There are two types of color or grayscale
bunches. Color or grayscale bunches of the first type are called dominating. Dominating bunches are
the bunches that in some places of their localization interval intb have the maximum density densb =
Cardb/l(intb), where l(intb) is the length of the interval intb. The second type of non-dominating
bunches is generated by color or grayscale bunches that do not have maximum density at any point of
their localization interval intb. Non-dominating bunches are also important in image understanding.
For example, as a rule, the brake signal zones of distant vehicles ahead on the road are non-dominating
because of their small size (parts of the vehicle body are dominating in this case). To understand
clearly the future behavior of the vehicle in front, we have to know the state of its signal zones to
determine precisely the moment when it starts to brake.
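
The following sketch, reusing the ColorBunch structure shown earlier, illustrates two constructions just described: the density densb = Cardb/l(intb) used to decide whether a bunch is dominating, and the STG edges joining bunches of adjacent strips whose localization intervals intersect. It is a minimal illustration under these assumptions, not the authors' implementation.

```cpp
#include <vector>
#include <utility>

using Strip = std::vector<ColorBunch>;   // all bunches of one strip

inline bool intervalsIntersect(const ColorBunch& a, const ColorBunch& b) {
    return a.beg <= b.end && b.beg <= a.end;   // nonempty intersection on Os
}

inline double density(const ColorBunch& b) {
    return double(b.card) / (b.end - b.beg + 1);   // dens_b = Card_b / l(int_b)
}

// Edges between strip i and strip i+1: pairs of indices of adjacent bunches.
std::vector<std::pair<int, int>> stgEdges(const Strip& lower, const Strip& upper) {
    std::vector<std::pair<int, int>> edges;
    for (int i = 0; i < (int)lower.size(); ++i)
        for (int j = 0; j < (int)upper.size(); ++j)
            if (intervalsIntersect(lower[i], upper[j]))
                edges.emplace_back(i, j);
    return edges;
}
```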








         Figure 3. A road scene and the corresponding image of color bunches of the STG graph.

   Figure 3 demonstrates a set of color bunches for a road image taken by an ordinary cheap camcorder on
a Russian road. Color bunches corresponding to parts of the road almost always give dominating color
bunches. Color bunches corresponding to the road marking provide dominating bunches only in the
nearest strips. The same is valid for grayscale bunches corresponding to road parts and parts of the
road marking. Figure 3 clearly shows that the parts of road marking in the sequence of strips going to
the upper part of the image generate in a sense a continuous sequence of dominating and non-
dominating color (grayscale) bunches. As a rule, the bunches corresponding to parts of road marking
have a locally maximum intensity and are small relative to adjacent dominating color bunches. In the
visualization, the localization intervals of dominating bunches are drawn on the middle line itself, while
non-dominating bunches are placed slightly below the middle lines. The construction and numerous
experiments with images have shown that color bunches represent any connected color or grayscale
object in the real image with the size greater than three pixels. The description of a color or grayscale
image by color bunches compresses the information on images from millions of pixels to several
hundreds of bunches. However, this image description contains all important features of the image,
including a description of the geometry of objects belonging to it. It is important to note that the main
operations of the construction of STG are the same for all strips. This means that the computation of
STG can be parallelized over strips using individual cores of multiprocessor computers. Computational
experiments on modern personal computers have shown that STG can be constructed in real time for
HD video frames.
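
As an illustration of this strip-level parallelism, the sketch below (reusing buildStripHistogram and Interval from the earlier sketch) processes the strips independently; OpenMP is used only as an assumed threading model, since the paper does not specify one.

```cpp
#include <vector>
#include <cstdint>

// Build the geometrized histograms of all horizontal strips; each strip is
// independent of the others, so the loop can be distributed over threads.
std::vector<std::vector<Interval>> buildAllStrips(const uint8_t* img, int dimX,
                                                  int dimY, int W, int levels) {
    int nStrips = dimY / W;
    std::vector<std::vector<Interval>> result(nStrips);
    #pragma omp parallel for               // assumed threading model: OpenMP
    for (int i = 0; i < nStrips; ++i)
        result[i] = buildStripHistogram(img, dimX, i * W, W, levels);
    return result;
}
```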

2.1. Problem statement and facilities for its solution
The problem of finding road markings in video sequences of road scene images is
investigated in this paper. Both temporary and permanent road markings are of interest. The aim of
this investigation is to introduce a new technique for solving this problem in real time. Moreover, the
technique will provide tools for distinguishing temporary and permanent road markings when they are
both present on the road. Since any connected real object in the image generates a corresponding
formal object in STG [10–13], we will investigate images of real road markings in this graph. For this
purpose, the geometrized histogram of the color image and the geometrized histogram of its grayscale
component will be employed.
    At the first stage of the algorithms, in each strip, objects that can represent a part of a road marking
in this strip are found and described. We do not employ here any information about the camera
position and calibration in order to take into account complex situations on sharp turns and to improve
the portability of the software. In this case, it can be installed even in rather cheap ADAS not
requiring camera calibration. At the second stage, continuous sequences of the found local objects
located in different adjacent strips are constructed. The construction of continuous sequences of local
objects is based on a modification of the concept of left and right contrast curves (germs of left and
right global objects) introduced in [11, 12]. The main specific feature of the method is that we
construct the image of road markings in the graph STG. This makes it possible to avoid laborious
operations on the image array and to ensure the real-time operation of the method. In the next subsection,
the procedure for finding local objects in the strip that can represent different types of road markings is
presented. In section 3, the algorithms for constructing the continuous sequences of local objects will
be proposed.






2.2 Local constructions
The procedure for constructing local grayscale or color bunches in a narrow strip is described in detail
in [12]. Let us recall its main steps. To start the construction, we select among intervals of the
geometrized histogram the following ones: 1. dominating intervals that have a maximum density in
some places; 2. intervals with maximum grayscale intensity in some places (for grayscale and color
geometrized histograms); 3. intervals with maximum saturation (for the color geometrized histogram).
It is supposed that the density of the selected intervals is greater than some constant. We take these
intervals as seeds in a clustering procedure. We add to the cluster determined by a selected seed the
intervals of HGi that have a rather big intersection with the corresponding seed interval [12] and whose
intensity or intensity-color characteristics are close to those of the seed. However, intervals with small densities
may not be included in this clustering procedure. This situation is typical for intervals corresponding
to the road marking in distant strips. Small groups of intervals of the geometrized histogram of the
grayscale component of the image, similar to those presented in Fig. 1, are called burst bunches or
simply bursts. Informally, bursts are generated by groups of rather small intervals that have close
intensities, are mutually close as intervals on Os, and differ essentially in intensity from the
basic dominating bunches in a certain neighbourhood of their union on Os.
    Let us present an algorithm for finding bursts. We label all intervals of the geometrized histogram
as free or occupied. At the initial stage of the algorithm all intervals are free. A free interval becomes
occupied (belonging to a burst), if it is chosen as a seed or belongs to the cluster of some seed. We
choose intervals Ikz with the highest local intensity z in HGi as seeds for burst bunches. For each seed,
we define on the axis Os a neighborhood [a, b] ⊃ Ikz. Let [Imin, Imax] be the interval limiting the
intensities of the dominating intervals intersecting the neighborhood [a, b]. We require that the
distance of the intensity z from the intensity interval [Imin, Imax] be greater than a constant Inot, making z
noticeable against the basic dominating bunches whose intervals intersect [a, b]. Starting from each
seed interval Intkz = [begkz, endkz], we grow the burst b by adding intervals Ily such that y < z, [a, b] ⊃ Ily, and
Intkz ∩ Intly is nonempty. At each new step, we obtain a new boundary intensity interval [imin, imax] for
the constructed burst b. The condition that [Imin, Imax] ∩ [imin, imax] is empty is necessary for
continuing the process. Moreover, it is required that the gap between these intervals be greater than a
certain constant. In addition, by establishing the maximum gap between intensities of the added
interval of the geometrized histogram and the boundary of the interval [Imin, Imax], we introduce a stop
condition: if the ratio of the length of the maximum gap d1 to the length of the gap to the closest
dominating bunch d2 (Fig. 4) in HGi is greater than some value (e.g. 2), we stop the growing
algorithm.
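
The following sketch gives one plausible reading of the relative stop condition illustrated in Fig. 4: d1 is the intensity gap opened between the candidate interval and the burst grown so far, d2 is the gap from the candidate to the closest dominating bunch, and growth stops when their ratio exceeds a bound such as 2. The exact geometry of d1 and d2 and the surrounding bookkeeping are assumptions.

```cpp
// Relative stop condition for growing a burst (one possible reading of Fig. 4).
inline bool stopGrowing(int candidateIntensity, int burstImin,
                        int dominatingImax, double maxRatio = 2.0) {
    int d1 = burstImin - candidateIntensity;        // gap from the burst grown so far
    int d2 = candidateIntensity - dominatingImax;   // gap to the closest dominating bunch
    if (d2 <= 0) return true;                       // candidate merges with the background: stop
    return double(d1) / d2 > maxRatio;              // relative threshold, e.g. 2
}
```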




                                    Figure 4. Burst bunches growing algorithm.

    Due to relative thresholding, the stop condition is independent of the average intensity of the
marking and its vicinity. Thus, it makes it possible to adapt the construction to different types of
illumination and to efficiently find burst bunches. For a color geometrized histogram, the procedure
for finding local objects for constructing a lane marking starts from detecting rather small color
bunches whose color varies in a certain range around yellow. This is explained by the fact that
in real images, due to dust, wet places, illumination, the color of permanent or temporary yellow road
markings may vary from orange to slightly green. The localization intervals of the bunches
corresponding to a yellow road marking, as well as localization intervals of bursts, have a limitation on
their lengths. In both cases (grayscale bursts and color bunches), the limitations on the length of the
localization intervals are not very strong. They are selected so that we can construct candidates for
parts of road marking in strips without any information on the camera position and calibration.
Because of this, the algorithms can operate successfully with a wide range of photos. Similar to the
previous case, there are conditions on the intensity ranges of color bunches corresponding to road
markings, discriminating them from the intensity ranges of the adjacent dominating color bunches.
    To formulate these conditions, we need to recall the concept of the search lattice on STG [13]. As
was proved in [13], it is possible to construct in each strip a linearly ordered sequence of adjacent
dominating bunches whose localization intervals cover the middle line of the strip. In this sense, these
bunches generate a basis in the strip. All linearly ordered basic subsets of bunches, joined over all strips,
generate on the image a “search lattice” SearchLat (STG) [13]. In the constructed SearchLat (STG),
bunches are numbered with preservation of the adjacency relation. The search lattice allows one to
formulate the relations of a chosen bunch with its main neighbors. Consider a bunch b that has suitable
color characteristics. There are two possible cases: 1. b is included in SearchLat (STG); 2. b is a non-
dominating bunch. In the first case, we formulate a system of rules that connect the color, density, and
geometry characteristics of the bunch with those of the adjacent bunches. The rules describe the
differences and ratios of the characteristics of the bunch and its adjacent members of SearchLat (STG),
keeping in mind that one or both of the adjacent bunches belong to parts of the road. In a way, they give an
axiomatic description of the adjacency of a road marking bunch and the adjacent road
bunches. In the second case, we find a subsequence of bunches of SearchLat (STG) covering the
localization interval intb and formulate a similar system of rules. All these conditions detect in each
strip color bunches that can represent yellow lane markings. All color bunches of such a type are
labelled by a special array for each of the strips. It is necessary to note that in close strips color
bunches corresponding to road marking may be dominating, while distant bunches are not dominating
as a rule. This can be clearly seen from Fig. 3. Figure 5 demonstrates color bunches that are candidates
for the yellow lane markings chosen by the outlined systems of rules. Yellow color bunches that may
be parts of road markings, selected based on the local reasoning described above, are superimposed on the
grayscale components of the corresponding color images. Since the program does not have any
information about the camera calibration and position, there are some false candidates in different
parts of images. For example, false candidates are located on vehicle bodies, roadsides, and in other
places. However, real candidates generate continuous chains whose localization intervals decrease in
size toward the top of the image.
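
A hedged illustration of this kind of rule system is sketched below, reusing the ColorBunch structure from section 2: a candidate bunch is kept if its localization interval is short enough, its mean hue lies in a broad yellow-to-orange range, and its intensity contrasts with the adjacent road bunches of SearchLat (STG). All thresholds and the exact form of the rules are assumptions.

```cpp
// Local rule check for a candidate yellow marking bunch (illustrative only).
inline bool isYellowMarkingCandidate(const ColorBunch& b, const ColorBunch& leftRoad,
                                     const ColorBunch& rightRoad, int maxLen,
                                     int hueLo, int hueHi, int minContrast) {
    int len = b.end - b.beg + 1;
    bool shortEnough = len <= maxLen;                            // limited localization interval
    bool yellowish   = b.Hmean >= hueLo && b.Hmean <= hueHi;     // broad yellow-to-orange range
    bool contrasts   = (b.Imean - leftRoad.Imean  >= minContrast) &&   // brighter than the
                       (b.Imean - rightRoad.Imean >= minContrast);     // adjacent road bunches
    return shortEnough && yellowish && contrasts;
}
```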




                        Figure 5. Candidates for parts of yellow road markings in strips.

3. Construction of continuous systems of bursts and color bunches
The problem of finding lane markings is reduced to construction of continuous systems of local
objects (bursts or color bunches) in STG. The method for constructing continuous objects in STG (the
so-called left or right germs of global objects) was developed in [11, 12]. This method was
implemented as a program operating in real time. However, this method was aimed at constructing
global objects, such as the road, the sky [15], etc., and deals with rather big dominating bunches. This
imposed certain limitations on the systems of rules for finding the next object in the sequence. In
contrast, the local objects used for finding road markings are usually non-dominating and rather small. The concept
and the structure of the program had to be seriously modified to produce a new system for
constructing road marking. The following subsections are devoted to describing the main points of
these modifications.

3.1 Finding road markings based on grayscale bursts
Now we can construct chains of bursts going over strips of the image. Any chain is a sequence of
adjacent bunches bk, bk+1, …, bn, located on adjacent strips Stk, Stk+1, …, Stn respectively. The chain
can represent a real road marking line or its part, so it is worthwhile to find it.
   We construct a chain in the upward direction, beginning from its lowest strip and finding, for the
burst in the previous strip, a corresponding adjacent burst in the next strip. To find the
corresponding next burst, the following two expected properties are employed: 1. a limited size and
density of next bursts, and 2. the smoothness criterion for the left and right boundary curves of the
chain. According to the first rule, we filter out long horizontal bursts, which cannot be a part of the
road marking for obvious reasons. For this purpose, approximate thresholds for the length and density
are established. For the closest strips, they are chosen to be suitable for a wide range of camera
characteristics. For the next strips, as a rule, the lengths of candidates for extension cannot increase
(in the ideal case, they depend only on the inclination of the line of the lane marking) and their
densities should decrease. The smoothness criterion means that the absolute differences
abs(endb(k+1) − endbk) (the right curves) and abs(begb(k+1) − begbk) (the left curves) for adjacent
bursts are bounded by a constant connected with the width of the strip. By introducing these constraints,
we eliminate sharp changes in the shape of the boundary curves.
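
The two checks described above can be summarized by the following minimal sketch; the structure Burst and the thresholds lenSlack and maxShift are illustrative assumptions.

```cpp
#include <cstdlib>

struct Burst { int beg, end; double dens; };   // a burst bunch in one strip

// Decide whether a burst in the next strip can extend the chain ending at prev.
inline bool canExtendChain(const Burst& prev, const Burst& next,
                           int maxShift, int lenSlack) {
    int prevLen = prev.end - prev.beg + 1, nextLen = next.end - next.beg + 1;
    bool sizeOk   = nextLen <= prevLen + lenSlack;                 // lengths should not grow upward
    bool smoothOk = std::abs(next.beg - prev.beg) <= maxShift &&   // left boundary curve
                    std::abs(next.end - prev.end) <= maxShift;     // right boundary curve
    return sizeOk && smoothOk;
}
```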
   If there are no adjacent bursts in the next strip to extend a chain, we terminate the process of chain
construction. At the next stages, using the geometric characteristics of the chains, we can join some of
them in order to obtain an intermittent (dashed) lane marking. It is worth noting that considering only
burst bunches for chain construction makes our algorithm resistant to shadows and allows us to take
into account possible jumps of intensity near the border of illuminated and shadowed places. All these
rules make our algorithm rather robust to changing situations in the road scene and significantly
improve the quality of lane marking detection.

3.2 Finding road markings based on color bunches
Temporary road markings convey very important information, and it is difficult to drive a car on the
road without clearly recognizing them. It is also necessary to distinguish permanent from temporary
road markings when both are present on the road. This makes it necessary to use color in
detecting and recognizing road marking. To detect a colored lane marking, we employ the technique
for constructing left and right germs of contrast global objects developed in [11, 12]. This technique is
adapted to dealing with non-dominating color bunches and with color bunches satisfying additional
conditions on their size, density, and color, similar to those formulated in the previous subsection. To
avoid false extensions and to be able to deal with curvilinear lane markings, we use conditions
imposed on the relations between the directions of three segments: D1, joining the previous bunch and a
possible next bunch; D2, joining the first bunch of the chain and the bunch constructed at the previous step; and
D3, joining the first bunch of the chain and the possible next bunch. Since the curvature of the road
marking is bounded, reasonable relations between D1, D2, and D3 allow us to avoid false extensions.
This means that we do not suppose that the line of road marking investigated is just a straight line. In
this way, we can find a lane marking of a curvilinear shape as well.
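
The following sketch illustrates one possible form of this direction test: each bunch is reduced to the midpoint of its localization interval on the middle line of its strip, and the pairwise angles between D1, D2, and D3 are required to stay below a bound derived from the admissible curvature. The representation of bunches by midpoints and the single angle threshold are assumptions.

```cpp
#include <cmath>
#include <algorithm>

struct Pt { double x, y; };  // midpoint of a bunch: (interval centre, strip middle line)

// Angle between vectors a->b and c->d.
inline double angleBetween(Pt a, Pt b, Pt c, Pt d) {
    double ux = b.x - a.x, uy = b.y - a.y, vx = d.x - c.x, vy = d.y - c.y;
    double dot = ux * vx + uy * vy;
    double nu = std::hypot(ux, uy), nv = std::hypot(vx, vy);
    if (nu == 0.0 || nv == 0.0) return 0.0;
    return std::acos(std::max(-1.0, std::min(1.0, dot / (nu * nv))));
}

// Accept the candidate only if D1, D2, D3 stay mutually consistent.
inline bool directionsConsistent(Pt first, Pt prev, Pt cand, double maxAngle) {
    double a12 = angleBetween(prev, cand, first, prev);   // D1 vs D2
    double a13 = angleBetween(prev, cand, first, cand);   // D1 vs D3
    double a23 = angleBetween(first, prev, first, cand);  // D2 vs D3
    return a12 <= maxAngle && a13 <= maxAngle && a23 <= maxAngle;
}
```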

3.3 Using global image analysis for system improvement
The global image analysis developed in [13] makes it possible to solve in real time such tasks as
detecting the boundary of vegetated and bare-ground roadsides, finding the road and its parts in the
case of occlusion caused by other traffic participants, and detecting the sky region. These tasks are
solved based on the analysis of STG without operations at the pixel level. All such operations are
performed in constructing STG. It was stressed in the previous papers on the subject of road marking
detection that finding road boundaries and the vanishing point is highly desirable [7, 14]. Since in
our method the problem of finding road markings is solved together with the problems listed above and
has practically no effect on the total computation time, we use the opportunity to reduce the region of
interest, decreasing the computational effort and eliminating a lot of noisy objects (like lights on a wet
road). For example, we employ the information about the lower bound of the sky region.

4. Comparison with other methods for finding road markings
As was mentioned above, the most recent review of methods for finding road markings can be found
in [6]. In that paper, all known methods are described. The areas where they can give appropriate
results are also determined. We will also use the very useful discussion of related methods found in
[7]. In that paper, three types of methods were separated and discussed. The main drawbacks of the
methods of the selected groups were described in detail. We will not repeat these arguments and refer
to [7] for the necessary details. We note only that methods of the first two classes (the first is based
on edges and gradients, and the second relies on geometric information about the
camera and lane) suffer from varying illumination conditions, from difficulties in the precise definition of
geometric parameters (descents and ascents of the road), and from occlusions and edges that arise due to the
presence of other vehicles. In one way or another, methods of the third class apply color information
of the image. It is clear that our method belongs to the third class, and it is reasonable to compare it
with the other methods of this class. Typical methods that use color information are described in [7-9].
It was mentioned in [7] that the existing methods that use complete color information (e.g. complete
HSI information) suffer from high computational complexity and cannot be implemented in real time
(e.g. methods of [8, 9]).
    In [7], to overcome the problems connected with high computational complexity, particular
components (the Y and Cb components) of the (Y, Cb, Cr) coordinates [6] are applied, and their ordinary
histograms in certain neighborhoods of road marking locations are employed. It was stressed
in [6] that the Y component is the most frequently used component for finding white lane markings. Actually, we use
it for finding bursts. Since colored points of the yellow road marking determine a small peak of the
histogram, the decision on the color of the road marking is made if this peak exists. Note that this
decision is very sensitive to blobs of dirt that are always present on and near the road marking in
winter and early spring in northern countries (like Russia). Moreover, at this time, the color of the road
marking is seriously deteriorated by the climatic conditions. Even white lane markings may have
yellow color characteristics. To discriminate white and yellow road markings, we need a detailed
analysis of the color-intensity characteristics of the road marking and the adjacent parts of the road,
including the analysis of the color of particular parts of the road marking. The geometrized
histograms method applied in our approach makes it possible to take into account all specific features
of the posed problem. It allows us to perform the joint analysis of complete color-intensity
characteristics of the road marking and those of the adjacent parts of the road using complete HSY
information. The necessity of this analysis is stressed by the fact that we have images (taken under
winter and early spring conditions) in which the yellow marking cannot be distinguished in grayscale
components, while in the color image it can be clearly seen. In this case, our technique is very useful.
The possibility of the local-global joint analysis of the road marking and its vicinity is provided by the
global image analysis [13], using SearchLat (STG). This allows us to design reasoning systems to
judge whether this marking is considered by human vision as a yellow one. In addition, temporary
road marking may have a very high curvature, which makes it difficult to find using the binary
image and the Hough transform, as in [7]. Our approach makes it possible to deal with road markings
of any curvature. We also do not use any information about the camera calibration and position,
which allows the road marking to start at any row of the image array. This is very useful
in sharp turns, at crossings, and on ascents and descents of the road. This also improves the portability of
the software, not imposing requirements on the camera calibration.

5. Software implementation, demonstration of the results and discussion
The algorithms for finding lane markings have been implemented by programs written in C++ and
operating under Windows and Linux environments. These programs process video sequences in real
time on standard computers with Intel Core i3-i7 processors and record the results for each frame of the video
sequence tested. For color frames of resolution 640x480, the programs solve a complex of tasks,
such as finding the sky, the road, the surrounding vegetation, the road body, and road markings with
an operation speed of about 20 fps when all operations of the code are performed sequentially. The first
experiments with parallel computations on the mentioned processors with four cores and
hyper-threading technology have shown that the time for computing STG can be reduced by a
factor of eight. This means that the program can work in real time for high-resolution video. At
present we are working on simultaneous calculations of STG for divisions of the image in both
horizontal and vertical strips. This is important for finding in the image all types of road marking, as
well as other vehicles on the road, moving in the same direction, entering the road, and moving through
crossings in perpendicular directions.




                                 Figure 6. An example of finding a road marking.

   The programs have been tested on many real images taken from cars on different Russian roads
in different seasons, at different times of the day, and under different illumination conditions. Figures 6 and 7 present
two examples from the recorded results.




           Figure 7. An example of finding a road marking in an image from a video sequence.

    In these figures, the road marking bunches are superimposed on the image and are painted red.
These results were obtained for a division of the image into horizontal strips. However, to find stop
lines and pedestrian crossings, it is necessary to divide images into vertical strips. It is clear that for
full frame analysis, we need to divide the image into both vertical and horizontal strips. It is also clear
that these processes can be run in parallel, using several cores and threads for computing.
The results of the joint analysis of two types of STG (obtained for divisions of the image into vertical
and horizontal strips) in order to find different types of road marking will be the subject of our next
publication. We are also applying the developed methods to traffic sign detection. In this connection,
getting acquainted with the results of [16] motivated us to start this research.

6. References
[1] Dickmanns E D 2007 Dynamic vision for perception and control of motion (Berlin:
      Springer-Verlag Ltd.)
[2] Dickmanns E D 2002 Vision for ground vehicles: history and prospects Int. J. of Vehicle
      Autonomous Systems 6(3) 1-44
[3] Wu P Ch, Chang Ch U and Lin Ch H 2014 Lane-mark extraction for automobiles under
        complex conditions Pattern Recognition 47(8) 2756-2767
[4]     Tian Y, Gelernter J and Wang X 2018 Lane marking detection via deep convolutional neural
        network Neurocomputing 280 46-55
[5]     Mathibela B, Newman P and Posner I 2015 Reading the road: road marking classification and
        interpretation IEEE Transactions on Intelligent Transportation Systems 16(4) 2072-2081
[6]     Narote S P, Bhujbal P N, Narote A S and Dhane D M 2018 A review of recent advances in lane
        detection and departure warning system Pattern Recognition 73 216-234
[7]     Son J, Yoo H, Kim S and Sohn K 2015 Real-time invariant lane detection for lane departure
        warning system Expert Systems with Applications 42 1816-1824
[8]     Chiu K-Y and Lin S-F 2005 Lane detection using color-based segmentation Proc. IEEE
        Intelligent Vehicles Sym. 706-711
[9]     Sun T-Y, Tsai S-J and Chan V 2006 HSI color model based lane-marking detection Proc. IEEE
        Intelligent Transportation Conf. 1168
[10]    Kiy K I 2010 A new real-time method for description and generalized segmentation of color
        images Pattern Recognit. Image Anal. 20(2) 169-178
[11]    Kiy K I 2015 Segmentation and detection of contrast objects and their application in robot
        navigation Pattern Recognit. Image Anal. 25(2) 338-346
[12]    Kiy K I 2015 A new real-time method of contextual image description and its application in
        robot navigation and intelligent control Computer Vision in Control Systems-2 Innovations in
        Practice Intelligent Systems Reference Library 75 (Berlin: Springer) 109-133
[13]    Kiy K I 2018 A new method of global image analysis and its application in understanding road
        scenes Pattern Recognit. Image Anal. 28(3) 483-494
[14]    Liu W, Li Sh and Huang X 2014 Extraction of lane markings using orientation and vanishing
        point constraints in structured road scenes International Journal of Computer Mathematics
        91(11) 2359-2373
[15]    Demonstration site URL: https://www.facebook.com/100004887018729/videos
[16]    Yakimov P Y 2015 Tracking traffic signs in video sequences based on a vehicle
        velocity Computer Optics 39(5) 795-800 DOI: 10.18287/0134-2452-2015-39-5-795-800

Acknowledgments
This work was partially supported by the Russian Foundation for Basic Research, projects 18-07-
00127 and 19-08-01159.
The authors thank Professor E.D. Dickmanns for his interest in the work, careful reading of the
preliminary materials, and useful remarks and recommendations.
