=Paper= {{Paper |id=Vol-1856/p16 |storemode=property |title=Investigation on road-sign recognition |pdfUrl=https://ceur-ws.org/Vol-1856/p16.pdf |volume=Vol-1856 |authors=Sam Gilvine Samuvel,Praywin Moses Dass Alex }} ==Investigation on road-sign recognition== https://ceur-ws.org/Vol-1856/p16.pdf
                   Investigation on road-sign recognition

                  Sam Gilvine Samuvel                                                      Praywin Moses Dass Alex
            Department of Software Engineering                                                  EinNel Technologies
             Kaunas University of Technology                                                           India
                    Kaunas, Lithuania
              e-mail: gilvine21@gmail.com


   Abstract— Traffic sign recognition is an important
research topic for enabling autonomous vehicle driving
systems. This paper presents an investigation of road and
traffic sign detection and recognition systems. A road sign
detection system is a vision-based computer application: it
helps drivers obey the rules and regulations of the road by
recognizing the restrictions posted along it through image
processing techniques. The aim is to review road sign
detection systems, to present implementations of road sign
recognition algorithms, and to compare four distinct
approaches (scale-invariant feature transform; histogram of
oriented gradients with a support vector machine; Gabor
filters; blob analysis) to find which algorithm detects road
signs best in terms of speed and robustness. The results show
that the blob analysis algorithm performs best on both speed
and robustness.

  Keywords—recognition; detection; SIFT; SVM; HOG; Gabor;
blob analysis

                     I. INTRODUCTION

The shapes of road signs, such as triangles, circles and
rectangles, are often difficult for an observer to make out
in natural scenes. Nowadays there is a growing demand for
road safety, where safe driving should rely not only on the
driver but also on the computer systems the vehicle carries.
Road accidents happen for many reasons; as is generally
known, speeding is one of the main causes.
To prevent accidents, different sets of road signs are placed
at the side of the road, warning the driver at the intended
location. Sometimes lack of concentration, poor visibility or
ignorance of a road sign may cause a driver to override the
traffic rules and drive according to his own needs, putting
himself in a life-threatening situation. Much research has
been done on related problems such as obstacle detection and
path detection, which can give the driver very useful
information and help him avoid danger, but road sign
detection is still at the research stage. Road signs play a
vital role on any road, signalling road conditions and
scenarios. Although there are many road signs, there is
always a possibility that the driver does not perceive one;
in such situations, driver-support systems can minimize the
chance of an accident occurring.
In this type of system, the driver is the user and receives a
visual warning when a traffic sign is identified in front of
the vehicle. According to the literature, road sign detection
is still a wide-open research problem. Here, a road sign
image is input to the system and compared with the road signs
already available in the database. The image is converted
from RGB to gray scale for better processing. The comparison
starts with detection of the key features; once they are
detected, the key features are extracted. We use four
different algorithms for the extraction of key features:
1) SIFT [13]
2) Histogram of oriented gradients (HOG) and support vector
machines (SVM) [3]
3) Gabor filters and a neural network [11]
4) Blob analysis and a neural network [5]

                   II. RELATED WORK

A. Traffic Sign and Recognition
Comprehension and detection of traffic signs has been a
recurring area of research. The authors of [1] report that
public datasets have reached only a fairly good number in
proportion to the studies with an empirical basis.

          Figure 1: Traffic Sign and Recognition [1]

The figure above (fig. 1) depicts typical traffic signs.
   Copyright © 2017 held by the authors




The bounding box surrounding each traffic sign was used to
crop the image for data acquisition. There were three main
categories, referred to as super classes, with respect to
shape and colour, designated by the letters M (mandatory), D
(danger) and P (prohibitory). Within the datasets there were
subsets relating to traffic sign detection, such as the
integral channel features classifier, the aspect ratio
detection technique, the basic training system and
experimental detection. The features picked up in the
training of channels for the unsafe zone had channel
divisions as depicted in fig. 2. The authors of [1] evaluate
modern techniques on large-scale detection and classification
datasets collected in Belgium and Germany. With only minor
modification and no application-specific tuning, existing
methods for pedestrian detection and for face and digit
classification achieved efficiencies in the range of 95-99%.

              Figure 2: Training of Channels [1]

The acquired image was scaled to the size that best fits
general analysis. The technique the authors of [2]
implemented had four steps in total. The first step, as
previously mentioned, was to detect the traffic signage
using, depending on the net contrast, the Sobel edge
detection technique together with morphological operations.
In the second step the detected sign was processed using row
and column counts. The third step, which was quite
imperative, was data extraction using the discrete cosine
transform (DCT), the discrete wavelet transform (DWT) and a
hybrid DCT-DWT: the 20 highest-energy coefficients in the
training phase for DCT, 300 features from traffic signs for
DWT and 20 features for the hybrid DWT-DCT. The final step
was recognition through an SVM [2].

B. Traffic Sign Recognition for Intelligent Vehicle Using
    Neural Network on Open Source Computer Vision (OpenCV)

Traffic sign detection and recognition offers many features
that help the driver by raising the level of safety, which is
of immense significance in the automotive industry and, put
another way, improves comfort and safety in general. The
method of [5] provides a neat detection and recognition
system that would be of ready assistance to drivers. The
approach fundamentally consists of two modules. First there
is a detection module based on colour segmentation, with edge
detection as the secondary factor for noting road signage. A
multilayer perceptron functioned as the recognition module;
the basic idea is to learn the road signage patterns,
implemented in C/C++. The tests were conducted on real images
of traffic signage to gauge efficiency and performance.

        Figure 3: Panels Considered in this Work [5]

The detection module operated on connected components by
means of chain codes, and the Freeman method was used to
represent the contours. The next step was to simplify these
to a piecewise linear curve, for which the Ramer-Douglas
algorithm was used. A BGR colour image is the same as RGB
except that the channel order is reversed: red occupies the
least significant position, green the second and blue the
third. The colour image and the thresholded masks for blue
and red are depicted in fig. 4. Traffic sign recognition here
was based on a neural network technique. The traffic sign was
first processed by image acquisition; image processing
methods such as thresholding, Ramer-Douglas simplification,
contours and ellipse fitting were primarily used. The model
has a low computational cost, which is quite a merit, and
thus real-time implementation is possible, as the authors
suggest.

                Figure 4: BGR Color Image [5]

C. Detection and Recognition of Alert Traffic Signs

Traffic signs provide critical information about safety and
navigation, so their automatic detection is quite imperative.
The method of [8] uses the histogram of oriented gradients
(HOG) to describe the images, yielding 1680 features in
total. A cascade classifier built with a support vector
machine (SVM) was used. To process the information, the
colour information from the different layers is placed into a
single vector, which functions as a feature descriptor.




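The Ramer-Douglas contour simplification step mentioned in section B above can be sketched as follows. This is a generic Python illustration, not the authors' C/C++ code, and the example polyline and tolerance value are invented for demonstration:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: simplify a polyline, keeping points that
    deviate from the chord between the endpoints by more than epsilon."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point farthest from the chord.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Recurse on the two halves, splitting at the farthest point.
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A nearly straight contour collapses to its endpoints.
contour = [(0, 0), (1, 0.05), (2, -0.03), (3, 0.02), (4, 0)]
simplified = rdp(contour, epsilon=0.1)
```

Applied to a traffic-sign contour, this reduces a dense chain-code boundary to a few vertices, from which shapes such as triangles and circles are easier to classify.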
Colour segmentation is carried out to reduce the search
regions, which might otherwise be time-consuming.

            Figure 5: Sub-Blocks of the Image [8]

The features employed here were colour encoding, HOG features
and learning methods. There were sign and non-sign images in
the data set, which proved quite useful for discrimination
purposes. The signs of principal interest to the method were
the stop and do-not-enter signage. Data set training was
carried out in addition to cascade sign detection; speed and
colour segmentation were the other imperative features
defining the methodology. The authors of [8] have put down an
efficient alert traffic signage recognition scheme: the
detection rate was found to be 92-100% and the false positive
rate ranged from 0.19% to 5%. They note that the system can
take illumination and pose variance into account. The
processing time in Matlab was found to be 7-9 seconds, and
real-time detection could be achieved [8]. The evaluation
procedures considered the outline of the signage, and the
observations were made chiefly on outdoor scenes. Since the
information carried by traffic signage is quite imperative
for safety, there were cases where poor illumination of the
signage affected the vision in general, and this was quite
unpredictable considering the position and orientation of the
signage. The authors of [4] developed an artificial vision
system that recognized traffic signs from their geometric
shapes even under poor illumination.

           Figure 6: Cascade Sign Detectors [8]

D. Multi-feature and Multi-classifier Fusion

A swift and robust method for traffic sign recognition using
a coarse-to-fine strategy is presented in this case. Gabor
analysis was applied to traffic signage divided into two
categories, main classes and sub-divisions. There were speed
limit signs corresponding to speeds from 20 to 120 km/h; the
other signs were prohibitory, derestriction, mandatory,
danger and unique signs. The real and imaginary parts of
Gabor filters were used at four orientations and on two
scales. In the classification process, pre-processing,
complementarity with local features, local binary patterns
and multi-feature fusion were used. The classifier accuracy
ranged from 98.51% to 98.76% for SVM and SVM+RF respectively.
The RGB accuracy was found to be 99.79%, with gray scale at
98.95%; the Gabor filter alone had an accuracy of 97.09%. The
overall recognition rate was found to be 98.76%, which the
authors claim is very close to the human recognition range.
The fusion of the features makes this a sturdy hierarchical
method with a high degree of accuracy [9].

                      III. METHODS

A. Scale Invariant Feature Transform

The scale-invariant feature transform [13] is invariant to
image scale, affine distortion, rotation, variation in the
3-dimensional viewpoint, signal noise and changes in
illumination, which makes it well suited to image matching.
The method is carried out in two stages, key point detection
and key point description, each with its own sub-stages:
Key point detection:
A) Scale-space detection
B) Key point localization
Key point description:
A) Orientation assignment
B) Key point descriptor
In the first stage, a search is made over all scales and
image locations. This is done using the function
'detectSURFFeatures', which identifies potential points of
interest that are invariant to both scale and orientation. At
each candidate location, the feature detection model is
employed to determine the exact location and scale of the
points; key points are selected based on estimates of their
stability. The image gradients are then estimated at the
intended scale in the region around every key point, and
converted to a representation that allows for significant
variation in local illumination and shape distortion.
Finally, the bounding box method is used to display the
region of importance. The bounding box is a feasible and
well-known method used by various existing interactive image
segmentation frameworks; it can represent a topology that
prevents the solution from excessive shrinking and guarantees
that the user-provided box bounds the segmentation
sufficiently tightly [13].




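The Gabor filter bank of section D above (real and imaginary parts at four orientations and two scales, as in [9]) can be sketched with a minimal NumPy example. The kernel size, sigma values and wavelengths below are assumptions for illustration, not the parameters used by the authors:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0):
    """Complex Gabor kernel: a Gaussian envelope modulated by an
    oriented complex sinusoid (real part = even filter, imag = odd)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * (2.0 * np.pi * xr / lam + psi))
    return envelope * carrier

# A small bank: four orientations at two scales (assumed values).
bank = [gabor_kernel(15, sigma=s, theta=t, lam=2.0 * s)
        for s in (2.0, 4.0)
        for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Convolving a sign image with each kernel and pooling the response magnitudes gives one feature vector per sub-window, which is what the fusion classifier then consumes.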
The interpolation of key point locations uses a quadratic
Taylor expansion of the difference-of-Gaussian scale-space
function D, given by

  D(x) = D + (dD/dx)^T x + (1/2) x^T (d^2 D/dx^2) x ----- (1)

where x = (x, y, sigma)^T is the offset from the candidate
key point.

B. Support Vector Machine Classification and Watershed

The identification of a traffic sign among various sets of
traffic signs requires knowledge of their structure and
brightness or intensity values. Traffic sign detection is a
tedious task and needs several steps to get the desired
result. In the beginning stage, the original input image is
converted to a gray scale image, and the entire work in this
method is done on the converted image; the detection
therefore emphasizes edge information, the shape and
structure of the sign, morphological operations and
filtering. The input is taken as RGB but is soon converted to
gray scale. The formula used to compute the intensity value
is:

  I = 0.2989*R + 0.5870*G + 0.1140*B ----------- (2)

where
'I' = image intensity value, ranging from 0 to 255;
'R, G, B' = the red, green and blue components of the RGB
color model.
In this phase, the variation in intensity is detected from
the input image. If the traffic sign has sufficient contrast
with the background, edge detection is used: the Canny method
extracts the traffic sign from the background. After filling
the holes of the binary image, the dilated gradient mask
shows the outline of the image quite nicely, but holes
(noise) in the interior of the image still appear. To reduce
this noise, a hole-filling algorithm is applied, and the
connected borders are trimmed to the needed portion of the
output.
A smoothed traffic sign board is then obtained, so that the
segmented object looks natural compared with the original
image. Consider the image as a topographic surface: water
would collect in one of two catchment basins, and water
falling on the watershed ridge line separating the basins
would be equally likely to collect in either. The watershed
algorithm finds the catchment basins and ridge lines in the
image. It is implemented in the MATLAB image processing
toolbox as L = watershed(f), where f is the input image and L
is a label matrix with positive integer values at the
different regions and at the watershed ridge lines. The key
to using the watershed transform for segmentation is to
change the image into another image whose catchment basins
are the objects you want to identify.
The histogram of oriented gradients (HOG) is a feature
descriptor used for object detection; it captures colour and
shape as one feature. The gradient at each pixel is the
gradient with the greatest magnitude among the gradients
computed on each of the channels. The number of bins is
increased from 3 to 9, binning the unsigned gradient
orientation over 0-180 degrees. Each region is then rescaled
to 3*3 pixels and described by 9*9 blocks of 8*8 cells of 8
pixels. The extracted HOG output is given to an SVM
classifier, which analyses and recognizes patterns for
classification and regression analysis; the SVM classifies
each window as either a sign of interest or background [3].

C. Gabor Filter and Feed-Forward Propagation (NEWFF)

The basic filters of the Gabor family [11] are
two-dimensional and can be represented as a Gaussian function
modulated by an oriented complex sinusoidal signal; such
filters generally act as band-pass filters. In this paper we
create a Gabor filter bank and, once the bank is created,
extract Gabor features. During the implementation, we input
an image, which is selected manually. First the sub-images
are scaled to a fixed size, which can be any fixed dimension.
After scaling, the image is sub-divided into the desired
overlapping windows; assumptions are then made about the
patches contained in the sub-image, where each patch consists
of its sub-windows. The Gabor filters are then applied to
each sub-window separately.
'newff' is a MATLAB tool for creating a feed-forward neural
network, an artificial neural network in which the links
between units do not form a cycle. According to the
literature, this type of network was the first, and is the
simplest, type of artificial neural network invented. It
operates like a half-duplex channel: information flows in
only one direction, forward from the input nodes, through any
hidden nodes, to the output nodes, with no loops or cycles.
These networks come in two forms:
A) Single-layer perceptron
B) Multi-layer perceptron
In a single-layer perceptron, data flows from the input nodes
to a single layer of output nodes; in a multi-layer
perceptron there are multiple layers of computational units,
usually interconnected in a feed-forward fashion [11].

D. Blob Analysis

In this method [5], the main aim is to detect regions in a
digital image that differ in some property, such as intensity
or colour, from the surrounding regions. A blob is a region
within which these properties are roughly constant, giving us
the idea that all the points in a blob are similar to each
other. In the first approach, a derivative function of
position is used; in the second, the local maxima and minima
of the region are identified.
Initially, blob detection was used to gather regions of
interest for further processing. Such regions can signal the
presence of a part or an object in the image domain, for
object recognition or object tracking. Blob detection can
also be related to histogram analysis, where blob descriptors
are used to detect peaks, with application to segmentation.
The other general use of blob descriptors is for texture
analysis and texture recognition.




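The gray scale conversion of eq. (2) in section B above can be sketched in a few lines of Python; this is a generic illustration rather than the authors' MATLAB code, and the sample pixels are invented:

```python
def rgb_to_intensity(r, g, b):
    """Eq. (2): I = 0.2989*R + 0.5870*G + 0.1140*B (luma weighting).
    Inputs in 0-255; the weights sum to ~1, so I stays in 0-255."""
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

# Convert a few sample RGB pixels to gray scale intensities.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
gray = [rgb_to_intensity(*p) for p in pixels]
```

Note that green dominates the weighting, mirroring the eye's sensitivity; a pure-green pixel maps to a far brighter gray value than a pure-blue one.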
Soon after the blob analysis of an image, we process it
further in a neural network. Here the data are trained and
their performances evaluated; part of the needed information
is stored to a 'net' file and processed further. The obtained
performance of the input image is then compared with the
performances estimated for all the images prior to the
experiment. If the condition is satisfied, a small image box
opens in which the selected input is viewed along with a
desired message that we enter into our code [5].

                  IV. IMPLEMENTATION

A. SIFT Features

The implementation phase begins with the selection of the
image to be loaded into the program. The filename of the
image file is selected along with its destination path and
stored in a temporary file created by the program. The stored
image file is then read, converted from RGB to gray scale and
stored again under the same name; this image can later be
viewed and renamed with a desired title. Next, the image from
the database is read into the program. This image contains
all the possible traffic signs needed to carry out the
experiment; it is resized and converted to gray scale. After
loading the main image and reading the reference image from
the database, the next stages carry out the comparison
between the two images: their features are detected, then
extracted, and then matched. Once the images are loaded and
read, the comparison begins with the feature detection phase.

          Figure 7: Block Diagram for SIFT Features

The features are first detected for the input image and
stored as 'boxPoints'. The program is written to display only
the 100 strongest key points of the input image, plotted
under the title '100 strongest key points from the Box
Image'. We then detect the key points of the reference image
and store them under the name 'scenePoints'. Since this
reference image contains more traffic sign images, the
program displays its 300 strongest key points, a 3:1 ratio
with the input image, plotted as '300 Strongest Feature
Points from Scene Image'. In both cases a special function,
'selectStrongest', is used to select the strongest key
points. Once the key features are detected, the next stage is
the extraction of these detected features. The function
'extractFeatures' is used to extract the features from both
images; it needs the file in which the gray scale image is
stored and the file holding its detected feature points. For
example, the key features of the input image are extracted
from the detected key points stored in 'boxPoints', and the
extracted features are stored in 'boxFeatures'. The same
technique is followed for the reference image, whose detected
feature points, stored under the name 'scenePoints', are used
to obtain its feature vectors, which are later stored under
the name 'sceneFeatures'.
In this phase, the extracted features of the input image and
of the reference image are matched and the values stored. The
matched features are viewed using the function
'showMatchedFeatures', which obtains the matching points of
the images and composes them in the default 'montage' style;
this matched image is displayed with the title 'matched
points (including outliers)'. In the next step we estimate a
geometric transform from the matched box points and matched
scene points, and the result is viewed as 'Matched points
(inliers only)'. Box polygon values are created for the rows
and columns, the forward transform is applied to the
estimated box polygon, and the result is renamed the new box
polygon; this image is then displayed as a binary image
output. The threshold level of the gray image is obtained and
saved in 'level', the image is converted to binary by
thresholding, and the size of the binary image is estimated
to make sure all the needed information is covered inside the
frame; the highest value of 9 is allotted to both the row and
the column. Once the image is identified, a frame is marked
around it. Next, the connected components of the binary image
are labelled with 'bwlabel', and the function 'regionprops'
estimates the bounding box for the image; the size of the
bounding box is altered again based on need. Then we show the
figure of the input image with a frame around it, renamed the
'bounding box image'.

                 Figure 8: Matched Points

B. HOG Features and SVM Classification

We begin the implementation by loading the desired image file
into the MATLAB program destination.




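The feature matching stage of section A above (MATLAB's 'matchFeatures' followed by 'showMatchedFeatures') can be illustrated with a minimal nearest-neighbour matcher in NumPy. The descriptors and the ratio threshold below are invented for illustration and stand in for real SIFT/SURF descriptors:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in
    desc_b, keeping only matches that pass Lowe's ratio test
    (nearest distance clearly smaller than second-nearest)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy 2-D descriptors: box image vs. cluttered scene image.
desc_box = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_scene = np.array([[1.0, 0.1], [0.0, 1.0], [5.0, 5.0]])
pairs = match_features(desc_box, desc_scene)
```

The surviving pairs correspond to the 'inliers only' matches of fig. 8; a geometric transform would then be fitted to them to locate the sign.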
programmed to accept JPG files as input; once acquired, the image can be stored under any path and file name. The file is then read and stored, converted from RGB to gray scale, and resized to 512 x 512. We then detect the border of the dominant part of the image. After the edge-detection phase, the detected image is dilated. This dilation enhances the detected region, and the outline of the image can be viewed clearly thanks to the dilated gradient mask. To dilate the image, we first call the function 'strel', which creates a non-flat, ball-shaped structuring element whose height and radius are both set to 5 in this case. We then dilate the image with this structuring element and store the result in the file 'Iobrd'. After dilation, noise remains in the image in the form of 'holes'. To reduce this noise, we use an algorithm dedicated to filling holes; it is called on the dilated image stored in 'Iobrd', and the result is stored in 'BW3' and displayed under the title 'Hole Filled Image'. The obtained image has to be smoothed so that the result looks natural compared to the input image; this is done by eroding the traffic-board object. The smoothed image is stored in 'BW4' and displayed with the title 'Smooth Image'. Watershed segmentation is an approach used exclusively to separate objects that are in contact with one another. To do this, the function 'watershed' is called on the smoothed image in 'BW4' to identify the touching objects, and the resulting labels are stored in the label matrix 'X'. We then use the function 'label2rgb', which converts the labels to RGB according to the color map specified in the program; here the map is the color map function 'jet', and since shuffling is selected, the color map colors are pseudo-randomly shuffled. This image is then saved and entitled 'Watershed Image'.

                    Figure 10: Watershed Image Output
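The dilation and hole-filling steps described here can be illustrated on a small binary grid in pure Python. This is a hedged stand-in for the MATLAB built-ins ('strel'/'imdilate' with a square rather than ball element, and 'imfill' via flood fill from the border), not the paper's actual code.

```python
# Hedged pure-Python stand-in for the MATLAB dilation / hole-filling steps
# (strel, imdilate, imfill); real code would use an image-processing library.

def dilate(img, radius=1):
    """Binary dilation with a square structuring element of given radius."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = int(any(
                img[rr][cc]
                for rr in range(max(0, r - radius), min(h, r + radius + 1))
                for cc in range(max(0, c - radius), min(w, c + radius + 1))))
    return out

def fill_holes(img):
    """Set to 1 every 0-pixel not connected to the border (a 'hole')."""
    h, w = len(img), len(img[0])
    outside = set()
    stack = [(r, c) for r in range(h) for c in range(w)
             if img[r][c] == 0 and (r in (0, h - 1) or c in (0, w - 1))]
    while stack:                              # flood fill from the border
        r, c = stack.pop()
        if (r, c) in outside:
            continue
        outside.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and img[rr][cc] == 0:
                stack.append((rr, cc))
    return [[1 if img[r][c] or (r, c) not in outside else 0
             for c in range(w)] for r in range(h)]
```

On a ring-shaped blob, `fill_holes` closes the interior gap while leaving the background untouched, which is exactly the cleanup the text describes before smoothing and watershed segmentation.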


     Figure 9: Block Diagram for SVM Classification

HOG is a feature extraction algorithm used for object detection; it is mainly used to capture the shape of a particular feature. At the beginning, we set the HOG window value to 3 in both the X and Y coordinates, and the bin value to 9. The size of the watershed image is then estimated with the function 'size', and the numbers of rows and columns are stored in an array. The column vectors are initialized to zero using the function 'zeros'. Later, the square root of the estimated values is taken and stored, and a condition is checked; if it is satisfied, the size of the image is verified. We then estimate the image which is present. In the images, any placed object carries the key information: the background is set to black so that only the foreground object remains visible, and this object produces a peak value at its location, on the basis of which the image is detected. The obtained histograms are assembled with 9 bins (this can be varied up to 20 bins). We then create a bin named 'H' and move the computed information into it. In this phase, we use the data in the Excel sheet to recognize the data present in the bin 'H'; we also sum the values with the estimated values in the bin and store them back to the Excel file. Finally, we check whether the values in column E match the values already calculated for the various images. If this condition is satisfied, a small message box opens in which the selected input image is displayed with a message of our choosing.

C. Gabor Neural Network

The first stage is to load an input image, in JPG format, into the program. The loaded image is then read into the program and stored, resized to the desired resolution, and converted from RGB to gray scale; this image is stored as 'gray'. We then create a filter bank, ready to store variables or key points, and the filter bank is created by calling the create-filter-bank function. We then use a Gabor filter, where we take the filter bank and perform a convolution using the Fast Fourier Transform; this transform is carried out on the gray-scale image, and the values are entered into 'filterParams'. We then view the response of the filter by calling the filter bank's show-responses routine, where we see the parameters of the gray-scale image. After the transform is applied to the image, the next
step is to obtain the features of the image. Here, the Gabor kernel is called using '@(x)', which then extracts the features of the image; the values are moved to the file 'featureExtractor'. Again, a convolution is performed by applying the Fast Fourier Transform to the gray-scale image and the extracted features. Later, the response of this filter is checked.

                    Figure 12: Gabor Transform Output
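The Gabor filter bank described above is built from the standard 2-D Gabor function, a Gaussian envelope multiplied by a sinusoidal carrier. The sketch below evaluates that function in pure Python; the parameter defaults are illustrative assumptions, and the paper's MATLAB filter-bank code is not reproduced here.

```python
import math

# Hedged illustration of the 2-D Gabor function underlying such a filter bank
# (the paper's MATLAB filter-bank code is not reproduced here).
def gabor(x, y, wavelength, theta, sigma, gamma=0.5, psi=0.0):
    """Value of a real (cosine) Gabor kernel at coordinates (x, y)."""
    xr = x * math.cos(theta) + y * math.sin(theta)      # rotate coordinates
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = math.cos(2 * math.pi * xr / wavelength + psi)
    return envelope * carrier

def gabor_kernel(size, wavelength, theta, sigma):
    """Build a (2*size+1) x (2*size+1) kernel centred at the origin."""
    return [[gabor(x, y, wavelength, theta, sigma)
             for x in range(-size, size + 1)]
            for y in range(-size, size + 1)]
```

Convolving the gray-scale image with a bank of such kernels at several orientations (theta) and wavelengths, typically via the FFT as the text describes, yields the Gabor responses used as features.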
     Figure 11: Block Diagram for Gabor Neural Network

In this stage, we create a neural-network training tool, which is used to train the data and compare them with the input image. First, we create a target T and an input P. We keep a database of images, whose values are computed and placed in an Excel sheet. We compute the size of these targets and of the input image. Later, we call the 'newff' function, which creates a network in which data flow in only one direction, from the input nodes to the output nodes in a single layer through various hidden nodes. We then set the goal, the number of epochs, and other parameters to train the data and test their performance. A maximum function is taken and the values of P are calculated. We then obtain the error of the code by subtracting the output from 'target1'. The performance between the network, 'target1', and the output is estimated using the function 'perform'. The performance is then confirmed and used for the multiplication. Now we compare the performance 'per' with the images in the folder; these values were obtained beforehand for every image in the database, which holds close to 35 to 40 images for comparison. Once the value of 'per' matches one of the images in the files, the input image is estimated; when the condition is satisfied, the selected image correctly matches the image in the database, and we get the exact value. The same condition is used for all the images, and their values satisfy the conditions.

D. Blob Analysis Neural Network

The image is loaded into the system as in the other algorithms, read into the code, resized to the desired dimensions, and converted from RGB to gray scale. The first step in this phase is to set the level. The image is then compared with the level and stored in 'bw'; this file is viewed and entitled 'Binary Image Processing'. In this phase, the binary image undergoes a morphological operation, and the result is moved to the file 'bw2'; this figure is viewed as 'Morphological Operation'. The image is then compared to 100 and stored as 'bw3', and the size of 'bw3' is estimated, where the value i is taken as the row and j as the column. Here, Y2 and X2 are selected as 9 and 10 respectively in order to frame the outline of the image correctly in case the image is a large one; if the selected values were smaller, the bounding box might cut through the sign. The region properties are then estimated for the bounding box and the label, and the bounding-box values are converted from a structure to a cell. The image is then viewed, and the position and edge color are determined and renamed as the shape measurement. The label value of the resampled image is estimated at the beginning of this phase, and the region properties of these phases are estimated and saved as 'stats'. Once we find the indicated area, the next stage focuses on estimating the perimeter. The size of the re-scaled image is determined and its perimeter is estimated; this image is later viewed and titled 'Perimeter Image'. We find the perimeter values along both axes of the image; the perimeter of the image is then summed, and in the other lines the values are subtracted.

  Figure 13: Block Diagram for Blob Analysis Neural Network
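The labelling and bounding-box steps of the blob analysis can be illustrated with a small connected-component pass in pure Python. This is a hedged stand-in for MATLAB's 'bwlabel'/'regionprops' (with 4-connectivity assumed), not the paper's code.

```python
# Hedged pure-Python stand-in for blob labelling and bounding-box extraction
# (MATLAB's bwlabel / regionprops); real code would use an imaging library.

def label_blobs(bw):
    """Return a list of blobs; each blob is a set of (row, col) pixels."""
    h, w = len(bw), len(bw[0])
    seen, blobs = set(), []
    for r in range(h):
        for c in range(w):
            if bw[r][c] and (r, c) not in seen:
                blob, stack = set(), [(r, c)]    # flood fill one component
                while stack:
                    rr, cc = stack.pop()
                    if (rr, cc) in seen:
                        continue
                    seen.add((rr, cc))
                    blob.add((rr, cc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = rr + dr, cc + dc
                        if 0 <= nr < h and 0 <= nc < w and bw[nr][nc]:
                            stack.append((nr, nc))
                blobs.append(blob)
    return blobs

def bounding_box(blob):
    """(top, left, bottom, right) of one blob, inclusive coordinates."""
    rows = [r for r, _ in blob]
    cols = [c for _, c in blob]
    return min(rows), min(cols), max(rows), max(cols)
```

Each blob's pixel set also gives its area directly (`len(blob)`), which together with the perimeter feeds the shape measurements described in the text.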
Finally, the square root of the estimated perimeter value is taken and saved as 'D'. In this stage, the standard deviation is divided by this value to find the normalized radial distance, which is then viewed as the normalized deviation of radial distance. We then estimate the compactness, using the mathematical formula in which the estimated area and perimeter appear. The data are then written to the Excel sheet; once written, the values are read back for all the data and stored in the location 'num'. We then use the neural-network tool to train the data set, test its performance, and check for errors, storing all the values in the file 'net'. Once the performance is measured, we compare it with the performances already measured for the images in the database and check which one matches the estimated value. If the condition is satisfied, the corresponding image is shown in a small message box. In this stage, more than 50 images are used.

         Figure 14: Recognized Output of Blob Analysis
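The compactness measure referred to above is computed from the estimated area and perimeter, but the exact formula is not given in the text; the standard definition below (perimeter squared over 4*pi*area, equal to 1 for a perfect circle) is an assumption used for illustration.

```python
import math

# Assumed standard compactness formula (the paper only says it uses the
# estimated area and perimeter): perimeter^2 / (4 * pi * area).
def compactness(area, perimeter):
    """Compactness of a region; 1.0 for a circle, larger for other shapes."""
    return perimeter ** 2 / (4 * math.pi * area)
```

For a circle of radius r (area pi*r^2, perimeter 2*pi*r) this evaluates to exactly 1, while a square gives 4/pi, so larger values indicate less circular blobs.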


               V. COMPARISON OF METHODS

In the beginning, we had to estimate the image detection speed and compare which method had an upper hand over the others. It was found that the Blob method had an image detection time of 3.78 seconds, the lowest (and therefore fastest) among the methods compared. The Gabor method was determined to be the second best, with a detection time of 4.92 seconds. The SIFT method was the third best, at 9.56 seconds, and finally the SVM method was the fourth best, at 14.71 seconds. From this result, the Blob analysis method was chosen as the best in terms of detection speed.

                   Figure 15: Graph for Speed

In Figure 16, the quality of all four algorithms is estimated. Thirty images were used in this process, and the algorithms were applied to them to identify how well each algorithm recognized the images. After the process, the qualities of the methods were estimated as listed below:
    1) Blob Analysis and Neural Network = 96.6%
    2) HOG and SVM = 93.3%
    3) Gabor and Neural Network = 93.3%
    4) SIFT Algorithm = 83.3%

              Figure 16: Robustness of the Methods

TABLE I.      COMPARISON OF METHODS

    Methods                                 Robustness (%)   Image Detection Time (sec)   Average
    SIFT                                    83.3              9.56                         4.46
    HOG & SVM                               93.3             14.71                        13.81
    (Newff) Gabor Neural Network            93.3              4.92                         3.03
    (Fitnet) Blob Analysis Neural Network   96.6              3.78                         3.22

                     VI. CONCLUSION

In this paper, we implemented four distinct algorithms for traffic sign recognition and calculated the speed and robustness of the methods, where robustness is the accuracy of the recognition. The algorithms are: scale-invariant feature transform (SIFT), support vector machine with histogram of oriented gradients (HOG & SVM), Gabor, and blob analysis. We have proposed the FITNET function in the blob analysis algorithm, and blob analysis has a higher detection rate than the other algorithms. The image detection times were in the order BLOB (3.78 s) < GABOR (4.92 s) < SIFT (9.56 s) < SVM & HOG (14.71 s), indicating that the BLOB algorithm had the fastest detection. Comparing the above results, it is observed that the BLOB algorithm was the best fit, followed by the SIFT algorithm. In terms of quality, BLOB was found to be the highest: BLOB (96.6%) > SVM (93.3%) = GABOR (93.3%) > SIFT (83.3%).

