<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>YORES: An Ensemble YOLO and Resnet Network for Vehicle Detection and Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Akansha Singh</string-name>
          <email>akanshasing@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Krishna Kant Singh</string-name>
          <email>krishnaiitr2011@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <kwd-group>
          <kwd>Deep Learning</kwd>
          <kwd>ResNet</kwd>
          <kwd>YOLO</kwd>
          <kwd>Vehicle Detection</kwd>
          <kwd>Intelligent Transportation System</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Delhi Technical Campus</institution>
          ,
          <addr-line>Greater Noida</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SCSET, Bennett University</institution>
          ,
          <addr-line>Greater Noida</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Vehicle identification is a significant process in Intelligent Transportation Systems (ITS). The growing number of vehicles on the road has created a need for automated methods of traffic monitoring and control. Autonomous vehicles and driver assistance systems require efficient vehicle detection methods with high real-time performance. Existing methods for vehicle identification have significant drawbacks, such as complex computation, poor performance, and an inability to detect vehicles in traffic videos. Thus, in this research we offer an ensemble strategy for vehicle detection in traffic videos that combines the advantages of YOLO and ResNet. YOLO is utilized for coarse object detection, while ResNet is used for fine-grained detection. The final detection result is generated by averaging the results of the two algorithms. We test our method on a publicly available collection of traffic videos and demonstrate that it beats both YOLO and ResNet used alone. The YOLO network uses a multipart loss function that combines the classification and vehicle localization losses; the ResNet network uses a cross-entropy loss function. A global ensemble loss function takes a weighted average of these two loss functions. The method thus identifies the vehicle using classification and gives a bounding box using localization. A detailed comparative analysis of the methods is also performed, and the proposed method is observed to outperform the other methods.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Detecting vehicles in bad weather is difficult. Classifying vehicles is not a simple yes/no question, but rather a multiclass problem. Creating algorithms that can correctly categorize vehicles, including cars, trucks, buses, and motorcycles, is a difficult area of study. An unbalanced dataset can lower the quality of results obtained by vehicle identification and classification algorithms, and designing algorithms that can correct for bias in datasets is a difficult research topic. The difficulties listed above are only a small sample of the many studied in the field of autonomous vehicle recognition and categorization. By resolving these issues, the performance of these algorithms can be enhanced, and they can be more widely used in practical settings.</p>
      <p>The literature review reveals that vehicle detection in traffic videos is difficult because of the
dynamic nature of the situations and the large number of possible vehicle types. It can be difficult
for vehicle detection algorithms relying on Haar features or HOG descriptors to function in certain
settings.</p>
      <p>Thus, in this paper YORES an ensemble YOLO Resnet model is proposed. Recent years have
seen significant progress in this area thanks to deep learning-based object detection systems like
YOLO and Resnet.</p>
      <p>The YOLO algorithm is a well-known example of a single-network object detection system. YOLO can detect objects of varying sizes and aspect ratios quickly and precisely. ResNet, a deep convolutional neural network, is commonly used for image classification and object detection. ResNet is well known for its versatility and adaptability, as it can easily handle complicated visual elements and learn from new data.</p>
      <p>By combining YOLO and ResNet, we can overcome the shortcomings of both algorithms and improve our ability to detect vehicles. In this research, we propose an ensemble method for vehicle detection in traffic videos that combines the advantages of YOLO and ResNet. The detection of small objects, or objects with low contrast against their background, may be difficult for YOLO. YOLO predicts the bounding box and class of each object using a single grid cell, which may not be precise enough for localizing small objects. Combining YOLO with another object detection model, ResNet, can help alleviate this shortcoming by leveraging the advantages of each.</p>
      <p>A further shortcoming of independent object identification models is their potential inability to distinguish between background clutter and actual objects. By combining the benefits of YOLO and ResNet, with their alternate architectural designs, training data, and input representations, model ensembles can help overcome this restriction. The resulting object detection system may be better equipped to deal with the wide variety of conditions found in the real world.</p>
      <p>There are two phases to our ensemble method. In the first phase, coarse object detection is carried out with the help of YOLO. YOLO can swiftly detect the presence of automobiles in an image or video because it has been trained on a vast collection of traffic footage. When vehicles are identified, YOLO generates bounding boxes that contain those locations.</p>
      <p>ResNet is utilized for fine-grained detection in the second phase. ResNet, which was trained on a more limited set of traffic recordings than YOLO, is able to improve upon the latter's detections by pinpointing the exact position and orientation of the vehicles. ResNet's output is also a set of bounding boxes associated with the observed vehicle locations.</p>
      <p>In order to arrive at a conclusive detection result, the results from both algorithms are
integrated via weighted average. Each algorithm's performance on a validation dataset is used to
determine the weights, which can be tweaked to give more weight to speed or accuracy.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Proposed Method</title>
      <p>The videos are converted to frames for further processing and identification of vehicles. Noise
may be present in the frames due to different illumination, weather, and camera calibrations.
Filtering techniques are applied during the pre-processing stage and all the frames are converted
into a normalized size of 224 × 224 × 3. The details of the complete method are described in the
sections below (figure 1).</p>
      <p>[Figure 1. Workflow of the proposed method: frame extraction from videos (224 × 224 × 3), pre-processing, training of standalone YOLO and ResNet models, combination of their loss functions, training of the ensembled model, non-maximum suppression, and vehicle identification and localization.]</p>
      <p>H(u, v) = 1 / (1 + [D(u, v)/D₀]²)  (2)
where D₀ is the cut-off frequency, D(u, v) = √(u² + v²), and (u, v) are individual pixels of the HSI layers obtained in the previous step.</p>
      <sec id="sec-2-1">
        <title>2.1. Conversion of Video Data to frames</title>
        <p>The traffic scenes to be processed are generally captured by CCTV cameras installed on roads. These cameras capture the vehicles as a video. The videos cannot be processed directly; thus, they must be converted to image frames captured at different time instants. A video taken over a time interval T may be represented as shown in equation (1).</p>
        <p>V = {f₁, f₂, f₃, … , f_{T·n}}  (1)
where V is the traffic video recorded over time interval T, fᵢ is an image frame, and n is the number of frames per second.</p>
      </sec>
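      <p>As a minimal sketch (not part of the paper's own code), the frame-sampling arithmetic of equation (1) can be illustrated in pure Python; the function name and interface are illustrative, and actual frame decoding would use a video library such as OpenCV:</p>

```python
def frame_timestamps(duration_s, fps):
    """Return the capture times (in seconds) of the frames f_1 ... f_{T*n}
    extracted from a video of length `duration_s`, sampled at `fps` frames
    per second, mirroring equation (1)."""
    total_frames = int(duration_s * fps)
    return [i / fps for i in range(total_frames)]

# A 2-second clip at 4 fps yields T*n = 8 frames, f_1 at t = 0.0 s onwards
times = frame_timestamps(2, 4)
```

      <p>Each returned timestamp corresponds to one extracted frame fᵢ; the frames themselves would then be resized to 224 × 224 × 3 as described above.</p>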
      <sec id="sec-2-2">
        <title>2.2. Pre-processing</title>
        <p>The pre-processing of the retrieved video frames is important, as these frames suffer from poor quality due to varying capture conditions. They may also contain noise due to problems in the image sensors. All of this leads to poor results, and therefore some pre-processing is required, after which the data are ready for input to the model. The main issue is the presence of noise. Thus, the input frames are filtered using a Butterworth low-pass filter (Basu, 2002) to remove the noise and smooth the images. The mathematical equation for the filter is given in eq. (2).</p>
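        <p>A minimal sketch of the Butterworth low-pass filtering of eq. (2), applied in the frequency domain with NumPy (this is an illustration, not the paper's implementation; the cut-off value is an assumption):</p>

```python
import numpy as np

def butterworth_lowpass(frame, d0):
    """Apply the Butterworth low-pass filter of eq. (2),
    H(u, v) = 1 / (1 + [D(u, v)/D0]^2), in the frequency domain.
    `frame` is a single-channel 2-D array; `d0` is the cut-off frequency."""
    rows, cols = frame.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)                 # frequencies measured from the centre
    D = np.sqrt(U ** 2 + V ** 2)             # D(u, v) = sqrt(u^2 + v^2)
    H = 1.0 / (1.0 + (D / d0) ** 2)          # transfer function of eq. (2)
    F = np.fft.fftshift(np.fft.fft2(frame))  # centred spectrum of the frame
    filtered = np.fft.ifft2(np.fft.ifftshift(F * H))
    return np.real(filtered)
```

        <p>The filter would be applied channel-wise to the layers of each frame; high-frequency noise is attenuated while the DC component passes unchanged.</p>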
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Proposed Network Architecture</title>
        <p>In recent years deep learning has shown very good results for object detection and classification in images and videos. In this paper, we use a ResNet-50 network for detecting various vehicles on the road. The network comprises a convolutional network that extracts important features from the image by applying convolutions. The second part of the network is a feature localization network, which comprises region proposal networks and pooling combined with non-maximum suppression to detect the bounding boxes around the vehicles. The backbone network used in the proposed work for initial feature extraction is the ZF network (Zeiler &amp; Fergus, 2014). The network has very fast training and testing speed and is very useful for designing real-time object detection. It uses small kernels, which preserve even lower-level details in the frames with max pooling, reducing the time and complexity of network processing.</p>
        <p>The second network used is YOLO, an efficient and fast object detection network (Diwan et al., 2023). The network architecture for YOLO is as follows:
1. Input Layer: This layer is responsible for receiving the input video frames (RGB) from the
traffic videos.
2. Backbone Network: The EfficientNet design serves as the foundation for the backbone
network, which is made up of numerous convolutional layers and includes the following:
a. Convolutional layers: The backbone network contains a total of 9 convolutional layers,
each with a different number of filters and kernel sizes.
b. Bottleneck layers: The backbone network is comprised of 2 bottleneck layers, each of
which utilizes a combination of 1x1 and 3x3 convolutional layers to minimize the total
number of input channels.
c. Depthwise separable convolutions: The backbone network also includes two
depthwise separable convolutional layers. These layers make use of a combination of
depthwise and pointwise convolutions in order to reduce the number of computations
that are necessary for feature extraction.
3. The Neck Network: The neck network is what connects the head network to the backbone
network. It is made up of a few convolutional layers and includes the following components:
a. SPP layer: The neck network incorporates a spatial pyramid pooling (SPP) layer, which
implements max pooling at many scales to capture features at various granularities of
detail.
b. Convolutional layers: The neck network also incorporates a number of convolutional
layers, which further refine the features that were extracted by the backbone network.
4. Head Network: The head network is the part of the system that is in charge of producing
bounding boxes and detecting objects. The head network is made up of a number of
convolutional layers, including the following:
a. Levels of prediction that are based on anchors: The head network has three levels of
prediction that are based on anchors. Each of these layers predicts the class and
location of objects by making use of anchor boxes that range in scale.
b. Convolutional layers: The head network also includes a number of convolutional
layers, which further refine the predictions that were provided by the anchor-based
prediction layers.
5. Output Layer: The output layer is responsible for generating the final detection results,
which include the category and position of each object that was found.</p>
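        <p>To illustrate why the depthwise separable convolutions of item 2c reduce computation, the multiply–accumulate counts of a standard convolution and its separable counterpart can be compared; the layer sizes below are illustrative, not the paper's exact configuration:</p>

```python
def standard_conv_macs(h, w, cin, cout, k):
    """Multiply-accumulates for a standard k x k convolution producing
    an h x w x cout output from cin input channels."""
    return h * w * cout * cin * k * k

def separable_conv_macs(h, w, cin, cout, k):
    """Depthwise k x k convolution followed by a pointwise 1 x 1
    convolution, as in the backbone's separable layers."""
    depthwise = h * w * cin * k * k      # one k x k filter per input channel
    pointwise = h * w * cin * cout       # 1 x 1 mixing across channels
    return depthwise + pointwise

std = standard_conv_macs(224, 224, 32, 64, 3)
sep = separable_conv_macs(224, 224, 32, 64, 3)
ratio = sep / std   # equals 1/cout + 1/k^2, a large saving for 3 x 3 kernels
```

        <p>For a 3 × 3 kernel and 64 output channels the separable form needs roughly an eighth of the operations, which is the saving the backbone exploits.</p>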
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Ensembling Technique</title>
        <p>Let x be an input video frame and let Y(x) be the output of YOLO on x, which consists of a set of bounding boxes B = {b₁, b₂, b₃, … , bₙ}, where each bᵢ = (xᵢ, yᵢ, wᵢ, hᵢ) represents the location and size of a detected vehicle. Similarly, let R(x) be the output of ResNet on x, which consists of a set of bounding boxes B′ = {b₁′, b₂′, b₃′, … , bₙ′}. The two outputs are combined using a weighted average, B_f = w₁B + w₂B′, where B_f = {b₁, b₂, b₃, … , bₙ} is the final set of bounding boxes, and w₁ and w₂ are the weights assigned to YOLO and ResNet, respectively. We can choose these weights based on the performance of each algorithm on a validation dataset, and we can adjust them to prioritize speed or accuracy depending on the application.</p>
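        <p>A minimal sketch of the weighted combination B_f = w₁B + w₂B′ for detections that have already been matched one-to-one (the pairing step, e.g. by IoU matching, is omitted here as a simplifying assumption):</p>

```python
def fuse_boxes(yolo_boxes, resnet_boxes, w1, w2):
    """Weighted average of matched YOLO and ResNet boxes.
    Each box is (x, y, w, h); the two lists are assumed to be
    paired one-to-one in the same order."""
    fused = []
    for b, b2 in zip(yolo_boxes, resnet_boxes):
        fused.append(tuple((w1 * a + w2 * c) / (w1 + w2) for a, c in zip(b, b2)))
    return fused

# Equal weights give the midpoint of the two detections
boxes = fuse_boxes([(10, 10, 50, 40)], [(14, 12, 54, 44)], 0.5, 0.5)
```

        <p>In practice w₁ and w₂ would come from validation performance, as described above.</p>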
        <p>Both the YOLO and ResNet models produce a large number of proposals for each vehicle. This abundance of proposals poses a challenge when filtering and identifying a single bounding box per vehicle. Therefore, non-maximum suppression (NMS) is employed to filter the bounding boxes and reduce them to a single box per vehicle. The NMS algorithm takes as input a set of proposal boxes (B), their corresponding confidence scores (S), and a user-selected threshold value (N). The filtered proposals (D) are the resulting output of the method.</p>
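        <p>The greedy NMS procedure described above can be sketched as follows (a standard formulation, assuming corner-format boxes; not the paper's own code):</p>

```python
import numpy as np

def nms(boxes, scores, threshold):
    """Greedy non-maximum suppression. `boxes` is an (n, 4) array of
    (x1, y1, x2, y2) corners, `scores` (S) the confidence of each proposal,
    and `threshold` (N) the IoU above which a lower-scoring box is dropped.
    Returns the indices of the kept proposals (D)."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best box with the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= threshold]   # keep only sufficiently distinct boxes
    return keep
```

        <p>With the threshold of 0.45 used in the experiments, two heavily overlapping proposals collapse to the higher-confidence one, while distant boxes survive.</p>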
      </sec>
      <sec id="sec-2-5">
        <title>2.5. YOLO LOSS Function</title>
        <p>The loss function employed in this study is a multipart loss function. The losses are mean squared error losses, which incorporate the IoU score to quantify the discrepancy between predicted and actual values. The loss function comprises three components, namely coordinate loss, confidence loss, and classification loss.</p>
        <p>The total loss is L = L_coord + L_conf + L_class, with the components computed over the S × S grid cells and the B boxes predicted per cell:</p>
        <p>L_coord = λ_coord ∑ᵢ₌₀^{S²} ∑ⱼ₌₀^{B} 𝟙ᵢⱼ^{obj} [(xᵢ − x̂ᵢ)² + (yᵢ − ŷᵢ)² + (√wᵢ − √ŵᵢ)² + (√hᵢ − √ĥᵢ)²]  (3)</p>
        <p>L_conf = ∑ᵢ₌₀^{S²} ∑ⱼ₌₀^{B} 𝟙ᵢⱼ^{obj} (Cᵢ − Ĉᵢ)²  (4)</p>
        <p>L_class = ∑ᵢ₌₀^{S²} 𝟙ᵢ^{obj} ∑_c (pᵢ(c) − p̂ᵢ(c))²  (5)</p>
        <p>where λ_coord is the weight of the coordinate loss; xᵢ, yᵢ are the centre coordinates; wᵢ is the width of the bounding box; hᵢ is the height of the bounding box; Cᵢ is the confidence score; pᵢ(c) is the class probability of the i-th grid cell; and 𝟙ᵢⱼ^{obj} indicates that the j-th box of cell i is responsible for an object. Hatted symbols denote predictions.</p>
      </sec>
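      <p>A toy numerical version of the multipart loss, summing squared coordinate, confidence, and class-probability errors (the λ value of 5 is an assumption borrowed from common YOLO practice, not a figure stated in this paper):</p>

```python
import numpy as np

def multipart_loss(pred, target, lam_coord=5.0):
    """Toy multipart loss: squared-error coordinate, confidence, and
    class-probability terms. `pred` and `target` are dicts holding
    'xywh' (n, 4), 'conf' (n,), and 'probs' (n, classes) arrays for
    the n cells that contain an object."""
    coord = np.sum((pred['xywh'] - target['xywh']) ** 2)
    conf = np.sum((pred['conf'] - target['conf']) ** 2)
    cls = np.sum((pred['probs'] - target['probs']) ** 2)
    return lam_coord * coord + conf + cls
```

      <p>A perfect prediction gives zero loss; any coordinate deviation is amplified by λ_coord, which is the role of the coordinate weight in the equation above.</p>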
      <sec id="sec-2-6">
        <title>2.6. ResNet LOSS Function</title>
        <p>The cross-entropy loss function is defined as the difference between the true probability distribution y and the predicted probability distribution ŷ:</p>
        <p>L_CE = − ∑ᵢ yᵢ log(ŷᵢ)  (8)</p>
        <p>where yᵢ is the i-th element of the true probability distribution y and ŷᵢ is the corresponding element of the predicted probability distribution ŷ. The summation is taken over all elements i of the distributions.</p>
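        <p>The cross-entropy of eq. (8) can be computed directly, as in this brief sketch (the clipping constant is an implementation detail added here to avoid log(0), not something specified by the paper):</p>

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    """Cross-entropy between the true distribution y and the predicted
    distribution y_hat: -sum_i y_i * log(y_hat_i)."""
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1.0)  # guard log(0)
    return -np.sum(y * np.log(y_hat))
```

        <p>For a one-hot true label, the loss reduces to the negative log-probability the model assigns to the correct class.</p>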
      </sec>
      <sec id="sec-2-7">
        <title>2.7. Global Ensemble LOSS Function</title>
        <p>When combining YOLO with ResNet, we can use a loss function that is a weighted sum of the losses from both models: the total loss is computed as a weighted sum of the YOLO and ResNet loss functions, with each assigned a scalar weight. The model parameters can also be updated using a weighted combination of the individual models' optimization techniques. The relative success of each model on the validation data informs the decision of how much weight to give each loss.</p>
        <p>YORES ensembles YOLO and ResNet, and their loss functions, L_YOLO and L_ResNet, are assigned weights α and β, respectively. The ensemble loss function is then calculated as:</p>
        <p>L_ens = α · L_YOLO + β · L_ResNet  (9)</p>
        <p>Here, α and β are scalar weights that specify how much emphasis is placed on each of the two loss functions. These weights can be determined by evaluating each model on validation data and giving more weight to the model that performs better.</p>
        <p>Minimizing the global loss function L_ens with respect to the model parameters is the target of the optimization. Backpropagation is used to compute the gradients of the global loss function with respect to the model parameters during training, and an optimization technique such as stochastic gradient descent (SGD), Adam, or RMSProp is used to update the parameters.</p>
        <p>The loss functions of YOLO and ResNet are combined in the ensembled model by weighting
the individual loss functions and then summing them to get the final loss function.</p>
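        <p>The weighted combination of eq. (9) is a one-liner; one simple way to pick α and β from validation performance, sketched below, is an assumption on our part rather than the paper's stated rule:</p>

```python
def ensemble_loss(l_yolo, l_resnet, alpha, beta):
    """Global ensemble loss of eq. (9): a weighted sum of the
    YOLO and ResNet losses."""
    return alpha * l_yolo + beta * l_resnet

def weights_from_validation(acc_yolo, acc_resnet):
    """Illustrative weighting rule: normalize the validation accuracies
    so the better-performing model receives proportionally more weight."""
    total = acc_yolo + acc_resnet
    return acc_yolo / total, acc_resnet / total

alpha, beta = weights_from_validation(0.92, 0.88)
loss = ensemble_loss(2.0, 1.0, alpha, beta)
```

        <p>Normalizing the weights to sum to one keeps the ensemble loss on the same scale as the individual losses.</p>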
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments and Results</title>
      <p>In this section the experiments are discussed. The proposed model is implemented using the Python programming language, with the Keras and TensorFlow modules along with other supporting modules. Model training is done with GPU support, as the dataset is very large and training would not be possible on a simple CPU. The dataset contains two subsets, localization and classification. Using these datasets, an annotated CSV file is created that comprises the bounding box position of each object; this is used as the ground truth for training. The model is trained for 10000 iterations on the selected dataset. Once fully trained, the network can identify the vehicles. The non-maximum suppression threshold value is set to 0.45. The model is then applied to test videos and images. Each detected vehicle is bounded by a box, and the name of the vehicle appears on the box. The results of the various steps of the proposed method are shown below. Classification results for all categories of vehicles are shown in figure 3.</p>
      <sec id="sec-3-1">
        <title>3.1. Data Set Used</title>
        <p>The experiments are conducted using publicly available datasets. Numerous public datasets of vehicle classes exist, but in this work one of the largest, MIO-TCD (Luo et al., 2018), is used. This dataset is divided into two parts: the classification and localization datasets. The localization dataset is used for object position and the classification dataset for vehicle class. The distribution of the MIO-TCD dataset is shown in table 1. The dataset contains vehicles from different fields of view, illumination conditions, and weather. Some sample images from the dataset are shown in figure 2.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Evaluation Metric</title>
        <p>To quantitatively analyse the performance of the classification and detection method, the following metrics are used.</p>
        <p>Total Accuracy: the total accuracy is the percentage of the total number of vehicles correctly identified as vehicles,
A = N_c / N  (10)
where N_c = total number of correctly identified vehicles and N = total number of vehicles.</p>
        <p>Mean Recall and Mean Precision: because the dataset has a different number of images or frames for different categories, two further metrics are used to correct for this imbalance, namely mean recall and mean precision. These are obtained by averaging recall and precision over all categories of the vehicles:
MR = (1/K) ∑ₖ Rₖ  (11)
MP = (1/K) ∑ₖ Pₖ  (12)
where Rₖ = TPₖ / (TPₖ + FNₖ) and Pₖ = TPₖ / (TPₖ + FPₖ), and TPₖ, FNₖ, and FPₖ are the true positives, false negatives, and false positives for each category k.</p>
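        <p>The mean recall and mean precision of eqs. (11) and (12) follow directly from per-category counts, as in this short sketch (the counts below are made up for illustration):</p>

```python
def mean_recall_precision(per_class):
    """Compute mean recall and mean precision from per-class counts.
    `per_class` maps category -> (TP, FN, FP)."""
    recalls = [tp / (tp + fn) for tp, fn, fp in per_class.values()]
    precisions = [tp / (tp + fp) for tp, fn, fp in per_class.values()]
    k = len(per_class)
    return sum(recalls) / k, sum(precisions) / k

counts = {'car': (90, 10, 5), 'bus': (40, 10, 10)}
mr, mp = mean_recall_precision(counts)
```

        <p>Averaging per category, rather than pooling counts, prevents the largest category from dominating the metric, which is exactly the imbalance correction described above.</p>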
        <p>The overall results for all categories of vehicles are shown in Table 2, where TPₖ, FNₖ, and FPₖ are the true positives, false negatives, and false positives for each category.</p>
        <table-wrap id="table2">
          <label>Table 2</label>
          <caption><p>Accuracy of the method for all classes of vehicles and background</p></caption>
          <table>
            <thead>
              <tr><th>Category</th><th>Accuracy (%)</th></tr>
            </thead>
            <tbody>
              <tr><td>Articulated Truck</td><td>98.7</td></tr>
              <tr><td>Bicycle</td><td>85.2</td></tr>
              <tr><td>Bus</td><td>98.2</td></tr>
              <tr><td>Car</td><td>99.8</td></tr>
              <tr><td>Motorcycle</td><td>100</td></tr>
              <tr><td>Motorized Vehicle</td><td>67.8</td></tr>
              <tr><td>Non-Motorized Vehicle</td><td>71.2</td></tr>
              <tr><td>Pick-up Truck</td><td>98.2</td></tr>
              <tr><td>Single Unit Truck</td><td>78.3</td></tr>
              <tr><td>Work Van</td><td>97.8</td></tr>
              <tr><td>Background</td><td>98</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>The MRE for all the methods is shown in Table 3, and the graphical representations of the same are shown in figure 4. The average accuracy comparison analysis with different methods is shown in figure 9.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and Future Work</title>
      <p>In this research, we offer an ensemble method for the problem of vehicle detection in traffic videos that combines the advantages of the YOLO and ResNet algorithms. In our approach, YOLO is used for coarse object detection and ResNet for fine-grained detection. A weighted-average ensemble is used to aggregate the results of these two processes. The YOLO and ResNet loss functions are combined in the ensembled model by first assigning a weight to each loss function and then computing the weighted sum of these individual loss functions as the overall loss function. By training the ensembled model with this overall loss function, we effectively combine the strengths of YOLO and ResNet and increase the performance of vehicle detection in traffic videos.</p>
      <p>
        Some of the limitations of standalone object detection models can be circumvented with the
help of ensembling and YOLO. The proposed approach accurately identifies eleven classes of
vehicles, achieving state-of-the-art results. The comparison of the proposed approach with six
other methods demonstrates its superiority in terms of accuracy, speed, and robustness.
Therefore, the proposed approach has significant potential for practical applications in traffic
surveillance and management, such as traffic flow optimization and accident detection. Further
studies can investigate the scalability and generalizability of the proposed approach to various
traffic scenarios and different environments. Overall, this research contributes to the
development of intelligent transportation systems and paves the way for future research in this
field.</p>
      <p>[17] Sobral, Andrews, and Antoine Vacavant. "A comprehensive review of background
subtraction algorithms evaluated with synthetic and real videos." Computer Vision and Image
Understanding 122 (2014): 4-21.
[18] Theagarajan, Rajkumar, Federico Pala, and Bir Bhanu. "EDeN: Ensemble of deep networks
for vehicle classification." Proceedings of the IEEE conference on computer vision and pattern
recognition workshops. 2017.
[19] Tomikj, Nikola, and Andrea Kulakov. "Vehicle Detection with HOG and Linear SVM." Journal
of Emerging Computer Technologies 1.1 (2021): 6-9.
[20] Tsai, Luo-Wei, Jun-Wei Hsieh, and Kuo-Chin Fan. "Vehicle detection using normalized color
and edge map." IEEE transactions on Image Processing 16.3 (2007): 850-864.
[21] Wang, Xinchen, et al. "Real-time vehicle type classification with deep convolutional neural
networks." Journal of Real-Time Image Processing 16.1 (2019): 5-14.
[22] Xiao, Y., Zhang, Y., Kaku, I., Kang, R., &amp; Pan, X. (2021). Electric vehicle routing problem: A
systematic review and a new comprehensive model with nonlinear energy recharging and
consumption. Renewable &amp; sustainable energy reviews, 151, 111567. doi:
10.1016/j.rser.2021.111567
[23] Xiao, Y., Zuo, X., Huang, J., Konak, A., &amp; Xu, Y. (2020). The continuous pollution routing
problem. Applied mathematics and computation, 387, 125072. doi:
10.1016/j.amc.2020.125072
[24] Xu, J., Zhang, X., Park, S. H., &amp; Guo, K. (2022). The alleviation of perceptual blindness during
driving in urban areas guided by saccades recommendation. IEEE Transactions on Intelligent
Transportation Systems, 23(9), 16386-16396.
[25] Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional
networks." European conference on computer vision. Springer, Cham, 2014.
[26] Zhang, Fukai, Ce Li, and Feng Yang. "Vehicle detection in urban traffic surveillance images
based on convolutional neural networks with feature concatenation." Sensors 19.3 (2019):
594.
[27] Zhang, Zehan, et al. "Maff-net: Filter false positive for 3d vehicle detection with multi-modal
adaptive feature fusion." arXiv preprint arXiv:2009.10945 (2020).
[28] Zhang, Zhaoxiang, et al. "EDA approach for model based localization and recognition of
vehicles." 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007.
[29] Diwan, T., Anirudh, G., &amp; Tembhurne, J. V. (2023). Object detection using YOLO: Challenges,
architectural successors, datasets and applications. Multimedia Tools and Applications, 82
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Anan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhaoxuan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jintao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <article-title>Video vehicle detection algorithm based on virtual-line group</article-title>
          .
          <source>In APCCAS 2006-2006 IEEE Asia Pacific Conference on Circuits and Systems</source>
          (pp.
          <fpage>1148</fpage>
          -
          <lpage>1151</lpage>
          ). IEEE. (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Basu</surname>
            ,
            <given-names>Mitra.</given-names>
          </string-name>
          <article-title>"Gaussian-based edge-detection methods-a survey."</article-title>
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          , Part C (Applications and Reviews)
          <volume>32</volume>
          .3 (
          <year>2002</year>
          ):
          <fpage>252</fpage>
          -
          <lpage>260</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <surname>Zhen</surname>
          </string-name>
          , et al.
          <article-title>"Vehicle type classification using a semisupervised convolutional neural network."</article-title>
          <source>IEEE transactions on intelligent transportation systems 16.4</source>
          (
          <year>2015</year>
          ):
          <fpage>2247</fpage>
          -
          <lpage>2256</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Fei</surname>
            , Mengjuan,
            <given-names>Jing</given-names>
          </string-name>
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>and Honghai</given-names>
          </string-name>
          <string-name>
            <surname>Liu</surname>
          </string-name>
          .
          <article-title>"Visual tracking based on improved foreground detection and perceptual hashing</article-title>
          .
          <source>" Neurocomputing</source>
          <volume>152</volume>
          (
          <year>2015</year>
          ):
          <fpage>413</fpage>
          -
          <lpage>428</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Hassaballah</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mourad</surname>
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Kenk</surname>
          </string-name>
          , and
          <string-name>
            <surname>Ibrahim</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>El-Henawy</surname>
          </string-name>
          .
          <article-title>"Local binary pattern-based onroad vehicle detection in urban traffic scene</article-title>
          .
          <source>" Pattern Analysis and Applications 23</source>
          .4 (
          <year>2020</year>
          ):
          <fpage>1505</fpage>
          -
          <lpage>1521</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>The Improvement of Road Driving Safety Guided by Visual Inattentional Blindness</article-title>
          .
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          ,
          <volume>23</volume>
          (
          <issue>6</issue>
          ),
          <fpage>4972</fpage>
          -
          <lpage>4981</lpage>
          . doi: 10.1109/TITS.2020.3044927
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>Heechul</given-names>
          </string-name>
          , et al.
          <article-title>ResNet-based vehicle classification and localization in traffic surveillance systems</article-title>
          .
          <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops</source>
          .
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Shuguang</given-names>
          </string-name>
          , et al.
          <article-title>Video-based traffic data collection system for multiple vehicle types</article-title>
          .
          <source>IET Intelligent Transport Systems</source>
          <volume>8</volume>
          .
          <issue>2</issue>
          (
          <year>2014</year>
          ):
          <fpage>164</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Luo</surname>
            ,
            <given-names>Zhiming</given-names>
          </string-name>
          , et al.
          <article-title>MIO-TCD: A new benchmark dataset for vehicle classification and localization</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>27</volume>
          .
          <issue>10</issue>
          (
          <year>2018</year>
          ):
          <fpage>5129</fpage>
          -
          <lpage>5141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Mirjalili</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>The whale optimization algorithm</article-title>
          .
          <source>Advances in Engineering Software</source>
          ,
          <volume>95</volume>
          ,
          <fpage>51</fpage>
          -
          <lpage>67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Pérez-Hernández</surname>
            ,
            <given-names>Francisco</given-names>
          </string-name>
          , et al.
          <article-title>Object detection binary classifiers methodology based on deep learning to identify small objects handled similarly: Application in video surveillance</article-title>
          .
          <source>Knowledge-Based Systems</source>
          <volume>194</volume>
          (
          <year>2020</year>
          ):
          <fpage>105590</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.M.M.</given-names>
            <surname>Rahman</surname>
          </string-name>
          . https://mahbubur.buet.ac.bd/resources/DatabaseEBVT.htm ;
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Sengar</surname>
            ,
            <given-names>Sandeep Singh</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Mukhopadhyay</surname>
            ,
            <given-names>Susanta</given-names>
          </string-name>
          .
          <article-title>Moving object area detection using normalized self adaptive optical flow</article-title>
          .
          <source>Optik</source>
          <volume>127</volume>
          .
          <issue>16</issue>
          (
          <year>2016</year>
          ):
          <fpage>6258</fpage>
          -
          <lpage>6267</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Sharma</surname>
            ,
            <given-names>Poonam</given-names>
          </string-name>
          , et al.
          <article-title>Automatic vehicle detection using spatial time frame and object based classification</article-title>
          .
          <source>Journal of Intelligent &amp; Fuzzy Systems</source>
          <volume>37</volume>
          .
          <issue>6</issue>
          (
          <year>2019</year>
          ):
          <fpage>8147</fpage>
          -
          <lpage>8157</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Sharma</surname>
            ,
            <given-names>Poonam</given-names>
          </string-name>
          , et al.
          <article-title>Vehicle identification using modified region based convolution network for intelligent transportation system</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          (
          <year>2021</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Sivaraman</surname>
            ,
            <given-names>Sayanan</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Trivedi</surname>
            ,
            <given-names>Mohan Manubhai</given-names>
          </string-name>
          .
          <article-title>Active learning based robust monocular vehicle detection for on-road safety systems</article-title>
          .
          <source>2009 IEEE Intelligent Vehicles Symposium</source>
          . IEEE,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>