<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Studying the efficiency and performance of the vehicle detection method based on feature fusion and attention enhancement</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ming Xue</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Wuhan Fiberhome Technical Services Co., Ltd.</institution>
          ,
          <addr-line>88 Youkeyuan Rd., Hongshan District, Wuhan, 430068</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <fpage>162</fpage>
      <lpage>177</lpage>
      <abstract>
        <p>Considering the deployment cost problem of traffic recognition algorithms, this paper takes YOLOv4 as the base architecture. The lightweight DenseNet is used as the backbone feature extraction network, and efficient channel attention (ECA) and Adaptive Spatial Feature Fusion (ASFF) are used to enhance the PANet structure with attention-guided fusion. The weight ratio of the loss function is optimized and the mosaic method is used for training augmentation. The results show that the proposed algorithm improves both the detection accuracy and the detection speed, and reduces the number of parameters by 64%. The research results provide some reference value for the traffic construction of smart cities.</p>
      </abstract>
      <kwd-group>
        <kwd>target detection</kwd>
        <kwd>vehicle detection</kwd>
        <kwd>YOLOv4</kwd>
        <kwd>feature fusion</kwd>
        <kwd>attention mechanism</kwd>
        <kwd>lightweighting</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As the pioneer of one-stage target detection, the YOLO series innovates on the detection
principle of the Faster R-CNN series by abandoning the RPN approach and using regression to obtain
the bounding-box coordinates directly. YOLOv1, which uses an end-to-end identification
approach, is regarded as the first one-stage target detection algorithm. It was quickly deployed in
many real-world projects due to its dramatic increase in detection speed, and was even used in military
devices. A large number of one-stage target detection algorithms have emerged since then, and
these algorithms have evolved through iterations in pursuit of faster and more accurate recognition
results [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4</xref>
        ].
      </p>
      <p>
        Bochkovskiy et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed YOLOv4, a one-stage target detection algorithm. It is
based on the architecture of the classical YOLO target detection family, was published in 2020, and was endorsed
by the author of YOLOv3. Such algorithms concentrate both target classification and localization in
the same network architecture, enabling end-to-end detection.
      </p>
      <p>The YOLOv4 algorithm consists of the CSPDarknet53 backbone network, SPPNet, PANet feature
fusion network and the YOLO-Head detection head module that is used in YOLOv3. Its network structure
is shown in figure 1.</p>
      <p>In this paper, we further improve the training and inference speed of the one-stage detection algorithm
by modifying the backbone network of the model, based on the YOLOv4 algorithm, and improve the
model structure using the attention mechanism and feature fusion module to enhance the detection
performance of the algorithm.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Improved YOLOv4 method</title>
      <sec id="sec-2-1">
        <title>2.1. Feature Pyramid Network (FPN)</title>
        <p>The Feature Pyramid Network (FPN) is a common approach to address the challenge of scale variation
in target detection. Its layered structural design allows the model to more fully utilize the feature
information extracted from the backbone network.</p>
        <p>
          Various FPNs are designed to maximize the utilization of the multi-scale feature maps from the backbone,
and their optimization leads to significant performance improvements in object detection. Therefore, the
algorithms in this paper work in concert with the fusion of PANet and ASFF to enhance the reuse and
extraction of feature maps and avoid the loss of effective information [
          <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6, 7, 8, 9</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Attentional mechanisms</title>
        <p>The attention mechanism focuses on local information while suppressing distracting information.
Attention mechanisms have made important breakthroughs in recent years in areas such as image and
natural language processing, and have widely demonstrated their effectiveness in improving model
performance.</p>
        <p>From a mathematical point of view, the attention mechanism provides a weight-based model for
performing operations. The attention mechanism uses network layers to calculate the weight values
corresponding to the relevant feature maps, and then applies these weights to the feature maps, so that
the feature maps that play a larger role in extracting information become more influential on the
overall result. With respect to the content of interest, attention mechanisms can be split into three types:
the channel attention mechanism, the spatial attention mechanism, and the mixed spatial and channel attention
mechanism.</p>
        <sec id="sec-2-2-1">
          <title>2.2.1. The spatial attention mechanism</title>
          <p>
            The Spatial Transformer Network (STN) [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ] proposed by Google DeepMind is a spatial attention mechanism that
learns a transformation of the input so as to accomplish preprocessing operations suitable for a
specific task. The ST module consists of a localisation net, a grid generator and a sampler. The localisation net
determines the parameters θ of the transformation required for the input. The grid generator finds the mapping
T_θ(G) of the output to the input features from θ and the defined transformation. The sampler combines the
location mapping and transformation parameters to sample the input features, producing the output with
bilinear interpolation.
          </p>
        </sec>
        <sec id="sec-2-2-2">
          <title>2.2.2. Channel attention mechanism</title>
          <p>
            Squeeze-and-Excitation Network (SENet) [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ] is a channel-attention model, which automatically
enhances or suppresses channels after model learning by modeling the importance of each feature
channel. It adds a bypass branch after the normal convolution operation; this branch is compressed
and fully connected to obtain a set of weight values. The importance of the different channels can be
learned by applying this set of weights to each of the original feature channels.
          </p>
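          <p>The squeeze-excitation computation above can be sketched numerically. The following is a minimal NumPy illustration (not the authors' implementation): global average pooling squeezes each channel to a scalar, two small fully connected layers (plain weight matrices w1 and w2 with an assumed reduction ratio r = 4) pass through ReLU and sigmoid to produce per-channel weights, and those weights rescale the original channels.</p>

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation sketch for a feature map x of shape (C, H, W)."""
    z = x.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)             # excitation FC 1 (C -> C/r) + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # excitation FC 2 (C/r -> C) + sigmoid
    return x * s[:, None, None]             # reweight each channel

# toy example with C = 8 channels and reduction ratio r = 4
c, h, w, r = 8, 4, 4, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
y = se_block(x, w1, w2)
```

          <p>Because the sigmoid weights lie in (0, 1), each channel of the output is a damped copy of the input channel, which is the "enhance or suppress" behaviour described above.</p>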
        </sec>
        <sec id="sec-2-2-3">
          <title>2.2.3. Fusion of spatial and channel attention mechanisms</title>
          <p>
            Convolutional Block Attention Module (CBAM) [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ] is a representative network that combines the spatial
and channel attention mechanisms. It uses a channel-then-spatial arrangement, so that the
model captures the important information of channel and spatial locations separately. Besides these,
there are many other research directions related to attention mechanisms, such as the residual attention mechanism,
the multi-scale attention mechanism, the recursive attention mechanism, etc. [
            <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16 ref17">13, 14, 15, 16, 17</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Related methods</title>
        <sec id="sec-2-3-1">
          <title>2.3.1. Lightweighting of the backbone</title>
          <p>The backbone network is replaced with the lightweight network DenseNet-121, and the rest of the
architecture is optimized on the basis of YOLOv4.</p>
          <p>The main components of the DenseNet network are dense blocks and transition layers.</p>
          <p>The dense block is composed of several bottlenecks. Each bottleneck uses the same number of output
channels, and the input and output of each bottleneck are connected in the channel dimension.
The structure of the bottleneck is shown in the upper part of figure 3. Each bottleneck contains two
convolutions: the first is a 1*1 convolution with 4k output channels, where k is the feature-map
growth rate, i.e. the number of feature maps contributed by each bottleneck. The second,
3*3 convolution has k output channels. Finally, the input of the module and the output of the 3*3
convolution are stacked by concatenation, so a module with C′ input channels has C′ + k output channels overall.</p>
          <p>The dense block structure is shown in the middle part of figure 3 and consists of several bottlenecks.
Let the number of input channels of the whole dense block be k0. Since each bottleneck
stacks the output of its final convolutional structure with its input, the number of feature
channels increases by k for each bottleneck passed through. Therefore, the number of
final output feature maps of a dense block composed of l bottlenecks is k0 + l·k.</p>
          <p>By looking at the dense block structure, it can be seen that the input of each bottleneck is a stack
of all the outputs of its preceding layers. This densely connected network structure is essentially the
reason why DenseNet can achieve good results.</p>
          <p>The transition layer is used to control the model complexity, and its structure is shown in the lower
part of figure 3. Since the number of channels increases with each dense block, stacking too
many would result in an overly complex model. Therefore, the transition layer first reduces the number
of channels by a 1*1 convolution layer, and then, to compress the height and width of the feature map,
an average pooling layer with stride=2 is used for downsampling, which further reduces the model
complexity.</p>
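          <p>The channel bookkeeping described above can be checked with a few lines of arithmetic. The sketch below (an illustration, not the paper's code) applies the k0 + l·k rule for each dense block of DenseNet-121 (6, 12, 24 and 16 bottlenecks, growth rate k = 32) and halves the channels at each transition layer, the standard DenseNet-BC compression of 0.5.</p>

```python
def dense_block_out(c_in, num_bottlenecks, k=32):
    # each bottleneck appends k feature maps: c_out = c_in + l * k
    return c_in + num_bottlenecks * k

def transition_out(c_in, compression=0.5):
    # the transition's 1*1 convolution halves the channel count
    return int(c_in * compression)

# DenseNet-121: a 64-channel stem, then blocks of 6, 12, 24, 16 bottlenecks
c = 64
for i, l in enumerate([6, 12, 24, 16]):
    c = dense_block_out(c, l)
    if i < 3:                      # no transition after the last block
        c = transition_out(c)
print(c)  # 1024 final feature channels
```

          <p>The final count of 1024 channels matches the published DenseNet-121 feature dimension, which is a quick sanity check on the k0 + l·k rule.</p>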
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Introduction of the attention mechanism</title>
      <p>To ensure the detection accuracy of the model while performing lightweight optimization,
this paper intersperses attention mechanism modules in the network structure.</p>
      <p>To keep the balance between model complexity and performance, this paper adopts the efficient
channel attention (ECA) module, which contains only a small number of parameters while delivering
significant performance gains.</p>
      <p>SE-Net is the basis of the ECA-Net optimization, and its structure is shown in figure 4(a). Global average
pooling is first performed separately for each input channel, followed by two fully connected layers
using different activation functions. This computational process maps the channel features
from a high to a low and then back to a high dimension. The dimensionality reduction lowers
the complexity of the model, but it also breaks the direct correspondence between channels and their
weights, which may result in the loss of critical information.</p>
      <p>By observing SE-Net and improving on it, ECA-Net shows empirically that it is important to avoid
dimensionality reduction when learning channel attention, and that a proper cross-channel interaction can
maintain performance while increasing the complexity of the model only slightly. Its structural design
is given in figure 4(b).</p>
      <p>
        On the left is the feature map of the original input image, which is first subjected to global average pooling
(GAP) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] to obtain a 1*1*C feature map, on which ECA captures local cross-channel interaction by a
fast one-dimensional convolution of size k. The parameter k can be derived by an adaptive function
based on the dimension of the input channels C, and it represents the local coverage of the
cross-channel interaction. Then, the sigmoid activation function generates the weight for every
single channel. After that, the features with channel attention are obtained by multiplying the original
input features with the channel weights. A network based on this module more easily extracts
discriminative features of images along the channel dimension.
      </p>
      <p>In order to avoid the consumption of large computational resources due to manual tuning, the size
of the convolution kernel k can be generated adaptively from the channel dimension C:</p>
      <p>k = ψ(C) = |⌊log2(C)/γ + b/γ⌋|_odd (1)</p>
      <p>where |t|_odd denotes the odd number nearest to t, γ is set to 2, and b is set to 1. From the equation, it is
clear that the communication range of the high-dimensional channels is longer, while the communication
range of the low-dimensional channels is relatively contracted.</p>
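      <p>The adaptive kernel-size rule can be written as a short function. The sketch below is an illustration under the stated defaults γ = 2 and b = 1 (not the authors' code); it snaps the value to an odd number so that wider channel dimensions receive a larger one-dimensional convolution kernel.</p>

```python
import math

def eca_kernel_size(c, gamma=2, b=1):
    """k = |log2(C)/gamma + b/gamma|_odd, the ECA adaptive kernel size."""
    t = int(abs((math.log2(c) + b) / gamma))
    return t if t % 2 == 1 else t + 1   # snap to an odd number

for c in (64, 128, 256, 512):
    print(c, eca_kernel_size(c))  # higher C -> larger local coverage
```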
      <p>In this paper, three ECA layers are inserted at the connections between the backbone and neck of the
model to avoid dimensionality reduction while better bridging the two components, making the feature
transfer of the model more efficient and preventing the loss of feature information. At the
same time, the ECA layer allows the model to focus on the more critical features and suppress unnecessary
ones, thus ignoring the interference from the image background, which enhances the detection accuracy
of the model even further.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Spatially adaptive fusion of feature layers</title>
      <p>ASFF can further enhance the extraction capability of PANet and can fuse the information of multiple
feature layers simultaneously. Its core idea is to adaptively adjust, by learning, the spatial weights of
each scale's features during fusion. Its underlying structure is shown in figure 5.</p>
      <p>Taking ASFF-3 as an example, level_1 and level_2 are scaled down to the same resolution and number
of channels as level_3 (e.g. by a convolution with a 3*3 kernel, a stride of 2, and a padding of 1),
and are denoted level_1_resized and level_2_resized. The number of channels and the dimensionality of level_1_resized,
level_2_resized, and level_3 are then the same. Finally, level_1_resized, level_2_resized, and level_3 are multiplied
by the weights α, β and γ, respectively, the results are summed, and the number of channels is adjusted by a
final convolutional layer to obtain a new feature layer with multi-layer receptive-field fusion. The
expression is as follows.</p>
      <p>y^l = α^l · x^{1→l} + β^l · x^{2→l} + γ^l · x^{3→l}
(2)
where y^l represents the new feature map of layer l obtained by ASFF, α^l, β^l, and γ^l represent the
weight parameters learned for the three feature layers, and α^l + β^l + γ^l = 1 is guaranteed by
the Softmax function.</p>
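      <p>Once the three feature maps share a shape, the fusion reduces to a softmax-weighted sum. Below is a minimal NumPy sketch (scalar weights per layer for simplicity; the actual ASFF learns per-position weight maps, so this is only an illustration of the weighting scheme):</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def asff_fuse(x1, x2, x3, logits):
    """Weighted sum of three same-shape maps; softmax keeps alpha+beta+gamma = 1."""
    a, b, g = softmax(np.asarray(logits, dtype=float))
    return a * x1 + b * x2 + g * x3

rng = np.random.default_rng(1)
level_1_resized, level_2_resized, level_3 = (rng.standard_normal((16, 8, 8)) for _ in range(3))
fused = asff_fuse(level_1_resized, level_2_resized, level_3, logits=[0.2, 1.0, -0.5])
```

      <p>The softmax over the three learned logits is exactly what enforces the constraint α + β + γ = 1 stated after equation (2).</p>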
    </sec>
    <sec id="sec-5">
      <title>5. Designing the loss function</title>
      <p>
        The loss function of the YOLOv4 algorithm contains three components: the confidence error L_conf,
the classification error L_cls, and the regression-frame prediction error L_loc. Among them, the confidence error
and classification error continue the design idea of YOLOv3 [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. However, the CIoU loss is used in the
design of the regression-frame prediction error. CIoU builds on IoU, GIoU, and DIoU, and takes
into account three geometric factors: the overlap area, the centre-point distance, and the aspect
ratio. They are calculated by the following equations [
        <xref ref-type="bibr" rid="ref20 ref21 ref22">20, 21, 22</xref>
        ].
      </p>
      <p>L_conf = − ∑_{i=0}^{S²} ∑_{j=0}^{B} I_{ij}^{obj} [Ĉ_i log(C_i) + (1 − Ĉ_i) log(1 − C_i)]
− λ_noobj ∑_{i=0}^{S²} ∑_{j=0}^{B} I_{ij}^{noobj} [Ĉ_i log(C_i) + (1 − Ĉ_i) log(1 − C_i)]
(3)</p>
      <p>L_cls = − ∑_{i=0}^{S²} I_{ij}^{obj} ∑_{c∈classes} {p̂_i(c) log[p_i(c)] + [1 − p̂_i(c)] log[1 − p_i(c)]}
(4)</p>
      <p>CIoU(B, B^gt) = IoU(B, B^gt) − ρ²(b_ctr, b_ctr^gt)/c² − αν
(5)</p>
      <p>ν = (4/π²) (arctan(w^gt/h^gt) − arctan(w/h))²
(6)</p>
      <p>where: S² is the number of grids, B is the number of prediction frames in each grid, I_{ij}^{obj} and I_{ij}^{noobj}
are the indicator values of the prediction frames containing and not containing the target, Ĉ_i is the
confidence ground truth, C_i is the prediction confidence, λ_noobj is the penalty weight factor, p̂_i(c) is the
actual probability that the target in the cell belongs to category c, p_i(c) is the probability that the
prediction is of category c, IoU(B, B^gt) is the intersection ratio of the predicted frame B to the real
frame B^gt, ρ²(b_ctr, b_ctr^gt) is the squared Euclidean distance between the centre points of the predicted and
real frames, c is the diagonal distance of the minimum closed region containing both the predicted
and real frames, α is the balance adjustment parameter, and ν is the parameter measuring the consistency
of the aspect ratio.</p>
      <p>The regression-frame prediction error L_loc is defined using the CIoU loss:</p>
      <p>L_loc = ∑_{i=0}^{S²} ∑_{j=0}^{B} I_{ij}^{obj} (1 − CIoU(B_{ij}, B̂_{ij}))
(7)</p>
      <p>where B_{ij} represents the predicted bounding box and B̂_{ij} represents the ground-truth bounding box.</p>
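      <p>The CIoU computation above can be combined into a single routine. The sketch below is an independent illustration for corner-format boxes (x1, y1, x2, y2), not the paper's code; it computes IoU, the centre-distance penalty over the enclosing-box diagonal, and the aspect-ratio term ν with its balance weight α = ν / ((1 − IoU) + ν).</p>

```python
import math

def ciou(box_p, box_g):
    """CIoU for corner boxes: IoU - rho^2/c^2 - alpha*nu."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # intersection and union (overlap-area factor)
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # squared centre distance over squared enclosing-box diagonal
    rho2 = ((px1 + px2) / 2 - (gx1 + gx2) / 2) ** 2 + ((py1 + py2) / 2 - (gy1 + gy2) / 2) ** 2
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term nu and its balance weight alpha
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1)) - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return iou - rho2 / c2 - alpha * v

loss = 1 - ciou((0, 0, 4, 4), (1, 1, 5, 5))  # per-box CIoU regression loss
```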
      <p>
        In order to balance the loss sensitivity of different detection scales, in this paper, the three prediction
heads in the network structure are multiplied by different weights when calculating the total loss.
The weights assigned to Yolo Head1, Yolo Head2 and Yolo Head3 are 0.4, 1.0 and 4.0, respectively [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Experiment and analysis</title>
      <sec id="sec-6-1">
        <title>6.1. Experimental setup</title>
        <p>6.1.1. Dataset
In this paper, a variety of datasets are used for performance evaluation.</p>
        <p>1. The RSOV dataset is published by Brno University of Technology and consists of three
sub-datasets with different viewpoints, namely the rear-view shot dataset, the eye-level-view shot
dataset, and the unconstrained shot dataset, each of which contains 5,000 annotated images of
vehicles. Thus, the dataset has a total of 15,000 images containing information on 41 different
brands and categories of vehicles. In this paper, we divide the dataset according to the ratio of
8:1:1, finally obtaining 12,000 training images, 1,500 validation images and 1,500 test images.
2. The BIT-Vehicle dataset is captured by two cameras at different times and locations, and its images
vary in terms of lighting conditions, vehicle color, and camera viewpoint. All vehicles in the
dataset are classified into six categories: Bus, Microbus, Minivan, Sedan, SUV, and Truck. The
dataset has a total of 9,850 images, and it is divided according to the ratio of 8:1:1,
resulting in 7,880 training images, 985 validation images and 985 test images.
3. To verify the detection generality of the proposed algorithm, the classical multi-category dataset
PASCAL VOC is used. The PASCAL VOC Challenge is a world-class competition in computer
vision covering several sub-tasks such as image classification and target detection and segmentation.
VOC2007 and VOC2012 are two classical benchmark datasets publicly provided
by the competition, together covering 20 categories including people, airplanes, and cars, and
each version of the dataset is produced in a uniform manner. In this paper, we use in total 16,551
images of trainval data from VOC2007 and VOC2012 as the overall dataset, which is randomly
partitioned according to the ratio of 0.81:0.09:0.1, resulting in 13,405 training images, 1,490 validation
images and 1,656 test images.</p>
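        <p>The 8:1:1 partitions used above are plain shuffled splits. A small, self-contained sketch (illustrative only, with an arbitrary seed):</p>

```python
import random

def split_dataset(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle and split into train/val/test by the given ratios (8:1:1 here)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# e.g. the 15,000-image RSOV dataset -> 12,000 / 1,500 / 1,500
train, val, test = split_dataset(range(15000))
```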
        <sec id="sec-6-1-1">
          <title>6.1.2. Dataset pre-processing</title>
          <p>Since there are large differences in the number of samples and an uneven distribution of images
in some datasets, mosaic data enhancement is performed on the dataset before model training. The
operation process is shown in figure 6.</p>
          <p>The mosaic data enhancement method first takes a batch of data from the dataset,
then randomly scales or shifts four images in different proportions, places them toward
the four corners of a rectangle, crops the parts of the images that exceed the specified input
size, and finally obtains a new image as training data. Using the mosaic method for data preprocessing not
only enhances the diversity of the data and enriches the image dataset, but also effectively increases the
batch size and improves the efficiency of the model.</p>
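          <p>The corner-placement step can be sketched as follows. This is a simplified NumPy illustration: it assumes the four source images are already at least as large as the output canvas and skips the random rescaling and label cropping of the full method.</p>

```python
import numpy as np

def mosaic(imgs, out_size=416, seed=0):
    """Place four images at the corners of one canvas around a random centre,
    cropping whatever exceeds the output size (a minimal mosaic sketch)."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros((out_size, out_size, 3), dtype=imgs[0].dtype)
    cx = int(rng.uniform(0.3, 0.7) * out_size)   # random horizontal split
    cy = int(rng.uniform(0.3, 0.7) * out_size)   # random vertical split
    regions = [(0, cy, 0, cx), (0, cy, cx, out_size),
               (cy, out_size, 0, cx), (cy, out_size, cx, out_size)]
    for img, (y1, y2, x1, x2) in zip(imgs, regions):
        h, w = y2 - y1, x2 - x1
        canvas[y1:y2, x1:x2] = img[:h, :w]       # crop the excess part
    return canvas

imgs = [np.full((416, 416, 3), v, dtype=np.uint8) for v in (50, 100, 150, 200)]
m = mosaic(imgs)
```

          <p>Because four source images contribute to one training sample, each optimizer step effectively sees more images, which is the "increased batch size" effect described above.</p>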
        </sec>
        <sec id="sec-6-1-2">
          <title>6.1.3. Evaluation index</title>
          <p>1. Evaluation metrics for model inference</p>
          <p>For the target detection task, the precision P, the recall R, and the mean Average Precision
(mAP) are commonly used as evaluation metrics for model identification. The
calculation of the relevant metrics is given below.</p>
          <p>a) Precision P and recall R.</p>
          <p>The precision is the ratio of correct positive predictions to the total number of positive predictions. It is one
of the simplest metrics. The recall is the ratio of the number of correctly predicted positive cases
to the total number of positive-case labels. They are calculated as follows:</p>
          <p>P = TP / (TP + FP),
(8)</p>
          <p>R = TP / (TP + FN),
(9)</p>
          <p>where TP denotes the number of positive samples correctly identified as positive, FP
denotes the number of negative samples incorrectly identified as positive, and FN denotes
the number of positive samples incorrectly identified as negative.</p>
          <p>b) AP and mAP.</p>
          <p>The Average Precision (AP) is the area enclosed by the precision-recall curve. The better a
classifier is, the higher the AP value. It is used to evaluate the detection accuracy of a class,
and is calculated as follows:</p>
          <p>AP = ∫₀¹ P(R) dR
(10)</p>
          <p>The mean Average Precision (mAP) is the average value of AP over multiple categories.
The mAP lies in the interval [0, 1] and the larger its value, the better; this metric is the most
important one for target detection algorithms and is calculated as
follows.</p>
          <p>mAP = (1/N) ∑_{i=1}^{N} AP(i)
(11)</p>
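          <p>These metrics are easy to compute directly. A small sketch with illustrative numbers only; AP is approximated here as the area under a piecewise-constant precision-recall curve:</p>

```python
def precision_recall(tp, fp, fn):
    # P = TP / (TP + FP), R = TP / (TP + FN)
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(points):
    """Area under a P-R curve given (recall, precision) points sorted by recall."""
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += p * (r - prev_r)   # rectangle for each recall segment
        prev_r = r
    return ap

p, r = precision_recall(tp=80, fp=20, fn=40)
ap = average_precision([(0.2, 1.0), (0.5, 0.8), (1.0, 0.5)])
m_ap = (ap + 0.9) / 2   # mAP: mean of per-class APs (second class assumed 0.9)
```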
          <p>2. Evaluation Index of Model Parameters</p>
          <p>In a deep learning model, the number of parameters determines to some extent
the depth of the model, the speed of inference and even the detection accuracy. A large deep
learning architecture is often accompanied by high accuracy because it is closer to the neuronal
composition of the human brain. However, a large number of parameters also means a sacrifice
of inference time and response speed. Therefore, a small number of parameters is important
for deploying deep learning models to embedded devices or platforms, and for handling large
numbers of concurrent requests.</p>
          <p>The metric of floating-point operations (FLOPs) can be understood as the amount of computation
and is used to measure the complexity of an algorithm/model. For a convolutional layer:</p>
          <p>FLOPs_conv = h_out × w_out × (2 × k_h × k_w × C_in − 1) × C_out
(12)</p>
          <p>where h_out and w_out are the height and width of the output feature map, k_h and k_w are the
kernel height and width, C_in is the number of input channels, C_out is the number of output
channels, and the term (2 × k_h × k_w × C_in − 1) accounts for the multiplication-addition
operations for each output element. For a fully connected layer:</p>
          <p>FLOPs_fc = (2 × N_in − 1) × N_out
(13)</p>
          <p>where N_in is the number of input neurons and N_out is the number of output neurons.
The total computational complexity of a neural network model can then be expressed as:</p>
          <p>Total FLOPs = ∑_{l=1}^{L} FLOPs_l
(14)</p>
          <p>where FLOPs_l is the number of floating-point operations in layer l.</p>
          <p>Params is the total number of parameters to be trained:</p>
          <p>Params = ∑_{l=1}^{L} P_l
(15)</p>
          <p>where L is the total number of layers in the model and P_l is the number of parameters in layer l.
For a typical convolutional layer:</p>
          <p>P_conv = (k_h × k_w × C_in + 1) × C_out
(16)</p>
          <p>where the +1 term accounts for the bias parameter of each output channel; for a fully connected
layer, P_fc = (N_in + 1) × N_out.</p>
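          <p>The counting rules above translate directly into code. A short sketch (an illustration of the formulas, with an arbitrary example layer):</p>

```python
def conv_flops(h_out, w_out, kh, kw, c_in, c_out):
    # 2*kh*kw*c_in - 1 multiply-add operations per output element
    return h_out * w_out * (2 * kh * kw * c_in - 1) * c_out

def conv_params(kh, kw, c_in, c_out):
    # weights plus one bias per output channel
    return (kh * kw * c_in + 1) * c_out

def fc_flops(n_in, n_out):
    return (2 * n_in - 1) * n_out

def fc_params(n_in, n_out):
    return (n_in + 1) * n_out

# e.g. a 3*3 convolution, 64 -> 128 channels, on a 52*52 output map
flops = conv_flops(52, 52, 3, 3, 64, 128)
params = conv_params(3, 3, 64, 128)
```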
          <p>Model efficiency can be characterized by the ratio:</p>
          <p>Efficiency = Accuracy / (FLOPs × Params)
(17)</p>
          <p>This metric helps quantify the trade-off between model performance and the computational
resources required.</p>
        </sec>
        <sec id="sec-6-1-3">
          <title>6.1.4. Training strategies</title>
          <p>
            In this paper, we use transfer learning to speed up the training of the models. Transfer learning
helps train a new model by migrating the weight parameters of an already trained model to the new
one. The basis of this approach is that most data and tasks are correlated; by sharing some of
the parameters of a pre-trained model with the new model to be trained, the training process of the new
model can be significantly accelerated and optimized, saving computational resources
[
            <xref ref-type="bibr" rid="ref24">24</xref>
            ].
          </p>
          <p>In deep neural networks, the earlier convolutional layers generally learn shallow features with
generality, while the later convolutional layers learn more targeted, higher-level abstract features for
the current training target. Freezing some of the network layers first can speed up the training and also
prevent the weights from being corrupted in the early stage of training. Transfer learning can also
effectively mitigate the problem of poor generalization of the model due to local minima
in the objective function. The transfer learning strategy in this paper is as follows.
1. Select publicly available DenseNet weight files trained on ImageNet or
other large datasets as the source of pre-trained weights for transfer learning.
2. Load the weight files into the backbone network of this paper's model, and then freeze the
backbone network so that it does not participate in back propagation. The other, unfrozen network layers
are trained for a certain number of epochs to perform gradient updates.
3. After a certain number of epochs, the frozen layers are unfrozen and all network layers
are involved in the backpropagation update to finally obtain the appropriate parameter matrices
and bias vectors.</p>
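          <p>The freeze-then-unfreeze schedule can be sketched schematically. The snippet below is a framework-agnostic illustration in plain Python with hypothetical parameter names (in PyTorch, the same effect is obtained by toggling requires_grad on the backbone's parameters):</p>

```python
class Param:
    """Stand-in for a trainable tensor with a requires_grad flag."""
    def __init__(self, name):
        self.name, self.requires_grad = name, True

def set_backbone_frozen(params, frozen):
    """Freeze or unfreeze every backbone parameter (schematic)."""
    for p in params:
        if p.name.startswith("backbone."):
            p.requires_grad = not frozen

params = [Param("backbone.conv1.weight"), Param("backbone.dense1.weight"),
          Param("neck.eca1.weight"), Param("head.conv.weight")]

set_backbone_frozen(params, frozen=True)    # stage 2: train only neck and head
trainable_stage2 = [p.name for p in params if p.requires_grad]

set_backbone_frozen(params, frozen=False)   # stage 3: unfreeze everything
trainable_stage3 = [p.name for p in params if p.requires_grad]
```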
        </sec>
        <sec id="sec-6-1-4">
          <title>6.1.5. Experimental conditions and parameter settings</title>
          <p>The experiments in this paper are conducted under Linux with Intel(R) Xeon(R) CPU E5-2678 v3 @
2.50GHz processor, 100GB RAM, NVIDIA GTX1070Ti graphics card, and Pytorch 1.8.0 framework for
model training and testing. The training parameters were set as shown in table 1.</p>
        </sec>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Comparative experiments and discussion</title>
        <sec id="sec-6-2-1">
          <title>6.2.1. Comparison with YOLOv4</title>
          <p>Since the framework of this paper is inspired by the structure of YOLOv4, this paper mainly uses
YOLOv4 as the comparison object to test the improvement results.</p>
          <p>1. Training process comparison.</p>
          <p>To show the convergence of the model, the YOLOv4 algorithm is compared with the proposed
algorithm in terms of the loss values during training. Since the initial training loss values are large and
would disturb the overall display of the curve, the values of the first 5 epochs are removed from
the loss curve graph. The final loss curves of the two models can be seen in figure 7.</p>
          <p>In the figure, we can see that the trend of the loss value becomes smooth in the last 10 epochs,
with almost no change in magnitude, and that the final convergence value of the proposed algorithm
is smaller than that of the original YOLOv4 model under the same loss function calculation for both
models.</p>
          <p>2. Model complexity comparison</p>
          <p>The FLOPs and Params introduced in the previous section are used here to measure the complexity
of the models.</p>
          <p>According to the different model components and usage methods, the experiments are
divided into four groups for comparison in this paper. The group marked A1 is the original YOLOv4
algorithm; the second group (A2) is based on YOLOv4 with the backbone network replaced by DenseNet
and an ECA layer added; the third group (A3) differs from A2 in that the ECA layer
is replaced by an ASFF structure; and finally, the fourth group (A4) is the proposed algorithm, with
the DenseNet backbone replacement and both ECA and ASFF layers added. The results
of each experimental group are given in table 2.</p>
          <p>As we can see, the original YOLOv4 algorithm has the highest complexity in the table; thus it takes
longer in training and inference, and the final generated weight file occupies more space. The
comparison between A3 and A4 shows that the ECA attention structure not only works well, but also
adds minimal computational pressure; comparing A2 and A4, given that the computation of the ECA
layer is known to be tiny, it can be seen that the ASFF structure accounts for a certain fraction of both
the FLOPs and Params. By comparing A1 and A4, we can see that the computation of the proposed algorithm
in this paper is only 51% of that of YOLOv4 and the number of parameters is only 36% of that of the
original one, so the proposed algorithm can significantly reduce the usage cost of the model and is
beneficial to the deployment of vehicle detection algorithms in practice.</p>
          <p>3. Comparison of detection effect</p>
          <p>In terms of detection effectiveness, YOLOv4 is used as the baseline to compare with the proposed
algorithm on a variety of datasets. The results are shown in table 3. As can be seen from the table,
the proposed algorithm improves on the baseline across the different datasets. In the
tasks and datasets related to vehicle detection, the mAP on the RSOV dataset improves by 3.53% and on the BIT-Vehicle
dataset by 2.46%. This confirms that the proposed algorithm has good applicability to the
vehicle detection task. Meanwhile, the mAP on the PASCAL VOC dataset improves by 2.86%. This
confirms that the proposed algorithm has good generalization ability and still performs well.</p>
          <p>In figure 8, the recognition effects and detection heat maps of the two algorithms on the generic
task are shown. The comparison between D1 and D2 shows that both algorithms have excellent
performance in detecting people; however, D1 misses detections of the occluded
vehicles, and it can also be seen from heat map E1 that the YOLOv4 algorithm does not
pay much attention to the target locations of the vehicles in the missed-detection area. The
recognition ability of D2 is greatly improved: on E2 we see that the attention of the proposed
algorithm to the region missed by D1 has been well improved.</p>
        </sec>
        <sec id="sec-6-2-2">
          <title>6.2.2. Performance comparison with other algorithms</title>
          <p>To further validate the proposed algorithm on the downstream detection task, it is compared
with other mainstream algorithms with respect to detection effect, complexity and other indicators.
The experimental conditions and parameter settings are the same as in section 6.1.5 of this paper, and
the datasets are RSOV for the vehicle detection task and VOC for the generic detection task; the
comparison results are given in table 4. As can be seen, the proposed algorithm outperforms the other
algorithms on both datasets. The YOLOv4-MobilenetV3 network, which combines the MobilenetV3
backbone with YOLOv4, has the simplest model with a low number of parameters, but its mAP lags
significantly behind the other algorithms, making it difficult to meet the accuracy requirements of
vehicle recognition in traffic scenarios. EfficientDet-d3 has a parameter count comparable to
YOLOv4-MobilenetV3 and excellent detection, but it takes far longer to train, on average 2-3 times
longer than the other one-stage algorithms, so it is not suitable for scenarios with strict speed
requirements. YOLOv3 and YOLOv4 are both classic algorithms of the YOLO family, and although
they detect well, both have relatively large numbers of parameters, which raises hardware costs when
deployed for traffic vehicle detection in practice. RetinaNet has fewer parameters than the former
two, but its computational load is significantly higher and its detection effect still falls some distance
short of the proposed algorithm. The above comparison therefore shows that the proposed algorithm
balances computation, parameter count and detection effect: it not only detects well, but also has
advantages in training difficulty and model deployment cost over the other algorithms.</p>
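The parameter-count gaps discussed above come largely from the convolution type each backbone uses. A back-of-the-envelope comparison of a standard convolution against the depthwise-separable factorization used by MobileNet-style backbones (the formulas are standard; the function names are ours):

```python
def conv2d_params(k, c_in, c_out, bias=True):
    """Parameters of a standard k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def dw_separable_params(k, c_in, c_out, bias=True):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise
    convolution, the factorization used by MobileNet-style backbones."""
    dw = k * k * c_in + (c_in if bias else 0)      # one k x k filter per channel
    pw = c_in * c_out + (c_out if bias else 0)     # 1 x 1 channel mixing
    return dw + pw
```

For a 3 x 3 layer with 256 input and 256 output channels (bias omitted), the standard convolution needs 589,824 parameters versus 67,840 for the separable version, roughly an 8.7x reduction, which is why backbone choice dominates model size.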
        </sec>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Ablation experiments and analysis</title>
        <p>Ablation experiments are used to analyze the degree of influence of the different components
on the whole model. To highlight the effectiveness and synergy of the proposed improvements,
ablation experiments are conducted using subsets of the improvement points in this paper. The
datasets are RSOV for the vehicle detection task and VOC for the generic detection task, and the
experimental configuration and parameter settings for the ablation experiments are the same as in
section 6.1.5 of this paper.</p>
        <p>In this ablation experiment, the component modules are grouped as follows: the DenseNet
backbone network module, the ECA attention module, the ASFF feature fusion module, and the mosaic
data augmentation module. The mAP of each combination is shown in table 5, where “✓” indicates
that the method is used and “–” indicates that it is not.</p>
        <p>Firstly, there is a considerable improvement in detection efficiency when all the improved
modules work in concert. The experimental data of G1 and G2 show that the detection network with
the lightweight DenseNet backbone, trained with transfer learning, extracts features well across the
datasets. Comparing G4, G5 and G6 shows that the clearest gains in mAP come from the ASFF module
and the mosaic data augmentation method: these two methods improve the utilization of feature
information and the generalization ability of the model, and their synergy makes the gain even more
pronounced. As G3 shows, the gain from the ECA attention module is small in percentage terms;
however, its ability to refine the effective information in the channels lifts mAP to a new best value
when it is combined with the ASFF module and the mosaic method. This ablation experiment thus
demonstrates the effectiveness of each component module of the proposed algorithm.</p>
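The ASFF contribution observed in G4-G6 comes from letting the network weight each pyramid level per pixel before summation. A minimal NumPy sketch of that fusion step, assuming the levels have already been resized to a common resolution (an illustration under those assumptions, not the paper's implementation):

```python
import numpy as np

def asff_fuse(levels, logits):
    """Adaptive spatial feature fusion (sketch): per-pixel softmax
    weights over N same-shape feature maps, then a weighted sum.

    levels: list of N arrays of shape (C, H, W), already resized to a
            common resolution; logits: (N, H, W) learned fusion scores.
    """
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    w = e / e.sum(axis=0, keepdims=True)             # (N, H, W), sums to 1 per pixel
    stacked = np.stack(levels)                       # (N, C, H, W)
    return (w[:, None, :, :] * stacked).sum(axis=0)  # (C, H, W)
```

With equal logits the fusion degenerates to a plain average of the levels; training the logits lets each spatial position favor the pyramid level whose receptive field suits the object there.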
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This paper focuses on one-stage target detection, which places higher requirements on detection
speed and deployment cost. The proposed method therefore uses YOLOv4 as the base architecture
and significantly reduces the number of parameters by adopting DenseNet, which performs
excellently, as the backbone feature extraction network; it reconstructs the existing FPN module,
uses the ECA attention structure for the transition and transfer of feature information between
backbone and neck, and adds cross-fusion of information via the ASFF structure before the final
detection layer of the network; it also optimizes the loss function and the image preprocessing. The
efficiency of the proposed method is studied on the RSOV, BIT-Vehicle and VOC datasets.</p>
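The ECA structure summarized above replaces SE-style fully connected layers with a small 1D convolution across neighbouring channels. An illustrative NumPy sketch (the fixed averaging kernel stands in for ECA's learned 1D weights; function and parameter names are ours):

```python
import numpy as np

def eca_attention(x, kernel=None, k_size=3):
    """ECA-style channel attention (sketch): global average pool to a
    channel vector, a size-k 1D convolution across channels, a sigmoid
    gate, then rescaling of the input channels.

    x: (C, H, W); kernel: (k_size,) 1D conv weights (ones/k by default).
    """
    c = x.shape[0]
    if kernel is None:
        kernel = np.ones(k_size) / k_size
    gap = x.mean(axis=(1, 2))                       # (C,) channel descriptor
    pad = k_size // 2
    padded = np.pad(gap, pad, mode="edge")
    conv = np.array([padded[i:i + k_size] @ kernel for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))              # sigmoid gate, (C,)
    return x * gate[:, None, None]                  # reweighted channels
```

Because only a length-k kernel is learned, the module adds a handful of parameters per placement, which is consistent with the small model-size overhead reported for the attention component.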
      <p>The training process converges faster and reaches a lower value of the loss function for the
proposed method. A complexity comparison between the proposed method and the basic YOLOv4
shows an almost twofold decrease in complexity, while the number of parameters is reduced by 64%.
The detection accuracy improves to different degrees on the various datasets; for example, mAP
reaches 98.70% on the test set of the RSOV dataset. The research results provide some reference value
for the traffic infrastructure construction of smart cities.</p>
      <p>For the data augmentation algorithm and the traffic information recognition algorithm proposed
in this article, expansion and optimization of the dataset can be considered: doing so can improve the
accuracy and generalization of traffic information recognition. For example, images of different
vehicle types can be added to enrich the traffic-scene dataset and improve data diversity, and further
data augmentation techniques such as scaling and random cropping can be used to increase the data
volume. We also considered the possibility of deploying the models on embedded devices in vehicles,
and therefore strictly controlled the complexity of the algorithms.</p>
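The mosaic training enhancement used in this paper's preprocessing stitches four images into one training sample, exposing the network to varied scales and contexts. A minimal sketch with nearest-neighbour resizing (real mosaic also jitters the split point and remaps the bounding boxes; both are omitted here, and the function name is ours):

```python
import numpy as np

def mosaic4(imgs, out_hw=(416, 416)):
    """Mosaic augmentation (sketch): tile four images into a 2x2 canvas
    of size out_hw, resizing each into its quadrant.

    imgs: list of four (H, W, 3) uint8 arrays.
    """
    H, W = out_hw
    h, w = H // 2, W // 2
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    slots = [(0, 0), (0, w), (h, 0), (h, w)]        # top-left corner of each quadrant
    for img, (y, x) in zip(imgs, slots):
        # crude nearest-neighbour resize to the quadrant size
        ys = np.arange(h) * img.shape[0] // h
        xs = np.arange(w) * img.shape[1] // w
        canvas[y:y + h, x:x + w] = img[ys][:, xs]
    return canvas
```

Sampling the four source images randomly each step yields a different composite every time, which is the source of the generalization gain seen in the ablation study.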
      <p>New versions of YOLO have appeared recently. In future studies, we will focus on implementing
the developed algorithm in modern versions of YOLO.</p>
      <p>Declaration on Generative AI: The author has not employed any generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] X. Dai, Y. Chen, B. Xiao, D. Chen, M. Liu, L. Yuan, L. Zhang, Dynamic Head: Unifying Object Detection Heads with Attentions, in: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 7369-7378. doi:10.1109/CVPR46437.2021.00729.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] M. Tan, Q. V. Le, EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, PMLR, 2019, pp. 6105-6114. URL: http://proceedings.mlr.press/v97/tan19a.html.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] X. Zhang, X. Zhou, M. Lin, J. Sun, ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices, in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848-6856. doi:10.1109/CVPR.2018.00716.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, CoRR abs/1704.04861 (2017). URL: http://arxiv.org/abs/1704.04861. arXiv:1704.04861.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] A. Bochkovskiy, C. Wang, H. M. Liao, YOLOv4: Optimal Speed and Accuracy of Object Detection, CoRR abs/2004.10934 (2020). URL: https://arxiv.org/abs/2004.10934. arXiv:2004.10934.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] Y. Gong, X. Yu, Y. Ding, X. Peng, J. Zhao, Z. Han, Effective Fusion Factor in FPN for Tiny Object Detection, in: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1159-1167. doi:10.1109/WACV48630.2021.00120.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] N. Guo, Z. Bai, Multi-scale Pulmonary Nodule Detection by Fusion of Cascade R-CNN and FPN, in: 2021 International Conference on Computer Communication and Artificial Intelligence (CCAI), 2021, pp. 15-19. doi:10.1109/CCAI50917.2021.9447531.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] T. Ahmad, X. Chen, A. S. Saqlain, Y. Ma, FPN-GAN: Multi-class Small Object Detection in Remote Sensing Images, in: 2021 IEEE 6th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), 2021, pp. 478-482. doi:10.1109/ICCCBDA51879.2021.9442506.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] Y. Wang, A. Zell, Yolo+FPN: 2D and 3D Fused Object Detection With an RGB-D Camera, in: 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 4657-4664. doi:10.1109/ICPR48806.2021.9413066.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] M. Jaderberg, K. Simonyan, A. Zisserman, K. Kavukcuoglu, Spatial Transformer Networks, in: C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, R. Garnett (Eds.), Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, 2015, pp. 2017-2025. URL: https://proceedings.neurips.cc/paper/2015/hash/33ceb07bf4eeb3da587e268d663aba1a-Abstract.html.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] J. Hu, L. Shen, G. Sun, Squeeze-and-Excitation Networks, in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141. doi:10.1109/CVPR.2018.00745.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] S. Woo, J. Park, J. Lee, I. S. Kweon, CBAM: Convolutional Block Attention Module, in: V. Ferrari, M. Hebert, C. Sminchisescu, Y. Weiss (Eds.), Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, volume 11211 of Lecture Notes in Computer Science, Springer, 2018, pp. 3-19. doi:10.1007/978-3-030-01234-2_1.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, X. Tang, Residual Attention Network for Image Classification, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6450-6458. doi:10.1109/CVPR.2017.683.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] H. Basak, R. Kundu, A. Agarwal, S. Giri, Single Image Super-Resolution using Residual Channel Attention Network, in: 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), 2020, pp. 219-224. doi:10.1109/ICIIS51140.2020.9342688.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] V. Mnih, N. Heess, A. Graves, K. Kavukcuoglu, Recurrent Models of Visual Attention, in: Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13, 2014, Montreal, Quebec, Canada, 2014, pp. 2204-2212. URL: https://proceedings.neurips.cc/paper/2014/hash/09c6c3783b4a70054da74f2538ed47c6-Abstract.html.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] W. Li, X. Zhang, Y. Peng, M. Dong, Spatiotemporal Fusion of Remote Sensing Images using a Convolutional Neural Network with Attention and Multiscale Mechanisms, International Journal of Remote Sensing 42 (2021) 1973-1993. doi:10.1080/01431161.2020.1809742.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] J. Deng, L. Cheng, Z. Wang, Attention-based BiLSTM fused CNN with gating mechanism model for Chinese long text classification, Comput. Speech Lang. 68 (2021) 101182. doi:10.1016/J.CSL.2020.101182.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] M. Lin, Q. Chen, S. Yan, Network In Network, in: Y. Bengio, Y. LeCun (Eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL: http://arxiv.org/abs/1312.4400.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] N. K. Kim, H. K. Kim, Polyphonic Sound Event Detection Based on Residual Convolutional Recurrent Neural Network With Semi-Supervised Loss Function, IEEE Access 9 (2021) 7564-7575. doi:10.1109/ACCESS.2020.3048675.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20] Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, D. Ren, Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression, in: The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, AAAI Press, 2020, pp. 12993-13000. doi:10.1609/AAAI.V34I07.6999.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>H.</given-names>
            <surname>Rezatofighi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tsoi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gwak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sadeghian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Reid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Savarese</surname>
          </string-name>
          ,
          <article-title>Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression</article-title>
          ,
          <source>in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>658</fpage>
          -
          <lpage>666</lpage>
          . doi:10.1109/CVPR.2019.00075.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zuo</surname>
          </string-name>
          ,
          <article-title>Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation</article-title>
          ,
          <source>IEEE Transactions on Cybernetics</source>
          <volume>52</volume>
          (
          <year>2022</year>
          )
          <fpage>8574</fpage>
          -
          <lpage>8586</lpage>
          . doi:10.1109/TCYB.2021.3095305.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Z.-j.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-f.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <article-title>Singularity Intensity Function Analysis of Autoregressive Spectrum and Its Application in Weak Target Detection Under Sea Clutter Background</article-title>
          ,
          <source>Radio Science</source>
          <volume>55</volume>
          (
          <year>2020</year>
          )
          e2020RS007108. doi:10.1029/2020RS007108.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <article-title>Incremental learning based multi-domain adaptation for object detection</article-title>
          ,
          <source>Knowl. Based Syst</source>
          .
          <volume>210</volume>
          (
          <year>2020</year>
          )
          106420. doi:10.1016/J.KNOSYS.2020.106420.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>