<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automated adaptive traffic light control based on computer vision</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vasyl Teslyuk</string-name>
          <email>vasyl.m.teslyuk@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iryna Gado</string-name>
          <email>iryna.v.nychai@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bohdan Fylypchuk</string-name>
          <email>bohdan.fylypchuk.kn.2021@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Koval</string-name>
          <email>andrii.v.koval@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Automated Control Systems, Lviv Polytechnic National University</institution>
          ,
          <addr-line>12 S. Bandera Str., Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SoftServe</institution>
          ,
          <addr-line>2D Sadova Street, Lviv, 79021</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper presents a method for automated adaptive traffic light control in urban infrastructure using computer vision technologies. The proposed approach addresses pressing issues of traffic congestion, delays, and limited accessibility resulting from increasing vehicle density in modern cities. A modular and scalable software system has been developed to detect vehicles in real time using the YOLOv11 deep learning model and to process the data through decision-making logic implemented in .NET Core. WebSocket is used for real-time communication between modules, while an automatic fallback to HTTP ensures continuity in case of connection loss. A React-based web interface allows for system monitoring, configuration management, and access to logs. A formal mathematical model is introduced to dynamically allocate green light durations based on real-time vehicle detection and configurable traffic density thresholds. Unlike traditional fixed-cycle systems or computationally heavy machine learning frameworks, the proposed solution balances precision, modularity, and responsiveness. The approach also anticipates future enhancements, including pedestrian detection for inclusive mobility and integration with smart city platforms.</p>
      </abstract>
      <kwd-group>
        <kwd>adaptive traffic light control</kwd>
        <kwd>computer vision</kwd>
        <kwd>YOLO</kwd>
        <kwd>WebSocket</kwd>
        <kwd>.NET Core</kwd>
        <kwd>React</kwd>
        <kwd>intelligent transportation systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The rapid growth in the number of vehicles in urban areas leads to the overloading of road
infrastructure, increased traffic congestion, higher levels of air pollution, and delays in the
movement of public transport and emergency services. These challenges drive the development
and implementation of intelligent traffic management systems, particularly adaptive traffic lights
capable of responding to real-time traffic conditions.</p>
      <p>Modern approaches to dynamic traffic regulation increasingly rely on technologies such as
computer vision, deep learning, and real-time video stream processing. However, the deployment
of such systems often requires significant computational resources (especially GPUs), large
volumes of training data, and complex infrastructure, which can hinder scalability in environments
with limited budgets and technical capabilities.</p>
      <p>This paper proposes a modular architecture for an adaptive traffic light system integrating a
YOLO-based vehicle detection module, a WebSocket communication channel, decision-making
logic implemented on the .NET platform, and an administrative panel developed with React. The
proposed approach combines the high object detection accuracy typical of deep convolutional
neural networks with the flexibility of modern web development technologies, thereby minimizing
resource requirements and facilitating system customization, extension, and scalability.</p>
      <p>The designed system operates in real time, dynamically adjusting traffic light phases based on
current traffic density. To address broader urban mobility challenges, future developments are
planned to incorporate pedestrian detection near crosswalks, enabling adaptive adjustments of
traffic light phases. With the expansion of observation zones and further refinement of
decision-making algorithms, the system could be adapted to promote inclusive urban environments,
particularly to assist individuals with mobility impairments — a demographic that has significantly
increased as a result of the full-scale war in Ukraine. The integration of automated pedestrian
detection would enhance safety and contribute to reducing physical barriers within urban
infrastructure.</p>
      <p>0000-0002-5974-9310 (V. Teslyuk); 0000-0003-1615-6483 (I. Gado); 0009-0006-5565-162X (B. Fylypchuk);
0009-0006-8815-1031 (A. Koval)</p>
      <p>© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>The proposed study aims to automate the process of adaptive traffic light control within urban
transport environments by employing computer vision technologies, enabling real-time vehicle
detection. The developed system is designed to enhance traffic management efficiency, reduce
congestion, and improve the throughput of intersections under dynamic regulation conditions.</p>
      <p>The object of the study is the traffic light regulation process within urban infrastructure.</p>
      <p>The subject of the study is the methods and tools for automated adaptive traffic light control
based on computer vision technologies and software-hardware solutions.</p>
      <p>The main objective of the work is to increase the efficiency of traffic flow management by
developing a model for automated traffic light regulation using computer vision technologies and
modern real-time data processing tools.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>In contemporary scientific and engineering literature, intelligent traffic control systems utilizing
computer vision and deep learning are actively studied. In particular, the works [1 –3] propose
models for adaptive traffic light control based on the prediction of traffic flows using neural
networks.</p>
      <p>Deep reinforcement learning algorithms for optimizing traffic light cycles under varying traffic
intensities are discussed in [4–6]. In contrast, studies [7, 8] focus on the application of computer
vision for vehicle detection, which serves as the basis for decision-making regarding signal phase
changes. However, such systems often require significant computational resources and involve
complex deployment procedures.</p>
      <p>Solutions based on OpenCV and the processing of regions of interest (ROIs) are described in [9,
10], where the authors emphasize the efficiency of real-time vehicle detection under constrained
computational resources.</p>
      <p>A distinct category of research [11, 12] addresses the theoretical aspects of adaptive traffic light
control, as well as hybrid approaches that combine classical algorithms with machine learning
methods.</p>
      <p>Against this backdrop, the proposed study is particularly relevant as it focuses not only on the
theoretical analysis of existing approaches but also on the practical integration of key components
into a unified system. Specifically, the integration of YOLO as a vehicle detector, a WebSocket
channel for real-time communication, control logic implemented on the .NET platform, and an
administrative interface developed with React enables the creation of an adaptive system that
operates with minimal latency, is scalable, and does not require complex retraining procedures.</p>
      <p>Compared to systems based solely on complex machine learning models, the proposed solution
demonstrates a balance between accuracy, flexibility, and computational efficiency, making it
suitable for rapid pilot deployment in urban environments.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Materials and Methods</title>
      <p>The development of the intelligent dynamic traffic light management system required a
comprehensive technical approach, which included a well-founded choice of system architecture,
technologies, and implementation tools. The primary criteria for the selection of these components
were stable real-time operation, scalability, deployability on available hardware, and ease of future
maintenance.</p>
      <p>A modular system architecture was designed to incorporate components for computer vision,
traffic light control logic, communication channels between system modules, an administrative
interface, and a data storage system. Python was selected as the primary programming language
for implementing the computer vision module due to its flexibility and compatibility with libraries
such as OpenCV, NumPy, and Torch, as well as with modern image processing models, including
Ultralytics YOLO. This made real-time video processing feasible even on embedded devices.</p>
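      <p>As an illustration of this stack, the following sketch reads frames with OpenCV and runs an Ultralytics YOLOv11 model on each one. The weights file name and camera source are assumptions for the example, not values fixed by the system; the heavy imports are kept inside the function so the rest of the module remains importable without them.</p>

```python
# Sketch of the client-side stack described above: OpenCV supplies frames
# and an Ultralytics YOLOv11 model detects objects in each one. The weights
# file ("yolo11n.pt") and camera source (0) are illustrative assumptions.

def detection_loop(source=0, weights="yolo11n.pt"):
    """Yield per-frame detection boxes from the given video source."""
    import cv2                      # imported lazily: only needed at runtime
    from ultralytics import YOLO

    model = YOLO(weights)           # COCO-pretrained YOLOv11 variant
    cap = cv2.VideoCapture(source)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            result = model(frame)[0]   # one Results object per frame
            yield result.boxes         # boxes expose .cls and .conf tensors
    finally:
        cap.release()
```

      <p>In the deployed module this loop would feed the vehicle counting and reporting logic described later in this section.</p>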
      <p>To support the object detection functionality, the YOLOv11 family of models developed by
Ultralytics was chosen due to its balance between inference speed and detection accuracy [13].
These models were pretrained on the COCO dataset (Common Objects in Context), a widely
adopted benchmark in computer vision research. A detailed summary of the YOLOv11 model
variants is provided in Table 1, while the key characteristics of the COCO dataset are presented in
Table 2.</p>
      <p>A performance comparison of YOLOv11 models in terms of accuracy and inference time is
illustrated in Figure 1.</p>
      <p>The server-side component of the system was developed using the .NET Core platform and C#,
which allows for efficient event processing, adaptive traffic light control, interaction with the
MSSQL database, and API provision for the client-side. The administrative interface was built using
React, allowing system monitoring, configuration parameter adjustments, log viewing, and
intersection status updates.</p>
      <p>For real-time data exchange between the Python application and the server-side, WebSocket
was employed, ensuring stable two-way communication. If the connection is lost, the system
automatically switches to a backup HTTP channel to ensure continuous operation. The MSSQL
relational database is used to store traffic light configurations, change histories, and action logs.
Docker was utilized for containerization of system components, while Git was used for version
control and collaborative development.</p>
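      <p>The fallback behaviour can be sketched as follows. The payload fields and the injected sender callables are illustrative assumptions, not the system's actual API; injecting the senders keeps the fallback policy testable without a live server.</p>

```python
# Sketch of the dual-channel reporting: prefer the WebSocket channel and
# fall back to HTTP when it fails. Payload fields and the injected sender
# callables are illustrative assumptions, not the system's actual API.
import json

def build_report(camera_id, vehicle_count, timestamp):
    """Serialize one detection report; both channels carry the same JSON."""
    return json.dumps({
        "cameraId": camera_id,
        "vehicleCount": vehicle_count,
        "timestamp": timestamp,
    })

def send_report(report, ws_send, http_post):
    """Try the WebSocket sender first; on failure, post over HTTP.

    Returns the name of the channel that delivered the report.
    """
    try:
        ws_send(report)
        return "websocket"
    except OSError:
        http_post(report)
        return "http"
```

      <p>Because the senders are passed in as callables, the same policy works whether the WebSocket is backed by a persistent connection object or a thin wrapper that reconnects on demand.</p>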
      <p>The selected technologies complement each other effectively, ensuring that all functional
system requirements are met—from video data collection and event processing to adaptive traffic
light control and administration. This approach ensures stable system operation even with limited
hardware resources and provides a foundation for future scalability and integration with other city
infrastructure systems.</p>
      <p>To ensure real-time adaptive control of traffic lights, the system implements a calculation model
that dynamically determines the duration of the green phase based on the detected number of
vehicles. The underlying algorithm is described below.</p>
      <p>The system determines the duration of the green phase for each traffic light based on a direct
dependency model linked to the number of detected vehicles.</p>
      <p>Using real-time computer vision data, the number of vehicles N approaching from each
direction is calculated. To improve detection reliability, a confidence threshold cthreshold is
introduced, which defines the minimum confidence level required for an object to be considered a
valid detection.</p>
      <p>An object is recognized as a vehicle and counted only if the following condition is met:</p>
      <p>N = ∑_{i=1}^{M} δ(c_i ≥ c_threshold), (2)
where M is the total number of detected objects in the frame, c_i is the confidence value for the
i-th detection, and δ(⋅) is the indicator function, equal to 1 if the condition is true and 0 otherwise.</p>
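      <p>A direct reading of this formula in code, assuming detections arrive as (class_id, confidence) pairs from the COCO-pretrained detector, might look as follows; restricting the count to COCO's vehicle categories is an assumption consistent with the paper's vehicle-only counting.</p>

```python
# Equation (2) as code: a detection contributes 1 to N only if its
# confidence c_i meets the threshold. Filtering by COCO vehicle classes
# (car, motorcycle, bus, truck) is an assumption added for illustration.

VEHICLE_CLASSES = {2, 3, 5, 7}  # COCO ids: car, motorcycle, bus, truck

def count_vehicles(detections, c_threshold=0.5):
    """N = sum over i of indicator(c_i >= c_threshold), vehicles only."""
    return sum(
        1
        for class_id, confidence in detections
        if class_id in VEHICLE_CLASSES and confidence >= c_threshold
    )
```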
      <p>The resulting number N is then utilized within the proposed model to calculate the adaptive
green phase duration, ensuring consistency with the system's real-time decision-making
framework.</p>
      <p>A sequence table (SequenceGreenTime) is configured by the system administrator, specifying
recommended green phase durations T for different ranges of vehicle counts N. This approach
enables the consideration of specific characteristics of individual intersections and local traffic
patterns (for instance, by allocating additional time for accident-prone directions or transit routes).</p>
      <p>If an exact match is found in the SequenceGreenTime(N) table, the predefined duration is used:</p>
      <p>T = SequenceGreenTime(N). (3)</p>
      <p>If no exact match exists, the green phase duration is calculated according to the following
formula:</p>
      <p>T = N × t_per_vehicle, (4)
where t_per_vehicle represents the standard time allocated per vehicle.</p>
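      <p>Combining the table lookup (3), the proportional fallback (4), and the administrator-defined bounds introduced below, the selection rule can be sketched as follows; the table contents and parameter values are invented for illustration.</p>

```python
# Green-phase selection sketch: use the SequenceGreenTime table when it has
# an exact entry for N (eq. 3), otherwise fall back to N * t_per_vehicle
# (eq. 4), then clamp the result to the administrator bounds (eq. 5).
# All parameter values here are invented, not taken from the paper.

def green_phase_duration(n_vehicles, sequence_green_time,
                         t_per_vehicle, t_min, t_max):
    """Return the green phase duration T in seconds for N detected vehicles."""
    duration = sequence_green_time.get(n_vehicles)  # exact table match (3)
    if duration is None:
        duration = n_vehicles * t_per_vehicle       # proportional fallback (4)
    return max(t_min, min(t_max, duration))         # clamp to bounds (5)

# e.g. green_phase_duration(4, {5: 20}, 3, 10, 60) -> 12
```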
      <p>The calculated value is then constrained within administrator-defined minimum and maximum
bounds:</p>
      <p>T_min ≤ T ≤ T_max, (5)
where T_min and T_max are administrator-defined parameters set in the configuration settings.
The final green phase duration is thus determined as:</p>
      <p>T = max(T_min, min(T_max, SequenceGreenTime(N) ∨ N × t_per_vehicle)), (6)
where the table value SequenceGreenTime(N) is used when an exact match exists and the
proportional estimate N × t_per_vehicle otherwise. The resulting value T is transmitted via
WebSocket to the controlled group of traffic lights, updating their operating mode in real time.</p>
    </sec>
    <sec id="sec-results">
      <title>4. Results</title>
      <p>For the purpose of demonstrating the architecture of the developed system, two separate class
diagrams were created: one for the server-side module built on the .NET Core platform, and
another for the client-side application developed in Python, responsible for implementing the
computer vision functionality. Figure 2 presents the Python-based diagram, as it provides a more
concise and illustrative representation of the client-side application’s core functionality.</p>
      <p>At the core of the system’s functional logic lies the vehicle detection use case, which initiates
the adaptive control of traffic light phases. This process defines the starting point for the real-time
traffic analysis cycle.</p>
      <p>The vehicle detection process is implemented at the client-side application level, where the
incoming video stream from the camera is processed using the YOLO deep neural network. This
network enables accurate and rapid identification of vehicles in video frames. The detection results
are aggregated into a message containing the number of detected vehicles, the camera identifier,
the timestamp, and other relevant parameters. This message is transmitted to the server via a
WebSocket connection and is used for decision-making regarding traffic signal changes.</p>
      <p>Thus, the vehicle detection event serves as the starting point for the operation of the entire
intelligent system: it ensures the acquisition of primary data, forms the basis for analytical
decisions, and determines the efficiency of response to changes in traffic load.</p>
      <p>For a comprehensive representation of the developed system's architecture and the interaction
among its modules, a component diagram was constructed. The diagram depicts the
communication flows between the computer vision application, the backend server, the database,
and the administrative panel. This integrated model emphasizes the modular structure of the
system and its real-time data processing capabilities in the context of dynamic traffic light control.</p>
      <p>Figure 4 illustrates the overall system architecture, emphasizing the modular components and
their communication flows.</p>
      <p>The functioning of the system was tested in a simulation environment using video data from
surveillance cameras. The results confirmed the consistency of the algorithm and server-side logic.</p>
      <p>To assess the responsiveness of the traffic light control system, an experimental delay
measurement was conducted for two types of communication. In the HTTP-based architecture, the
vehicle detector sends a request to the server over HTTP, which then relays a command to the
traffic light over a persistent socket connection. In the socket-based architecture, the detector
communicates directly with the server via a socket, which then forwards the command through
another socket to the traffic light. Measurements were taken locally, with minimal external
network interference, in order to accurately capture the internal latency of each architecture.</p>
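      <p>A minimal timing harness in the spirit of this experiment is sketched below; the channel is a stand-in callable, whereas the study measured real HTTP and socket paths.</p>

```python
# Measure the mean time per send over repeated calls, in milliseconds.
# The send callable stands in for a real HTTP request or socket write.
import time

def measure_latency_ms(send_fn, payload, trials=100):
    """Average wall-clock time per call to send_fn over `trials` calls."""
    start = time.perf_counter()
    for _ in range(trials):
        send_fn(payload)
    elapsed = time.perf_counter() - start
    return elapsed / trials * 1000.0
```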
      <p>The results demonstrate that the HTTP-based approach exhibited average delays typically
ranging from 17 to 26 milliseconds after initial setup. These values are acceptable for a fallback
communication channel in case the primary socket-based channel becomes unavailable. In contrast,
the socket-based implementation consistently demonstrated lower and more stable latency,
typically between 16 and 20 milliseconds, which is preferable for real-time system responsiveness.
These patterns are clearly illustrated in Figure 5.</p>
      <p>Table 3 summarizes the comparative characteristics of both communication models.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Discussions</title>
      <p>The adaptive traffic management system developed in this work is based on the application of
computer vision technologies for real-time traffic situation assessment and dynamic adjustment of
traffic light phases. This approach enables flexible and efficient control in contrast to classical
methods.</p>
      <p>Traditional systems like SCATS (Australia) and SCOOT (UK) use magnetic sensors and
predefined algorithms, but lack flexibility and require costly infrastructure.</p>
      <p>In parallel, modern open-source solutions are evolving, including projects such as SmartFlow,
Byte-Blender, and Dacee-dee — all publicly available on GitHub. These projects focus on affordable
vehicle detection from real-time video streams, mainly using Python and YOLO frameworks.
However, they often exhibit limited server-side logic, simplified architectures, and lack deep
adaptability for managing extensive traffic networks.</p>
      <p>In contrast, our system combines YOLO-based vision in Python, .NET Core decision logic, and
WebSocket communication to provide reliable, scalable control. Its hybrid architecture allows
modularity and future extensions like forecasting or Smart City integration.</p>
      <p>Despite its advantages, the chosen architectural approach has certain technical limitations.
Since video processing and detection are handled by a separate Python application, the system
depends on stable communication between Python and .NET components. In case of disconnection
or errors, the .NET server may not receive real-time data for adaptive control.</p>
      <p>WebSocket transmission requires robust handling of interruptions and reconnections, especially
in unstable urban networks. Scaling to many cameras and lights may require server optimization to
handle multiple connections.</p>
      <p>Nonetheless, these disadvantages are outweighed by the system’s enhanced stability,
performance, and reliability—factors that are crucial for deployment in real-world urban
environments. The system demonstrates a high level of engineering maturity, allows modular
component isolation, and ensures the convenient integration of new functionalities via API or
other standard interfaces.</p>
      <p>This study advances the field of intelligent transportation systems by developing an adaptive
traffic light control method that dynamically adjusts signal timings based on real-time computer
vision analysis. A hybrid distributed system architecture is proposed, integrating a lightweight
YOLO-based vehicle detection module implemented in Python with a robust server-side
decision-making engine based on .NET Core technologies, ensuring efficient operation even with limited
computational resources. The research formalizes a real-time prioritization model that calculates
green phase durations dynamically, using confidence-filtered vehicle detections and
administrator-configurable adaptation ranges. Furthermore, a resilient communication mechanism was
implemented using WebSocket protocols with automatic fallback to HTTP, which guarantees
uninterrupted data transmission under unstable network conditions typical of urban
infrastructures. The modular and scalable system design also provides flexibility for future
extensions, such as pedestrian detection integration or Smart City platform interoperability. By
combining advanced AI-based detection techniques with distributed and fault-tolerant control
logic, this work contributes a practical and scientifically grounded solution for the deployment of
adaptive traffic light systems in real-world urban environments.</p>
      <p>Thus, the proposed system successfully combines the theoretical foundations of adaptive traffic
management with efficient engineering implementation, ensuring its competitiveness among
existing solutions.</p>
    </sec>
    <sec id="sec-conclusions">
      <title>6. Conclusions</title>
      <p>This paper proposes a scientifically grounded method for the automated adaptive control of traffic
lights, based on computer vision technologies and modern real-time data processing tools. The
developed system combines theoretical approaches to modeling traffic processes with practical
engineering solutions focused on operational stability, scalability, and integration into urban
infrastructure.</p>
      <p>The modular architecture of the system enables efficient processing of video streams for vehicle
detection, dynamic adjustment of green light durations based on current traffic density, and
continuous communication between client-side and server-side modules via the WebSocket
protocol.</p>
      <p>In future development stages, the system may be enhanced with functionality for pedestrian
recognition near crossings to improve inclusivity. This would allow the algorithm to be adapted to
the needs of people with mobility impairments, for whom timely signal changes are crucial.</p>
      <p>The results obtained lay the groundwork for deploying the system in real urban environments
and further advancing intelligent transportation systems, considering Smart City principles and
inclusive design. The developed solution demonstrates high potential for implementation under
resource-constrained conditions, ensuring a balance between detection accuracy, processing speed,
and technical reliability.</p>
      <p>Thus, the research makes a significant contribution to the development of intelligent
transportation systems, laying the foundation for further scientific research and practical
implementation of adaptive urban traffic management technologies.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT for grammar and spelling
checks, as well as for improving the clarity of certain passages. After using this tool, the authors
reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-references">
      <title>References</title>
      <p>7. S. A. Anwar, F. T. Zohura, J. Paul, "Intelligent traffic control system using computer vision
algorithms," in Proceedings of SPIE 12673, Optics and Photonics for Information Processing XVII,
2023, doi: 10.1117/12.2682676.</p>
      <p>8. B. Chong, M. A. Ismail, "Smart traffic light control system using image processing," IOP
Conference Series: Materials Science and Engineering, vol. 1088, no. 1, 012021, 2024, doi:
10.1088/1757-899X/1088/1/012021.</p>
      <p>9. Z. Fahrunnisa, R. Rahmadwati, R. A. Setyawan, "Adaptive traffic light signal control using
fuzzy logic based on real-time vehicle detection from video surveillance," Jurnal Ilmiah Teknik
Elektro Komputer dan Informatika, vol. 10, no. 2, pp. 123–132, 2023, doi:
10.26555/jiteki.v10i2.28712.</p>
      <p>10. A. P. Rangari, A. R. Chouthmol, C. Kadadas, P. Pal, S. K. Singh, "Deep Learning based smart
traffic light system using Image Processing with YOLO v7," in Proceedings of the 4th
International Conference on Circuits, Control, Communication and Computing (I4C 2022),
Bangalore, India, 2022, pp. 129–132, doi: 10.1109/I4C57141.2022.10057696.</p>
      <p>11. T. Azfar, J. Li, H. Yu, R. L. Cheu, Y. Lv, R. Ke, "Deep learning-based computer vision methods
for complex traffic environments perception: A review," arXiv preprint, arXiv:2211.05120, 2022.</p>
      <p>12. S. Sun, H. Wu, L. Xiang, "City-Wide Traffic Flow Forecasting Using a Deep Convolutional
Neural Network," Sensors, vol. 20, no. 2, 421, 2020, doi: 10.3390/s20020421.</p>
      <p>13. Ultralytics, YOLO models documentation, 2024. Retrieved from
https://docs.ultralytics.com/models/yolo/</p>
      <p>14. COCO Dataset, Common Objects in Context, 2024. Retrieved from
https://cocodataset.org</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Ayoubi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Aman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Akbari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lodin</surname>
          </string-name>
          ,
          <article-title>"AI-enabled traffic light control system: An efficient model to manage the traffic at intersections using computer vision,"</article-title>
          <source>International Journal of Integrated Science and Technology</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>767</fpage>
          -
          <lpage>794</lpage>
          ,
          <year>2024</year>
          , doi: 10.59890/ijist.v2i8.2438.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>"Deep learning-based detection for traffic control,"</article-title>
          <source>in Proceedings of the 5th International Conference on Advances in Artificial Intelligence (ICAAI</source>
          <year>2021</year>
          ), ACM,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          , doi: 10.1145/3505711.3505736.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. N.</given-names>
            <surname>Raj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. N.</given-names>
            <surname>Brahmbhatt</surname>
          </string-name>
          ,
          <article-title>"Machine learning solutions for adaptive traffic signal control: A review of image-based approaches,"</article-title>
          <source>World Journal of Advanced Engineering Technology and Sciences</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>476</fpage>
          -
          <lpage>481</lpage>
          ,
          <year>2024</year>
          , doi: 10.30574/wjaets.2024.13.1.0437.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Srivastava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <article-title>"Adaptive Traffic Signal Control System Using Deep Reinforcement Learning,"</article-title>
          <source>in Proceedings of the 2024 IEEE International Conference on Intelligent Signal Processing and Effective Communication Technologies (INSPECT)</source>
          , Gwalior, India,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/INSPECT63485.2024.10896157.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <article-title>"Researches on Intelligent Traffic Signal Control Based on Deep Reinforcement Learning,"</article-title>
          <source>in Proceedings of the 16th International Conference on Mobility, Sensing and Networking (MSN</source>
          <year>2020</year>
          ), Tokyo, Japan,
          <year>2020</year>
          , pp.
          <fpage>729</fpage>
          -
          <lpage>734</lpage>
          , doi: 10.1109/MSN50589.2020.00124.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          23, no.
          <issue>7</issue>
          , pp.
          <fpage>7112</fpage>
          -
          <lpage>7141</lpage>
          ,
          <year>2022</year>
          , doi: 10.1109/TITS.2021.3066958.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>