<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multi-robot Sanitization of Railway Stations Based on Deep Q-Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Riccardo Caccavale</string-name>
          <email>riccardo.caccavale@unina.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincenzo Calà</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mirko Ermini</string-name>
          <email>mi.ermini@rfi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alberto Finzi</string-name>
          <email>alberto.finzi@unina.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincenzo Lippiello</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabrizio Tavano</string-name>
          <email>fabrizio.tavano@unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
<contrib contrib-type="editor">
          <string-name>Deep Reinforcement Learning, Multi-robot Systems, Experience Replay Buffer</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Rete Ferroviaria Italiana</institution>
          ,
          <addr-line>Firenze Osmannoro, Florence</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Rete Ferroviaria Italiana</institution>
          ,
          <addr-line>Rome</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
<institution>Università degli studi di Napoli "Federico II"</institution>
          ,
          <addr-line>Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Sanitizing railway stations is a relevant issue, especially due to the recent evolution of the Covid-19 pandemic. In this work, we propose a multi-robot approach to sanitizing railway stations based on a distributed Deep Q-Learning technique. The framework relies on anonymous information from existing WiFi networks to localize passengers inside the station and to develop a map of possible risky areas to be sanitized. Starting from this map, a swarm of cleaning robots, each one endowed with a robot-specific convolutional neural network, learns on-line how to cooperate inside the station in order to maximize the sanitized area depending on the presence of passengers.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>
        In recent years, the spread of diseases such as Covid-19 has emphasized the problem
of sanitizing large and crowded public environments like railway stations. In the present
work, our aim is to design a sanitizing solution based on the Deep Q-Learning technique
for a real case study of interest to the Italian railway infrastructure manager RFI S.p.A., in a
real environment offered by the most important Italian railway station, Roma Termini, in the
capital. The framework relies on anonymous information from existing WiFi networks to
localize passengers inside the station and to develop a map of possible risky areas to be sanitized.
Starting from this map, we propose a decentralized approach where a swarm of cleaning robots,
each one endowed with a robot-specific convolutional neural network, learns on-line how to
cooperate inside the station in order to maximize the sanitized area depending on the presence
of passengers. In the multi-robot sanitizing literature, the prominent approach
is based on coverage path planning (CPP) [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5">1, 2, 3, 4, 5</xref>
        ], where the area to sanitize is divided
among agents in order to cover the whole space. These approaches are suitable for cleaning
and sanitizing the environment with a scalable number of robots, but prioritization issues are
hardly considered. Multi-agent reinforcement learning (MARL) frameworks are often proposed to ensure flexibility and scalability in
      </p>
<p>[Figure 1: Overall architecture. Each robot i (1, …, N) is endowed with its own experience replay buffer (ER-Buffer i) and DQN (ER-DQN i), and exchanges actions a<sub>i</sub> and environment updates with a central server.]</p>
<p>
        different applications like exploration [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], construction [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], or target-capturing [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], but also in
this case priority-based cleaning issues are not commonly covered. An interesting approach is
proposed in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], where multiple agents distributedly learn a collaborative policy in a shared
environment using the A3C training method in order to achieve a target-capturing task. Inspired
by these approaches, we propose a scalable multi-robot sanitizing framework where multiple
mobile robots learn to cooperate during the execution of cleaning tasks in large, crowded
environments, introducing a priority-based strategy.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The architecture</title>
<p>Our multi-robot sanitizing problem can be described as follows. Starting from a gridmap ℳ
representing the environment to be sanitized, we define ℋ as the set of possible heatmaps (i.e.,
priority distributions) on the map ℳ, and 𝒫 as the set of possible obstacle-free positions of ℳ. In
this setting, we assume N agents, tasked to sanitize the environment ℳ, each one endowed with
a set of single-agent actions 𝒜. Our aim is to find a set of agent-specific strategies (π<sub>1</sub>, … , π<sub>N</sub>)
such that each π<sub>i</sub> : ℋ × 𝒫 → 𝒜 drives an agent towards prioritized areas, in coordination with
the other agents, in order to maximize the global cleaning effect. This distributed approach is
mainly designed to support scalability: we adopt a client-server approach, where each agent
(client) learns a decoupled agent-specific strategy by communicating with a central system
(server).</p>
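<p>As an illustration, the client-server decomposition described above can be sketched as follows. This is a minimal toy, not the authors' implementation: the function names and the placeholder policy are our own, with the per-agent DQNs standing behind the <monospace>policy</monospace> argument.</p>

```python
import numpy as np

# 8 one-pixel moves: 4 linear + 4 diagonal (as described in Section 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),
           (-1, -1), (-1, 1), (1, -1), (1, 1)]

def server_step(H, positions):
    """Server side: zero the priority of each cell reached by an agent."""
    for (x, y) in positions:
        H[x, y] = 0.0
    return H

def client_step(pos, policy, H):
    """Client side: an agent picks a move with its own policy and stays
    put if the move would leave the grid (or hit an obstacle)."""
    dx, dy = ACTIONS[policy(H, pos)]
    nx, ny = pos[0] + dx, pos[1] + dy
    if 0 <= nx < H.shape[0] and 0 <= ny < H.shape[1]:
        return (nx, ny)
    return pos
```

In the full framework each client would select the action index through its own DQN rather than a fixed policy, and the server would also re-spread priorities from the WiFi people-localization data.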
      <p>
        A representation of the overall architecture is depicted in Figure 1. The framework is
composed of a set of intelligent agents, representing mobile cleaning robots, each one communicating
with the central server. The role of the server (server-side) is to merge the outcomes of the
agents activities with (anonymized) data about people positions in order to produce a heatmap
for the risky areas to be sterilized. The role of each agent (agent-side) is to elaborate the
heatmap by means of an agent-specific Deep Q-Network (DQN) and to update the local strategy
  considering the environmental settings and the diferent priorities in the map. In this
framework, the cleaning priority can be defined as a heatmap, whose hot/cold points are high/low
priority areas to be sanitized. Following this perspective, a state-position couple (, ) ∈  × 
is defined as a 2 channel matrix  ×  × 2 where  and  are the width and the height of the
environment, respectively. The first channel  of the matrix represents the cleaning-priority
on the environment, whose elements are real numbers in the interval [
        <xref ref-type="bibr" rid="ref1">0, 1</xref>
        ], where 1 is the
maximum priority and 0 means that no cleaning is needed. The second channel  is a binary
matrix representing the position and size of the cleaning area of the robot, which is 1 for the
portions of the environment that are in the range of the robot cleaning efect, and 0 otherwise.
This matrix can be shown as a heatmap (see map in Figure 1), where black pixels have 0 priority,
while colors from red to yellow are for increasingly higher priorities.
      </p>
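<p>The two-channel state above can be assembled in a few lines. This is a minimal sketch under our own assumptions (a square cleaning footprint of radius <monospace>clean_radius</monospace>; all names are invented for illustration), not the authors' code.</p>

```python
import numpy as np

def make_state(H, robot_pos, clean_radius=1):
    """Build the w x h x 2 state: channel 0 holds cleaning priorities in
    [0, 1], channel 1 is the binary footprint of the robot's cleaning area."""
    w, h = H.shape
    C = np.zeros((w, h), dtype=np.float32)
    x, y = robot_pos
    # mark the cells within the robot's cleaning range, clipped to the map
    C[max(0, x - clean_radius):min(w, x + clean_radius + 1),
      max(0, y - clean_radius):min(h, y + clean_radius + 1)] = 1.0
    return np.stack([H.astype(np.float32), C], axis=-1)
```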
      <p>
        In our framework, the update of priorities is performed by the server, which collects the
outputs of the single agents, and integrates them considering the position of people and obstacles.
More specifically, the cleaning priority is computed from the position of clusters of people
by modeling possible spreading of viruses or bacteria. In our setting, we exploit the periodic
convolution of a Gaussian filter  (,  2) every  steps, where  ,  2 and  are suitable parameters
that can be regulated depending on the meters/pixels ratio, the timestep, and the considered
typology of spreading (in this work we assume a setting inspired to the aerial difusion of
the Covid-19 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]). Here, starting from a set of randomly generated clusters, the probability
distribution evolves through the iterative convolution of the Gaussian filter. The convolution
process acts at every step by incrementally reducing the magnitude of the elements of the
heatmap matrix, while distributing the priority on a wider area. Convolution is here exploited to
simulate the efects of the attenuation and the spreading of the contamination process over time.
We have chosen the parameters of the Gaussian function in order to have a radius of the area,
interested by the infection, of 5 meters ( =0,  = 0.9). This value is selected considering that we
know the position of a cluster of people with an WiFi average positioning error of accuracy of
about 3 meters as described in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and we consider also that the distance of safety is about of 2
meters between peoples that make use of the indicated surgery masks during the actual period
of emergency caused by the Covid-19 difusion [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In the map (see Figure 2, right) there are
several black areas (0 priority) that are regions of space associated with the static obstacles of
the environment (shops, rooms and walls inside the station). These areas are assumed to be
always clean, hence unattractive for the robots. When an agent moves into the environment
with an action   ∈  , the region in the neighborhood of the newly reached position is cleaned
by the server, which sets to 0 the associated priority level. In our framework, we propose a
simple multi-agent variation of the experience replay method proposed in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Following this
approach, each of the  agents is endowed with a specific replay bufer, along with specific
target and main DQNs, that are synchronously updated with respect to the position of the agent
and to the shared environment provided by the server (see Figure 1). The local reward function
  is designed to drive the agents toward prioritized areas of the environment (hot points), while
avoiding obstacles and already visited areas (cold points). In this direction, we firstly introduce
a cumulative priority function   that summarizes the importance of a cleaned area:
  = ∑   (, )  (, )
      </p>
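<p>The periodic Gaussian spreading step can be sketched as a separable convolution in pure NumPy. This is a minimal illustration under our own assumptions (a truncated kernel of radius <monospace>radius</monospace>, a normalized filter), not the authors' exact server code.</p>

```python
import numpy as np

def gaussian_kernel(sigma=0.9, radius=3):
    """1-D Gaussian kernel (mu = 0), normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def spread_priorities(H, sigma=0.9, radius=3):
    """One spreading step: separable 2-D convolution of the heatmap with a
    Gaussian filter, attenuating peaks while widening the risky area."""
    k = gaussian_kernel(sigma, radius)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, H)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

Iterating this step every k simulation steps reproduces the attenuation-plus-diffusion behavior described above: the peak value decreases while the total priority mass spreads over neighboring cells.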
      <p>(,)
  = { 

if   &gt; 0;
otherwise.</p>
      <p>(1)
(2)
as the sum of the element-wise priorities from matrix   in the area sterilized by the agent  (i.e.
where   (, ) = 1 ). The value in Equation 1 is then exploited to define the reward   for the agent
 :
Specifically, when an agent  sanitizes a priority area, the reward is equal to the cumulative
value   ; otherwise, if no priority is associated to the cleaned area (i.e.,   = 0) a negative
reward  &lt; 0 is earned (we empirically set  = −2 for our case studies). This way,
agents receive a reward that is proportional to the importance of the sanitized area, while
routes toward zero-priority areas, such as obstacles or clean regions, are discouraged. Notice
that in this framework, when the action of an agent leads to an obstacle (collision), no motion
is performed. This behavior penalizes the agent (no further cleaning are performed), thus
producing an indirect drive towards collision-free paths. Moreover, as long as an agent moves
through the environment it leaves a wake of cleaned space behind. This way, since the priority
of already visited areas is 0, agents can indirectly observe their mutual behavior from the priority
update, in so avoiding explicit communication, hence robots in our experiments are not directly
aware of the position of the other agents which is indirectly estimated from their paths.</p>
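<p>The two equations above translate directly into code. A minimal sketch, assuming the matrices H and C<sub>i</sub> from the state definition and the empirical value ρ = −2 reported in the text; function names are our own.</p>

```python
import numpy as np

RHO = -2.0  # negative reward rho, empirically set to -2 in the case studies

def cumulative_priority(H, C):
    """Eq. 1: P_i = sum over (x, y) of H(x, y) * C_i(x, y)."""
    return float((H * C).sum())

def reward(H, C, rho=RHO):
    """Eq. 2: the cumulative priority P_i if positive, rho otherwise."""
    P = cumulative_priority(H, C)
    return P if P > 0 else rho
```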
    </sec>
    <sec id="sec-3">
      <title>3. Case Studies</title>
<p>A graphical representation of the environment is shown in Figure 2. We selected a region of
space of 100 × 172 meters in front of the rails, where people usually stand waiting for the
incoming trains. From that region we also isolated shops, stairs, and walls as obstacles to be
avoided by the robots during the sanitizing process (black areas in Figure 2, right). Agents
can move by one pixel in any direction, hence the set 𝒜 includes 8 actions (4 linear and 4
diagonal) while, in case an action leads to an inconsistent location (obstacle or out of bounds),
the agent stays in the current location. In this setting, we propose two case studies: in the
first one we assess the system performance during the learning phase considering different
numbers of robots (2 to 8 robots) while, in the second case, a more realistic scenario is considered,
where the cleaning performance of the robots is assessed considering an increasing number of
moving clusters. In the first case study, we show how the learning performance of the proposed
approach scales with the number of cleaning agents. The starting point of every robot in the
heatmap is set at random, because in our study we want to find a solution that is independent
of this initial condition. We designed a training process where, at the beginning of each episode,
a random number of clusters is selected and each cluster is randomly positioned inside the
station. Specifically, each obstacle-free location of the map has a 0.02 probability of generating
a cluster. Each episode ends when the agents successfully clean at least 98% of the map or when a
timeout is reached (400 steps are performed). During the training process we collect the overall
reward, as the sum of the single agents' rewards, and the number of steps needed to accomplish
the task. This setting is intentionally designed to train agents to address a generic distribution
of priorities, such as those generated during daily cleaning processes. As for the execution
time, the number of steps needed to accomplish the task, namely to clean 98% of the map,
decreases with the increasing number of agents. Specifically, the 2-agent configuration needs
174 steps on average to accomplish the task, while the 4-, 6-, and 8-agent ones need 127, 112, and
94 steps, with a time reduction of 27%, 12%, and 16%, respectively. Also in this case, the time
reduction indicates that the proposed approach successfully scales to different numbers of robots.
In order to assess the performance of the system in more realistic scenarios, we propose
a different setting by considering different numbers of clusters and a simulated WiFi server
that periodically updates the positions of the clusters at a specific rate (once every 15 steps). The
numbers of clusters have been selected according to the average number of visitors-per-hour of
the considered portion of the station (see Figure 2); moreover, during the runs, the values are
designed to be randomly reduced by up to 30% in order to simulate the departure/arrival of
passengers in the station.</p>
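<p>The episode setup above can be sketched as follows. This is our own illustration: the 0.02 seeding probability and the 98%/400-step termination come from the text, while interpreting "98% of the map cleaned" as 98% of the initial priority mass is an assumption, as are all names.</p>

```python
import numpy as np

def spawn_clusters(free_mask, p=0.02, rng=None):
    """Seed clusters: each obstacle-free cell independently starts a
    cluster with probability p (0.02 in our training setup)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return free_mask & (rng.random(free_mask.shape) < p)

def episode_done(H, initial_mass, step, target=0.98, max_steps=400):
    """Terminate when at least 98% of the initial priority mass has been
    cleaned, or when the 400-step timeout is reached."""
    cleaned_fraction = 1.0 - H.sum() / initial_mass
    return cleaned_fraction >= target or step >= max_steps
```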
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
<p>In this work we proposed a scalable multi-robot sanitizing framework based on a distributed
Deep Q-Learning technique, suitable for the efficient cleaning of large and crowded indoor
environments such as railway stations. The proposed simulated experiments indicate that, as
expected, the cleaning performance of the framework is proportional to the number of robots
and inversely proportional to the number of people in the station. To assess the performance
of our framework we proposed a worst-case test, where a large number of moving people is
scattered (uniformly distributed) all around the station and the robots must cover a wide area
to perform the task. This setting is challenging compared to a real railway station, where
people are often grouped near specific areas like shops, info points, or ticket offices (see example
in Figure 2, left), and robots can easily converge to those areas to maximize the sanitization
effect. As future research activities, we plan to extend our pilot study by testing the proposed
framework in a more realistic scenario, considering more complex robotic models and daily
recorded data about the real distribution of people in the station.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. H.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <article-title>Complete coverage navigation of cleaning robots using triangular-cell-based map</article-title>
          ,
          <source>IEEE Transactions on Industrial Electronics</source>
          <volume>51</volume>
          (
          <year>2004</year>
          )
          <fpage>718</fpage>
          -
          <lpage>726</lpage>
          .
doi:10.1109/TIE.2004.825197.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Miao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.-Y.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>Scalable coverage path planning for cleaning robots using rectangular map decomposition on large environments</article-title>
          ,
          <source>IEEE Access 6</source>
          (
          <year>2018</year>
          )
          <fpage>38200</fpage>
          -
          <lpage>38215</lpage>
          .
doi:10.1109/ACCESS.2018.2853146.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
<string-name>
            <given-names>T.-K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-Y.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <article-title>Sector-based maximal online coverage of unknown environments for cleaning robots with limited sensing</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>59</volume>
          (
          <year>2011</year>
          )
          <fpage>698</fpage>
          -
          <lpage>710</lpage>
. URL: https://www.sciencedirect.com/science/article/pii/S0921889011000893. doi:10.1016/j.robot.2011.05.005.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
<string-name>
            <given-names>T.-K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Baek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-Y.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-H.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <article-title>Complete coverage algorithm based on linked smooth spiral paths for mobile robots</article-title>
          ,
          <source>in: 2010 11th International Conference on Control Automation Robotics Vision</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>609</fpage>
          -
          <lpage>614</lpage>
          .
doi:10.1109/ICARCV.2010.5707264.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
<string-name>
            <given-names>T.-K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Baek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-H.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-Y.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <article-title>Smooth coverage path planning and control of mobile robots based on high-resolution grid map representation</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>59</volume>
          (
          <year>2011</year>
          )
          <fpage>801</fpage>
          -
          <lpage>812</lpage>
. URL: https://www.sciencedirect.com/science/article/pii/S0921889011000996. doi:10.1016/j.robot.2011.06.002.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <article-title>Mrcdrl: Multi-robot coordination with deep reinforcement learning</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>406</volume>
          (
          <year>2020</year>
          )
          <fpage>68</fpage>
          -
          <lpage>76</lpage>
. URL: https://www.sciencedirect.com/science/article/pii/S0925231220305932. doi:10.1016/j.neucom.2020.04.028.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Omidshafiei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pazis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Amato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>How</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vian</surname>
          </string-name>
          ,
          <article-title>Deep decentralized multi-task multi-agent reinforcement learning under partial observability</article-title>
, in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Precup</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. W.</given-names>
            <surname>Teh</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 34th International Conference on Machine Learning</source>
          , volume
          <volume>70</volume>
          <source>of Proceedings of Machine Learning Research, PMLR</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>2681</fpage>
          -
          <lpage>2690</lpage>
. URL: http://proceedings.mlr.press/v70/omidshafiei17a.html.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Sartoretti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Paivine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. K. S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Koenig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Choset</surname>
          </string-name>
          ,
          <article-title>Distributed reinforcement learning for multi-robot decentralized collective construction</article-title>
, in:
          <string-name>
            <given-names>N.</given-names>
            <surname>Correll</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schwager</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Otte</surname>
          </string-name>
          (Eds.),
          <source>Distributed Autonomous Robotic Systems</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Setti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Passarini</surname>
          </string-name>
,
          <string-name>
            <given-names>G.</given-names>
            <surname>De Gennaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barbieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Perrone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Borelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Palmisani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Di</given-names>
            <surname>Gilio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Piscitelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Miani</surname>
          </string-name>
          , et al.,
          <article-title>Airborne transmission route of covid-19: Why 2 meters/6 feet of inter-personal distance could not be enough</article-title>
          ,
          <year>2020</year>
          . URL: https://www. ncbi.nlm.nih.gov/pmc/articles/PMC7215485/.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T. Kim</given-names>
            <surname>Geok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zar Aung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Sandar</given-names>
            <surname>Aung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Thu</given-names>
            <surname>Soe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdaziz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pao Liew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hossain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Tso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. H.</given-names>
            <surname>Yong</surname>
          </string-name>
          ,
          <article-title>Review of indoor positioning: Radio wave technology</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>11</volume>
          (
          <year>2021</year>
). URL: https://www.mdpi.com/2076-3417/11/1/279. doi:10.3390/app11010279.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mnih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kavukcuoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Silver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Rusu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Veness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Bellemare</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Graves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Riedmiller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Fidjeland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ostrovski</surname>
          </string-name>
          , et al.,
          <article-title>Human-level control through deep reinforcement learning</article-title>
          ,
          <year>2015</year>
          . URL: https://www.nature.com/articles/nature14236#article-info.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>