<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Algorithms of Landmark Robot Navigation Basing on Monocular Image Processing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vasyl Koval</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Information Computing Systems and Control, Ternopil National Economic University, UKRAINE</institution>
          ,
          <addr-line>Ternopil, 8 Chekhova str.</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>1</fpage>
      <lpage>3</lpage>
      <abstract>
        <p>The application of mobile robots is very important in environments that are dangerous or inappropriate for human life. One of the problems a mobile robot faces when navigating to a target point in an indoor application is localization. This paper presents the development of algorithms that enable a mobile robot to position itself within an indoor environment using a single video camera and a landmark template.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        One of the most popular applications of mobile robots (MR)
is navigation in environments in which humans
cannot be present or that are dangerous to
human health [
        <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
        ]. The interaction of an MR with its operating
environment is provided by a number of
sensors for perception, actuators (effectors) for
influencing the environment, and a control system that allows
the robot to perform purposeful and useful actions. By analyzing
the indoor application of mobile robots, it can be
concluded that its activity in the environment can be
considered a cyclic system.
      </p>
      <p>
        Within the main loop, the MR executes procedures for
perceiving the environment state, processes the received
information and determines actions that change its position in
the environment according to the fixed purpose. Thereafter, the
MR analyzes the changes, and the information about the new
environment state is sent to the control system. As
these processes are executed, a new loop of mobile robot
activity is organized until the purpose is reached [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        A pressing problem is the creation of mobile robots that are
capable of navigating independently and autonomously
performing assigned tasks. At the same time, in most cases
humans provide remote control of the MR [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This state of affairs is
caused by the inability of the robot to make independent
decisions; as a result, it introduces a number of shortcomings
and increases the probability of erroneous actions. In addition,
it is usually difficult for people to correctly assess the
situation on the basis of telemetry data and to implement
adequate control. These shortcomings can be avoided if human
control of the MR is carried out at the level of goal
setting rather than at the level of executing individual
movements. In this case, the robot must perform the assigned
tasks independently (or with minimal human involvement) [
        <xref ref-type="bibr" rid="ref5 ref6">5,6</xref>
        ].
      </p>
      <p>Typically, a technical vision system is used by the MR during
navigation. There are three strategic levels for reaching the target
point of movement: a) far, b) middle and c) near navigation.
To be capable of providing these navigational levels, it is
essential to develop algorithms and tools that allow the robot
to estimate its position, i.e., to localize itself in the
operating environment.</p>
    </sec>
    <sec id="sec-2">
      <title>II. PROBLEM FORMULATION</title>
      <p>One of the core tasks of robot navigation is the
determination of the MR position and orientation (often
referred to as the pose) in its environment. The basic principles
of landmark-based and map-based positioning also apply to
vision-based positioning or localization, which relies on
optical sensors in contrast to ultrasound, dead-reckoning and
inertial sensors.</p>
      <p>
        Most localization techniques provide absolute or relative
position and/or the orientation of sensors. Techniques vary
substantially, depending on the sensors, their geometric
models and the representation of the environment [
        <xref ref-type="bibr" rid="ref7 ref8">7,8</xref>
        ].
      </p>
      <p>The geometric information about the environment can be
given in the form of landmarks, object models and maps in two
or three dimensions. A vision sensor should capture image
features or regions that match the landmarks or maps.</p>
      <p>MR positioning means finding the position and the
orientation of the robot platform globally in the environment.
Usually, various types of range finders are used for this
purpose. Range finders have a large number of drawbacks, the
main one being that the finder can capture only the
configuration of the working area, so the problem of
localization (determining coordinates) is solved with errors.</p>
      <p>Moreover, traditional navigation systems usually use
odometers for positioning a wheeled platform in an
environment. They measure the path traversed by each of the
robot's wheels. As a result, such an approach leads to
accumulated errors. Therefore, the practical problem is to
create tools and algorithms that allow a mobile robot to
position itself during movement to the target.</p>
      <p>Therefore, a suitable sensor for solving the problems
listed above is the video camera of the robot's vision
system; the human visual system may serve as proof of this
statement. In this scientific report, the main attention is
focused on approaches that use photometric vision sensors, i.e.,
cameras, for MR positioning.</p>
      <p>
        In robotics, it is possible to find implementations of stereo
cameras for similar applications. Two or sometimes three
cameras and special image processing techniques are used to
reconstruct the robot's environment [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Stereo image
processing has its drawbacks, the main one being
that finding the correspondence between two stereo images is
very complicated. Moreover, the authors of these methods
often simplify the process by creating artificial landmarks,
including the use of different kinds of structured light, etc.
At the same time, in nature there are many organisms that
successfully orient themselves in the environment using
only one visual sensor. This fact creates prerequisites for
researching methods with analogous behavior in technical systems.
      </p>
      <p>
        To address the practical issues of the task definition, let us
consider an example of the environment in which a mobile robot
operates in industry (Fig. 1) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>In such an environment, it is quite difficult to localize the
robot. Moreover, the MR needs to determine its position
independently to achieve the navigational goal and, subsequently,
to deliver goods or perform other necessary operations.</p>
      <p>Based on the above-mentioned practical needs, it is
proposed to develop algorithms and software
units that use one camera and image processing techniques to
solve the task of positioning the mobile robot in the
environment during its movement to the target.</p>
    </sec>
    <sec id="sec-3">
      <title>III. IDEA OF THE PROPOSED ALGORITHM</title>
      <p>The idea of the proposed algorithm is taken from
nature, where organisms orient themselves in space using various
beacons (for example, the sun). Accordingly, one of the
possible solutions to the previously mentioned task is
to fix a landmark on the ceiling of the technological
environment where the mobile robot operates. In a situation
where the coordinates of the landmark are known, there is a
need for algorithms and software units for image processing
that will determine the robot's position in the environment.</p>
      <p>Thus, the following geometric interpretation is proposed to
solve this task (Fig. 2). According to Fig. 2, a special configuration
is proposed in which a video camera is fixed on the base of the
mobile robot platform and directed vertically upward. Thus, the
location of the mobile robot platform is determined by the position
of the video camera. The camera is located at some distance OM from
the ceiling (Fig. 2). Any point located on the ceiling is
projected through the center of the camera lens (point O in
Fig. 2) onto the sensor panel (plane ABDC in Fig. 2). For
localization of the mobile robot in the environment, the
landmark template is fixed on the ceiling at known coordinates
(X2, Y2). To perform the movement of the mobile
robot to its target, it is necessary to find its location in the
environment (i.e., to find the coordinates of the point
(X1, Y1) and the angle "Alfa") based on the projection of the
landmark onto the image. As part of the robot localization, one of the
practical tasks is the identification of the landmark in the image
from the video camera.</p>
      <p>Thus, the input data for the developed algorithm and
software units are color RGB images obtained by the robot's
video camera. For the solution of the task given above,
the following restrictions and assumptions are considered:
- a preliminary calibration of the camera has been done; as a
result of the video camera calibration, its position is
fixed onboard the mobile robot platform and does not
change during operation;
- the MR moves on a straight, flat, horizontal
surface such as a floor, which practically represents a
homogeneous coating (laminate flooring, linoleum,
construction screed);
- there are no overhanging objects in the environment that
could cause a collision with the mobile robot;
- a landmark template exists in the environment with
known parameters and is visible to the mobile robot;
- within the presented restrictions, the operating
three-dimensional model of the environment presented
in Fig. 2 is considered.</p>
      <p>The expected output of the algorithm is the selected segment
of the landmark template on the image plane, which is used to
calculate the trajectory of the mobile robot's movement to the
target point.</p>
      <p>Thus, to achieve this task, it is proposed to use the video
camera as an effective passive sensor. Using it, the mobile
robot can properly position itself in the environment
on the way to the target point.</p>
    </sec>
    <sec>
      <title>IV. GENERALIZED ALGORITHM OF MOBILE ROBOT NAVIGATION BASED ON MONOCULAR IMAGE</title>
      <p>
        In general cases, the navigation of the MR to the target is
provided by using image processing from one camera. The
robot navigation consists in analyzing the current robot
location and the local targets that lead toward the global
position. These local targets can be represented as a line or as
landmarks forming a sequence of intermediate waypoints that the
robot should follow. Sometimes there is a situation wherein the
robot has only one global target to achieve. In this case, the
movement of the mobile robot must be ensured taking into
consideration possible local obstacles or static architectural
elements of the environment. Local movement to the target may
be provided by one of the known methods based on
local or global navigation [
        <xref ref-type="bibr" rid="ref11 ref18">11, 18</xref>
        ]. Within this scientific work,
the local movements of mobile robots are not considered; instead,
it considers the algorithm by which the robot determines its
position for predicting the direction of movement to the target
as a subtask of robot navigation.
      </p>
      <p>For simplicity in considering the above-presented
principles of robot navigation, let us consider the robot
environment as a grid-based model. In this environment, the
coordinates of the target point, which represents the
goal of the robot's movement, are given. The ultimate purpose of
robot navigation is to build a direction (trajectory) of movement
to the global target point and to generate the control commands
that define the required acceleration of the MR wheels for
maneuvering. The navigation task is completed when the robot
is within a certain range of the global goal point.</p>
      <p>
        To provide the above-presented way of robot
navigation, unlike existing local methods of navigation
[
        <xref ref-type="bibr" rid="ref12 ref13 ref14 ref15">12-15</xref>
        ], it is proposed to make the appropriate decision on the
direction of MR movement at each step of a loop. Thus, the
decision on bypassing obstacles and on the direction taken at each
iteration of the loop depends on the location of the landmark in
the image from the robot's camera. The main processes performed
by the mobile robot during navigation to its target can be
presented as a generalized algorithm consisting of the following:
1. At the first step, the image processing procedures are
executed that initialize values for the algorithms
and provide the camera calibration procedure.
2. At the second step of the algorithm, the position of the
mobile robot platform (the coordinates of its center point) and
the target position are determined. The length of the vector
between the point of the robot's position and the point of
the target's position is determined (the distance to the
target).
3. If the position of the mobile robot is within a certain radius
delta (the concrete value is specified during the
initialization procedure) of the target point, then the robot
stops working and decides that the goal of
movement has been reached. In this case, the algorithm of mobile
robot movement is finished. This moment represents the
stop point of the MR navigation algorithm.
      </p>
      <p>Otherwise, the following sequence of steps is executed:
4. Gathering a video frame from the robot's
video camera.
5. Performing the segmentation of the landmark template in the
image received from the video frame.
Thereafter, the coordinates of the central point of the
landmark template are calculated in the local coordinate
system of the image.
6. Calculating the directional angle of the
mobile robot's position relative to the placement of the
landmark template segmented in the image from
the video camera.
7. Calculating the distance
from the position of the mobile robot to the central point of
the landmark template.
8. Performing the procedure of MR positioning in the
operating environment.
9. Based on the coordinates of the target and the position of
the MR in the operating environment, performing the
procedure for defining the direction of movement.
10. Providing the MR maneuvering, based on the necessary
acceleration parameters for the MR motors. As a result,
the movement of the robot's platform changes
its position in the environment (its coordinates).
11. Return to step 2.</p>
      <p>The flowchart of the generalized algorithm is given in Fig. 3.</p>
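      <p>A compact sketch of steps 1-11 in Python is given below; it is only a
skeleton of the generalized algorithm, and the helper names (capture_frame,
segment_landmark, landmark_angle, landmark_distance, localize,
set_wheel_accelerations) are assumed placeholders for the procedures named in the
corresponding steps.</p>
      <preformat>
import math

def navigate_to_target(robot, target_xy, delta=0.25):
    """Skeleton of the generalized navigation algorithm (steps 1-11)."""
    robot.initialize_and_calibrate()                   # step 1: init values, calibration

    while True:
        x, y = robot.position()                        # step 2: robot and target positions
        dist_to_target = math.hypot(target_xy[0] - x, target_xy[1] - y)
        if not dist_to_target > delta:                 # step 3: goal reached, stop
            return

        frame = robot.capture_frame()                  # step 4: grab a video frame
        lm_center = robot.segment_landmark(frame)      # step 5: landmark centre (image coords)
        angle = robot.landmark_angle(frame, lm_center)        # step 6: directional angle
        lm_dist = robot.landmark_distance(frame, lm_center)   # step 7: distance to landmark
        x, y = robot.localize(lm_center, angle, lm_dist)      # step 8: MR position in environment
        heading = math.atan2(target_xy[1] - y, target_xy[0] - x)  # step 9: direction of movement
        robot.set_wheel_accelerations(heading)         # step 10: maneuvering changes the pose
        # step 11: return to step 2 (next iteration of the loop)
      </preformat>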
    </sec>
    <sec id="sec-4">
      <title>V. LANDMARK TEMPLATE DETECTION ON THE IMAGE PLANE</title>
      <p>During navigation, the MR estimates its location and position
in the environment based on the landmark position. The latter
can be obtained through image processing of data from a single
on-board video camera. This means that the orientation of the
given landmark in the images allows determination of the
position of the robot's platform and, as a result, provides
smooth navigation.</p>
      <p>In accordance with the list of steps presented above in the
generalized algorithm, one of the first processes of robot
navigation is to capture a video frame from the video camera.
These steps can be performed using existing and
well-known approaches. At the same time, it is necessary
to design methods that can detect the landmark template in the
video image. To identify the landmark template for mobile
robot navigation, it is proposed to use an algorithm that
performs the following procedures (Fig. 4):
the RGB image received from the color video camera
represents the input for the algorithm execution. The algorithm
selects areas of pixels in the image
(image segments) that may belong to the landmark template.
Thereafter, the procedure of rejecting all segments other than
the landmark template is applied using various metrics. As the
result of the algorithm, the image segment of pixels that
corresponds to the landmark template is selected.</p>
      <p>Let us consider the implementation of the algorithm for
landmark template detection on the image plane (Fig. 4) as the
most important part of landmark robot navigation.</p>
      <p>All the processes were formalized mathematically for
the investigation and implementation of the algorithms
mentioned above. Also, a specific graphical
representation of the landmark template was designed (Fig. 6a).
The shape of the landmark allows its unique identification among
other objects in the image and determination of its angular
orientation on the global environment map. Additionally, three
metrics for guaranteeing the selection of the landmark template
among the other segments on the image plane were suggested:
- the number of pixels in the segment;
- the distance between the most remote pixels in the
segment;
- the presence and number of holes in the segment.</p>
      <p>To investigate and demonstrate the algorithm, a particular
placement of the landmark on the ceiling of the MR
environment was taken (Fig. 6a). As can be seen in the
image captured by the video camera, there is an additional
object (a lighting lamp). Such objects can fall within the
range of the camera's vision and need to be removed as
unwanted for processing. A median filter with a 3x3 window
was applied to each pixel of the image in Fig. 6a.</p>
      <p>According to the algorithm of landmark detection, the
following threshold values were selected for the red, green and
blue channels: R_Tresh=75±28, G_Tresh=95±10,
B_Tresh=133±10. The result of image thresholding is presented
in Fig. 6b.</p>
    </sec>
    <sec id="sec-5">
      <title>VI. ALGORITHM IMPLEMENTATION</title>
      <p>
        The algorithms for robot navigation designed above were
explored by using Matlab software. The implementation of all
processes that provide MR navigation to its target is currently
under development. The main research interest
consists in obtaining a stable segmentation of the landmark
template for MR pose estimation in the environment. In
practice, many navigation procedures could be implemented
by the application of specific functions that are appropriate for
the individual MR configuration, depending on the type of
robot. For example, it is possible to use the ARIA
environment for robots from the ActivMedia Robotics company
[
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ] (Fig. 5).
      </p>
      <p>
        Fig. 5. One video-camera application for navigation of the mobile robot
Pioneer P3-DX (potential application) [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
As the result of the algorithm, one segment among
the plurality of image objects was selected. It includes 2436
pixels and three holes. The distance between the most remote
points is 147.
      </p>
      <p>Experimental studies have shown that the proposed metrics are
sufficient for identifying the landmark template at 200 different
locations in the environment.</p>
      <p>The actual representation of the algorithm scenarios will be
demonstrated during the presentation.</p>
    </sec>
    <sec id="sec-6">
      <title>VII. SUMMARY AND CONCLUSION</title>
      <p>In this paper, algorithms of mobile robot movement were
developed and experimentally investigated using a single
video camera. This practical task was accomplished by applying
localization techniques based on landmark template detection.</p>
      <p>A generalized algorithm that allows the mobile robot to move
to the target was developed based on readings from one video
camera and image processing procedures.</p>
      <p>A graphic landmark template was designed that
allows the MR to identify the landmark in the image among other
objects and to determine its angular orientation in the
global environment of the mobile robot.</p>
      <p>The algorithm for landmark template segmentation was
designed based on image processing that allows the MR to
identify the landmark's position on the image plane. Knowing the
position of the landmark template in the environment, it is
possible to localize the mobile robot.</p>
      <p>The experimental studies of the proposed algorithm for
landmark template detection in video images have shown
stability at each algorithm step and provided the selection of
one segment among the plurality of image objects.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Robla-Gómez</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Becerra</surname>
            <given-names>V.</given-names>
          </string-name>
          , “Working Together.
          <article-title>A Review on Safe Human-Robot Collaboration in Industrial Environments”</article-title>
          ,
          <source>IEEE Access (Vol. 5)</source>
          , November 14,
          <year>2017</year>
          , pp.
          <fpage>26754</fpage>
          -
          <lpage>26773</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Baudoin</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Habib</surname>
            <given-names>M.</given-names>
          </string-name>
          , “
          <article-title>Using Robots in Hazardous Environments”, 1st</article-title>
          <string-name>
            <surname>Edition</surname>
          </string-name>
          , Woodhead Publishing,
          <year>2010</year>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          <year>692</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , PatrUn,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Lane</surname>
          </string-name>
          ,
          <string-name>
            <surname>D.M.</surname>
          </string-name>
          , “
          <article-title>Design and Evaluation of a Reactive and Deliberative Collision Avoidance and Escape Architecture for Autonomous Robots”</article-title>
          , Autonomous Robot Vol.
          <volume>24</volume>
          ,
          <year>2008</year>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Goebel</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jubeh</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raesch</surname>
            <given-names>S-L</given-names>
          </string-name>
          &amp;
          <article-title>Zuendorf. A. “Using the Android Platform to control Robots”</article-title>
          ,
          <source>In Proceedings of 2nd International Conference on Robotics in Education (RiE</source>
          <year>2011</year>
          ). Vienna, Austria,
          <year>September 2011</year>
          , pp.
          <fpage>135</fpage>
          -
          <lpage>142</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Siegwart</surname>
          </string-name>
          , Roland, Nourbakhsh, Illah Reza, “Introduction to Autonomous Mobile Robots (
          <article-title>Intelligent Robotics</article-title>
          and Autonomous Agents series)” / Siegwart, Roland, Nourbakhsh, Illah Reza, Scaramuzza, MIT Press;
          <source>2nd Revised edition</source>
          ,
          <year>2011</year>
          , P.
          <fpage>453</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Siciliano</given-names>
            <surname>Bruno</surname>
          </string-name>
          , Khatib Oussama “Springer handbook of robotics”, Springer International Publishing,
          <year>2016</year>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          <year>2227</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Arras</surname>
            ,
            <given-names>K.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castellanos</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schilt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siegwart</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , “
          <article-title>Feature-based Multi-hypothesis Localization and Tracking Using Geometric Constraints”</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>44</volume>
          ,
          <year>2003</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Betke</given-names>
            <surname>Margrit</surname>
          </string-name>
          , Gurvits Leonid “
          <article-title>Mobile robot localization using landmarks”</article-title>
          ,
          <source>IEEE Transaction on robotics and automation</source>
          , Vol
          <volume>13</volume>
          , No. 2,
          <string-name>
            <surname>April</surname>
            <given-names>1997</given-names>
          </string-name>
          , pp.
          <fpage>251</fpage>
          -
          <lpage>263</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Koval</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Adamiv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kapura</surname>
          </string-name>
          ,
          <article-title>"Evaluation of Camera Calibration Methods for Computer Vision System of Autonomous Mobile Robot”</article-title>
          ,
          <source>Proceedings of International Conference "Modern Information and Electronic Technologies” (MIET-2009)</source>
          ,
          <source>Odessa (Ukraine)</source>
          ,
          <year>2009</year>
          , p.
          <fpage>29</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <article-title>“Are Robots About to Take Over E-commerce Warehouses?”</article-title>
          ,
          <year>2018</year>
          , http://www.airindknows.com/arerobots-about-to-take-over-e-commerce-warehouses/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Jian</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , “
          <article-title>Comparison of Optimal Solutions to Real time Path Planning for a Mobile Vehicle “</article-title>
          , by
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhihua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kevin</surname>
          </string-name>
          ,
          <source>IEEE Transactions on Systems, Man and Cybernetics</source>
          ,
          <string-name>
            <surname>Part</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <source>System and Humans</source>
          , Vol.
          <volume>40</volume>
          ,
          <year>2010</year>
          , pp.
          <fpage>721</fpage>
          -
          <lpage>725</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Ersson</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            <given-names>X.</given-names>
          </string-name>
          , “
          <article-title>Path Planning and Navigation of Mobile Robots in Unknown Environments”</article-title>
          ,
          <source>IEEE Journ. of Robotics and Automation,.# 6</source>
          ,
          <issue>2010</issue>
          , pp.
          <fpage>212</fpage>
          -
          <lpage>228</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.L.</given-names>
            <surname>Guzmán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berenguel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rodríguez</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Dormido</surname>
          </string-name>
          , “MRIT: Mobile Robotics Interactive Tool” [electronic resource],
          <year>2018</year>
          , http://aer.ual.es/mrit/.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>O.</given-names>
            <surname>Adamiv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Koval</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dorosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sapozhnyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kapura</surname>
          </string-name>
          ,
          <article-title>"Mobile Robot Navigation Method for Environment with Dynamical Obstacles”</article-title>
          ,
          <source>Proceedings of the IEEE Fifth International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications</source>
          ,
          <volume>21</volume>
          -
          <fpage>23</fpage>
          September 2009,
          <string-name>
            <surname>Rende</surname>
          </string-name>
          (Cosenza), Italy, pp.
          <fpage>515</fpage>
          -
          <lpage>518</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Chernonozhkyn</surname>
            ,
            <given-names>V.A.</given-names>
          </string-name>
          , “
          <article-title>Local Area Navigating System for ground mobile robots”</article-title>
          ,
          <source>Scientific and Technical Journal YTMO St</source>
          . Petersburg State University,
          <year>2008</year>
          , №
          <volume>57</volume>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Oleh</surname>
            <given-names>Adamiv</given-names>
          </string-name>
          , Vasyl Koval, Arunas Lipnickas, Viktor Kapura, “
          <article-title>Local navigation method for improvement of mobile robot movement”</article-title>
          ,
          <source>Proceedings of the 3rd International Conference Mechatronic Systems and Materials (MSM</source>
          <year>2007</year>
          ),
          <source>Kaunas (Lithuania)</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>245</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>William</given-names>
            <surname>Benn</surname>
          </string-name>
          and Stanislao Lauria, “
          <article-title>Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions”</article-title>
          , Hindawi Publishing Corporation Mathematical Problems in Engineering, Volume 2012,
          <string-name>
            <surname>P.</surname>
          </string-name>
          <year>14</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Olivier</surname>
            <given-names>Koch</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matthew</surname>
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Walter</surname>
          </string-name>
          . “
          <article-title>Ground Robot Navigation using Uncalibrated Cameras”</article-title>
          ,
          <source>In Proc. IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <source>May</source>
          <year>2010</year>
          , pp.
          <fpage>2423</fpage>
          -
          <lpage>2430</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <source>Adept Mobile robots</source>
          ,
          <year>2014</year>
          , http://www.activmedia.com/.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>[20] “AmigoBot Operations Manual, revision 4.3”</source>
          ,
          <year>2018</year>
          , http://robots.mobilerobots.com/wiki/Manuals.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21] “Mobile robotics platforms”,
          <year>2018</year>
          , https://raweb.inria.fr/rapportsactivite/RA2015/lagadic/uid51.html.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>