<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.54254/2755-2721/33/20230278</article-id>
      <title-group>
        <article-title>A comparative study of bug algorithms for robot</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mykhaylo Strembitskyi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Myroslava Yavorska</string-name>
          <email>myavorska@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andriy Palamar</string-name>
          <email>palamar.andrij@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Kochan</string-name>
          <email>roman.v.kochan@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valeriy Yeromenko</string-name>
          <email>v.yeromenko@wunu.edu.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>79013, 12 Stepan Bandera Street, Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ternopil Ivan Puluj National Technical University</institution>
          ,
          <addr-line>Ruska str., 56, 46001, Ternopil</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>West Ukrainian National University</institution>
          ,
          <addr-line>Lvivska str. 11, Ternopil, 46009</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>55</volume>
      <issue>4</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>The application of laser distance sensors for obstacle detection and positioning of a mobile robotic platform in space is considered. A robotic platform using laser distance sensors was developed. A method for determining dynamic obstacles and a method of positioning are proposed. The calculation of the non-contact distance sensors of the robotic platform, the sensor placement scheme, and the positioning of the mobile robotic platform in space are described. The mobile robotic platform uses a complex of infrared distance sensors to determine parallelism, the distance to an obstacle, and its position.</p>
      </abstract>
      <kwd-group>
        <kwd>mobile robotic platform</kwd>
        <kwd>sensor</kwd>
        <kwd>algorithm</kwd>
        <kwd>positioning</kwd>
        <kwd>linearization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Planning the movement of a mobile robotic platform, determining obstacles, and positioning in
space are among the most important problems in the operation of robotic platforms and one of the
actively investigated areas of modern scientific and practical knowledge. Robot navigation is
the problem of finding a path for a robot to move from a start location to a goal location in an
environment, while avoiding obstacles and minimizing costs [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The problem can be complicated
by unknown or dynamic environments, where the robot has limited or uncertain information
about its surroundings. One approach to solving this problem is to use bug algorithms,
which are simple and efficient techniques that rely on local sensor information and do not require
global maps or planning [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Bug algorithms are inspired by the behavior of insects, such as ants
or cockroaches, that can navigate complex environments using simple rules. Their basic idea is to
make the robot move toward the goal until it encounters an obstacle, then follow the obstacle
boundary until it finds a point closer to the goal than the point where it hit the obstacle, and then
resume moving toward the goal. This process is repeated until the robot reaches the goal or
determines that the goal is unreachable [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
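      <p>The move-to-goal / follow-boundary cycle described above can be sketched as follows. This is a minimal illustrative sketch: the PointRobot class, its method names, and the obstacle-free test world are assumptions made for the example, not part of the original work.</p>

```python
import math

def dist(p, g):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - g[0], p[1] - g[1])

class PointRobot:
    """Minimal simulated robot in an obstacle-free plane (an assumption
    for illustration; a real platform would report sensor readings)."""
    def __init__(self, x, y):
        self.p = (x, y)
    def position(self):
        return self.p
    def obstacle_ahead(self):
        return False                       # empty world in this sketch
    def step_toward(self, goal, step):
        d = dist(self.p, goal)
        t = min(step / d, 1.0)             # do not overshoot the goal
        self.p = (self.p[0] + t * (goal[0] - self.p[0]),
                  self.p[1] + t * (goal[1] - self.p[1]))
    def follow_boundary_step(self, step):
        pass                               # never reached without obstacles

def bug_navigate(robot, goal, step=0.1, eps=0.05):
    # Move toward the goal until an obstacle is hit, follow its boundary
    # until a point closer to the goal than the hit point is found,
    # then resume moving toward the goal.
    hit_dist = math.inf
    mode = "MTG"                           # move-to-goal
    while dist(robot.position(), goal) > eps:
        if mode == "MTG":
            if robot.obstacle_ahead():
                hit_dist = dist(robot.position(), goal)
                mode = "FB"                # follow-boundary
            else:
                robot.step_toward(goal, step)
        else:
            robot.follow_boundary_step(step)
            if dist(robot.position(), goal) < hit_dist:
                mode = "MTG"               # leave point found
    return robot.position()
```

In the obstacle-free world the loop degenerates to pure move-to-goal; the FB branch exists to show where boundary following plugs in.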
      <p>
        Bug algorithms can be classified into two categories: contact-based and range-based.
Contact-based bug algorithms use only tactile sensors, such as bumpers or whiskers, to detect obstacles.
Range-based bug algorithms use distance sensors, such as sonar or laser, to measure the distance to
obstacles and the goal [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Contact-based bug algorithms are simpler and more robust, but they
may require more time and distance to reach the goal. Range-based bug algorithms are faster and
more efficient, but they may be more sensitive to sensor noise and errors [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The main features, advantages, and disadvantages of different approaches to constructing bug
algorithms, a discussion of their performance and applicability in various scenarios, and some
examples of practical applications are presented in [
        <xref ref-type="bibr" rid="ref6 ref7">6-8</xref>
        ]. We review the main features,
advantages, and disadvantages of different bug algorithms [9,10], and discuss their performance
and applicability in various scenarios. The development of mathematical models based on
difference and differential equations on lattices has demonstrated high accuracy in simulating
spatial and dynamic interactions in complex systems [11-13]. These modeling techniques, which
ensure operational stability and precise parameter identification, can inform the design of
sensor-based robotic systems where spatial awareness and real-time response are critical [14,15].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Range-Based Bug Algorithms</title>
      <p>Range-based bug algorithms are a more advanced and efficient type of bug algorithm. They were
first proposed by Khatib (1986), who introduced a variant called the Potential Field Method (PFM).
These algorithms assume that the robot has a perfect compass and a distance sensor that can
measure the distance to obstacles and the goal. Range-based bug algorithms use the distance sensor
information to create a potential field, defined by a function that assigns a
value to each point in the environment. The potential field is composed of two components: an
attractive component, which pulls the robot toward the goal, and a repulsive component, which
pushes the robot away from obstacles. The robot’s motion is determined by the gradient of the
potential field, which indicates the direction of the steepest ascent or descent. The robot moves
along the direction of the negative gradient, which leads to the minimum of the potential field.</p>
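      <p>As a concrete illustration, the two field components and a negative-gradient step can be written as follows. This sketch assumes the classical quadratic attractive potential and a FIRAS-style repulsive potential; the gains k, eta, rho0 and the function names are illustrative choices, not taken from the paper.</p>

```python
import math

def attractive_grad(p, goal, k=1.0):
    # Gradient of U_att = 0.5 * k * ||p - g||^2: points away from the
    # goal, so descending it pulls the robot toward the goal.
    return (k * (p[0] - goal[0]), k * (p[1] - goal[1]))

def repulsive_grad(p, obstacle, eta=1.0, rho0=1.0):
    # Gradient of U_rep = 0.5 * eta * (1/rho - 1/rho0)^2, active only
    # within the influence radius rho0 of the obstacle.
    rho = math.hypot(p[0] - obstacle[0], p[1] - obstacle[1])
    if rho >= rho0 or rho == 0.0:
        return (0.0, 0.0)
    scale = eta * (1.0 / rho - 1.0 / rho0) / rho**3
    return (-scale * (p[0] - obstacle[0]), -scale * (p[1] - obstacle[1]))

def descend_step(p, goal, obstacles, gain=0.1):
    # Move along the negative gradient of the combined field.
    gx, gy = attractive_grad(p, goal)
    for o in obstacles:
        rx, ry = repulsive_grad(p, o)
        gx, gy = gx + rx, gy + ry
    return (p[0] - gain * gx, p[1] - gain * gy)
```

Descending the combined field moves the robot toward the goal while the repulsive term grows without bound near obstacle boundaries, which is also the source of the local-minimum problem noted below.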
      <p>Range-based bug algorithms are faster and more efficient than contact-based bug algorithms,
since they allow the robot to avoid obstacles without touching them and to find shorter or optimal paths to the goal.
However, range-based bug algorithms may suffer from some drawbacks, such as local minima,
oscillations, and sensitivity to sensor noise and errors.</p>
      <p>Bug navigation algorithms are based on mathematical concepts that describe the robot’s
motion, sensing, and decision making. For example:</p>
      <p>The robot’s position is represented by a vector p = (x, y) in a Cartesian coordinate system,
where x and y are the horizontal and vertical coordinates, respectively.</p>
      <p>The robot’s orientation is represented by an angle θ measured from the positive x-axis in a
counterclockwise direction.</p>
      <p>The robot’s goal position is represented by another vector g = (x_g, y_g), which is assumed to
be known and fixed.</p>
      <p>The robot’s motion is controlled by two inputs: a linear velocity v and an angular velocity
ω. The robot’s kinematic model is given by:
dp/dt = v cos θ i + v sin θ j, (1)
dθ/dt = ω, (2)
where i and j are the unit vectors along the x-axis and y-axis, respectively. The robot’s
distance to the goal is given by the Euclidean norm of the difference between p and g:
d(p, g) = ‖p − g‖ = √((x − x_g)² + (y − y_g)²). (3)</p>
      <p>The robot’s direction to the goal is given by the angle between the vector g − p and the positive
x-axis:
α(p, g) = arctan((y_g − y) / (x_g − x)). (4)</p>
      <p>The robot’s heading error is given by the difference between θ and α(p, g):
e(p, θ, g) = θ − α(p, g). (5)</p>
      <p>The robot’s distance sensor measures the distance to the nearest obstacle in a given direction β,
which is relative to the robot’s orientation. The sensor model is given by:
r(β) = d(p, o(β)) + n(β), (6)
where o(β) is the position of the obstacle point in direction β, and n(β) is the sensor noise,
which is assumed to be zero-mean Gaussian with variance σ². The robot’s intensity sensor
measures the signal strength from the goal, which depends on the distance and the signal intensity
function f(d). The sensor model is given by:
s = f(d(p, g)) + m, (7)
where m is the sensor noise, which is assumed to be zero-mean Gaussian with variance τ².</p>
      <p>The robot’s decision making is based on switching between two modes: move-to-goal (MTG)
and follow-boundary (FB). The switching conditions are given by:
MTG → FB: r(0) &lt; d_s, (8)
FB → MTG: d(p, g) &lt; d(l, g) or s &gt; s_max, (9)
where d_s is a safety distance threshold, l is the leave point, and s_max is a maximum intensity
threshold.</p>
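      <p>The kinematic model (1)-(2), the heading error (5), and the mode-switching rules (8)-(9) can be sketched in a few lines of Python. The function names and the Euler discretization are illustrative choices, not prescribed by the paper.</p>

```python
import math

def integrate_unicycle(x, y, theta, v, omega, dt, steps):
    # Euler integration of dx/dt = v cos(theta), dy/dt = v sin(theta),
    # dtheta/dt = omega (equations (1)-(2)).
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return x, y, theta

def heading_error(x, y, theta, gx, gy):
    # e(p, theta, g) = theta - alpha(p, g), wrapped into (-pi, pi].
    e = theta - math.atan2(gy - y, gx - x)
    return math.atan2(math.sin(e), math.cos(e))

def switch_mode(mode, r0, d_goal, d_leave, s, d_s, s_max):
    # MTG -> FB when the forward range r(0) falls below the safety
    # distance d_s; FB -> MTG when the goal is closer than at the
    # leave point, or the intensity reading s exceeds s_max.
    if mode == "MTG" and r0 < d_s:
        return "FB"
    if mode == "FB" and (d_goal < d_leave or s > s_max):
        return "MTG"
    return mode
```

Using atan2 for both the goal direction and the wrap keeps the heading error well defined in all quadrants, which a naive arctan of the component ratio would not.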
    </sec>
    <sec id="sec-3">
      <title>3. Control system implementation method for robotic platform</title>
      <p>The control unit considered is designed to control the robotic platform shown in Figure 1.</p>
      <p>The first action after the initialization of the components is to poll the distance sensors and
process the data that will arrive from the ADC to the controller. The next step will be to determine
the direction of rotation of the DC motor, at the same time the position of the servo is set. After
these operations are completed, a control signal is sent to the DC motor, causing it to start.</p>
      <p>When the control is carried out, it is possible to change the position of the servo drive, control
the speed and direction of rotation of the direct current motor using an encoder.</p>
    </sec>
    <sec id="sec-4">
      <title>4. The principle of operation of non-contact infrared distance sensors</title>
      <p>The principle of operation of the infrared sensor Sharp GP2Y0A02YK0F is based on the use of an
infrared beam that is reflected from an object to measure the distance.</p>
      <p>The distance is calculated using the triangulation method [16, 17]. The sensor consists of an
infrared LED and a PSD (position-sensitive detector) element [18]. The emitted light beam is reflected from the object, and the
reflected beam reaches the PSD element, on which a "light spot" is formed. When the position
of the object changes, the angle of reflection of the beam changes (Figure 2).</p>
      <p>The sensor has a built-in signal-processing circuit, which makes it possible to determine the
distance to the object. The measured distance is represented as an analog signal.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Calculation of non-contact distance sensors of the robotic platform</title>
      <p>To determine the calibration characteristic of a non-contact distance sensor, we turn to the
experimentally determined dependences of the output voltage on the distance to the reflecting
surface. The dependencies for the surfaces with different reflective characteristics are shown in
Figure 3.</p>
      <p>As one can see from Figure 3, it is possible to specify the operating ranges for which the
dependence of the sensor voltage on the measured distance is acceptable.</p>
      <p>An example of an approximate distance determination based on information received from a
sensor is shown in Figure 4.</p>
      <p>Lmax – is the maximum distance from the sensor to the measurement object.</p>
      <p>Vmax – is the voltage we get when measuring the maximum possible distance to the object.</p>
      <p>For optimal distance measurement modes we will operate with the following data: a distance of 150
centimeters to the measured object corresponds to 0 volts of output signal, and 0 centimeters corresponds to
2.2 volts.</p>
      <p>Sensor characteristic coefficients after approximation by a 5th-order polynomial: [-3.83,
-645.2215, -3849.3362, -1230, 150]</p>
      <p>Minimal error of approximation: 0.0014 cm.
Mean error of approximation: 2.6278 cm.
Maximal error of approximation: 9.1994 cm.
Blue Pill ADC resolution: 12 bits.</p>
      <p>These error metrics are comparable to those achieved in complex systems modeling using
numerical simulation and parameter identification techniques, where mean approximation errors
below 5% are considered acceptable for real-time applications [19]. The use of polynomial
approximation and signal linearization in sensor calibration mirrors approaches used in the
analysis of complex systems dynamics, where accurate signal reconstruction is essential [20].</p>
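      <p>The calibration procedure described above — approximating the voltage-to-distance characteristic with a 5th-order polynomial and reporting minimal, mean, and maximal errors — can be reproduced in outline. The sample data below is hypothetical, generated from a Sharp-like inverse-voltage curve; it is not the authors' measurement data.</p>

```python
import numpy as np

# Hypothetical calibration samples: distance ~ 60 / V over the sensor's
# usable voltage range (an assumed shape, for illustration only).
voltage = np.linspace(0.4, 2.2, 20)        # V
distance = 60.0 / voltage                   # cm

# Least-squares fit of the inverse characteristic distance = P(voltage)
# with a 5th-order polynomial, as in the text above.
coeffs = np.polyfit(voltage, distance, 5)
predicted = np.polyval(coeffs, voltage)
errors = np.abs(predicted - distance)

print(f"min {errors.min():.4f}  mean {errors.mean():.4f}  "
      f"max {errors.max():.4f} cm")
```

The min/mean/max statistics computed this way correspond directly to the three approximation-error figures quoted above.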
      <p>ADC resolution of the microcontroller:</p>
      <sec id="sec-5-1">
        <title>Resolution of the linearized sensor</title>
        <p>3.3 / 2^12 = 0.0008 V/bit, (11)
0.0008 / 0.0187 = 0.043 cm. (12)</p>
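        <p>These two resolution figures can be checked with a couple of lines, assuming a 3.3 V reference for the 12-bit Blue Pill (STM32) ADC and the 0.0187 V/cm linearized sensor slope that appears in formula (12); both assumptions are inferred from the figures above.</p>

```python
# Resolution arithmetic for a 12-bit ADC with an assumed 3.3 V reference.
vref = 3.3                   # V, assumed ADC reference voltage
bits = 12                    # Blue Pill ADC resolution
lsb = vref / 2**bits         # volts per ADC step
slope = 0.0187               # V/cm, assumed linearized sensor sensitivity
resolution_cm = lsb / slope  # smallest resolvable distance step
print(f"{lsb:.4f} V/bit -> {resolution_cm:.3f} cm")
```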
        <p>Methods of accurate distance measurement and determination of the parallelism of mobile
robotic platforms to obstacles and objects in real time are an important component in the tasks set
for modern robotic platforms [21].</p>
        <p>Analysis of the interference of mobile robotic platforms in space is carried out with the help of
four infrared Sharp GP2Y0A02YK0F sensors. The arrangement of the sensors is shown in Figure 6.</p>
        <p>The distance to obstacles and the parallelism of the mobile robotic platform to
objects are determined from an isosceles right triangle. The distance is determined by finding the
bisector (B) according to formula (13):
B = √2·a·b / (a + b). (13)</p>
        <p>Since, while maintaining the required accuracy based on the characteristics of the sensor, we
can measure the distances (a, b) in the range from 15 to 150 cm, the considered scheme allows us to
determine the distance to the obstacle and the parallelism from 0 to 106 cm.</p>
        <p>To determine parallelism, we define the experimental distance (aex) and compare it with the real
distance (a):
a = B / cos 45°, (14)
α1 = arctan(a / b), (15)
α1 = arctan((2B − a) / (√3·B)). (17)</p>
        <sec id="sec-5-1-1">
          <title>If (aex) ≠ (a), we determine the angle α1</title>
          <p>The angle α is determined using the formula</p>
        </sec>
        <sec id="sec-5-1-2">
          <title>If (α1) &gt; (α), then the desired angle is α1</title>
        </sec>
        <sec id="sec-5-1-3">
          <title>If (α1) &lt; (α), then the desired angle is α1</title>
          <p>where aex is the experimentally determined leg length of the isosceles right triangle;
a is the actual length of the leg of the studied triangle;
α1 is the angle of deviation from the object;</p>
          <p>B is the bisector of the studied triangle;</p>
          <p>α is the valid angle to the object, 45°.</p>
          <p>Thus, the work algorithm is as follows. The mobile robotic platform calculates the distance to the obstacle and
the parallelism from the measured distances of the isosceles right triangle, as
in Figure 6 b). The distance to the object is determined by estimating the bisector. After finding the
bisector, we determine the experimental dynamic distance to the object in motion and compare it with the real
one. If the experimental distance differs from the real one, then the angle α1 is determined. When
performing all angle determinations, we compare the result with the actual angle to the object, 45°.</p>
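          <p>The bisector-based distance and parallelism computation described above can be collected into a short sketch. The function names are illustrative; for a platform parallel to the object the two measured legs are equal, the bisector recovers the leg via the cos 45° relation, and the deviation angle equals the valid 45°.</p>

```python
import math

def bisector(a, b):
    # B = sqrt(2) * a * b / (a + b): length of the bisector of the right
    # angle of a right triangle with legs a and b (formula (13)).
    return math.sqrt(2.0) * a * b / (a + b)

def leg_from_bisector(B):
    # a = B / cos 45 deg, valid in the isosceles case (formula (14)).
    return B / math.cos(math.radians(45.0))

def deviation_angle_deg(a, b):
    # alpha1 = arctan(a / b) (formula (15)), returned in degrees;
    # equals 45 deg exactly when the platform is parallel (a == b).
    return math.degrees(math.atan2(a, b))
```

Comparing deviation_angle_deg(a, b) against the valid 45° implements the parallelism check described in the algorithm above.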
          <p>The proposed method of measuring distance and parallelism makes it possible to carry out these
operations using a small number of sensors, and also to make measurements quite
accurately based on the basic characteristics of the sensors. The obtained results are useful for the
tasks described in [22,23].</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>A method for determining dynamic obstacles and a method for positioning a robotic platform are
proposed. The proposed approach is based on using laser distance sensors for
detecting obstacles and positioning a mobile robotic platform in space. The calculation of the
non-contact distance sensors of the robotic platform, the sensor placement scheme, and the
positioning of the mobile robotic platform in space are described.</p>
      <p>The proposed method of using laser distance sensors for determining obstacles and positioning
a mobile robotic platform in space makes it possible to determine the coordinates of the location and
course of a mobile robotic platform using infrared distance sensors. The above functions in various
combinations can be performed by a positioning system: an inertial navigation system capable of
calculating the distance to obstacles, calculating the parallelism of the mobile robotic platform to
the plane, and calculating the course of the mobile robotic platform.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Kaff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Martín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>De La Escalera</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Armingol</surname>
          </string-name>
          ,
          <article-title>Obstacle detection and avoidance system based on monocular camera and size expansion algorithm for UAVs</article-title>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Bhavesh</surname>
          </string-name>
          ,
          <article-title>Comparison of various obstacle avoidance algorithms</article-title>
          .
          <source>Int. J. Eng. Res. Technol.</source>
          ,
          <volume>4</volume>
          , (
          <year>2015</year>
          )
          <fpage>629</fpage>
          -
          <lpage>632</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coppola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. N.</given-names>
            <surname>McGuire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Y. W.</given-names>
            <surname>Scheper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. C. H. E.</given-names>
            <surname>de Croon</surname>
          </string-name>
          ,
          <article-title>On-board communication-based relative localization for collision avoidance in micro air vehicle teams</article-title>
          ,
          <source>Autonomous Robots</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Fuentes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maravall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>de Lope</surname>
          </string-name>
          ,
          <article-title>Entropy-based search combined with a dual feedforward/feedback controller for landmark search and detection for the navigation of a UAV using visual topological maps</article-title>
          ,
          <source>in Proc. Robot 2013: First Iberian Robotics Conference. Springer Series on Advances in Intelligent Systems and Computing</source>
          , Heidelberg: Springer, volume
          <volume>233</volume>
          ,
          <year>2014</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Noreen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Habib</surname>
          </string-name>
          ,
          <article-title>Optimal path planning using RRT* based approaches: a survey and future directions</article-title>
          .
          <source> Int. J. Adv. Comput. Sci.</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>97</fpage>
          -
          <lpage>107</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Lopez-Guede</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Graña</surname>
          </string-name>
          ,
          <article-title>Neural modeling of hose dynamics to speedup reinforcement learning experiments</article-title>
          ,
          <source>in Bioinspired Computation in Artificial Systems (Elche)</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B.</given-names>
            <surname>Gfeller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mihalak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Suri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Vicari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Widmayer</surname>
          </string-name>
          ,
          <article-title>Counting targets with mobile sensors in an unknown environment</article-title>
          .
          <source>In ALGOSENSORS</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>