<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>System and method of automatic collection of objects in the room</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mariia Yu. Tiahunova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Halyna H. Kyrychek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana O. Bohatyrova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daryna D. Moshynets</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University “Zaporizhzhia Polytechnic”</institution>
          ,
          <addr-line>64 Zhukovsky Str., Zaporizhia, 69063</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>174</fpage>
      <lpage>186</lpage>
      <abstract>
<p>A system and method for the automatic collection of objects in a room, focused on minimum energy consumption, is proposed in this paper. This result is achieved by implementing an improved method of automatic collection of items in the room and a mathematical method for calculating the desired motion trajectory; developing and implementing algorithms that realize the proposed method; and building a software and hardware implementation of the system for automatic collection of items with minimal energy consumption, including a small number of system components and an improved movement trajectory. The system's main component is the Arduino Uno, which acts as a controller. The developed software makes it possible to evaluate the implemented method's effectiveness in a real-life system. An application example of the proposed method is given.</p>
      </abstract>
      <kwd-group>
        <kwd>roboplatform</kwd>
        <kwd>Alphabot</kwd>
        <kwd>Arduino</kwd>
        <kwd>optimization</kwd>
        <kwd>evolutionary method</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Humanity is on the verge of a new era of technological development. Robotics is already
rapidly entering people’s daily lives, helping them interact more effectively
with automated systems, improving existing jobs and, in general, giving people more time to focus
on what they find important, interesting and fun. We are talking about cyber-physical systems
for modeling complex sociotechnical systems that largely control themselves.</p>
      <p>
        The principle of operation of such systems is to combine physical production processes or any
other processes that require continuous control in real-time using software [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and electronic
systems.
      </p>
      <p>
        A robot differs from a conventional automatic system in its multipurpose nature, versatility,
and ability to be reconfigured to perform various functions. Automation of robotic systems
requires the development of robots capable of intellectual activity, using a pre-prepared program
or artificial intelligence technologies [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ].
      </p>
      <p>Today, the use of robots covers almost all industries and tasks. With the rapid development
of robotics, human-machine interaction will soon become a common daily practice. Moreover,
technological advances are steadily increasing the adaptability and flexibility of robots. Most
people in developed countries prefer home automation, allowing control of lighting, air
conditioning, audio content, security systems, and home appliances. Much attention is paid to the
creation of mobile robots that are used at home or in situations where the presence
of a person is dangerous or optional. Modern robotics focuses on the complete automation
and autonomy of robots and robotic systems. A characteristic feature of such robots is the ability to
partially or completely take over the mechanical, motor and intellectual functions of a person.</p>
      <p>Additional support is provided by connected robots that cover all types of services, such as
robotic vacuum cleaners. Modern companies are actively developing this area, creating new
assistant robots. The number and popularity of robotic assistants in homes are growing
at a high rate, due to the growing range of functions they provide and their falling price.</p>
      <p>Most robotic home assistants are entertaining in nature, such as a voice assistant built into
a music speaker. Another group of robots is designed to free humans from household
chores: they can guard houses, wash windows, clean pools and do small household tasks. Robots
greatly improve the quality of our life at home, at work and during leisure. They are adapted
to recognize objects, distinguish objects in the environment, repeat simple human movements,
and coordinate with other robots. Such functionality has become possible thanks to innovations
in the development of robotics, as well as improved algorithms
that control the perception, reasoning, control and coordination of robots.</p>
      <p>The purpose of the study is to organize the automatic collection of items in a room by
developing a system and collection method that make it possible not only to assess the results of the work,
but also to carry out the collection process with minimal energy consumption.</p>
      <p>The objective of the research is to develop a robotic autonomous system for collecting objects
in a room with increased requirements for energy saving due to the developed method.</p>
    </sec>
    <sec id="sec-2">
      <title>2. System structure</title>
      <p>The basis of the collection system is the Alphabot double-deck robotic platform,
which can be controlled remotely via Bluetooth or WiFi. The robotic
platform is equipped with two motors and contains mounting holes for attaching various sensors,
controllers and power supplies. The main component of the system is the
Arduino Uno hardware and software complex, which acts as the controller. It is a board
that houses the microcontroller and other electronic components.</p>
      <p>
        The MG995R servo motor is used for automatic remote control of the claw, providing its
opening and closing functions. The wheels are driven by DC motors with a gearbox, and
photo-interrupter sensors help determine the rotation speed of the motors. To determine
the distance, the HC-SR04 ultrasonic sensor is connected to the robot platform
[
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>The robotic claw is made of light metal. The claw opens approximately 55 mm and, depending
on the servo used, can lift heavy objects.</p>
      <p>
        The main connections to the Arduino board are shown in figure 1. In this case, the board is
connected in a standard way to the Alphabot platform, according to the documentation [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Optimization of energy consumption in this work is considered from the point of view of
minimizing the robot’s movement trajectory. Changing the component base is planned
as the next stage of research into energy consumption optimization.</p>
    </sec>
    <sec id="sec-3">
      <title>3. System operation</title>
      <p>For the effective functioning of the system, a method for collecting items from the floor of a
room was developed that allows collection with minimal energy consumption. For
energy consumption to be minimal, the time of the task being performed must also be minimal.
This result can be achieved by moving the robot along the path of minimum
total distance.</p>
      <p>
        Thus, the developed method of collecting objects from the floor is as follows. The room
where the robot moves to collect items is modeled as a matrix of points, each of which is equal
to the length of the robot platform with the claw. The robot stops at those points where it can
turn to the left. With the help of an ultrasonic sensor, it determines how far it can travel forward,
that is, to a wall or an object [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
      <p>As soon as the system reaches an obstacle and it turns out to be a wall, this point
becomes the “basket” where objects will be collected. The robot then makes a 180-degree turn
and continues to the next obstacle. If there is an object on the way, it tries to grab it and,
if successful, determines the shortest path to the “basket” from the point at which it stopped,
and begins to move in that direction.</p>
      <p>Having reached the collection point, it leaves the object and returns to the point
where it picked it up, and continues to search for further objects. If the robot was unable
to grip the object because its dimensions differ from the permissible parameters, it makes a
detour around the object. When the robot reaches the other end of the room, it looks for the
shortest route back to the starting point. As soon as the robot is at the starting point of the
room, it turns off.</p>
      <p>The algorithm for the functioning of the system that implements the developed method of
collecting items, based on the above, is as follows:
1. The robot scans the distance to the obstacle using an ultrasonic sensor.
2. The robot moves towards an obstacle.
3. Having stopped at an obstacle, the object is gripped: the claw opens, the robotic platform
travels 4 cm and the claw is closed.
4. The robot moves backward 8 cm and checks the distance:
• if the distance does not change and the “basket” is indicated, then go to step 8;
• if the distance does not change and the “basket” is not indicated, then the robot
turns in place by 180 degrees, moves to the wall with the help of obstacle sensors
and makes a reverse turn;
• if the distance changes and the “basket” is indicated, then go to step 6;
• if the distance changes and the “basket” is not indicated: the robot defines this point
as a collection point. Then step 6 is performed.
5. The robot opens the claw, moves forward 4 cm, closes the claw. Then step 9 is performed.
6. Turn left.
7. If the distance allows movement, a 90-degree turn is performed, then step 1 is performed,
otherwise, the robot is at the end of the room: step 10 is performed.
8. The shortest path to the “basket” is determined. A movement is made towards it and step
5 is carried out.
9. The shortest path to the capture point of the object is determined. When the robot arrives
at this point, step 1 is performed.
10. The shortest path to the starting point is determined.
11. The movement to the starting point is carried out, and the robot finishes its work.</p>
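      <p>The four-way branching in step 4 can be sketched as a small decision function. The enum names and the decide() helper below are our own illustrative choices; the actual firmware in section 6 expresses the same logic with flags:</p>

```cpp
#include <cassert>

// Possible outcomes of step 4 of the collection algorithm.
enum class Action {
    GoToBasket,       // step 8: carry the gripped object to the basket
    TurnAroundAtWall, // distance unchanged, no basket marked yet
    TurnLeft,         // step 6: continue scanning the next column
    MarkAsBasket      // mark this point as the basket, then step 6
};

// distance_changed: the measured distance differs after backing up,
// basket_known: a collection point has already been designated.
Action decide(bool distance_changed, bool basket_known) {
    if (!distance_changed && basket_known)  return Action::GoToBasket;
    if (!distance_changed && !basket_known) return Action::TurnAroundAtWall;
    if (distance_changed && basket_known)   return Action::TurnLeft;
    return Action::MarkAsBasket;
}
```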
    </sec>
    <sec id="sec-4">
      <title>4. Mathematical justification</title>
      <p>To prove that energy consumption will be minimal, with the smallest distance traveled by the
robot and with the minimum number of devices, we will construct the following mathematical
model.</p>
      <p>First, let us prove that a platform with fewer connected devices uses less power.</p>
      <p>Let S be the distance that the robot covers when moving around the room while
collecting all reachable objects. Denote the energy that it spends as E, and let n be
the number of connected devices powered from one source.</p>
      <p>The energy of each i-th device is denoted E_i, where i = 1, …, n. Then the energy
spent by the robot over the entire path traveled is E = E_1 + E_2 + … + E_n.
Considering that no E_i can be less than zero, then by the additivity of the sum, the lower
the energies E_i of the n devices, the lower the total energy E spent by the robot while
covering the entire distance S. This means that the fewer devices connected to the robotic
platform, the less energy they will spend during a given operating time. Thus, it has been
shown that the device developed in this work meets the requirements of energy saving.</p>
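      <p>The additivity argument amounts to a simple summation, E = E_1 + … + E_n with every E_i ≥ 0, so dropping a device can never increase the total. A numeric sketch (the per-device energy values are arbitrary assumptions):</p>

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Total energy E spent over the path is the sum of the energies E_i
// of the n connected devices. Since each E_i >= 0, removing a device
// from the platform can only decrease (or keep) the total.
double total_energy(const std::vector<double>& device_energies) {
    return std::accumulate(device_energies.begin(),
                           device_energies.end(), 0.0);
}
```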
      <p>Secondly, we prove that energy consumption is least when the distance traveled by the
robot is minimal. Suppose that the robot moves uniformly, since the time
for gripping an object does not depend on the robot's path but remains
unchanged regardless of which path was traversed. Then the following formula
for the distance holds: S = v t, where t is the time during which the movement is
carried out at the given robot speed v.</p>
      <p>Then v = S / t. Taking into account the energy of motion E = m v^2 / 2, we get E = m S^2 / (2 t^2).
This means that the shorter the path traveled by the robot, the less energy is spent by the
device, which is what had to be proved. Thus, at this point of the work, it was shown that the
developed system meets the requirements of energy saving and has significant advantages in
this respect.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Analysis of similar systems</title>
      <p>Minimizing energy costs is always an urgent problem. Many scientists are dealing with this
issue, robotics researchers not least. Naturally, there are many studies on similar topics, some
of which can be considered distant analogs of the developed system.</p>
      <p>
        So, for example, Taniguchi et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] consider autonomous planning based on spatial concepts
for tidying up in the house using service robots. To complete the tidying task, the robot must
recognize the environment, evaluate objects to tidy while moving, and determine where to
place objects in three-dimensional (3D) space.
      </p>
      <p>The proposed tidy up planning method is formulated to first select the object whose tidied
place is the most defined. In the spatial concept model learned in the tidied environment, the
likelihood is the highest when each object position is tidied. Therefore, when tidying up one
object from scattered objects, the object with the highest likelihood is selected first. In this
study, tidying up implies moving the positions of the observed objects to increase the likelihood
of the spatial concept model.</p>
      <p>First, the robot detects cluttered objects while moving. Next, if multiple scattered objects
are detected, the robot decides on an object and a place to be tidied up. The robot estimates
simultaneously the order and positions of the objects to tidy up from the multiple objects it
observed in the cluttered environment. Moreover, the robot can determine whether the tidied
position of an object is unknown from the model's equations. If unknown, the robot can ask where to
move the object.</p>
      <p>Finally, the robot performs motion planning. After estimating the tidied positions and
planning the object order, the robot must accurately manipulate the objects
to move them. The developed system uses the MoveIt! framework for motion planning of
the robot arm when grasping an object. Furthermore, the robot moves to the appropriate search
positions while performing self-localization based on its map and observations.</p>
      <p>When the robot finishes tidying an object, the state of the other objects may have changed
owing to external factors. To deal with such scenarios, it is possible to sequentially plan the
tidy up task by redoing the object detection after tidying each object.</p>
      <p>The approach is divided into learning and planning phases. In the learning phase, the robot
observes the tidied environment. Spatial concepts are formed from observed data regarding
objects and their 3D positions by multimodal learning. In the planning phase, the robot observes
scattered objects in the environment, and estimates the order and positions of objects to be
tidied up based on the spatial concepts formed in the learning phase.</p>
      <p>According to the above flow, the robot tidies up a selected object to the target position
repeatedly. In the case of an unknown object, the robot tidies it up by asking the user about its
place name.</p>
      <p>In contrast to the developed system, the one described above is much more advanced: a robot that
has undergone machine learning scans the surrounding space using a camera and builds the
shortest paths based on the data obtained and built-in algorithms. On the one hand, such a
system is more advantageous in terms of finding goals and shortest paths in advance; on the other
hand, it requires much higher development costs than the one presented in this article.</p>
      <p>
        In a large number of works, the construction of trajectories for robots with obstacle detection
is considered. For example, in work [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], an algorithm based on the robot movement algorithms “Spiral” and “Movement
along the wall” is used to explore the space.
      </p>
      <p>The “Spiral” algorithm is a movement along a circle of increasing radius. “Movement along
the wall” is an algorithm in which the robot moves along the perimeter of the space. Unlike the
“Spiral” algorithm, in this algorithm the robot can go around obstacles. The proposed algorithm
for teaching a mobile robot to detect obstacles is a modified combination of the two
algorithms described above. The learning algorithm consists of three parts. The first part can
be called the main one – the algorithm for passing one circle (figure 2). The robot begins
its exploration of the space from the outermost possible perimeter, and with each circle it narrows
the radius by one. For the robot to narrow the radius with each completed circle, the
second part of the algorithm is needed – the algorithm for starting the exploration of a new circle
(figure 3).</p>
      <p>After finishing the movement according to this algorithm, the robot needs to check the
space for unexplored areas (figure 4). When there are few obstacles and their shape
is simple, the robot explores the space in one pass. Otherwise, there will be dark areas unknown
to the robot. To prevent the robot from spending a lot of time on the way to such an area, it is
necessary to use a shortest-path algorithm. There are many obstacle avoidance
algorithms; among the classical algorithms for solving this problem, Dijkstra's algorithm is the
most efficient.</p>
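      <p>As a reference sketch of the shortest-path step, Dijkstra's algorithm on a grid of cells can look as follows; the grid encoding (0 free, 1 obstacle) and the uniform step cost are our own assumptions, not taken from the cited work:</p>

```cpp
#include <array>
#include <cassert>
#include <queue>
#include <vector>

// Dijkstra's algorithm on a grid of cells. Returns the length of the
// shortest path from (sr, sc) to (gr, gc) in cells, or -1 if the goal
// is unreachable. Cells marked 1 are obstacles.
int shortest_path(const std::vector<std::vector<int>>& grid,
                  int sr, int sc, int gr, int gc) {
    int rows = grid.size(), cols = grid[0].size();
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    using Node = std::array<int, 3>;  // (distance, row, col)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> pq;
    dist[sr][sc] = 0;
    pq.push({0, sr, sc});
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto [d, r, c] = pq.top();
        pq.pop();
        if (d > dist[r][c]) continue;  // stale queue entry
        if (r == gr && c == gc) return d;
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (grid[nr][nc] == 1) continue;  // obstacle
            if (dist[nr][nc] == -1 || d + 1 < dist[nr][nc]) {
                dist[nr][nc] = d + 1;
                pq.push({d + 1, nr, nc});
            }
        }
    }
    return -1;
}
```

On a uniform grid, breadth-first search gives the same answer with less bookkeeping; Dijkstra's algorithm pays off once cells carry different traversal costs.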
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], to solve the trajectory planning problem, an approach based on Hopfield artificial neural
networks is used; it belongs to the intelligent algorithms – fuzzy and genetic algorithms –
whose main advantage is calculation speed with a moderate load on onboard
computing complexes. However, the use of this mathematical apparatus imposes
significant restrictions on the minimum system requirements of the MR computing platform,
which excludes the possibility of implementing the method on devices with limited computing
resources, for example, AVR or STM microcontrollers with a processor width of less than 32
bits, which is a significant drawback.
      </p>
      <p>Based on everything discussed above, we can conclude that the developed system is indeed
relevant, having no exact analogs and showing an advantage over others with respect to the stated goals.</p>
    </sec>
    <sec id="sec-6">
      <title>6. System implementation</title>
      <p>
        When the Arduino board is connected to power, the firmware starts executing. The
microcontroller is configured so that the bootloader is responsible for the control functions
at system startup. First, for 1-2 seconds the bootloader checks whether the user has started sending a
new program. If the reprogramming process is started, the sketch is loaded into memory and
control is transferred to it. If there is no new program, the bootloader executes the previously
saved program [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>After starting program execution, the Arduino performs a number of routine
initialization and environment setup operations, and only then proceeds to execute the
code contained in the sketch. Thus, Arduino eliminates the need to remember all
the details of the microprocessor architecture and lets the developer concentrate on the tasks at hand.</p>
      <p>
        The robotic platform contains a large number of sensors and actuators, which can be controlled
through the AlphaBot.h library, which can be downloaded from the official AlphaBot website [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The
instance used to send commands from the sketch is Car1, and the methods used by the
sketch are: SetSpeed(), Forward(), Brake(), Backward(), Left(), Right(), LeftCircle(), RightCircle(),
MotorRun().
A set of functions for controlling a servo drive is provided by the Servo.h library:
• servo.attach() – indicates the pin to which the servo control wire is connected;
• servo.write() – sets the turning angle;
• servo.writeMicroseconds(uS) – a value in microseconds (uS) is passed to control the servo,
setting the rotation angle to this value;
• servo.read() – reads the current angle of rotation of the servo;
• servo.attached() – determines whether a pin is bound to the servo;
• servo.detach() – disconnects a pin from the Servo library.
      </p>
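      <p>The relation between servo.write() and servo.writeMicroseconds() can be illustrated with a small conversion sketch. The 544-2400 µs endpoints are the Arduino Servo library defaults, which is an assumption here: a particular servo such as the MG995R may need a narrower range.</p>

```cpp
#include <cassert>

// Maps a servo.write() angle (0..180 degrees) to the pulse width in
// microseconds that servo.writeMicroseconds() would receive, using
// the Arduino Servo library's default 544..2400 us range (assumed).
int angle_to_microseconds(int angle_deg) {
    if (angle_deg < 0)   angle_deg = 0;    // clamp as the library does
    if (angle_deg > 180) angle_deg = 180;
    // Linear map of 0..180 degrees onto 544..2400 microseconds.
    return 544 + (2400 - 544) * angle_deg / 180;
}
```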
      <p>
        When the program is loaded, the Arduino allows the code to participate in the initialization
of the system. To do this, the microcontroller is given commands that it executes at the time
of boot and never returns to (that is, these commands are executed only once
at system startup). It is for this purpose that a block is allocated in the program in which
such commands are stored – void setup() [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ]. In this function, the necessary properties are
set: the sensor configuration, the speed of the robot, and the pin for connecting
the servo drive.
void setup()
{
  ProximityConfig();
  UltrasonicConfig();
  myservo.attach(9);
  Serial.begin(9600);
  Car1.SetSpeed(150);
}
Variables used in the program for the functioning of the robot:
• run – to set the start of the robot’s functioning;
• side – to determine the side to which the robot is returning;
• distance – to determine the distance to the obstacle at the present time;
• spacing – to save the old value of the distance to the obstacle;
• cart – to determine the presence of a “basket”;
• pos_last_point – to save the distance to the point where the object was taken;
• time_turn – to save the time for which the reversal is carried out;
• pos_cart – to save the distance to the “basket”.
      </p>
      <p>Immediately after setup() comes the loop() function – the entry point of the program, which
ensures the execution of commands while the Arduino board is turned on. This function
contains all commands that will be executed cyclically. Starting from the first command, the
microcontroller performs all subsequent steps to the very end and immediately returns to the
beginning to repeat the same instructions. Checking the distance in the program is performed
by the check_distance() function. It allows you to check the driving distance in the range from 2
to 30 cm.
int check_distance()
{
  if ((1 &lt; Distance) &amp;&amp; (Distance &lt; 38))
  {
    return false;
  } else {
    return true;
  }
}</p>
      <p>The capture of objects is carried out by the Servo() function.</p>
      <p>A cycle is started that opens the claw 180 degrees. Next, a delay is called to start the motors.
The robot moves forward 4 cm. The next cycle is responsible for closing the claw.</p>
      <p>The object_check() function checks whether the object was gripped.
int object_check()
{
  Distance_test();
  if (spacing &gt; Distance)
  {
    return true;
  } else {
    return false;
  }
}</p>
      <p>If the distance to the object is less than the value of the distance that was before the capture,
then it is considered that the object was taken by the robot. Otherwise, the object defines the
wall.</p>
      <p>To leave an item in the “basket”, the program calls the leave_object() method. Obstacle sensors
are monitored by the sensor_check() function. The robot is set to move backward, and
a loop runs until the sensors report an object. Upon completion, the system
performs a 180-degree turn in place.
void sensor_check()
{
  Car1.Motor(-150, -150);
  Car1.Brake();
  while (true)
  {
    RSensor = digitalRead(RSensorPin);
    LSensor = digitalRead(LSensorPin);
    if (LSensor == HIGH &amp;&amp; RSensor == HIGH)
    {
      Car1.Motor(0, 0);
      Car1.Brake();
      delay(1000);
      Car1.RightCircle(2000);
      Car1.Brake();
      break;
    }
  }
}
The Distance_test() function writes the distance to the obstacle into the Distance variable.
void Distance_test()
{
  digitalWrite(TRIG, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  float Fdistance = pulseIn(ECHO, HIGH);
  Fdistance = Fdistance / 58;
  Distance = Fdistance;
}</p>
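      <p>The conversion inside Distance_test() can be isolated as a pure function (the name echo_to_cm is ours, for illustration): the HC-SR04 echo pulse width in microseconds divided by 58 gives centimetres, since sound needs about 58 µs for the round trip to an object 1 cm away.</p>

```cpp
#include <cassert>

// Converts an HC-SR04 echo pulse width (microseconds) to distance
// in centimetres: the sound travels to the object and back, about
// 58 us per centimetre of distance.
float echo_to_cm(float echo_us) {
    return echo_us / 58.0f;
}
```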
      <p>The shortest path to the “basket” is determined by the road_to_busket() function.
The side variable determines the side to which to return (1 – left, 2 – right).</p>
      <p>The road_to_last_point() function returns the robot along the shortest path to the point where
the item was taken.</p>
      <p>To return the robot to its starting position, the road_to_start_point() function is called.</p>
      <p>The turn() function determines the side in which to turn and carries out movement in that
direction.
int turn() {
  if (side == 1) {
    Car1.RightCircle(1000);
    Car1.Brake();
    Distance_test();
    if (check_distance() == true) {
      Car1.RightCircle(1000);
      Car1.Brake();
      side = 2;
      return true;
    } else {
      return false;
    }
  } else {
    Car1.LeftCircle(1000);
    Car1.Brake();
    Distance_test();
    if (check_distance() == true) {
      Car1.LeftCircle(1000);
      Car1.Brake();
      side = 1;
      return true;
    } else {
      return false;
    }
  }
}</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>On the basis of an Arduino-family platform, an object collection system was created that is
able to: estimate the distance to an obstacle using an ultrasonic sensor; move
to a given point; grab an object no larger than 5.5 cm; deliver the item to the collection
point; and determine the shortest path to a given point. Movement to the right is performed
in the same way as the algorithm of movement to the left.</p>
      <p>There are limitations associated with the use of an ultrasonic distance sensor:
1. Partial reflections can distort the measurement results (this can be caused by surfaces that
are curved or inclined with respect to the direction of emission of the signal).
2. Objects made of sound-absorbing or insulating materials, or with a woolen surface, can
weaken the signal.
3. The size of the object affects the signal quality: the smaller it is, the weaker the reflected
signal.
4. High humidity (rain, snow) contributes to signal distortion.</p>
      <p>The system can be improved with a movable ultrasonic sensor mount, which would help
determine the width of an object in order to calculate the distance needed to go around an obstacle.</p>
      <p>The scientific novelty of the work lies in the fact that a new method of collecting objects in
a room, and a system implementing it, have been developed that allow organizing the collection process with
minimal energy consumption.</p>
      <p>The practical significance of the work lies in the fact that software has been developed
which makes it possible to assess the adequacy and effectiveness of the developed method in
the real operation of the system.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Kirichek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Skrupsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tiahunova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Timenko</surname>
          </string-name>
          ,
          <article-title>Implementation of web system optimization method</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2608</volume>
          (
          <year>2020</year>
          )
          <fpage>199</fpage>
          -
          <lpage>210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tiahunova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Tronkina</surname>
          </string-name>
          , G. Kirichek,
          <string-name>
            <given-names>S.</given-names>
            <surname>Skrupsky</surname>
          </string-name>
          ,
          <article-title>The neural network for emotions recognition under special conditions</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2864</volume>
          (
          <year>2021</year>
          )
          <fpage>121</fpage>
          -
          <lpage>134</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Semerikov</surname>
          </string-name>
          , I. Teplytskyi,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yechkalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kiv</surname>
          </string-name>
          ,
          <article-title>Computer simulation of neural networks using spreadsheets: The dawn of the Age of Camelot</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2257</volume>
          (
          <year>2018</year>
          )
          <fpage>122</fpage>
          -
          <lpage>147</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Kirichek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Harkusha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Timenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kulykovska</surname>
          </string-name>
          ,
          <article-title>System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2546</volume>
          (
          <year>2020</year>
          )
          <fpage>138</fpage>
          -
          <lpage>148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <collab>Waveshare-Electronics</collab>
          , AlphaBot,
          <year>2021</year>
          . URL: http://www.waveshare.com/wiki/AlphaBot.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <source>Designing Embedded Systems with Arduino: A Fundamental Technology for Makers</source>
          , Springer, Singapore,
          <year>2018</year>
          . doi:10.1007/978-981-10-4418-2.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Boranbaev</surname>
          </string-name>
          ,
          <article-title>Development of a robot for transporting small-sized objects based on an AVR microcontroller (Razrabotka robota dlya transportirovki malogabaritnykh obyektov na baze mikrokontrollera AVR)</article-title>
          ,
          <source>Molodoy uchenyy</source>
          (
          <year>2016</year>
          )
          <fpage>277</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Chase</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pandya</surname>
          </string-name>
          ,
          <article-title>A review of active mechanical driving principles of spherical robots</article-title>
          ,
          <source>Robotics</source>
          <volume>1</volume>
          (
          <year>2012</year>
          )
          <fpage>3</fpage>
          -
          <lpage>23</lpage>
          . URL: https://www.mdpi.com/2218-6581/1/1/3. doi:10.3390/robotics1010003.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Taniguchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Isobe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. E.</given-names>
            <surname>Hafi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hagiwara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Taniguchi</surname>
          </string-name>
          ,
          <article-title>Autonomous planning based on spatial concepts to tidy up home environments with service robots</article-title>
          ,
          <source>Advanced Robotics</source>
          <volume>35</volume>
          (
          <year>2021</year>
          )
          <fpage>471</fpage>
          -
          <lpage>489</lpage>
          . doi:10.1080/01691864.2021.1890212.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Avseeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Larina</surname>
          </string-name>
          ,
          <article-title>Development of a learning algorithm for a mobile robot in order to detect obstacles in a confined space (Razrabotka algoritma obucheniya mobil'nogo robota v tselyakh obnaruzheniya prepyatstviy v zamknutom prostranstve</article-title>
          ),
          <source>Voronezh State University of Engineering Technologies</source>
          <volume>79</volume>
          (
          <year>2017</year>
          )
          <fpage>65</fpage>
          -
          <lpage>67</lpage>
          . doi:10.20914/2310-1202-2017-3-65-67.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Yudintsev</surname>
          </string-name>
          ,
          <article-title>Synthesis of a neural network path planning system for a group of mobile robots (Sintez neyrosetevoy sistemy planirovaniya trayektorii dlya grupp mobilnykh robotov)</article-title>
          ,
          <source>Systems of Control, Communication and Security</source>
          (
          <year>2019</year>
          )
          <fpage>163</fpage>
          -
          <lpage>186</lpage>
          . doi:10.24411/2410-9916-2019-10406.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Blum</surname>
          </string-name>
          ,
          <source>Exploring Arduino®: Tools and Techniques for Engineering Wizardry</source>
          , Wiley,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Antipov</surname>
          </string-name>
          ,
          <article-title>Robot platform RedBoard (Robot-platforma RedBoard)</article-title>
          ,
          <year>2015</year>
          . URL: https://cxem.net/uprav/uprav66.php.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>V.</given-names>
            <surname>Petin</surname>
          </string-name>
          ,
          <article-title>Designing with an Arduino controller (Proyekty s ispolzovaniyem kontrollera Arduino)</article-title>
          , BHV-Peterburg,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Bista</surname>
          </string-name>
          ,
          <article-title>Understanding and design of an Arduino-based PID controller</article-title>
          ,
          <year>2016</year>
          . doi:10.25772/790F-JP22.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>