<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Information Control Systems &amp; Technologies, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Feeling artificial intelligence. Attention engine borrowed from living beings model</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anatolii Kargin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetyana Petrenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ukrainian State University of Railway Transport</institution>
          ,
          <addr-line>Feuerbach sq., 7, Kharkiv, 61050</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>23</volume>
      <issue>25</issue>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
<p>Feeling AI (FAI), a kind of Hybrid AI intended to fill the niche of services provided by robots, is discussed. Using Hybrid AI based on symbolic and machine-learning techniques combined with an "Emotional Shell" and a pretrained set of different skills is problematic because of the limited computing facilities available to robots operating autonomously in the real physical world. In contrast to the symbolic approach, this article proposes to set FAI up on the cognitive response-making model borrowed from living beings, which demonstrate smart behavior under lack of information using the simplest nervous-net facilities. Based on findings of neurobiology and cognitive science about the behavior of hydra, bees, and creatures with higher nervous organization, the attention mechanism model supporting response making is shown. The interaction of four cognitive models, perception, drive, emotion, and attention, when the cognitive engine processes knowledge presented by a set of independent response prototypes, is given. The possibilities of the borrowed model, formalized on the fuzzy Certainty Factor (CF), are discussed on a robotic application.</p>
      </abstract>
      <kwd-group>
        <kwd>Feeling artificial intelligence</kwd>
        <kwd>model borrowed from living beings</kwd>
        <kwd>cognitive model</kwd>
        <kwd>perception</kwd>
        <kwd>emotion</kwd>
        <kwd>attention</kwd>
        <kwd>robotic</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        resources, and code pertaining to the skills that robots have already been taught [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. GenAI claims a major role in supporting services given by robots, too. Besides emotional collaboration with humans, GenAI is going to do reasoning. For this, the robot inference system uses symbolic knowledge about semantic relationships between objects in an image, basic common sense, and so on. Such AI, with a symbolic decision-making engine and a pre-trained set of different skills in the physical world supported by a foundation model and an "Emotional Shell", is one type of Hybrid AI [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Using this approach to design FAI for an autonomous IM is limited by its symbolic decision-making model and large language model [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Along with this, neuroscientific study of animals relies on another approach to provide insight into the basic psychological mechanisms shaping behavior and to reveal the role of cognitive functions in these processes [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. These findings cannot be applied readily [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Nevertheless, setting FAI up on a model of the cognitive decision-making mechanism is in demand for two reasons. Firstly, functioning under certain conditions in autonomous mode without Internet support is required from modern robots [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13-15</xref>
        ]. Secondly, there are many small robot applications implemented on low-power computer facilities which cannot run software using a large language model or a symbolic engine with common-sense knowledge bases. Both issues may be overcome by an approach that designs Hybrid AI for robots based on models borrowed from the lowest hierarchical levels of the cognitive architecture of living beings. In [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ], a hypothetical neuronal architecture of the mammalian brain, where cognitive processes are divided over different layers, is proposed. The architecture is used by neuroscientists to explain the goal-directed behavior of animals as a product created by cognitive processes. It has a distinguishable division into three layers: reactive, adaptive, and contextual. The reactive layer, independent of the upper ones, plays both roles: it supports the basic functionality and generates signals that drive, modulate, and engage the higher control layers. A reactive-layer blueprint of FAI was proposed in [
        <xref ref-type="bibr" rid="ref16 ref9">9, 16</xref>
        ]. The main principles of the blueprint organization have been borrowed from a living being with the simplest nervous net (hydra), whose behavior is sufficiently well studied and published by scientists in neurobiology [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ]. The basic function of the reactive-layer blueprint, response making, is provided by cognitive processes such as drive, emotion, attention, and perception. Adaptation of some of these processes was described in [
        <xref ref-type="bibr" rid="ref16 ref28 ref9">9, 16, 19, 29, 31</xref>
        ]; this paper is dedicated to the attention function.
      </p>
      <p>In this paper, an attention model borrowed from living beings, as one of the functions of the lower reactive layer of the cognitive structure, is proposed for implementation in a new Hybrid AI model. Section 2 presents the problem discussion. Section 3 describes three models of attention borrowed from living beings belonging to three levels of development: hydra with the simplest nervous net, bee with a more complex nervous system, and mammals with a developed mechanism for processing data from sensors of different modalities. In Section 4, an experiment with the robot layer blueprint is discussed. The step-by-step procedure by which the robot, when making a decision, uses a sophisticated attention mechanism that implements the scrutiny of the environment to receive the lacking information needed to continue its motion is described.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem Discussion</title>
      <p>
        The perception system of the FAI blueprint, based on the cognitive perception model introduced earlier in [
        <xref ref-type="bibr" rid="ref16 ref17 ref18">16-19</xref>
        ], relies on a passive method of acquiring data from sensors. The perception system computes the meaning of the current situation in which the IM is. To do this, data from sensors about the IM environment are first granulated and mapped into primary verbal definitions, and then generalized on the basis of domain knowledge presented by experts in the form of natural-language word meanings. The model takes into account the main features of wildlife perception systems. First, at each moment of time, the meaning is calculated not of the complete situation but of some fragment of the IM environment allocated by the attention mechanism. The meaning of the complete situation is formed sequentially by moving the attention focus from one fragment of the environment to another. Second, the sequentially formed description of the meaning of the complete situation is supported by one more cognitive mechanism: data aging. Thanks to this mechanism, the confidence that the calculated meaning of the situation corresponds to the real situation at the current time is adjusted by aging of certainty over time. In [19] it is shown how the certainty of environment states depends on aging of sensor data over time and how it impacts IM decision-making risks in dynamic situations. If data aging is not taken into account, the meaning of the complete situation formed by a time sequence of its fragments may not correspond to reality, because fragments of the situation observed long ago may have gone. In this paper we discuss directly the attention mechanism model borrowed from living beings and how it should be implemented in the FAI architecture.
      </p>
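The data-aging mechanism described above can be sketched in a few lines. This is a minimal illustration under assumptions: the exponential-decay law and the half-life parameter are ours for demonstration, not necessarily the aging law used in [19].

```python
# Sketch of certainty aging: the CF of an observed situation fragment
# decays toward zero as time passes since the observation.
# Assumption: exponential decay with a half-life; [19] may use another law.

def aged_cf(cf0: float, elapsed: float, half_life: float) -> float:
    """Certainty Factor of a situation fragment, decayed over elapsed time."""
    return cf0 * 0.5 ** (elapsed / half_life)

# A fragment observed 10 s ago with cf = 0.9 and a 10 s half-life now
# contributes only 0.45 to the meaning of the complete situation.
```

If aging is ignored (half_life made effectively infinite), stale fragments keep full certainty, which is exactly the decision-making risk the text warns about.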
      <p>
        The cognitive response-making model borrowed from living beings, relying on the findings of scientists in neurobiology and cognitive psychology, underpins this work. A hypothetical neuronal architecture of the mammalian brain, where cognitive processes are divided over different layers, is proposed in [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. The basic functionality of the reactive layer is provided by sensorimotor responses and organized as a set of independent Response Prototypes (RPs) that support the needs of living beings. Such organization has permitted a primary step toward creating the FAI bottom-layer model, namely the reactive-layer blueprint, which is the base of autonomous IM behavior [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. To clarify the contribution and role of cognitive functions in response making, a living being with the simplest nervous net, whose behavior is sufficiently well studied and published by scientists in neurobiology, has been chosen. Hydra is the simplest of all living beings having a nervous system that controls its behavior. Instead of a brain, hydra have the most basic nervous system in nature, a nerve net in which neurons spread throughout the body [20, 21], which makes more or less complex behavior possible. A no less important argument for choosing hydra as the object from which to borrow the response-making mechanism is the availability of sufficiently complete information about the dependence of behavior on the activities of neurons, sensors, inner states, and epitheliomuscular cells. Today, extensive information has been accumulated about experiments with hydra [
        <xref ref-type="bibr" rid="ref12 ref20 ref21">12, 21, 22</xref>
        ].
      </p>
      <p>Studies of hydra behaviors gathered from different scientific sources have been summarized by us and presented as a structured set of its RPs. We have introduced the behavior elementary unit as a solitary RP (Fig. 1) and presented behavior over a time interval as a sequence of these units, named a compound RP.</p>
      <p>The solitary RP has been proposed in the form of a triple: Sensory Template (ST), Response-Making unit (RM), and Action Template (AT). The ST has been defined on descriptions of the environment situation or inner states, the AT has been defined on a set of epitheliomuscular tissues (actuators), and the RM presents the function of the neuron net connecting the ST with the AT. In our model, the set of all RPs reported by neuroscientists has been presented by solitary RPs and divided into subsets according to the kinds of needs of the living being. We distinguished at least the following four needs of hydra: self-preservation (e.g., polyp contraction under threat), feed, scrutiny, and comfort. The full list of behavior prototypes includes some more solitary RPs besides those mentioned above. The reactive layer of FAI as a set of independent RPs divided into k+1 subsets has been proposed. The lth subset includes only the RPs which serve the lth need. In fact, there is no real partition into subsets; it is done to show that the response-making engine, when processing the ith RP belonging to the lth subset, uses the state local vectors associated with the lth need (Fig. 1). But RPs are processed asynchronously and independently of one another: simultaneously, gradually, and independently, the Certainty Factors (CFs) of all three components of an RP are tuned, using feedback through emotion about the degree of need satisfaction. Attention plays the major role in supporting autonomous response making in conditions of uncertain situations and incomplete knowledge.</p>
      <p>
        The ST of an RP (Fig. 1) is a structured set of Knowledge Granules (KGs). It presents knowledge about what the ST is and how this template should be computed. The mathematical foundation of presenting and processing a KG is the CF [
        <xref ref-type="bibr" rid="ref16 ref9">9, 16</xref>
        ]. The value cfKGitempl(t) of the CF shows the degree of matching of the sensory data at the current time t to the meaning of the KG. The STs of all RPs, assembled together, form the KB of the FAI perception system as a holistic hierarchical structure in which there are no duplicate KGs. Nevertheless, the meaning of the separate templates is preserved, and they remain independent of one another. At the zero level of the KB, the KGs present data directly from sensors. At the first, second, and higher levels, the KGs present knowledge about the states of different parts of the environment. Moreover, a high-level KG can define the situation by using KGs not only of the zero level but of different lower levels, which give the sense of fragments of the surroundings of the IM. Thus, as the KB has a multi-level hierarchical structure of KGs presenting knowledge about environment states, the major function fabstract of the abstraction engine is, continuously in real time (t), processing data from sensors (x) and computing the degrees cfKGitempl(t) of all KGs (templates) matched with the current state of the environment. The algorithm realizing fabstract, namely, computing the fuzzy CFs of all N STs in (1), is given in [
        <xref ref-type="bibr" rid="ref16">16, 19</xref>
        ].
cfKGievent(t) = f1attent(cfKGitempl(t), cfatten_field(t)),
cfatten_field(t) = f2attent(cfemotion(t), cfKGtempl(t)),
cfKGitempl(t) = fabstract(x, KB), i = 1, 2, ..., N
(1)
      </p>
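The data flow of (1) can be sketched as follows. Only the composition order (fabstract feeds f2attent, which feeds f1attent) comes from (1); the concrete combination operators (product and min) and the toy templates are illustrative assumptions, not the functions defined in [16, 19].

```python
# Data flow of (1): f_abstract -> f2_attent -> f1_attent.
# Assumption: product for field strength, min for event CF; the paper's
# concrete operators may differ.

def f_abstract(x, kb):
    """Degrees of matching sensor data x to each of the N sensory templates."""
    return [template(x) for template in kb]

def f2_attent(cf_emotion, cf_templ):
    """Strength of the attention field on a template (assumed: product)."""
    return cf_emotion * cf_templ

def f1_attent(cf_templ, cf_field):
    """CF of the event 'template matched under attention' (assumed: min)."""
    return min(cf_templ, cf_field)

kb = [lambda x: x, lambda x: 1.0 - x]            # two toy templates
cf_templ = f_abstract(0.8, kb)                   # template-match degrees
cf_field = [f2_attent(0.5, c) for c in cf_templ] # attention-field strengths
cf_event = [f1_attent(t, f) for t, f in zip(cf_templ, cf_field)]
```

An event message cfKGievent(t) is emitted only as strongly as both the template match and the attention field allow, which mirrors how focus gates perception in the model.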
      <p>The attention mechanism implements two functions. The first, f1attent, computes the CF of the KG (ST) being under the attention field focus and sends messages cfKGievent(t) to the response-making KG (Fig. 1) when the sensor data match the ST. The second function, f2attent, manages the movements of the attention field in the space of all STs, decides which ST will be next under the attention focus, and computes the strength of the field's impact cfatten_field(t) on it.</p>
      <p>The purpose of this article is to explain both attention functions above and to examine them on examples of IM tasks. Before proceeding to the formal attention model, we will focus on examples of living-being behavior from which the principles to be borrowed into the attention model are drawn. Then, we will discuss the implementation of these principles in formal models of both attention functions f1attent and f2attent and will demonstrate their use when the IM does response making.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Attention models</title>
    </sec>
    <sec id="sec-4">
      <title>3.1. Turning on attention</title>
      <p>
        Living beings and IMs can use two kinds of sensors: the first perceives properties of the environment directly on the body, and the second perceives properties located at a distance from the body. For example, thermoreceptors perceive temperature at points of the environment distributed over the body surface, while an ultrasonic sensor obtains data from remote points of space. Although hydra has no sensors of the latter kind, it is capable of receiving information from remote points by moving parts of its body, namely, the tentacles. The attention functions f1attent and f2attent in (1) do this, as they manage the moving of the attention focus over different locations of the environment. When attention has been focused on one of them, the abstraction function fabstract in (1) answers the question "Do the sensor data in this location match any ST?" So, the scrutiny need is serviced by the attention mechanism and heads to find the extra information that satisfies an RP of another need. Below in this subsection, the condition for the appearance of the scrutiny need that launches attention is discussed. To shape the required behavior, a need should have the capability to exhibit itself, for example, as some source of nervous activity of the living being. In our model, the drive playing this role is presented by a KG which can be in different states, from excited to inhibited. Examples of hydra show different types of inner processes whose low level causes excited and whose high level causes inhibited drive states of the feed need. The transition between states is a gradual process. The scrutiny need drive has another mechanism of excitation. It rests on the assumption that hydra, despite its simplest neural net, has a mechanism of emotions. So, long-term unsuccessful hunting of a hungry hydra causes, firstly, not hunting but scrutiny behavior. The same conclusion was reached about other needs. These observations can be explained on the basis of the mechanism of emotions [
        <xref ref-type="bibr" rid="ref16 ref9">9, 16</xref>
        ]. The drive of some need is growing, but none of the RPs serving this need is triggered, because the vague or incomplete sensory data do not match the STs of these prototypes. In this case, the energy of negative emotion (frustration) is, firstly, directed to lighten the matching of the vague or incomplete sensory data with the STs of the RPs belonging to this need. Secondly, when for a long time the previously mentioned emotion impact gives no positive effect (the drive's excitation level does not fade down despite the help of emotions), the accumulated energy of negative emotion increases the excitation level of the drive of another need, first of all, the scrutiny need. In this case the environment should be explored more carefully by the RPs of the scrutiny need, with the goal of finding the location of a situation fragment that matches the ST of at least one RP belonging to the unsatisfied need. Thus, the scrutiny need acts as an environment explorer managed by attention, helping to overcome incompleteness of information.
      </p>
    </sec>
    <sec id="sec-5">
      <title>3.2. Attention management principle borrowed from hydra</title>
      <p>When hydra explores the environment, it regularly repeats a compound RP consisting of three behavior steps. At the first step, body elongation to maximum length is realized. Then the hydra nods. And at the third step it slowly extends its tentacles. It is worth noting that the direction of nodding at the second step differs from that executed at the same step previously; the direction of nodding is changed consistently from one implementation of the nodding action to the next. This fact can be elucidated on the basis of the hypothesis that hydra owns a mechanism of attention which manages the order in which the RPs are applied to explore the environment. Apparently, such behavior expands the space around the hydra where prey may be and therefore increases the likelihood of successful hunting. When the scrutiny need actions give a positive result (prey has been found and touched by a tentacle), the RPs of the feed need begin to be carried out, the first being the RP "Firing into the prey by the tentacle's nematocysts". This success ceases the negative emotion, and the scrutiny need drive fades. Otherwise, negative emotion grows, which entails an increasing activation level of the scrutiny need drive, and, consequently, the strength of the attention field on the next three-step cycle of scrutiny behavior will increase, too. The elongation, nodding, and extending actions will be more impressive and cover more space around the hydra, which increases the chance of successful hunting. In this way, both the scrutiny need RPs and the attention mechanism help to find a matching RP. So, the following set of RPs of the scrutiny need has been proposed: body elongation (Pr1), nodding forward (Pr2), nodding left (Pr3), nodding right (Pr4), nodding back (Pr5), and extending tentacles (Pr6), and also two compound RPs: somersaulting (Pr7) and looping (Pr8). For discussion of the major idea, it is enough to consider only the four prototypes of nodding in four different directions and the two compound prototypes of moving in space, namely, looping and somersaulting. In Fig. 2, Pr1, Pr2, and so on are identifiers of the relevant prototypes listed above.</p>
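The three-step scrutiny cycle with a growing attention field can be sketched as follows. The step order, the changing nod direction, and the field growth on failure come from the text; the initial field strength, the increment of 0.1, and the success predicate are illustrative assumptions.

```python
# Hydra scrutiny cycle: elongate (Pr1) -> nod in a new direction -> extend
# tentacles (Pr6). Each unsuccessful cycle strengthens the attention field.
# Assumptions: initial strength 0.2, increment 0.1, success given externally.
NODS = ["Pr2 forward", "Pr3 left", "Pr4 right", "Pr5 back"]

def scrutiny_cycles(prey_found_at_cycle: int):
    field = 0.2                              # initial attention-field strength
    log = []
    for cycle in range(prey_found_at_cycle + 1):
        nod = NODS[cycle % len(NODS)]        # direction differs from last cycle
        log.append(("Pr1 elongate", nod, "Pr6 extend tentacles", round(field, 2)))
        if cycle == prey_found_at_cycle:
            break                            # feed-need RPs take over
        field += 0.1                         # negative emotion grows the field
    return log
```

Running scrutiny_cycles(2) shows three cycles whose field strength grows 0.2, 0.3, 0.4 before the feed need takes over, mirroring the widening search described above.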
      <p>In contrast to other needs, the RPs (Fig. 1) maintaining the scrutiny need are of endosensing type. This means that when no RP matches the current environment state, there is consequently nothing outside to trigger an action. In this case, some internal process is required that activates an action. In hydra this role is played by specific intrinsic processes, for example, the concentration of molecules or ions of a certain type or the electrical signals of neurons. These intrinsic processes have the nature of a field which affects the neurons of the living being. Thus, the base concept of our model is the attention field, which triggers the special internal Sensitive Elements (SEs) carrying out the function of the ST in an endosensing RP (Fig. 1). In Fig. 2, the SEs of endosensing RPs are denoted by the numbers 1, 6, 9, and 10 inside circles. Two of them, with identifiers 9 and 10, depict SEs presenting generalized knowledge about kinds of nodding (9) and movement (10) types. The other circles with numbers 2, 3, 4, 5, 7, and 8 stand for KGs of exosensing RPs. Like the ST of an exosensing RP, an SE represents knowledge only about the internal situation that can trigger the RP. However, in contrast to KGs presenting an ST, an SE has two particularities. First, it gets an excited state when both of the following conditions are satisfied: the SE is under the attention focus and the internal situation matches this SE. Second, when an SE under the attention focus gets an excited state, this shifts the attention field in such a way that the focus is located onto the nearest neighbor, as shown in Fig. 3, where the focus covers the next nearest SE with number 6 (Fig. 3.b).</p>
      <p>
        Fig. 3.c shows the next case, when the attention focus shifts further onto the SE with number 6 after SE 9 begins to match the situation and gets the arousal state. Our conceptual model of attention is based on data collected in [
        <xref ref-type="bibr" rid="ref18 ref19 ref20">18, 20, 21</xref>
        ]. Several intrinsic periodical processes have been used to explain the attention phenomena. One process, with a long period of approximately T1 = 5-6 min, excites the scrutiny drive (Dr), which launches an attention field (Fig. 2). The other, short periodical process, with a period of approximately T2 = 0.5-1.0 min, excites the SEs with numbers 1, 6, 9, 10. Fig. 2 shows the first stage of the scrutiny need processing, when its drive has gotten the arousal state, which causes focusing of attention around SE 1. When the short intrinsic periodical process excites the inputs of all SEs, only one of them, the 1st SE, takes the excited state, as it is under the attention focus. Ultimately, the RP with name Pr1 takes the arousal state, and the attention focus shifts to the 9th SE (Fig. 2). When, after time T2, the next exciting impetus of the short intrinsic periodical process is generated, only the 9th SE gets the arousal state, because both conditions have been satisfied: it is under the attention focus and waiting for excitation of its input. The arousal state of the 9th SE excites the inputs of the KGs with identifiers 2, 3, 4, 5 and, besides this, shifts the attention focus further onto the 6th SE, deciding whose turn it is among the 2nd, 3rd, 4th, or 5th.
      </p>
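The focus-walking rule described above can be sketched compactly: an SE fires only when it is under the focus and the intrinsic impetus arrives, and firing shifts the focus onward. The hops 1 to 9 and 9 to 6 follow the text; the remaining neighbour order in the dictionary is an assumption for a closed cycle.

```python
# Attention focus walking over sensitive elements (SEs), as in Fig. 2.
# An SE fires only if (a) it is under the focus and (b) the short intrinsic
# periodical process excites its input; firing shifts the focus onward.
# Hops 1->9 and 9->6 are from the text; 6->10 and 10->1 are assumed.
NEXT_SE = {1: 9, 9: 6, 6: 10, 10: 1}

def run_impetuses(start_focus: int, impetuses: int):
    focus, fired = start_focus, []
    for _ in range(impetuses):      # each impetus excites ALL SE inputs...
        fired.append(focus)         # ...but only the focused SE gets excited
        focus = NEXT_SE[focus]      # excitation shifts the focus onward
    return fired, focus
```

Three impetuses starting at SE 1 fire the SEs 1, 9, 6 in order and leave the focus on SE 10, matching the step-by-step walk in the paragraph above.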
    </sec>
    <sec id="sec-6">
      <title>3.3. Attention management principle borrowed from honey bees</title>
      <p>
        When honey bees search out the way to flower patches, the attention mechanism helps in the same way as it did for hydra. In contrast to the hydra, the structure of SEs specifying the order of actions is not constant; it is created each time by memorizing the waggle dance of scout bees [
        <xref ref-type="bibr" rid="ref22 ref23">23, 24</xref>
        ]. The scout bees provide the following information by means of the waggle dance: the quality of the food source, the distance of the source from the hive, and the direction of the source. A scout bee runs in a straight line whose direction relative to the vertical on the hive indicates the direction to the food source. The duration of the dance indicates the distance to the food source [
        <xref ref-type="bibr" rid="ref22">23</xref>
        ]. So, the waggle dance creates the structure of SEs (Fig. 4).
      </p>
      <p>The route from the hive to the flower patches is divided into two segments. The first segment heads the bees in a direction at an angle of 30° in relation to the direction to the sun. This direction the bees have to keep during two flight-time intervals; the two sensitive elements SE1 and SE2 of the knowledge structure in Fig. 4 indicate this. After this, the direction of flight is changed, so that the bees have to move at 60° in relation to the sun during three time intervals. In Fig. 4, the scrutiny need is activated by the emotion produced by the forager bee's need to carry out its mission. When the next impetus of the intrinsic periodical process excites the inputs of all SEs, only SE 1 will get the excited state, because it is under the attention focus. SE 1 belongs to the RP with identifier 1, whose action template sets the mode of motion; the attention field focus has then been shifted from SE 1 to SE 2. After a time period, the next impetus of the intrinsic periodical process excites SE 2 and shifts the attention field onto SE 3. SE 2 belongs to the same RP 1 with the same action, keeping the previous mode of motion. So, both examples above show how attention shapes the scrutiny behavior of hydra and bee.</p>
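The compilation of a waggle dance into a transient SE chain can be sketched as follows; the two-leg route (30° for two intervals, then 60° for three) is the example from the text, while the flat list representation of the SE chain is our assumption.

```python
# Compile a memorized waggle dance into a transient chain of SEs (Fig. 4).
# Each leg (angle to the sun, number of flight-time intervals) unrolls into
# one SE per interval; the SE order is the route the attention focus walks.

def dance_to_se_chain(legs):
    chain = []
    for angle, intervals in legs:
        chain.extend([angle] * intervals)   # one SE per time interval
    return chain

# Example from the text: 30 deg for two intervals, then 60 deg for three.
route = dance_to_se_chain([(30, 2), (60, 3)])   # [30, 30, 60, 60, 60]
```

Each intrinsic impetus then excites the next SE in the chain, exactly as in the hydra example, so the same focus-walking machinery reuses a freshly built structure.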
    </sec>
    <sec id="sec-7">
      <title>3.4. Attention management principle borrowed from mammals</title>
      <p>
        Mammals have a more developed attention mechanism maintaining dynamic STs. It processes data from sensors and creates a dynamic process whose characteristics match the dynamic ST. Essentially, attention builds a footprint of ordered events and compares it with the ST [
        <xref ref-type="bibr" rid="ref24 ref25 ref26">25-27</xref>
        ]. Below, that function is considered on a simplified example of the vision modality, when simple geometrical shapes in black are reflected on the retina of the animal eye [
        <xref ref-type="bibr" rid="ref26">27</xref>
        ].
      </p>
      <p>One of the possible approaches to presenting such a dynamic ST is as follows. After detecting a spot, the eye fovea bypasses the shape along the black-and-white boundary. To model this, we assume that a mobile Inner Sensitive Matrix (ISM) with n x m sensors is moving along the spot boundary and produces path data in the form of turning angles and lengths of the path sections (Fig. 5).</p>
      <p>
        When the boundary of the contour is detected, the ISM movement is under the control of a feedback system based on data about the deviation of the center of the ISM from the boundary. For correction of deviations from the boundary of the figure, a PID control algorithm was used [
        <xref ref-type="bibr" rid="ref27">28</xref>
        ]. As shown in Fig. 5, the control algorithm supports the movement in such a way that the right three columns of the ISM are located above the darkened surface (inside the figure), and the left three columns of sensors are outside the figure. Of the possible strategies, the simplest one was adopted: a clockwise bypass.
      </p>
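The feedback correction can be sketched with a textbook PID loop; the gain values here are illustrative assumptions, not the tuning used in [28].

```python
# Sketch of the PID correction keeping the ISM centre on the black/white
# boundary during the clockwise bypass. Gains are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def correct(self, deviation):
        """deviation: signed offset of the ISM centre from the boundary."""
        self.integral += deviation * self.dt
        derivative = (deviation - self.prev) / self.dt
        self.prev = deviation
        return (self.kp * deviation
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.1, kd=0.05)
# Each step, the ISM is steered by -correction to reduce the deviation,
# keeping three sensor columns inside and three outside the figure.
```

Deviations the controller absorbs stay below the 15° granulation step, which is why small missteps never appear as turn events in the generated footprint.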
      <p>During movement the ISM generates a sequence of events presenting the changes of motion (angles of turn and distances traveled to the turn). In the example, for demonstration purposes, the accuracy of approximation of the boundary is adopted as 15 degrees. In Fig. 6 and Fig. 7, grey-shaded circles graphically show the zero-level KGs representing data from the ISM. The identifiers inside the circles are numbers corresponding to the angles of turn (in the left part of the figures) and distances (the four right KGs). The maximum angle of turn is 180° to the right. A right or left turn in the range [-7.5°, +7.5°] is regarded as one KG with identifier 0.0. Turns to the right in the range [+7.5°, +22.5°] are described by KG 0.15, and so on up to 0.180. As the ISM control algorithm corrects missteps within [-7.5°, +7.5°], these deviations are not treated as turns. In Fig. 6 and Fig. 7, the SEs of the zero level are given only for the right turn, for clarity. The distances traveled are divided into four granules corresponding to 1, 2, 3, and 4 units of distance; in Fig. 6 and Fig. 7 the zero-level KGs 0.1, 0.2, 0.3, 0.4 represent knowledge about these distances. The KGs of the upper levels present generalized knowledge about fragments of the path traveled by the ISM. For example, at the first level the KG with the identifier 1.l represents generalized knowledge about an arbitrary path length (it does not matter which of the four possible lengths it is). In Fig. 6, three third-level KGs with identifiers 3.l_90, 3.l_120, and 3.l_150 represent knowledge about fragments of the ISM path. For example, the sense of KG 3.l_90 is "a turn right of 90° is made after moving along a straight section of arbitrary length".</p>
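The 15° granulation into zero-level KG identifiers can be sketched as a simple quantizer; the function name and the clamping of out-of-range angles are our assumptions, while the bucket boundaries follow the text.

```python
# Quantisation of ISM right-turn angles into zero-level KG identifiers:
# [-7.5, +7.5) -> 0.0 (a PID-corrected misstep, not a turn),
# [+7.5, +22.5) -> 0.15, ... up to 0.180, in 15-degree granules.

def turn_kg(angle_deg: float) -> str:
    """Zero-level KG identifier for a right-turn angle (sketch)."""
    bucket = int((angle_deg + 7.5) // 15) * 15   # nearest 15-degree granule
    bucket = max(0, min(180, bucket))            # clamp to [0.0 .. 0.180]
    return f"0.{bucket}"
```

For instance, a 3° wobble maps to KG 0.0 and is invisible to the upper levels, while a 92° corner maps to KG 0.90, which feeds third-level KGs such as 3.l_90.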
      <p>In Fig. 6 and Fig. 7, colored circles show the SEs which may be under the attention field focus and are used by the attention mechanism when it activates the state of the ST of a certain RP. Colored arrows between these SEs show the direction of movement of the attention focus when the corresponding SE receives activation of its input from the intrinsic process. In contrast to the attention models of the simplest living beings (hydra and bee in Fig. 2 and Fig. 4), the role of signals from the unknown intrinsic periodical process is played by events generated by KGs.</p>
      <p>In Fig. 6, the rectangle figure is represented by only four identical KGs 3.l_90, and the bypass of the rectangle figure by the ISM is reflected in the bypass by the attention focus of the SEs 1-1, 2-1, 3-1, 4-1. Their activated states correspond to activation of the ST presenting the rectangle figure (rect). Similarly, the bypass of a right triangle figure with angles of 90°, 30°, and 60° is represented by the sequence of the three KGs 3.l_90, 3.l_120, and 3.l_90, which activates the structure of the three SEs 1-2, 2-2, 3-2, shown in green, by the attention mechanism. The same reasoning may be carried out about the equilateral triangle with angles of 60°, represented by a structure of three SEs 1-3, 2-3, 3-3, shown in blue. Fig. 7 shows the knowledge example used by the attention mechanism when it is necessary to recognize visual figures with different lengths of straight segments. Two figures are used: a rectangle with side lengths of 1 and 2 units, and a rectangular triangle with angles of 90°, 30°, and 60° and side lengths of 1, 2, and 3 units. The structure of the four SEs 1-4, 2-4, 3-4, 4-4, highlighted in purple, describes the rectangle. The structure of the three SEs 1-5, 2-5, 3-5, highlighted in orange, describes the triangle. At the second level of the knowledge structure the SEs, for example 2.l_90, have the sense of a sequence of two SEs of the zero level, 0.90 and 0.1. Nothing else distinguishes this structure from the one in Fig. 6.</p>
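The matching of a footprint of (turn, length) events against shape STs can be sketched as follows. The two templates encode the 1x2 rectangle from Fig. 7 and an equilateral triangle (interior angle 60°, hence exterior turn 120°); the cyclic-shift comparison, accounting for an arbitrary bypass start point, is our assumption.

```python
# Matching a footprint of (turn_deg, length) events against shape STs.
# Templates: 1x2 rectangle (four 90-degree turns) and an equilateral
# triangle with side 2 (three 120-degree exterior turns). Assumption:
# the bypass may start at any corner, so compare under cyclic shifts.

SHAPES = {
    "rect":   [(90, 1), (90, 2), (90, 1), (90, 2)],
    "eq_tri": [(120, 2), (120, 2), (120, 2)],
}

def recognise(footprint):
    """Return the shape whose ST matches the footprint under cyclic shift."""
    for name, st in SHAPES.items():
        if len(st) == len(footprint):
            rotations = [footprint[i:] + footprint[:i]
                         for i in range(len(footprint))]
            if st in rotations:
                return name
    return None
```

A bypass started at a different corner of the rectangle, e.g. [(90, 2), (90, 1), (90, 2), (90, 1)], still activates the rect template, which matches the idea that the SE structure, not the starting point, carries the shape's meaning.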
    </sec>
    <sec id="sec-8">
      <title>4. Experiments with appliance of attention mechanism in FAI-supported robot</title>
      <p>
        The experiments are devoted to studying the cognitive mechanisms of attention and are illustrated on the example of a warehouse robot (co-bot [
        <xref ref-type="bibr" rid="ref28">29</xref>
        ]), namely a fragment of the knowledge required for safe crossing of an unregulated intersection to continue moving along a given warehouse route. When the co-bot is on the entrance road of the intersection, in order to continue motion according to its plan it must recognize the situation. The FAI cognitive perception system of the co-bot includes, in addition to other sensors, a video camera mounted on a rotary platform. The different routes along which the co-bot may continue motion are shown by signs having different shapes. Experiments were carried out with five routes, hence five different signs have been used: an equilateral triangle with sides 2x2x2, a rectangular triangle with 1x2x2.2, a triangle with 1-2-3, an equilateral rectangle with 2x2, and a rectangle with 2x4. At the intersection, the black-shaded figures of the signs were fixed on the front, left, and right white walls. The co-bot, stopped at the intersection line, can locate them and capture their picture with the video camera. The co-bot mission is cargo delivery along a given route. In this paper we describe one step of the whole plan of mission implementation, namely, response making when the co-bot has arrived at the intersection and figures out the action required to continue motion along its route. If the sign corresponding to the given route is on the right wall, the co-bot must turn right and continue moving in that direction. If the required sign is on the left wall, the co-bot must make a left turn, and it must move straight when the sign is on the front wall. The knowledge required to carry out this assignment includes the set of RPs which shape the co-bot reactions and the structure of SE-KGs managing attention, as shown above. The set of RPs, for example, for one actual route marked by rect (a rectangle, regardless of side length) includes only three prototypes, an example of which is shown in (2) in the form of a fuzzy rule. In (2), CF_rout_rect, CF_response_making_mission, CF_direc_left, and CF_action_left are linguistic variables universally defined on the universe of CF [-1.0, +1.0] with three terms: high, low, and zero [19]. Rule (2) presents an RP with the structure of Fig. 1, where the ST is presented by the first three components of the IF part of the rule, the RM is presented by the fourth component of the IF part, and the AT is presented by the THEN part of the rule.
      </p>
      <p>Ri: IF event(rect) and CF_rout_rect is high and CF_direc_left is high
and CF_response_makin_mission is high THEN CF_action_left is high (2)</p>
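      <p>A minimal sketch of how a rule such as (2) can be evaluated over the CF universe [-1.0, +1.0] is given below. The membership functions for the terms and the min-aggregation of the IF part are our own illustrative assumptions, not the exact definitions of the model in [19, 30].</p>

```python
# Sketch: evaluating a response prototype (RP) given as a fuzzy rule over
# Certainty Factors (CF) on the universe [-1.0, +1.0]. The membership
# functions below are assumptions for illustration only.

def mu_high(cf: float) -> float:
    """Assumed ramp membership for the term 'high' (0 below 0.3, 1 above 0.8)."""
    return max(0.0, min(1.0, (cf - 0.3) / 0.5))

def mu_zero(cf: float) -> float:
    """Assumed triangular membership for the term 'zero' around CF = 0."""
    return max(0.0, 1.0 - abs(cf) / 0.3)

def evaluate_rp(event_active: bool,
                cf_rout_rect: float,
                cf_direc_left: float,
                cf_response_making: float) -> float:
    """IF part of rule (2): event(rect) AND three CFs being 'high'.

    Returns the activation degree handed to the THEN part. The event
    component gates the rule; min is a common fuzzy-AND choice.
    """
    if not event_active:
        return 0.0
    return min(mu_high(cf_rout_rect),
               mu_high(cf_direc_left),
               mu_high(cf_response_making))

# Co-bot loaded for route 'rect', platform turned left, mission active,
# and the rectangle sign just recognized by attention:
activation = evaluate_rp(True, 1.0, 0.9, 0.85)
```

      <p>With any of the three certainties below the 'high' region the activation degrades gracefully instead of dropping to zero, which is the point of using CFs rather than crisp conditions.</p>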
      <p>
        When the co-bot is loaded with cargo marked by a delivery route, the certainty of the ST
component corresponding to the route is set, for example, CF_rout_rect (rout_rect = +1.0, tL = 0,
tR = 0), where rout_rect is the certainty, tL is the time interval elapsed since the last change of the
certainty value, and tR is the time interval elapsed since the receiving of new data. The linguistic
variable CF_direc_left in (2) is computed on the basis of sensor data about the rotary platform
position (where the video camera is aimed). The rotary platform position is changed by the attention
mechanism, and one more component of the ST in (2), namely event(rect), is directly shaped by
attention too. This means that as soon as the attention mechanism recognizes the rectangle sign in the
visual picture received from the camera, it activates the state of the SE with the name rect. This event
launches the processing of the fuzzy rules that include this component (2). The model of processing
this type of fuzzy rules is given in [
        <xref ref-type="bibr" rid="ref29">30</xref>
        ].
      </p>
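      <p>The bookkeeping described above, a certainty value carrying the two time intervals tL and tR, and rules launched when attention activates an SE state, can be sketched as follows. The names and the dispatch scheme are our own illustrative assumptions, not the model of [30].</p>

```python
from dataclasses import dataclass

@dataclass
class CF:
    """Certainty with the two time intervals used in the text:
    tL -- time elapsed since the last change of the certainty value,
    tR -- time elapsed since the receiving of new data."""
    value: float = 0.0   # certainty on the universe [-1.0, +1.0]
    tL: float = 0.0
    tR: float = 0.0

    def receive(self, new_value: float) -> None:
        """New data arrived: reset tR, and reset tL only if the value changed."""
        if new_value != self.value:
            self.value = new_value
            self.tL = 0.0
        self.tR = 0.0

    def tick(self, dt: float) -> None:
        """Advance both timers between data arrivals."""
        self.tL += dt
        self.tR += dt

class RuleEngine:
    """Launches the fuzzy rules whose IF part contains event(<name>)
    as soon as the attention mechanism activates the SE <name>."""
    def __init__(self):
        self.rules_by_event = {}          # SE name -> list of rule callables

    def register(self, event_name, rule):
        self.rules_by_event.setdefault(event_name, []).append(rule)

    def on_attention_event(self, event_name):
        """Attention recognized a sign and activated the SE state."""
        return [rule() for rule in self.rules_by_event.get(event_name, [])]

# Loading cargo for route 'rect' sets CF_rout_rect = (+1.0, tL = 0, tR = 0):
cf_rout_rect = CF()
cf_rout_rect.receive(+1.0)

engine = RuleEngine()
engine.register("rect",
                lambda: "turn_left" if cf_rout_rect.value > 0.8 else "none")
# Attention recognizes the rectangle sign on the camera picture:
actions = engine.on_attention_event("rect")
```

      <p>Keeping the rules indexed by their event component means that no rule is examined until attention has activated the corresponding SE state, which matches the event-driven processing described above.</p>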
    </sec>
    <sec id="sec-9">
      <title>5. Conclusion</title>
      <p>FAI with the base cognitive layer version, as a kind of hybrid AI model, can be implemented on
low-power computer facilities, which satisfies small robotic applications destined to autonomously
carry out their missions under a lack of information. The attention cognitive function plays a major
role in reducing the size of the application. Scrutiny of the environment under attention management
permits replacing a huge KB about robot reactions in all possible situations without loss of
autonomous response making. The attention cognitive model presented in this article, borrowed from
the simplest living beings, is integrated with other cognitive functions such as needs, drives, emotions
and perception into the response-making cognitive engine. In contrast to the symbolic approach,
where the KB has to contain prototypes for all situations to shape a response, the cognitive engine
overcomes the lack of knowledge by simultaneously and independently tuning the characteristics of
the existing set of prototypes, using feedback through emotions about the degree of need satisfaction.
The attention mechanism, as a component of this process, provides both the investigation of the
environment by changing behavior and the improvement of the possibility to match existing
templates. The cognitive engine can carry out this job in real time on the inner robot hardware
facilities, because the proposed model of computation requires only a small set of arithmetic
operations per prototype.</p>
      <p>In the future, two directions of continuing the research are planned. In theory, we will construct
the next, adaptive layer of FAI by borrowing cognitive models from a living being that has the
simplest brain, in contrast to hydra, which has none; the jellyfish will be the object under research. In
practice, we will expand the list of robot applications for confirming functionality and testing, in
order to develop general recommendations and computer-aided software for the design of FAI
apps.</p>
    </sec>
    <sec id="sec-10">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Huang</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Rust</surname>
          </string-name>
          , Artificial Intelligence in Service,
          <source>J. of Service Res., 21</source>
          <volume>2</volume>
          (
          <year>2018</year>
          )
          <fpage>155</fpage>
          -
          <lpage>172</lpage>
          . doi:
          <volume>10</volume>
          .1177/1094670517752459.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Reis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Melao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Costa</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Jorge</surname>
          </string-name>
          ,
          <source>High-Tech Defense Industries: Developing Autonomous Intelligent Systems, Appl. Sci., 11</source>
          <volume>4920</volume>
          (
          <year>2021</year>
          ). doi:
          <volume>10</volume>
          .3390/app11114920.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Huang</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Rust</surname>
          </string-name>
          ,
          <article-title>Engaged to a Robot? The Role of AI in Service</article-title>
          ,
          <source>J. of Service Res., 24</source>
          <volume>1</volume>
          (
          <year>2021</year>
          )
          <fpage>30</fpage>
          -
          <lpage>41</lpage>
          . doi:
          <volume>10</volume>
          .1177/1094670520902266.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] The state of AI in 2023: Generative AI's breakout year, McKinsey, 2023. URL: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-AIs-breakout-year.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Czerwinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mcduff</surname>
          </string-name>
          ,
          <article-title>Building an AI That Feels: AI systems with emotional intelligence could learn faster and be more helpful</article-title>
          ,
          <source>IEEE Spectrum 58 5</source>
          (
          <year>2021</year>
          )
          <fpage>32</fpage>
          -
          <lpage>38</lpage>
          . doi:
          <volume>10</volume>
          .1109/MSPEC.
          <year>2021</year>
          .
          <volume>9423818</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Guizzo</surname>
          </string-name>
          , Types of Robots. Categories frequently used to classify robots,
          <year>2023</year>
          . URL: https://robotsguide.com/learn/types-of-robots.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Levine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hausman</surname>
          </string-name>
          ,
          <article-title>The global project to make a general robotic brain</article-title>
          ,
          <source>IEEE Spectrum</source>
          ,
          <year>2024</year>
          . URL: https://spectrum.ieee.org/global-robotic-brain.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Zimmermann</surname>
            <given-names>A.</given-names>
          </string-name>
          et al.,
          <source>Emerging Tech Impact Radar: Artificial Intelligence</source>
          ,
          <year>2024</year>
          . URL: https://www.gartner.com/doc/reprints?id=
          <fpage>1</fpage>
          -
          <lpage>2HEDBY2V</lpage>
          &amp;
          <article-title>ct=240425&amp;st=sb.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kargin</surname>
          </string-name>
          , T. Petrenko,
          <article-title>Knowledge Distillation for Autonomous Intelligent Unmanned System</article-title>
          , in: W. Pedrycz, S.-M. Chen (Eds.),
          <source>Advancements in Knowledge Distillation: Towards New Horizons of Intelligent Systems, Studies in Computational Intelligence</source>
          ,
          <volume>1100</volume>
          , Springer International Publishing,
          <year>2023</year>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verschure</surname>
          </string-name>
          ,
          <article-title>Distributed adaptive control: a theory of the mind, brain, body nexus</article-title>
          ,
          <source>Biol. Inspired Cogn. Archit</source>
          . (
          <year>2012</year>
          )
          <fpage>55</fpage>
          -
          <lpage>72</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.bica.
          <year>2012</year>
          .
          <volume>04</volume>
          .005.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verschure</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pennartz</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. Pezzulo,</surname>
          </string-name>
          <article-title>The why, what, where, when and how of goal-directed choice: neuronal and computational principles</article-title>
          ,
          <source>Phil. Trans. R. Soc.</source>
          ,
          <volume>369</volume>
          (
          <year>2014</year>
          ). doi:
          <volume>10</volume>
          .1098/rstb.
          <year>2013</year>
          .
          <volume>0483</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Santina</surname>
          </string-name>
          et al.,
          <article-title>Awareness in robotics: An early perspective from the viewpoint of the EIC Pathfinder Challenge "Awareness Inside"</article-title>
          , arXivLabs, (
          <year>2024</year>
          ). doi:
          <volume>10</volume>
          .48550/arXiv.2402.09030.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] Unmanned Systems, 2024. URL: https://novatel.com/industries/unmanned-systems.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] Autonomy &amp; Uncrewed Systems. The Future of Autonomy is Human-Centered, 2024. URL: https://www.lockheedmartin.com/en-us/capabilities/autonomous-unmanned-systems.html.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Wang</surname>
          </string-name>
          , From Unmanned Systems to Autonomous Intelligent Systems, Engineering,
          <volume>12 5</volume>
          (
          <year>2022</year>
          )
          <fpage>16</fpage>
          -
          <lpage>19</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.eng.
          <year>2021</year>
          .
          <volume>10</volume>
          .007.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] -Enabled Autonomous Systems, in:
          <source>Conference Proceedings of 2022 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)</source>
          , Alamein New City, Egypt,
          <year>2022</year>
          , pp.
          <fpage>88</fpage>
          -
          <lpage>93</lpage>
          . doi:
          <volume>10</volume>
          .1109/GCAIoT57150.
          <year>2022</year>
          .
          <volume>10019235</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.R.</given-names>
            <surname>Szymanski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yuste</surname>
          </string-name>
          ,
          <article-title>Mapping the Whole-Body Muscle Activity of Hydra vulgaris</article-title>
          ,
          <source>Current Biology</source>
          ,
          <volume>29</volume>
          <fpage>11</fpage>
          (
          <year>2019</year>
          )
          <fpage>1807</fpage>
          -
          <lpage>1817</lpage>
          . doi: 10.1016/j.cub.2019.05.012.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>C.</given-names>
            <surname>Dupre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yuste</surname>
          </string-name>
          ,
          <article-title>Non-overlapping Neural Networks in Hydra vulgaris</article-title>
          ,
          <source>Current Biology</source>
          ,
          <volume>27</volume>
          (
          <year>2017</year>
          )
          <fpage>1085</fpage>
          -
          <lpage>1097</lpage>
          . doi: 10.1016/j.cub.2017.02.049.
        </mixed-citation>
      </ref>
      <ref id="ref18a">
        <mixed-citation>
          [19] in: Proceedings of 11-th International Conference "Information Control Systems &amp; Technologies" (ICST 2023), CEUR Workshop Proceedings 3513, Odesa, Ukraine, 2023, pp. 291-301. URL: http://ceur-ws.org/Vol-3513/.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.R.</given-names>
            <surname>Szymanski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yuste</surname>
          </string-name>
          ,
          <article-title>Mapping the Whole-Body Muscle Activity of Hydra vulgaris</article-title>
          ,
          <source>Current Biology</source>
          ,
          <volume>29</volume>
          <fpage>11</fpage>
          (
          <year>2019</year>
          )
          <fpage>1807</fpage>
          -
          <lpage>1817</lpage>
          . doi: 10.1016/j.cub.2019.05.012.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>K. N.</given-names>
            <surname>Badhiwala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Primack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Juliano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Robinson</surname>
          </string-name>
          ,
          <article-title>Multiple neuronal networks coordinate Hydra mechanosensory behavior</article-title>
          ,
          <source>eLife</source>
          (
          <year>2021</year>
          ). doi:
          <volume>10</volume>
          .7554/eLife.64108.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>N.</given-names>
            <surname>Firth</surname>
          </string-name>
          ,
          <article-title>Entire nervous system of an animal recorded for the first time</article-title>
          ,
          <source>NewScientist</source>
          ,
          <year>2017</year>
          . URL: https://www.newscientist.com/article/2127625-entire
          <article-title>-nervous-system-of-an-animalrecorded-for-the-first-time/</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>B.</given-names>
            <surname>Yuce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Packianather</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mastrocinque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Pham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Lambiase</surname>
          </string-name>
          ,
          <source>Honey Bees Inspired Optimization Method: The Bees Algorithm, Insects, 4</source>
          <volume>4</volume>
          (
          <year>2013</year>
          )
          <fpage>646</fpage>
          -
          <lpage>662</lpage>
          . doi:
          <volume>10</volume>
          .3390/insects4040646.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>K.</given-names>
            <surname>Von Frisch</surname>
          </string-name>
          ,
          <article-title>Bees: their vision, chemical senses, and language</article-title>
          , Comstock Publishing Associates, 2nd ed.,
          <year>1971</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25] R.M. Solso, O.H. MacLin, M.K. MacLin, Cognitive Psychology, 8th ed.,
          <source>Pearson Education Limited</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>H.R.</given-names>
            <surname>Schiffman</surname>
          </string-name>
          ,
          <article-title>Sensation and Perception: An Integrated Approach</article-title>
          , 5th ed., John Wiley &amp; Sons,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>E. B.</given-names>
            <surname>Goldstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brockmole</surname>
          </string-name>
          , Sensation and Perception, 10th ed.,
          <source>Cengage Learning</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kargin</surname>
          </string-name>
          , T. Petrenko,
          <article-title>Spatio-Temporal Data Interpretation Based on Perceptional Model</article-title>
          , in: V.
          <string-name>
            <surname>Mashtalir</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <string-name>
            <surname>Ruban</surname>
          </string-name>
          , V. Levashenko (Eds.),
          <article-title>Advances in Spatio-Temporal Segmentation of Visual Data</article-title>
          ,
          <source>Studies in Computational Intelligence</source>
          ,
          <volume>876</volume>
          , Springer, Cham,
          <year>2020</year>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>159</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -35480-
          <issue>0</issue>
          _
          <fpage>3</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [29] Lockheed Martin, The Future of Autonomy Isn't Human-Less. It's Human More, 2022. URL: https://www.lockheedmartin.com/en-us/capabilities/autonomous-unmanned-systems.html.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kargin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Petrenko</surname>
          </string-name>
          ,
          <article-title>Goal-Driving Control as a Base Model of the Feeling Artificial Intelligence</article-title>
          ,
          <source>in: Conference Proceedings of 2023 IEEE 13th International on Electronics and Information Technologies (ELIT)</source>
          , Lviv, Ukraine,
          <year>2023</year>
          , pp.
          <fpage>118</fpage>
          -
          <lpage>123</lpage>
          . doi:
          <volume>10</volume>
          .1109/ELIT61488.
          <year>2023</year>
          .
          <volume>10310904</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>