<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modelling of Grey Wolf Optimization Algorithm Using 2D P Colonies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Valenta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucie Ciencialová</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miroslav Langer</string-name>
          <email>g@fpf.slu.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luděk Cienciala</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Computer Science, Silesian University in Opava</institution>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Research Institute of the IT4Innovations Centre of Excellence, Silesian University in Opava</institution>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>P colonies are a well-established variant of P systems, a computational device based on membrane computing. One branch of P colony research focuses on a two-dimensional environment in which the agents act; for this purpose, 2D P colonies were introduced. 2D P colonies have proved to be suitable for simulating various (not only) multi-agent systems and natural phenomena, such as flash floods. However, although the agents of a 2D P colony are able to communicate via the environment by leaving special symbols in it, this may not be sufficient for simulating more complex communities of agents, for example a hunting pack of grey wolves. In this paper, we introduce a formal model of 2D P colonies with a blackboard as an extension of 2D P colonies. We also allow the agents to use simple relational operations to compare real numbers that can be stored inside special symbols. We use this model to simulate the Grey wolf optimization algorithm.</p>
      </abstract>
      <kwd-group>
        <kwd>pack algorithm</kwd>
        <kwd>P system</kwd>
        <kwd>2D P colony</kwd>
        <kwd>optimization</kwd>
        <kwd>multi-agent system</kwd>
        <kwd>blackboard</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        2D P colony (see [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) is a variant of P colonies (see [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]),
a community of agents set in a two-dimensional
environment. It is a very simple membrane system originally
derived from the P systems (see [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]).
A 2D P colony consists of a finite number of agents living in
a shared environment represented by a 2D grid of square cells.
In each cell, there is a multiset of objects. Each agent is
represented by a finite collection of objects enclosed within a
membrane. The agent has programs consisting of rules of
three types: evolution rules change the objects of the agent,
communication rules are used for interacting with the shared
environment of the agents, and motion rules move the agent.
The direction of the movement of the agent is determined by
the contents of the cells surrounding the cell in which the
agent is placed. A program can contain at most one motion
rule. One more condition is imposed to achieve the greatest
simplicity in the agent's behaviour: a moving agent cannot
communicate with the environment. So, if a program contains a
motion rule, the other rule must be an evolution rule. The
number of objects inside each agent is set by definition and
is usually very small: one, two or three.
      </p>
      <p>The agent itself has information neither about its position
nor about the positions of the other agents in the
environment. Communication between the agents is
possible only via the environment, by leaving special symbols
in it. These factors limit the 2D P colonies in
simulating more advanced communities of agents. For
remote information exchange, we give the agents the
possibility to store information on, and read it from, a blackboard.
The blackboard is a read/write device with an
unchangeable structure given by definition. The agents can change
only defined parts of the blackboard.</p>
    </sec>
    <sec id="sec-2">
      <title>Grey Wolf Optimization Algorithm</title>
      <p>The Grey wolf optimization algorithm (GWO) (see [4]) is an
already well-established meta-heuristic optimization
technique. It is inspired by the social dynamics found in packs
of grey wolves and by their ability to dynamically create
hierarchies in which every member has a clearly defined
role. Each wolf can fulfil one of the following roles:</p>
      <p>Alpha is the dominant pair; the pack follows their
lead, for example during hunts or while locating a
place to sleep.</p>
      <p>Beta wolves support and respect the Alpha pair
in its decisions.</p>
      <p>Delta wolves are subservient to Alpha and Beta
wolves, follow their orders, and control Omega
wolves. We divide them into scouts – they observe
the surrounding area and warn the pack, sentinels –
they protect the pack when endangered and
caretakers – they provide aid to old and sick wolves.</p>
      <p>Omega wolves help to filter the aggression and
frustrations of the pack by serving as scapegoats.</p>
      <p>The primary goal of the wolves is to find and hunt down
prey in their environment. The prey corresponds to the optimal
solution of the given problem. The environment is
represented by a mathematical fitness function characterizing
the problem; the value of the function at the current
position of a particular wolf represents the quality of the prey
there. The wolf with the best value is ranked as Alpha, the
second one as Beta, the third as Delta, and all the others are
Omegas.</p>
      <p>The hunting technique of a wolf pack can be divided into
5 steps:</p>
      <p>Search for the prey – wolves are attempting to find the
most valuable prey with respect to the effort required
to hunt it successfully.</p>
      <p>Exploitation of the prey – wolves are attempting to
draw attention to themselves and to separate the prey
from its herd.</p>
      <p>Encircling prey – the attempt to push the prey into a
situation from which it cannot escape.</p>
      <p>The prey is surrounded – it can no longer escape.</p>
      <p>The attack – wolves attack the weak spots of the prey
(belly, legs, snout) until it succumbs to fatigue.
Afterwards, they bring it down and crush its windpipe.</p>
      <p>The algorithm is inspired by this process and smoothly
transitions between the scouting and hunting phases. In the
scouting phase, the pack extensively scouts its
environment through many random movements so that the
algorithm does not get stuck in a local extreme, while in
the hunting phase, the influence of random movements
is slowly reduced and pack members draw progressively
closer to the discovered extreme. To maintain the
divergence between these phases, each wolf is assigned vectors
A⃗ and C⃗.</p>
      <p>A⃗ is a vector with components rand(−1, 1) · a,
where rand(−1, 1) generates a random number between
−1 and 1, and where a = 2 − 2i/imax,
while i is the algorithm's current iteration, and imax is the
maximum number of iterations.</p>
      <p>Vector A⃗ thus takes random values between −2 and 2. We can
see its impact in Fig. 1. With growing iterations, it is
more likely that its value will fall between −1 and 1, which
makes it more likely for a wolf to be hunting. Another
component supporting the scouting phase is vector</p>
      <p>C⃗ = rand(0, 2),
where rand(0, 2) generates a random number between
0 and 2. Vector C⃗ is similar to vector A⃗, but iterations do not
influence it. It helps the wolves behave more naturally:
analogously, in nature, wolves encounter various
obstacles which prevent them from approaching prey
comfortably. Vectors A⃗ and C⃗ encourage wolves to prefer
scouting, or hunting, and so to avoid local optima regardless of
the algorithm's current iteration.</p>
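      <p>As an illustration, the schedule for a and the sampling of A⃗ and C⃗ can be sketched in Python (the function name, the dimension parameter and the per-component sampling are our assumptions, not part of the original formulation):</p>

```python
import random

def scouting_vectors(i, i_max, dim=2, rng=random):
    """Sample the vectors A and C for iteration i out of i_max.

    a decreases linearly from 2 to 0, so the components of
    A = a * rand(-1, 1) shrink toward [-1, 1] as the run progresses
    (favouring hunting), while C = rand(0, 2) ignores the iteration
    count and keeps supporting scouting throughout.
    """
    a = 2 - 2 * i / i_max                              # 2 -> 0 linearly
    A = [a * rng.uniform(-1, 1) for _ in range(dim)]
    C = [rng.uniform(0, 2) for _ in range(dim)]
    return A, C
```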
      <p>Wolves’ positions within the environment are updated
based upon the estimated location of the prey using Alpha,
Beta, and Delta wolves as guides.</p>
      <p>Let X⃗j(i) be the position vector of wolf j in the i-th iteration.
The position vector of wolf j is updated as follows:</p>
      <p>X⃗j(i + 1) = (X⃗1 + X⃗2 + X⃗3) / 3,</p>
      <p>where i is the current iteration of the algorithm, and X⃗1,
X⃗2, X⃗3 are new potential position vectors obtained from the
Alpha, Beta, and Delta wolves by the following formulas:</p>
      <p>X⃗1 = X⃗a(i) − A⃗1·D⃗a, X⃗2 = X⃗b(i) − A⃗2·D⃗b, X⃗3 = X⃗d(i) − A⃗3·D⃗d,</p>
      <p>where X⃗a(i), X⃗b(i), X⃗d(i) are the position vectors of the
Alpha, Beta, and Delta wolves; they represent the
positions in the environment that are closest to the optimum
in the i-th iteration. The vectors A⃗1, A⃗2, A⃗3 are calculated in
the same way as vector A⃗. The vectors D⃗a, D⃗b, D⃗d
define the distance of the position of wolf j from the prey
as follows:</p>
      <p>D⃗a = |C⃗1·X⃗a(i) − X⃗j(i)|, D⃗b = |C⃗2·X⃗b(i) − X⃗j(i)|, D⃗d = |C⃗3·X⃗d(i) − X⃗j(i)|,</p>
      <p>where |X⃗| is the vector whose components are the absolute
values of the components of X⃗.</p>
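      <p>The update formulas above can be sketched component-wise in Python (an illustrative sketch with hypothetical names; the paper itself defines only the equations):</p>

```python
def update_position(Xj, Xa, Xb, Xd, A, C):
    """One position update of wolf j, following the formulas above.

    A and C hold the three vectors A1..A3 and C1..C3. For each guide
    g in {a, b, d}: D_g = |C_g * X_g(i) - X_j(i)| component-wise and
    X_g' = X_g(i) - A_g * D_g; the new position is their average.
    """
    def candidate(Xg, Ag, Cg):
        return [xg - ag * abs(cg * xg - xj)
                for xg, xj, ag, cg in zip(Xg, Xj, Ag, Cg)]

    X1 = candidate(Xa, A[0], C[0])
    X2 = candidate(Xb, A[1], C[1])
    X3 = candidate(Xd, A[2], C[2])
    return [(x1 + x2 + x3) / 3 for x1, x2, x3 in zip(X1, X2, X3)]
```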
      <p>The vectors C⃗1, C⃗2, C⃗3 are computed in the same way as
vector C⃗, and they influence the weight of the estimated
position of the prey X⃗a, X⃗b, X⃗d, increasing or decreasing
it.</p>
      <p>We can see in Fig. 2 that thanks to this principle wolves
have the tendency to move closer towards the prey and
encircle it (wolves approach from different directions).</p>
      <p>In this subsection we describe the algorithm in
pseudocode. The inputs of the algorithm are the dimensions of the
environment of the problem, the boundaries of the
environment of the problem, the fitness function characterizing the
problem, the size of the pack (number of wolves/agents), the
number of iterations of the algorithm, the termination criteria
and the criteria of the fitness function.
– calculate the fitness value of each agent and
determine the social hierarchy – Fig. 3, part 1.
The agent with the best value (closest to the
optimum) is Alpha, the second best is Beta, the third best
is Delta, and all others are Omega,
– calculate the best solutions found thus far by
Alpha, Beta and Delta (X⃗a(i), X⃗b(i), X⃗d(i)) and
average them – Fig. 3, part 2,
– update the positions of all the wolves X⃗j(i + 1),
while the random vectors A⃗ and C⃗ are updated for
each one – Fig. 3, part 3,
– check the termination criterion – Fig. 3, part 4.</p>
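      <p>The four pseudocode steps above can be sketched as one minimal Python loop (our illustrative reading of the algorithm; the names, the clamping to the bounds and stopping after i_max iterations are assumptions):</p>

```python
import random

def gwo(fitness, dim, bounds, n_wolves=8, i_max=50, seed=1):
    """Minimal sketch of the GWO loop described above."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)]
            for _ in range(n_wolves)]
    for i in range(i_max):
        pack.sort(key=fitness)                 # part 1: social hierarchy
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2 - 2 * i / i_max                  # scouting -> hunting
        for j in range(n_wolves):              # part 3: update positions
            X = pack[j]
            new = []
            for d in range(dim):
                cands = []
                for guide in (alpha, beta, delta):
                    A = a * rng.uniform(-1, 1)
                    C = rng.uniform(0, 2)
                    D = abs(C * guide[d] - X[d])
                    cands.append(guide[d] - A * D)
                # part 2: average of the three guide candidates
                new.append(min(hi, max(lo, sum(cands) / 3)))
            pack[j] = new
    return min(pack, key=fitness)              # best solution found
```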
      <p>Iterations terminate when the fitness function value
reaches a preset value.</p>
      <p>e ∈ A is the basic environmental object of the 2D P
colony,</p>
      <sec id="sec-2-1">
        <title>Env is a pair (m × n, wE), where m × n, m, n ∈ N is</title>
        <p>the size of the environment and wE is the initial
contents of the environment; it is a matrix of size m × n of
multisets of objects over A ∖ {e}.</p>
        <sec id="sec-2-1-1">
          <title>Bi, 1 ≤ i ≤ k, are agents; each agent is a construct</title>
          <p>Bi = (oi, Pi, [o, p]), 0 ≤ o &lt; m, 0 ≤ p &lt; n, where
– oi is a multiset over A; it determines the initial
state (contents) of the agent, |oi| = 2,
– Pi = {pi,1, …, pi,li}, li ≥ 1, 1 ≤ i ≤ k, is a finite
set of programs, where each program contains
exactly 2 rules, each in one of the following
forms:
a → b, called the evolution rule, a, b ∈ A,
c ↔ d, called the communication rule, c, d ∈ A,
[aq,r] → s, 0 ≤ q, r ≤ 2, s ∈ {←, →, ↑, ↓},
called the motion rule,
f ∈ A is the final object of the colony.</p>
          <p>A configuration of the 2D P colony is given by the state
of the environment – a matrix of type m × n with multisets
of objects over A ∖ {e} as its elements – and by the states
of all agents – pairs of objects from alphabet A and the
coordinates of the agents. An initial configuration is given
by the definition of the 2D P colony.</p>
          <p>A computational step consists of three parts. The first
part lies in determining the set of applicable programs
according to the current configuration of the 2D P colony.
In the second part, we have to select from this set one
program for each agent, in such a way that there is no collision
between the communication rules belonging to different
programs. The third part is the execution of the chosen
programs.</p>
          <p>A change of the configuration is triggered by the
execution of programs and it involves changing the state of the
environment, contents and placement of the agents.</p>
          <p>A computation is non-deterministic and maximally
parallel. The computation ends by halting when there is at
least one agent that has no applicable program.</p>
          <p>The result of the computation is the number of copies of
the final object placed in the environment at the end of the
computation.</p>
          <p>The aim of introducing 2D P colonies is not studying
their computational power but monitoring their behaviour
during the computation.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Modelling of Grey Wolf Optimization Algorithm Using 2D P Colonies</title>
    </sec>
    <sec id="sec-4">
      <title>2D P Colonies</title>
      <p>In this section, we recall the definition of 2D P colonies
and other terms related to them.</p>
      <sec id="sec-4-1">
        <title>Definition 1. A 2D P colony is a construct</title>
        <p>P = (A, e, Env, B1, …, Bk, f), k ≥ 1, where</p>
      </sec>
      <sec id="sec-4-2">
        <title>A is an alphabet of the colony, its elements are called objects,</title>
        <p>As for the modelling of GWO, some similarities as well as
a few differences have been found. Both are inspired by
nature, usable for solving optimization problems, and
both are multi-agent system models. For comparison, see
the differences / problems:</p>
        <p>fE(x, y) = y + 1 for x = 0;
fE(x, y) = fE(x − 1, 1) for x &gt; 0 and y = 0;
fE(x, y) = fE(x − 1, fE(x, y − 1)) otherwise.</p>
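        <p>The recurrence above is the two-argument Ackermann function; a direct Python transcription, added here for illustration:</p>

```python
def ackermann(x, y):
    """The two-argument Ackermann function used here as the fitness fE."""
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))
```

      <p>Evaluating it over a 4 × 4 grid reproduces the example environment values given later in this section.</p>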
      </sec>
      <sec id="sec-4-3">
        <title>Environmental problem,</title>
      </sec>
      <sec id="sec-4-4">
        <title>Communication problem,</title>
      </sec>
      <sec id="sec-4-5">
        <title>Randomness problem.</title>
        <p>These problems are described in Table 1.</p>
        <p>The definition of the 2D P colony needs to be adjusted
to meet the described requirements. Proposed solutions for
these problems are described in the following subsections.</p>
        <p>Environmental problem solution</p>
        <p>The environment will be a pair of a matrix m × n and a
fitness function fE, where m × n, m, n ∈ N, is the size
of the environment and fE is a mathematical function
giving the initial contents of the environment. Alphabet V will
contain objects that can store real numbers as well as common
objects, V = {⟨x⟩, ⟨x⟩′, ⟨x⟩′′, …} ∪ {a, b, c, d, e, f, g, …}.
The rules of the programs, which guide the agents,
will compare the number values of the objects using the
operators greater than "&gt;" and greater than or equal to
"≥". The agent Ai will be defined as Ai = (oi, Pi, [o, p]),
|oi| = 2. For example, the environment can be defined as
Env = (4 × 4, wE, fE), with object a in each cell, and the
fitness function fE is the Ackermann function, giving the
cell values:</p>
        <p>1 a, 2 a, 3 a, 5 a / 2 a, 3 a, 5 a, 13 a / 3 a, 4 a, 7 a, 29 a / 4 a, 5 a, 9 a, 61 a.</p>
        <p>We extend the 2D P colony by adding the blackboard, which
saves the agents' best fitness values and is always
accessible to read and write by all agents. To simulate GWO,
agents cannot decide how to continue with hunting without
knowledge of their position in the environment, because
the fitness value alone is not enough for calculating the prey's
estimated position. That is why the approximate position
is stored on the blackboard. The position of each agent is
computed by a function using moving receivers (see
Section 6 for more details). We will use the blackboard as a
means of giving feedback to agents. Agents do not need
to know their position in the environment; all agents who
can contribute to the search will send their solution to
blackboard points called receivers, and the estimated prey
position is calculated as an average of the distances collected
by the blackboard points from wolves Alpha, Beta and Delta.
Omega wolves can ping the blackboard when changing
position and get their distance from the prey. If the distance
would decrease compared to the original distance, then the
wolf will move.</p>
        <p>The movement of the omega wolves is based not only on the
position of the prey but also on the random vectors A⃗ and C⃗. In
the 2D P colony model, this randomness is replaced by a
non-deterministic choice between several applicable rules.</p>
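        <p>The ping-and-move behaviour of an Omega wolf described above can be sketched as follows (an illustrative sketch; the probe callback stands in for the blackboard distance query and is a hypothetical helper):</p>

```python
import random

def omega_try_move(pos, dist_from_prey, directions, probe):
    """Omega wolf move rule sketch: try the directions in a random
    order (the model's non-deterministic choice) and commit to the
    first step that the blackboard reports as strictly closer to the
    prey; otherwise stay put.

    probe(pos) stands in for the blackboard query returning the
    distance from the estimated prey position (hypothetical helper).
    """
    order = list(directions)
    random.shuffle(order)                  # random movement sequence
    for dx, dy in order:
        cand = (pos[0] + dx, pos[1] + dy)
        if probe(cand) < dist_from_prey:   # ping blackboard, compare
            return cand                    # distance decreased: move
    return pos                             # no improvement: stop
```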
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Model of numerical 2D P colony with the blackboard</title>
      <p>Let us consider that there is not only a multiset of objects in
each environmental cell, but also one (natural or
real) number computed by a function (called an
environmental function) depending on the position of the cell and
on time. This number can be read by the agent, but it can be
changed only by the environmental function.</p>
      <p>The number can be stored inside the agent in a special
kind of objects (box-objects). To read an environmental
number the agent needs a special reading rule. Using this
rule the agent rewrites an object inside of it into the special
object that contains the number equal to the
environmental number. This rule is of the form a → ⟨x⟩, where
a ∈ V is an object and x is the environmental number in
the cell where the agent executing the rule is placed. The
environmental number can be read by more than one agent
at the same time, and each agent can use the reading rule
multiple times in one program. For example, let ab be the
content of an agent with program p : ⟨a → ⟨x⟩; b → ⟨x⟩⟩, and
let 5 be the number stored in the environmental cell where the
agent is placed. After execution of the program p the
content of the agent is ⟨5⟩⟨5⟩.</p>
      <p>If there are two objects with numbers inside the agent,
the execution of a particular program can be restricted by
these numbers. The agent can compare the numbers and
distinguish which rule will be applied to which object.
The programs with such a condition look as follows:
⟨x &gt; y : r1, r2⟩ or ⟨x ≥ y : r1, r2⟩, where x, y ∈ R and
r1, r2 are rules that work with objects with numbers. These
box-objects can be of different kinds but both must store a
number.</p>
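      <p>Such a conditionally guarded program can be sketched as follows (illustrative only; representing the rule bodies as plain functions is our simplification):</p>

```python
def guarded_program(x, y, r1, r2, strict=True):
    """Sketch of a conditional program <x > y : r1, r2> (or with >=):
    the pair of rules fires only when the guard on the two stored
    numbers holds; otherwise the program is not applicable (None)."""
    if (x > y) if strict else (x >= y):
        return r1(x), r2(y)
    return None
```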
      <p>The memory of the 2D P colony is extended by a
blackboard. It is a pair BB = (f⃗BB, BM), where f⃗BB is a vector
of i functions fi, and BM is a matrix of size i × j, i, j ∈ N.
fi is a function that fills the members of the corresponding row
of the matrix BM with values. The function can update only
those members that are not updated by agents.</p>
      <p>To access and affect the blackboard, the agents have
rules for reading and writing a value placed in a
specified part of the blackboard. These rules use functions
Get(BB[ad], b) and Update(b, BB[ad]), where BB[ad] is
the address of the value on the blackboard which is to be
read or written, and b ∈ V is a box-object. The get
rule is of the form a → Get(BB[ad], b) and the update rule
consists only of the function Update(b, BB[ad]), where
a, b ∈ V. When the rule a → Get(BB[2, 1], ⟨x⟩) is
executed, the object a inside the agent evolves into the
object ⟨x⟩, and this object is filled in with the number stored
on the blackboard at position [2, 1]. By execution of the rule
Update(⟨15⟩′′, BB[0, 4]), the agent with object ⟨15⟩′′ places
the number 15 into the fifth member of the first row of the
blackboard. Only one agent can read or update the value at
one position at the same time. If two agents have
applicable programs that affect the same position on the
blackboard, then only one of them can execute its program.
The other agent must use another applicable program, or
be inactive if another applicable program does not exist.
The agent able to write on the blackboard is chosen
non-deterministically.</p>
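      <p>A minimal sketch of this read/write discipline follows, with the non-deterministic choice of a single writer per position modelled by a random pick (the class name and the step-wise batching of writes are our assumptions):</p>

```python
import random

class Blackboard:
    """Sketch of the read/write device: fixed structure, agents may
    update only designated cells; conflicting writers are resolved by
    a non-deterministic choice of one winner per cell and step."""

    def __init__(self, rows, cols):
        self.bm = [[0.0] * cols for _ in range(rows)]

    def get(self, r, c):
        return self.bm[r][c]

    def step(self, writes):
        """writes: list of ((r, c), value); one winner per position."""
        by_cell = {}
        for addr, val in writes:
            by_cell.setdefault(addr, []).append(val)
        for (r, c), vals in by_cell.items():
            self.bm[r][c] = random.choice(vals)  # nondeterministic pick
```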
      <p>All newly introduced rules can be combined in the
programs with any kind of rules, except the movement rule,
which can be paired only with the rewriting rule.</p>
      <sec id="sec-5-1">
        <title>Definition 2. A numerical 2D P colony with blackboard</title>
        <p>is a construct</p>
        <p>P = (V, e, Env, A1, …, Ak, BB, f), k ≥ 1, where</p>
      </sec>
      <sec id="sec-5-2">
        <title>V is an alphabet of the colony, its elements are called objects, there are special objects b that can contain an arbitrary number,</title>
        <p>e ∈ V is the basic environmental object of the 2D P
colony,</p>
        <sec id="sec-5-2-1">
          <title>Env is a triplet (m × n, wE, fE), where m × n, m, n ∈ N</title>
          <p>is the size of the environment, and wE is the initial
contents of the environment. It is a matrix of size m ×
n of multisets of objects over V ∖ {e}, and fE is an
environmental function.</p>
        </sec>
      </sec>
      <sec id="sec-5-3">
        <title>Ai, 1 ≤ i ≤ k, are agents. Each agent is a construct</title>
        <p>Ai = (oi, Pi, [o, p]), 0 ≤ o ≤ m, 0 ≤ p ≤ n, where
– oi is a multiset over V; it determines the initial
state (contents) of the agent, |oi| = 2,
– Pi = {pi,1, …, pi,li}, li ≥ 1, 1 ≤ i ≤ k, is a finite
set of programs, where each program contains
exactly 2 rules. Each rule is in one of the following
forms:
a → b, the evolution rule, a, b ∈ V,
c ↔ d, the communication rule, c, d ∈ V,</p>
        <p>[aq,r] → s, aq,r ∈ V, 0 ≤ q, r ≤ 2,
s ∈ {←, →, ↑, ↓}, the motion rule,
a → ⟨x⟩, x ∈ R, a, ⟨x⟩ ∈ V, the reading rule.
If the program contains an evolution or
communication rule r1, r2 that works with objects
with numbers, it can be extended by a condition:
⟨x &gt; y : r1, r2⟩, ⟨x ≥ y : r1, r2⟩,
– [o, p], 1 ≤ o ≤ m, 1 ≤ p ≤ n, is the initial
position of agent Ai in the 2D environment,
f ∈ V is the final object of the colony.</p>
        <p>A configuration of the 2D P colony is given by the state
of the environment – a matrix of type m × n with pairs (a
multiset of objects over V ∖ {e}, and a number) as its
elements – and by the states of all agents – pairs of objects
from the alphabet V, and the coordinates of the agents. An
initial configuration is given by the definition of the 2D P
colony.</p>
        <p>A computational step consists of three phases. In the
first phase, the set of the applicable programs is determined
according to the current configuration of the 2D P colony.
In the second phase, one program from the set is chosen for
each agent, in such a way that there is no collision between
the communication rules belonging to different programs.
In the third phase, the chosen programs are executed and the
values in the environment and on the blackboard are updated.</p>
        <p>A change of the configuration is triggered by the
execution of programs, and updating values by functions. It
involves changing the state of the environment, contents
and placement of the agents.</p>
        <p>A computation is non-deterministic and maximally
parallel. The computation ends by halting when there is no
agent that has an applicable program.</p>
        <p>The result of the computation is the number of copies of
the final object placed in the environment at the end of the
computation.</p>
        <p>Numerical 2D P Colony with the Blackboard for GWO</p>
        <p>Because of the restricted number of pages of this
contribution, in this section we show an idea of the
construction of a numerical 2D P colony with the blackboard
that models GWO in finding the extreme of a function with
two variables.</p>
        <p>Pgw = (V, e, Env, A1, A2, …, Ak, BB, f), k ≥ 1, where:
V = {⟨b⟩, ⟨b⟩′, ⟨b⟩′′, ⟨b⟩′′′, ⟨b⟩iv, ⟨b⟩v, ⟨b⟩vi}
∪ {e, f, a′, b, c, d, h, h′, h′′, mOK, mKO, m′KO, m′′KO}
∪ {n, A, B, D}
∪ {lY, l′Y, l′′Y, l′′′Y, livY | Y ∈ {←, →, ↑, ↓}}
∪ {kz1z2z3z4 | zi ∈ {←, →, ↑, ↓} ∧ k, i ∈ {1, 2, 3, 4}},
e ∈ V is the basic environmental object,
Env is a triplet (i × j, wE, f(x, y)), where i, j ∈ N,
wE = |ar,s|, ar,s = e, 1 ≤ r ≤ i, 1 ≤ s ≤ j,
A1, A2, …, Ak are the agents, Ai = (oi, Pi, [rx, ry]),
where:
– |oi| = 2,
– P1 = P2 = … = Pk, the rules of Pi are defined below,
– [r, s] are the initial coordinates,
BB is the blackboard; a separate subsection of this chapter
is devoted to the definition of the blackboard,
f is the final object, f ∈ V.</p>
        <p>The initial content of each agent is ee and its position
is [r, s].</p>
        <p>Programs Pi associated with the i-th agent are:</p>
        <p>1. ⟨e → ⟨x⟩′; e → Get(BB[alpha], ⟨x⟩)⟩, x ∈ R is the
number placed in the environmental cell [r, s], alpha is the
address of the current value y of the Alpha wolf on the
blackboard. This program is to read the number from the
environmental cell and the value of the current Alpha wolf
from the blackboard.</p>
        <p>2. The following set of programs, Compare, is to
compare the two numbers stored inside the agent:
(a) ⟨x &gt; y : ⟨x⟩ → ⟨x⟩; ⟨y⟩′ → A⟩ – I am the new Alpha,
(b) ⟨x ≤ y : ⟨x⟩′ → b; ⟨x⟩ → ⟨x⟩⟩,
(c) ⟨⟨x⟩ → ⟨x⟩; b → Get(BB[beta], ⟨x⟩′′)⟩,
(d) ⟨x &gt; y : ⟨x⟩ → ⟨x⟩; ⟨y⟩′′ → B⟩ – I am the new Beta,
(e) ⟨x ≤ y : ⟨x⟩′′ → c; ⟨x⟩ → ⟨x⟩⟩,
(f) ⟨⟨x⟩ → ⟨x⟩; c → Get(BB[gamma], ⟨x⟩′′′)⟩,
(g) ⟨x &gt; y : ⟨x⟩ → ⟨x⟩; ⟨y⟩′′′ → C⟩ – I am the new Delta,
(h) ⟨x ≤ y : ⟨x⟩′′′ → d; ⟨x⟩ → ⟨x⟩iv⟩ – I am Omega.</p>
        <p>3. If there is A, B or C inside the agent, the agent updates
the blackboard using the programs:
(a) ⟨Update(⟨x⟩, BB[alpha]); A → a′⟩,
(b) ⟨Update(⟨x⟩, BB[beta]); B → a′⟩,
(c) ⟨Update(⟨x⟩, BB[gamma]); C → a′⟩,
(d) ⟨⟨x⟩ → e; a′ → e⟩.</p>
        <p>4. If the agent is an Omega wolf, the agent is supposed
to move so as to approach the prey. The agent reads
its distance from the prey, computed by the function
and placed on the blackboard, and it moves in a
random direction. The direction is generated in such a
way that the agent creates the object 1 with a low
index formed from four directions in random order; 1
means that the agent will move in the first direction.
(a) ⟨d → 1w; ⟨x⟩iv → Get(BB[dist. from prey], ⟨x⟩v)⟩,
w = z1z2z3z4, zi ∈ {←, →, ↑, ↓},
z1 ≠ z2 ≠ z3 ≠ z4,
(b) ⟨1w ↔ e; ⟨x⟩v → ⟨x⟩v⟩,
(c) ⟨[e e e; e Xz1z2z3z4 e; e e e] → zX; e → zX⟩,
X ∈ {1, 2, 3, 4}, zX ∈ {←, →, ↑, ↓}.</p>
        <p>Then the agent puts the object with the movement
sequence into the environmental cell and reads its distance
from the prey from the new location.
(a) ⟨zX → z′X; ⟨x⟩v ↔ e⟩,
(b) ⟨z′X → z′′X; e → h⟩,
(c) ⟨z′′X ↔ ⟨x⟩v; h → Get(BB[dist. from prey], ⟨x⟩vi)⟩.</p>
        <p>5. If the new value is smaller than the value from the
previous location, the agent consumes the object
corresponding to the direction and the object with the
movement sequence and rewrites its content to ee.
(a) ⟨x ≤ y : ⟨x⟩v → mOK; ⟨y⟩vi → h′⟩,
(b) ⟨x &gt; y : ⟨x⟩vi → mKO; ⟨y⟩v → h′⟩,
(c) ⟨mOK → mOK; h′ ↔ z′′X⟩.</p>
        <p>6. If the new distance is greater than the old one, the agent
consumes the object corresponding to the direction,
moves to the original location, rewrites the object 1
with the movement sequence into the object 2 with the same
movement sequence, and continues with the
investigation. If the agent moves back and there is the
object 4 with the movement sequence, it did not find a
smaller distance to the prey and it stops working.
(a) ⟨mKO → mKO; h′ ↔ z′′X⟩,
⟨mKO → m′KO; z′′X → e⟩,
⟨m′KO → m′KO; e ↔ Xw⟩,
⟨m′KO → m′′KO; Xw → (X + 1)w⟩, X ∈ {1, 2, 3},
⟨m′′KO → Get(BB[dist. from prey], ⟨x⟩v); Xw ↔ e⟩,
X ∈ {2, 3, 4},
⟨m′KO → f; e ↔ 4w⟩.</p>
        <p>The computation is possibly non-halting, because the
three agents (Alpha, Beta and Delta) can always find an
applicable program. The positions of these three agents
are important for estimating the prey position by the
blackboard (more in Section 5.2).</p>
        <p>Blackboard</p>
        <p>The blackboard for GWO is a structure defined as follows:</p>
        <p>BBGWO = (f⃗nc, [v⃗1, v⃗2]), where:
the dimension of both vectors v⃗1, v⃗2 is j = max(7, k), where
k ≥ 1 is the number of agents; in this case, a matrix of
type i × j, i, j ∈ N, is represented by a vector of these
vectors.
v⃗1 is a vector with elements that can be named</p>
      </sec>
      <sec id="sec-5-4">
        <title>AlphaValue, BetaValue, DeltaValue, AlphaPosition,</title>
      </sec>
      <sec id="sec-5-5">
        <title>BetaPosition, DeltaPosition, preyPosition.</title>
        <p>If j &gt; 7, then elements with index greater than 7 are
without a name; they are addressed by their position.
The first three elements are serviced by the agents, so
the blackboard function for the first row only copies their
values, if they are not updated by agents in the current
step of the computation.</p>
        <p>– the initial content of v⃗1 is 0 in each element.
v⃗2 is a vector with elements named
A0'sDistanceFromPrey, A1'sDistanceFromPrey,
…, Ak'sDistanceFromPrey, where k is the number of agents
(wolves); the elements without a name can be
addressed only by their position in the blackboard matrix.</p>
        <p>– the initial content of v⃗2 is 0 in each element.
f⃗nc is a vector of functions (fnc1(i), fnc2(i)) for
manipulating the vectors v⃗1 and v⃗2, where fnc1(i)
updates the i-th element of vector v⃗1 and fnc2(i)
updates the i-th element of vector v⃗2, 0 ≤ i ≤ j, j is the
dimension of the vectors v⃗1 and v⃗2:</p>
        <p>(i) fnc1(i) = identity for i ∈ {0, 1, 2};
fnc1(i) = BPosition for i ∈ {3, 4, 5};
fnc1(i) = (fnc1(3) + fnc1(4) + fnc1(5)) / 3 for i = 6;</p>
        <p>(ii) fnc2(i) = |fnc1(preyPosition) − BPositioni|, for
i = the index of agent Ai.</p>
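        <p>A sketch of the first blackboard function under a one-dimensional simplification (the name fnc1_row and the scalar positions are our illustration; in the model, BPosition yields positions in the 2D environment):</p>

```python
def fnc1_row(v1, guide_positions):
    """Sketch of fnc1: elements 0-2 (the Alpha/Beta/Delta values
    written by the agents) pass through unchanged, elements 3-5 are
    refreshed with the BPosition estimates for the three guide
    wolves, and element 6 (preyPosition) is their average."""
    out = list(v1)
    out[3:6] = guide_positions     # BPosition results for a, b, d
    out[6] = sum(guide_positions) / 3   # estimated prey position
    return out
```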
        <sec id="sec-5-5-1">
          <title>BPosition</title>
          <p>The auxiliary function BPosition is described in the section
Receivers.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Receivers - from theoretical model into real life</title>
      <p>Communication between agents and the blackboard is
realized by receivers. We equip our model with two receivers
that listen for signals coming from the agents. The
abilities of the receivers are crucial for the functioning of the
functions of the blackboard, because they are the providers of
the values of the BPosition function. We could assume that
the receivers can "see" the position of each agent, but for
wide areas this is not very realistic. For our model we choose
another approach: we introduce time into the model – it takes
some time for a signal from an agent to reach the receivers.</p>
      <p>rcv – the blackboard has two receivers. Both receivers
are located in the environment. Their initial positions
are at the opposite boundary points of the
environment Env (positions [0, 0] and [m − 1, n − 1],
where m × n, m, n ∈ N, is the size of the environment).
The receivers' positions are updated in each derivation
step and the receivers circulate around the environment
as follows:</p>
      <p>– x &lt; 1, y &lt; n : x′ = x, y′ = y + 1,</p>
      <p>Receivers listen to the agents' signals. If both
receivers receive the same message from an agent, the
received message is processed in the following
sequence: computation of the auxiliary function BPosition,
then execution of the content part of the message.</p>
      <p>– BPosition = x ∈ R1 ∩ R2, where R1, R2 are
circles with centres at the receivers' positions and radius
r = now − sent, where now is the time of receiving
the message, and sent equals the message's timestamp; x is
chosen randomly in the intersection area. The
intersection shapes change over time due to
the movements of the receivers.</p>
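      <p>The computation of BPosition can be sketched with rejection sampling (our choice of sampling method; the model only requires that x be chosen randomly in the intersection, and the sketch assumes the two discs actually overlap):</p>

```python
import math
import random

def b_position(r1_pos, r2_pos, now, sent, rng=random.Random(2)):
    """Sample a point from the intersection of the two receiver discs.

    Radius = now - sent (the signal's travel time). Rejection
    sampling is our illustrative choice; the model only says x is
    chosen randomly in the intersection.
    """
    r = now - sent
    while True:
        # propose a point in the square around receiver 1, keep it
        # only if it lies inside both discs
        x = r1_pos[0] + rng.uniform(-r, r)
        y = r1_pos[1] + rng.uniform(-r, r)
        if (math.hypot(x - r1_pos[0], y - r1_pos[1]) <= r
                and math.hypot(x - r2_pos[0], y - r2_pos[1]) <= r):
            return (x, y)
```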
      <p>At this point, it is important to focus on the use of the
blackboard by the agents. Agents can use the blackboard
functions described above.</p>
      <p>
        If the agent concludes that it is Alpha, it rewrites the field
v~1[0] using communication program 3 in Fig. 7 on the left
side. In the same way, the Beta and Delta wolves can rewrite
the field v~1[1] using the same function. On the right side of
Fig. 7, the agent concludes that it is Omega, and it will
try to move with the blackboard’s assistance using
communication program 1.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>The 2D P colonies as originally defined are not able to simulate
the behaviour of complex multi-agent systems. The agents
of a 2D P colony are not able to communicate in any
way other than via the environment. They also have no
information about their own position or the positions of others in the
environment. These deficiencies can be resolved by adding
a universal communication device, the blackboard, to
the definition of the 2D P colony.</p>
      <p>The Grey wolf optimization algorithm is a meta-heuristic
optimization technique inspired by the social dynamics
of packs of grey wolves. The search for the extreme
of a function is inspired by the way the pack of
wolves hunts its prey.</p>
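      <p>For reference, the core position update that the agents emulate
(following the standard Grey Wolf Optimizer of Mirjalili et al. [4]) can
be sketched as follows; gwo_step is an illustrative name and the update
is shown for a single scalar coordinate:</p>
      <preformat><![CDATA[
```python
import random

def gwo_step(wolf, leaders, a):
    """One GWO position update guided by the three best wolves.

    leaders = (alpha, beta, delta) positions; a decreases linearly
    from 2 to 0 over the iterations of the algorithm.
    """
    guided = []
    for leader in leaders:
        A = 2 * a * random.random() - a   # exploration/exploitation factor
        C = 2 * random.random()
        D = abs(C * leader - wolf)        # distance to the leader
        guided.append(leader - A * D)     # candidate guided by this leader
    return sum(guided) / len(guided)      # average of the three candidates
```
]]></preformat>
      <p>When a reaches 0, each candidate collapses onto its leader and the
update returns the mean of the alpha, beta, and delta positions, i.e. the
pack converges on its best members.</p>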
      <p>We introduced the formal definition of the 2D P colony
equipped with a blackboard and demonstrated the ability of
this formal model to simulate the Grey wolf algorithm.
The blackboard serves not only as a communication
device, but also as a device capable of setting the particular roles
of the agents simulating the wolves, and of successfully finding
the extreme of the function represented by the discrete
environment of the 2D P colony.</p>
      <p>Acknowledgements. This work was supported by The
Ministry of Education, Youth and Sports from the
National Programme of Sustainability (NPU II) project
„IT4Innovations excellence in science - LQ1602“.
Research was also supported by the SGS/11/2019 Project
of the Silesian University in Opava.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Cienciala</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ciencialová</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perdek</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>2D P colonies</article-title>
          . In:
          <string-name>
            <surname>Csuhaj-Varjú</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gheorghe</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rozenberg</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salomaa</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <source>Vaszil Gy. (eds) Membrane Computing. CMC 2012. Lecture Notes in Computer Science</source>
          , vol
          <volume>7762</volume>
          . Springer, Berlin, Heidelberg, pp.
          <fpage>161</fpage>
          -
          <lpage>172</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Kelemen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kelemenová</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Păun, Gh.:
          <article-title>Preview of P colonies: A biochemically inspired computing model</article-title>
          .
          <source>In: Workshop and Tutorial Proceedings. Ninth International Conference on the Simulation and Synthesis of Living Systems (Alife IX)</source>
          . pp.
          <fpage>82</fpage>
          -
          <lpage>86</lpage>
          . Boston, Massachusetts, USA (September 12-15
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] Păun, Gh.:
          <article-title>Computing with membranes</article-title>
          .
          <source>J. Comput. Syst. Sci</source>
          .
          <volume>61</volume>
          (
          <issue>1</issue>
          ),
          <fpage>108</fpage>
          -
          <lpage>143</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Mirjalili</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mirjalili</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>A.:</given-names>
          </string-name>
          <article-title>Grey Wolf Optimizer</article-title>
          .
          <source>Advances in Engineering Software</source>
          <volume>69</volume>
          ,
          <fpage>46</fpage>
          -
          <lpage>61</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>