<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>When Rationality Entered Time and Became a Real Agent in a Cyber-Society</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aldo Franco Dragoni</string-name>
          <email>a.f.dragoni@univpm.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Sernani</string-name>
          <email>p.sernani@univpm.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davide Calvaresi</string-name>
          <email>davide.calvaresi@hevs.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università Politecnica delle Marche</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università Politecnica delle Marche</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Since Artificial Intelligence (AI) applications became mature, there has been growing interest in embedding them into complex real equipment, especially in order to implement “Cyber-Physical Systems” (CPS). Since its dawn as a discipline, AI has focused on simulating and reproducing human-like mental processes using formal structures, chasing high quality of reasoning. However, with the challenges posed by CPS, AI needs to take into account concrete, real “timing performances” in addition to abstract reasoning about “time”. The AI definition of “intelligent agent” seems to apply perfectly to CPS. Nevertheless, to be real, intelligent agents need to deal with, reason about, and act in time. This paper motivates such needs by tracing the roots of the definition of Real-Time Agent in Philosophy, Control Theory, and AI. Moreover, some examples are provided to demonstrate why Real-Time Agents are required in the “real world” of CPS. The paper concludes by listing the desiderata of Real-Time Agents, wishing for the convergence of Multi-Agent Systems and Real-Time Systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-Agent Systems</kwd>
        <kwd>Real-Time Systems</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Control Theory</kwd>
        <kwd>Cyber-Physical Agents</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>learning” (even in the architectural framework of
“neural networks”), “language understanding”, “planning”
and so on. However, the original question has often been
confused and conflated with another one: “can machines
behave intelligently?”, which is well understandable,
influential, and impressive for the masses. Brooks [Bro91]
noticed that animals exhibit an intelligent behaviour
even without an explicit symbolic representation of the
world they live in, thus showing that “reasoning” and
“intelligently behaving” are different concepts, not
necessarily consequential. However, modern Cyber-Physical
Systems (CPS) are evolving from the old mathematical
notions based on “Control Theory” (CT) towards explicit
AI-based techniques, thus conceiving hybrid systems that
embody complex reasoning software (produced by the AI
community) on the same digital equipment that governs
sensors and actuators (which are regarded simply as
I/O peripherals of a CPU) through the application of
some pieces of software derived from CT. By doing so,
CPS become nothing more than computing systems
running huge AI-based software and some automation
firmware over boards where the input devices are defined
as “sensors” and the output devices as “actuators”.
However, unfortunately, this evolution remixes concepts
that history had separated, i.e. “thinking” and
“intelligently behaving”, thus producing serious design
mistakes, especially w.r.t. the notion of “time”. In fact,
since its beginnings the AI community understood very
well the importance of reasoning about time (think, for
instance, of “planning” and “temporal reasoning”), but
almost always neglected the importance of reasoning in
time! On the other hand, CT dealt with timing
performances, but never reasoned explicitly about anything,
the notion of “time” included. This paper explores these
ideas, showing some inherent problems of these new
hybrid approaches to CPS.</p>
    </sec>
    <sec id="sec-2">
      <title>Rationality and Reality</title>
      <p>Scrolling the opaque pages of memory and going back
to the foundations (the Dartmouth workshop, 1956), which
saw protagonists such as Marvin Minsky and John
McCarthy laying the foundations of Artificial Intelligence,
we see that their aim was
...to proceed on the basis of the conjecture
that every aspect of learning or any other
feature of intelligence can in principle be so
precisely described that a machine can be made
to simulate it. An attempt will be made
to find how to make machines use language,
form abstractions and concepts, solve kinds
of problems now reserved for humans, and
improve themselves.
It is quite clear that our pioneers’ aim was more that
of simulating humans’ mental processes than that of
acting in the real world; they were more interested in
“rationality” (as mental processes) than in “reality” (as
processes in the physical world); they (unconsciously?)
accepted the ontological distinction between the two and
the conjecture that, once the description of a certain
mental ability was precisely defined, it could as well be
implemented into a machine acting in the real world.
They didn’t consider (or were not aware of) Georg
Wilhelm Friedrich Hegel’s famous claim [Heg21]:
What is rational is real;
and what is real is rational.
i.e. that the two “phases” should better be considered as
two sides of the same medal. Of course, ignoring Hegel’s
claim was a pity, but they were completely innocent
because for centuries after Aristotle, “Logic” (hence
“Rationality”) limited itself to considering “thought” in
its formal structure, abstracting from every content and,
as such, becoming incapable of being “real”. This
abstract conception of the “Logos” was one of the most
influential in Western philosophy and culture over the
last two millennia, and it still pervades our time. To our
knowledge, apart from St. John the Evangelist (four
centuries after Aristotle) who, in the prologue of his
gospel, wrote:
Ἐν ἀρχῇ ἦν ὁ λόγος,
καὶ θεὸς ἦν ὁ λόγος.
οὗτος ἦν ἐν ἀρχῇ πρὸς τὸν θεόν.
πάντα δι’ αὐτοῦ ἐγένετο,
καὶ χωρὶς αὐτοῦ ἐγένετο οὐδὲ ἕν ὃ γέγονεν.
(In the beginning was the Word, and the Word was God.
He was in the beginning with God. All things were made
through him, and without him nothing was made that
was made.)
everyone considered “rationality” and “reality” as
ontologically different. Back to our era, the queen of
sciences for understanding the “real” world is “physics”,
and in 1905 the most important formula ever written
was published [Ein05]:</p>
      <p>Energy = Mass · c² / √(1 − Space² / (Time² · c²))  (1)</p>
      <p>It shows that the Universe is not made of five basic
elements, as supposed in ancient China (Fire, Earth,
Water, Metal, and Wood); instead, “reality” is made of
four: Energy, Mass, Space and Time. Now, if we accept
Hegel’s equation Rationality = Reality, then we have to
conclude that Rationality is a function of Energy, Mass,
Space and, in particular, Time. In this holistic vision,
there is no real difference between the two questions
“can machines think?” and “can machines behave
intelligently?”, both of them referring to “Rationality” as
two sides of the same coin. Being “intelligent”, for both
a human and a robot, does not imply just reasoning
about time but also reasoning in time, since being “in
time” is the only way to be intelligent.</p>
    </sec>
    <sec id="sec-3">
      <title>Control Theory and Artificial Intelligence</title>
      <p>In 1948, Norbert Wiener originated a topic named
“Cybernetics” [Wie48], focused on realizing electronic
machines able to replicate animal behaviors. At that
time, computers had not been invented yet and
electronics was not able to perform any computation
(circuits were just analogue). However, the main outcomes
of that discipline were the notion of “feedback” and
the mathematical representation of the relations
between two temporal functions, Input(t) and Output(t),
named “Transfer Functions”.</p>
      <p>Ten years later, after the first generation of
computers, that discipline was renamed “Control
Theory”: the Input(t) function was converted into a
digital stream, the mathematical “Transfer Functions”
started to be “calculated” on a CPU (instead of being
just “implemented” into a circuit) and, finally, the
digital stream produced by the calculation was converted
back into an Output(t) function. Few people noticed
that this change towards the digital era implied that all
the time necessary for the A/D conversion, the
calculation and the D/A conversion should be “immaterial”
for the process to be correct. Furthermore, the purely
material notion of “intelligent behaviour” fell into the
purely immaterial notion of “calculation”, a notion that
is deeply entrenched in the Aristotelian and AI
conception of the “Logos”.</p>
      <p>Surprisingly, the CT notion of a real embedded
system, today called CPS, perfectly matches the AI
definition of abstract agenthood. Thus, an embedded
system can be considered an embodiment of an abstract
intelligent agent. However, to be real, such agents have
further requirements:
1. they need memory, not just registers, otherwise
they could not deal with real byte streams;
2. they need to deal with Energy, Mass, Space and,
especially, Time, otherwise they could not produce
their output (the action in the real world) and would
remain in Plato’s world of ideas;
3. if a Real-Time Agent reasons about time, then it
should embody a clock.</p>
    </sec>
    <sec id="sec-4">
      <title>Real-Time Agent</title>
      <p>If an “intelligent agent” were just a piece of
software, then its correctness should be evaluated only
in terms of soundness and completeness w.r.t. an I/O
transfer function (Aristotle-minded). But if it has to be
regarded as a Cyber-Physical Agent (Hegel-minded),
then its correctness should be evaluated under a more
holistic perspective. The following is to be considered
just a step in that direction.
The clock might be just a timer or an hourglass with
some kind of sand flowing inside (for instance, electric
charge). If the real-time agent reasons about time then,
probably, its actions/plans change as a function of the
current time/remaining time. There are two kinds of
deadline-related activities for a real-time agent:
1. apply a constant strategy (e.g., A*) to search for
the best action until the current time t reaches the
deadline d;
2. apply a variable strategy which is a function of the
remaining time d − t (e.g., RTA*(d − t)) until d = t,
where RTA* is the Real-Time A* algorithm.</p>
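      <p>The first kind of deadline-related activity can be sketched as an anytime loop that keeps the best result found so far and returns it the moment the clock reaches the deadline. This is only an illustrative sketch under our own assumptions (the function name and the random sampling of candidate actions are not from the paper):

```python
import random
import time

def anytime_best_action(candidates, score, deadline):
    """Constant-strategy search: keep scoring candidate actions and
    always hold on to the best one found so far; return it as soon
    as the clock reaches the deadline d."""
    best, best_score = None, float("-inf")
    # the agent checks its embodied clock, not just its logic
    while deadline - time.monotonic() > 0:
        action = random.choice(candidates)
        s = score(action)
        if s > best_score:
            best, best_score = action, s
    return best  # whatever was best when time ran out

# usage: give the agent 50 ms to pick the candidate closest to 4
deadline = time.monotonic() + 0.05
best = anytime_best_action([1, 2, 3, 4, 5],
                           score=lambda a: -abs(a - 4),
                           deadline=deadline)
```

The variable-strategy case would instead pass the remaining time d − t into the search routine itself, so the strategy can coarsen as the deadline approaches.</p>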
      <sec id="sec-4-1">
        <title>Examples</title>
        <p>Let us assume that a vacuum cleaner robot is a simple
intelligent real-time agent. [Figure 1: the agent and its
environment share the same ontology of Mass, Energy,
Space and Time, while inside the agent everything has a
different ontology, the byte stream: Sensors turn
Perception into a byte stream via A/D conversion, and a byte
stream drives the Actuators’ Action via D/A conversion.
Figure 2: the agent decomposed into threads (tasks)
communicating through memory buffers.]</p>
        <p>
Looking at Figure 2, we might regard each thread as an
agent and replace each memory buffer with a
communication channel. By doing so, what was a single
real-time agent becomes a real-time agency: a real-time
multi-agent system. Does it make sense? Is it useful? Is
it a real change or just a matter of perspective? In 1986,
Marvin Minsky postulated that intelligent behavior is
due to the non-intelligent behavior of a very high
number of agents organized in a bureaucratic hierarchy - the
“Society of Mind” [Min86]. Such a concept is also known
as swarm intelligence. Minsky also related the number
of agents to their intelligence: the less intelligent the
agents are, the more of them we need to produce an
intelligent behavior. Each agent’s position in the
hierarchy and each agent’s capacity to access the actuators
(and the sensors) is dynamic and influenced by stimuli
perceived from the environment (perceptions): so, the
overall external behaviour of the society (actions)
depends on this dynamic organization. Two further lessons
follow:
4. it makes little sense to talk about agents’ interaction
protocols without introducing deadlines, precedences
and resource constraints among the agents, needed to
establish their dynamic priorities (virtual or real) and
to resolve conflicts in a “correct manner”;
5. agents’ behaviors should be regulated by “real-time
scheduling” algorithms.</p>
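        <p>The thread-to-agent reading of Figure 2 can be sketched by replacing a memory buffer with a communication channel between two threads, where a priority queue orders the pending tasks earliest-deadline-first, in the spirit of lesson 5. A minimal sketch under our own assumptions (the two agents, the task names and the deadlines are illustrative, not from the paper):

```python
import queue
import threading

# a "memory buffer" replaced by a communication channel;
# a PriorityQueue orders pending tasks earliest-deadline-first (EDF)
channel = queue.PriorityQueue()

def sensor_agent():
    # perceives the environment and posts tasks tagged with deadlines
    for deadline, task in [(3.0, "clean kitchen"),
                           (1.0, "dodge obstacle"),
                           (2.0, "recharge")]:
        channel.put((deadline, task))
    channel.put((float("inf"), "stop"))  # sentinel, always last

def actuator_agent(log):
    # consumes tasks earliest-deadline-first and "acts"
    while True:
        deadline, task = channel.get()
        if task == "stop":
            break
        log.append(task)

log = []
t1 = threading.Thread(target=sensor_agent)
t2 = threading.Thread(target=actuator_agent, args=(log,))
t1.start(); t1.join()   # let all tasks land in the channel first
t2.start(); t2.join()
print(log)   # deadlines decide the order:
             # ['dodge obstacle', 'recharge', 'clean kitchen']
```

The point of the sketch is that the ordering of actions is decided by the deadlines flowing through the channel, not by the order in which the "perceiving" agent produced them.</p>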
      </sec>
      <sec id="sec-4-2">
        <title>Examples</title>
        <p>In 1977, Reid Smith introduced the Contract Net
Protocol (CNP) [Smi77] to enable 1-to-1 interactions in
MAS. A first attempt to extend CNP towards real-time
performances was subsequently proposed by FIPA
[Fip01], who introduced the concept of “deadlines”
(related to the interaction phases) in the CNP. However,
as analyzed by Calvaresi et al. [CDB18], the pure notion
of “negotiation” is not sufficient to ensure the capability
of complying with strict timing constraints. Agents have
to take time into consideration (i) when reasoning (e.g.,
planning and scheduling), (ii) when negotiating, and
(iii) when sending messages (e.g., communication time
delays). The situation gets more and more complex
when we consider several contemporaneous plans and
interactions among the agents. Hence, after receiving a
Call for Proposals (CfP), a candidate Participant should
ask itself: “do I have the resources to prepare a proposal
in the time interval d − t?”, “is this CfP worthwhile
w.r.t. all the other stuff I’ll probably have to manage in
the time interval d − t?”, and so on. Of course, the
answers to these questions are functions of the current
time t (d is fixed in the CfP), which should be
periodically checked through the clock.</p>
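        <p>The Participant’s self-check can be sketched as a comparison between the remaining time d − t and the time the agent would actually need, given its prior commitments. A minimal sketch, assuming a hypothetical helper `should_bid` and illustrative cost estimates (none of these names come from the FIPA specification):

```python
import time

def should_bid(deadline, estimated_prep_time, pending_work_time, now=None):
    """Deadline-aware check for a CNP Participant: bid only if a
    proposal can be prepared, on top of already-committed work,
    before the Initiator's deadline d."""
    t = time.monotonic() if now is None else now
    remaining = deadline - t                   # d - t
    needed = estimated_prep_time + pending_work_time
    return remaining - needed > 0              # enough slack to answer in time

# usage: deadline 2 s away, 0.5 s to draft a proposal,
# 1 s of prior commitments -> 2.0 - 1.5 leaves slack, so bid
now = 100.0
print(should_bid(deadline=now + 2.0, estimated_prep_time=0.5,
                 pending_work_time=1.0, now=now))   # True
```

In a real agent the two cost estimates would come from its local scheduler, and the check would be repeated against the clock as the deadline approaches, since new commitments may erode the slack.</p>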
        <p><bold>Evaluating performances at the Agent’s vs. the
Agency’s level.</bold> The last example gives an idea of how
complex “Real-Time compliant Multi-Agent Scenarios”
are [CDB18]. Unfortunately, another difficulty appears
when we try to evaluate the real-time performances of
the Agency: should we evaluate them from the
perspective of each selfish member agent, or should we
evaluate the overall Agency’s performances? Of course, the
two perspectives are causally independent, as many
studies from “economics” and “distributed computing”
have shown over the decades.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>We argued the need not only for “time representation”
in both single- and multi-agent systems, but also for
“timing awareness and performances”. We showed how
rooted in the history of philosophy our perception of
time is, and how much it influences our holistic notion
of “Rationality”. Thus, we think it makes little sense to
build Cyber-Physical Agents that do not check and
reason about time. Without this ability, agents are not
“real” at all. Summarizing, we learned five lessons: “real
agents” need:
1. memory;
2. to deal with Time;
3. a clock;
4. deadlines, precedences and resource constraints in
order to establish their dynamic priorities;
5. “real-time scheduling” algorithms to behave
correctly.</p>
      <p>We also might express two desiderata for the future:
1. the design of “Real Agents” should be more inspired
by “Control Theory”;
2. the conception of “Multi-Agent Systems” should align
with the “Real-Time Systems” discipline.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Bro91]
          <string-name>
            <given-names>Rodney A.</given-names>
            <surname>Brooks</surname>
          </string-name>
          .
          <article-title>Intelligence without representation</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>47</volume>
          (
          <issue>1</issue>
          ):
          <fpage>139</fpage>
          -
          <lpage>159</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [CDB18]
          <string-name>
            <given-names>Davide</given-names>
            <surname>Calvaresi</surname>
          </string-name>
          , Aldo Franco Dragoni, and Giorgio C. Buttazzo, editors.
          <source>Proceedings of the 1st International Workshop on Real-Time compliant Multi-Agent Systems, co-located with the Federated Artificial Intelligence Meeting, Stockholm, Sweden, July 15th, 2018</source>
          , volume
          <volume>2156</volume>
          of CEUR Workshop Proceedings. CEUR-WS.org,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Ein05]
          <string-name>
            <given-names>A.</given-names>
            <surname>Einstein</surname>
          </string-name>
          .
          <article-title>Zur elektrodynamik bewegter körper</article-title>
          .
          <source>Annalen der Physik</source>
          ,
          <volume>17</volume>
          :
          <fpage>891</fpage>
          -
          <lpage>921</lpage>
          ,
          <year>1905</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Fip01]
          FIPA.
          <source>FIPA Iterated Contract Net Interaction Protocol Specification</source>
          . FIPA,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Heg21]
          <string-name>
            <given-names>G. W. F.</given-names>
            <surname>Hegel</surname>
          </string-name>
          .
          <source>Grundlinien der Philosophie des Rechts</source>
          .
          <year>1821</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Min86]
          <string-name>
            <given-names>Marvin</given-names>
            <surname>Minsky</surname>
          </string-name>
          .
          <source>The Society of Mind</source>
          . Simon &amp; Schuster, Inc., New York, NY, USA,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Smi77]
          <string-name>
            <given-names>Reid G.</given-names>
            <surname>Smith</surname>
          </string-name>
          .
          <article-title>The contract net: A formalism for the control of distributed problem solving</article-title>
          .
          <source>In Proceedings of the 5th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI'77</source>
          , pages
          <fpage>472</fpage>
          -
          <lpage>472</lpage>
          , San Francisco, CA, USA,
          <year>1977</year>
          . Morgan Kaufmann Publishers Inc.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Tur50]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Turing</surname>
          </string-name>
          .
          <article-title>Computing machinery and intelligence</article-title>
          .
          <source>Mind</source>
          , LIX(
          <issue>236</issue>
          ):
          <fpage>433</fpage>
          -
          <lpage>460</lpage>
          ,
          <year>1950</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Wie48]
          <string-name>
            <given-names>Norbert</given-names>
            <surname>Wiener</surname>
          </string-name>
          .
          <article-title>Cybernetics, or Control and Communication in the Animal and the Machine</article-title>
          . Technology Press [Cambridge, Mass.],
          <year>1948</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>