<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ABOD3: A Graphical Visualisation and Real-Time Debugging Tool for BOD Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Theodorou</surname>
          </string-name>
        </contrib>
        <aff id="aff0">
          <institution>Department of Computer Science, University of Bath</institution>
          , Bath,
          <addr-line>BA2 7AY</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <fpage>60</fpage>
      <lpage>61</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>II. BEHAVIOUR ORIENTED DESIGN</title>
      <p>
        Behaviour Oriented Design (BOD) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] takes inspiration
both from the well-established object-oriented design (OOD)
programming paradigm and from Behaviour-Based AI (BBAI) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ],
to provide a concrete architecture for developing complete,
complex agents (CCAs) with multiple conflicting goals and
mutually exclusive means of achieving those goals. BBAI was
first introduced by Brooks [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], in which intelligence is
decomposed into simple, robust modules, each expressing a
capability as actions, such as movement, rather than as mental
entities such as knowledge and thought.
      </p>
      <p>Bryson’s BOD is a cognitive architecture that promotes
behaviour decomposition, code modularity, and reuse, making
the development of intelligent agents easier. BOD decomposes
the agent’s behaviour into multiple modules, which together form a
behaviour library. Each module can express a set of
behaviours: actions, perceptions, learning, and
memory. Behaviour modules also store their own memories,
i.e. sensory experiences.</p>
      <p>Action selection is driven by competition for resources. If
no such competition exists, the behaviour modules can
work in parallel, furthering the long-term goals of the agent.</p>
      <p>POSH planning is the reactive-planning action-selection
mechanism of BOD. POSH combines the fast response times of
reactive approaches to BBAI with goal-directed
plans. A POSH plan consists of the following plan elements:
1) Drive Collection (DC): Contains a set of Drives and
is responsible for giving attention to the highest-priority
Drive. To allow the agent to shift and focus attention,
only one Drive can be active in any given cycle.
2) Drive (D): Allows for the design and pursuit of a specific
behaviour, and maintains its execution state. Its trigger
is a precondition, a primitive plan element called a Sense,
which determines from sensory input whether the Drive
should be executed.
3) Competence (C): Contains one or more Competence
Elements (CE), each of which has a priority and a
releaser. A CE may trigger the execution of another
Competence, an Action Pattern, or a single Action.
4) Action Pattern (AP): Used to reduce the computational
complexity of search within the plan space and to allow
the coordinated, fixed sequential execution of a set of
Actions.</p>
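      <p>The hierarchy above can be sketched as a minimal data structure. The following is an illustrative rendering only, with hypothetical names; real POSH plans are authored in dedicated plan files, not Python.</p>

```python
# Minimal sketch of a POSH-style plan hierarchy (hypothetical names;
# real POSH plans are expressed in dedicated plan files, not Python).

class Drive:
    def __init__(self, name, priority, sense, root):
        self.name = name
        self.priority = priority  # higher value = more urgent
        self.sense = sense        # precondition (a Sense): () -> bool
        self.root = root          # a Competence, Action Pattern, or Action

class DriveCollection:
    def __init__(self, drives):
        self.drives = drives

    def step(self):
        """One cycle: attend only to the highest-priority released Drive."""
        released = [d for d in self.drives if d.sense()]
        if not released:
            return None
        active = max(released, key=lambda d: d.priority)
        return active.name  # a real planner would now execute active.root

battery_low = True
dc = DriveCollection([
    Drive("recharge", priority=10, sense=lambda: battery_low, root=None),
    Drive("explore",  priority=1,  sense=lambda: True,        root=None),
])
print(dc.step())  # -> recharge: recharging outranks exploring this cycle
```

      <p>Because only the single highest-priority released Drive is attended to per cycle, urgent behaviours such as recharging pre-empt background goals such as exploration.</p>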
      <p>B. Instinct</p>
      <p>
        The Instinct Planner is a reactive planner based on the
POSH planner. It includes several enhancements taken from
more recent papers extending POSH [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In an Instinct plan,
an AP contains one or more Action Pattern Elements (APE),
each of which has a priority, and links to a specific Action,
Competence, or another AP.
      </p>
    </sec>
    <sec id="sec-2">
      <title>III. THE PLAN EDITOR</title>
      <p>The editor provides a customisable user interface (UI) aimed
at supporting both the development and debugging of agents.
Plan elements, their subtrees, and debugging-related
information can be hidden, to allow different levels of abstraction and
present only relevant information. The graphical representation
of the plan can be generated automatically, and the user can
override its default layout by moving elements to suit their needs
and preferences. The simple UI and customisation options allow the
editor to be employed not only as a developer’s tool, but also
to present transparency-related information to end users,
helping them to develop more accurate mental models of the
agent.</p>
      <p>
        Alpha testers have already used ABOD3 in experiments to
determine the effects of transparency on the mental models
formed by humans [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Their experiments used
a non-humanoid robot powered by the BOD-based Instinct
reactive planner. They demonstrated that subjects who
also see an accompanying display of the robot’s real-time
decision making, as provided by ABOD3, show a marked
improvement in the accuracy of their mental model of the
observed robot. They concluded that providing transparency
information through ABOD3 does help users to understand
the behaviour of the robot, calibrating their expectations.
      </p>
      <p>Plan elements flash as they are called by the planner and
glow based on the number of recent invocations of that
element. Plan elements without any recent invocations dim
over a user-defined interval until they return
to their initial state, offering an abstracted backtrace of
recent calls. Sense information and progress towards a goal are
displayed. Finally, ABOD3 provides integration with videos of
the agent in action, synchronised by the time signature within
the recorded transparency feed.</p>
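      <p>The flash-and-dim behaviour described above can be approximated by a simple decay model. This is an illustrative sketch under assumed behaviour, not ABOD3’s actual implementation: each invocation raises an element’s glow to full intensity, which then fades linearly back to zero over the user-defined interval.</p>

```python
import time

class GlowState:
    """Illustrative glow/dim model (an assumption, not ABOD3's code):
    an invocation sets intensity to 1.0, which fades linearly back to
    0.0 over `fade_seconds`, a user-defined interval."""

    def __init__(self, fade_seconds=5.0):
        self.fade_seconds = fade_seconds
        self.last_called = None

    def invoke(self, now=None):
        # Record the time of the most recent planner call to this element.
        self.last_called = time.monotonic() if now is None else now

    def intensity(self, now=None):
        # 1.0 immediately after a call, decaying to 0.0 (initial state).
        if self.last_called is None:
            return 0.0
        now = time.monotonic() if now is None else now
        elapsed = now - self.last_called
        return max(0.0, 1.0 - elapsed / self.fade_seconds)

g = GlowState(fade_seconds=4.0)
g.invoke(now=0.0)
print(g.intensity(now=0.0))  # 1.0: full glow right after the call
print(g.intensity(now=2.0))  # 0.5: half-faded
print(g.intensity(now=6.0))  # 0.0: back to the initial state
```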
      <p>ABOD3 provides an API that allows the editor to connect
with planners, presenting debugging information in real time.
For example, it can connect to the Instinct planner by using a
built-in TCP/IP server (see Figure 2).</p>
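      <p>As an illustration only, a debugger client consuming such a TCP/IP transparency feed might look like the sketch below. The host, port, and newline-delimited line format are assumptions for the example, not Instinct’s documented wire protocol.</p>

```python
import socket

def read_transparency_feed(host="127.0.0.1", port=3000, limit=10):
    """Connect to a planner's TCP transparency feed and yield up to
    `limit` newline-delimited log lines. Host, port, and line format
    are illustrative assumptions, not Instinct's documented protocol."""
    with socket.create_connection((host, port), timeout=5) as sock:
        buffer = b""
        while limit > 0:
            chunk = sock.recv(4096)
            if not chunk:  # server closed the connection
                break
            buffer += chunk
            # Emit every complete line accumulated so far.
            while b"\n" in buffer and limit > 0:
                line, buffer = buffer.split(b"\n", 1)
                yield line.decode("utf-8", errors="replace")
                limit -= 1
```

      <p>A visualiser would map each received line to a plan element and update its glow, so the display stays synchronised with the planner’s real-time decision making.</p>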
    </sec>
    <sec id="sec-3">
      <title>IV. CONCLUSION</title>
      <p>We plan to continue developing this new editor,
implementing debug functions such as “fast-forward” in pre-recorded
log files and breakpoints in real time. A transparent
agent, with an inspectable decision-making mechanism, could
also be debugged in a similar manner to the way in which
traditional, non-intelligent software is commonly debugged.
The developer would be able to see which actions the agent
is selecting, why this is happening, and how it moves from
one action to the other. This is similar to the way in which
popular Integrated Development Environments (IDEs) provide
options to follow different streams of code with debug points.
Moreover, we will enhance its plan-design capabilities by
introducing new views to view and edit specific types of
plan elements, and through public beta testing we will gather feedback
from both experienced and inexperienced AI developers.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Theodorou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Wortham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>Designing and implementing transparency for real time inspection of autonomous robots</article-title>
          ,”
          <source>Connection Science</source>
          , vol.
          <volume>29</volume>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Wortham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Theodorou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>Robot Transparency, Trust and Utility</article-title>
          ,” in
          <source>AISB 2016: EPSRC Principles of Robotics</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>The behavior-oriented design of modular agent intelligence</article-title>
          ,” in
          <source>System</source>
          ,
          <year>2002</year>
          , vol.
          <volume>2592</volume>
          , pp.
          <fpage>61</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Wortham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Gaudl</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>Instinct: A Biologically Inspired Reactive Planner for Embedded Environments</article-title>
          ,” in
          <source>Proceedings of the ICAPS 2016 PlanRob Workshop</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Stein</surname>
          </string-name>
          , “
          <article-title>Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents</article-title>
          ,”
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>Action selection and individuation in agent based modelling</article-title>
          ,”
          in
          <source>Proceedings of Agent 2003: Challenges in Social Simulation</source>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Sallach</surname>
          </string-name>
          and C. Macal, Eds. Argonne, IL: Argonne National Laboratory,
          <year>2003</year>
          , pp.
          <fpage>317</fpage>
          -
          <lpage>330</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Brooks</surname>
          </string-name>
          , “Intelligence Without Representation,”
          <source>Artificial Intelligence</source>
          , vol.
          <volume>47</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>159</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gaudl</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>The Extended Ramp Goal Module: Low-Cost Behaviour Arbitration for Real-Time Controllers based on Biological Models of Dopamine Cells</article-title>
          ,” in
          <source>Computational Intelligence in Games 2014</source>
          ,
          <year>2014</year>
          . [Online]. Available: http://opus.bath.ac.uk/40056/
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Wortham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Theodorou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          , “
          <article-title>What Does the Robot Think? Transparency as a Fundamental Design Requirement for Intelligent Systems</article-title>
          ,” in
          <source>IJCAI-2016 Ethics for Artificial Intelligence Workshop</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] --, “
          <article-title>Improving Robot Transparency: Real-Time Visualisation of Robot AI Substantially Improves Understanding in Naive Observers</article-title>
          ,” submitted,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>