<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>What if gamified software is fully proactive? Towards autonomy-related design principles</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Accounting and Finance</institution>
          ,
          <addr-line>Economics</addr-line>
          ,
          <institution>University of Vaasa</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Technology and Innovations, University of Vaasa</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Computational agents are a type of software architecture designed to be autonomous and social, meaning that they can make decisions proactively while also reacting to stimuli from the environment. Such architectures are not common in the gamification field; instead, gamified software has traditionally had reactive characteristics, responding to user actions and disregarding the possibility of proactive behavior. In this paper, we propose four formal principles for designing autonomous gamified systems, to ensure traceability of gamified outputs, internal consistency of gamification attempts, coherent agent-user interaction, and formal conditions to assess user actions from a rational perspective. We present our initial work on these general principles, highlighting our planned empirical work.</p>
      </abstract>
      <kwd-group>
        <kwd>Persuasive technology</kwd>
        <kwd>Gamification</kwd>
        <kwd>Software agents</kwd>
        <kwd>Principles</kwd>
        <kwd>Formal approaches</kwd>
        <kwd>Argumentation dialogues</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In the artificial intelligence (AI) field, the sense of “autonomy” is not precise,
but the term is taken to mean that software agents’ activities do not require
constant human guidance or intervention [
        <xref ref-type="bibr" rid="ref20">19</xref>
        ]. An object is an agent (e.g. a
software agent) if it serves a useful purpose either to a different agent or to itself,
in which case the agent is called autonomous [
        <xref ref-type="bibr" rid="ref13">12</xref>
        ]. Those purposes are the states
of affairs to be achieved, in other words, the goals of an agent. To exemplify this
notion, let us suppose that a person wants to increase her monthly savings by
counting her accumulated expenses using a financial app. That software is her
agent: it keeps her savings count, “adopting” her goal of motivating herself to
save money. Note that the app has a transient agency: if the user no
longer needs to save money (e.g. she wins the lottery), such an agent becomes a
simple object with no ascribed agency.
      </p>
      <p>
        An autonomous agent is not dependent on the goals of others; it possesses
goals that are generated from within rather than adopted [
        <xref ref-type="bibr" rid="ref21">20</xref>
        ]. For example, an
autonomous version of that financial app could change its goal (proactively)
and help her to visualize different philanthropic goals, without any guidance or
intervention.
      </p>
      <p>
        A high-level question directing this research connects the aforementioned
autonomous agents with the gamification field: what if a gamified software became
fully autonomous? Empirical answers to this question from a gamification
perspective are scarce, and theoretical frameworks of gamification dealing with this
issue are practically non-existent (see reviews [
        <xref ref-type="bibr" rid="ref12 ref7">6,11</xref>
        ]). From the AI perspective,
“human oversight” is one of the requirements being put forward as a means to
support human autonomy and agency [
        <xref ref-type="bibr" rid="ref14">13</xref>
        ], where theoretical (we will use the
term formal) guidelines have been proposed, aiming to delineate autonomous
behavior of agents through responsible and transparent mechanisms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In
fact, high-level principles and guidelines [
        <xref ref-type="bibr" rid="ref15 ref17 ref9">8,14,16</xref>
        ] are commonly used in
gamification, but most of them are not aligned with autonomous software characteristics,
nor do they serve as grounded specifications for developing actual software. In this
context, our ongoing research proposes four principles for designing autonomous
gamification technology considering: traceability as a mechanism for meaningful
human control, coherence and consistency during the interaction between an
autonomous agent and a human, and rationality of the decisions that a proactive agent
makes. In summary, the proposed principles are:
Principle 1: Traceability of gamified outputs. Establishes that gamified
affordances (outputs) need to provide a transparent and identifiable explanation
of the persuasive attempt.
      </p>
      <p>Principle 2: Internal consistency of a gamification attempt. Defines formal
requirements for the informational elements (e.g. content, visualization, etc.)
of a persuasive attempt.</p>
      <p>Principle 3: Coherent gamified interaction. Characterizes the types of
interaction that a persuasive agent should (or should not) make.</p>
      <p>Principle 4: Rational persuasive gamification. Determines the formal
conditions that an agent needs in order to assess whether a user action can be
considered rational.</p>
      <p>
        We formalize these principles in propositional logic to be used by designers
of formal decision-making mechanisms for software agents (e.g. [
        <xref ref-type="bibr" rid="ref5">4</xref>
        ] among
others).
      </p>
      <p>
        Methods
We use a formal method (framework) based on argumentation-based games [
        <xref ref-type="bibr" rid="ref18">17</xref>
        ]
for describing the agent-user interaction. In this paper, a user model is a tuple
U = ⟨B, I, Be, PI, PB, ⪯B, ⪯I⟩, in which the probability distributions PI and
PB give the subjective probability of intentions and beliefs, and the hierarchies of
beliefs and intentions are given by ⪯B and ⪯I.
      </p>
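      <p>
        To make the tuple concrete, the user model can be sketched as a plain data structure. This is our own illustrative Python rendering; the field names and example values are hypothetical, and only the tuple's components come from the definition above.
      </p>
```python
from dataclasses import dataclass

# Illustrative rendering of the user model U = ⟨B, I, Be, PI, PB, ⪯B, ⪯I⟩.
# Field names are our own; only the tuple's components come from the paper.
@dataclass
class UserModel:
    beliefs: set           # B
    intentions: set        # I
    behaviors: set         # Be
    p_intentions: dict     # PI: subjective probability of intentions
    p_beliefs: dict        # PB: subjective probability of beliefs
    belief_order: list     # ⪯B: hierarchy of beliefs
    intention_order: list  # ⪯I: hierarchy of intentions

# Hypothetical user of the financial app from the introduction.
u = UserModel(
    beliefs={"saving_is_useful"},
    intentions={"increase_savings"},
    behaviors={"tracks_expenses"},
    p_intentions={"increase_savings": 0.9},
    p_beliefs={"saving_is_useful": 0.8},
    belief_order=["saving_is_useful"],
    intention_order=["increase_savings"],
)
```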
      <p>Fig. 1: Example of a g-move in a gamified story-scenario: the user selects between Option 1 (take a loan) and Option 2 (modest consumption).</p>
      <p>
        In this paper, we consider gamification mechanisms (gm), e.g. avatars, stories
and leaderboards, and gamification feedback (gf), which are visual affordances
presented to a user, e.g. rewards, feedback messages, etc. This classification of
gamification affordances follows a taxonomy presented in [
        <xref ref-type="bibr" rid="ref6">5</xref>
        ]. We assume that
the agent has two databases, Σgm and Σgf, containing gm and gf affordances.
We also consider preferences among gm and gf affordances, given by the pre-order
relations ⪯gm and ⪯gf; we assume a preexisting order. In this context, a user
and an agent exchange information regarding a particular topic T, e.g. about
financial literacy.
      </p>
      <p>
        We use propositional logic with ¬ to express logical negation, x denoting
uncertainty (w.r.t. a true/false valuation), ⊢ for deductive inference, ⊢s for semantic
inference, and ≡, ≡s for syntactic and semantic equivalence. We also use a
handy function for updating information, UPD(old, new).
Agent Ag, as a gamified persuasive technology,
is oriented to generate as output a gamified
move (gmove), which is a tuple ⟨sa, cont, vis⟩
formed by: 1) a speech act (sa), the
intended action of the agent within the
persuasive exchange as a dialogue; 2) a persuasive
content (cont), the underlying message
to be transmitted to a user; and 3) a visual
cue (vis). sa are predefined actions such
as accept, assert, question, reject, ignore (see
technical details in [
        <xref ref-type="bibr" rid="ref6">5</xref>
        ]). We use the notation
gmoveti,Ag→U to express that a gmove was
made from Ag to U at ti, which is omitted if time and move direction are evident
or unnecessary. We also use three handy functions, CONT(gmove) = {content},
VIS(gmove) = {visual} and SA(gmove) = {speech act}, to return the content,
the visualization, and the speech act of a given gmove. An agent Ag uses as input the
aforementioned three databases ΣT, Σgf and Σgm, and a model of the user U∗ (where
U∗ ⊆ U, denoting that it may not have perfect information).
Fig. 2: Protocol of gamified persuasive interaction, a sequence of moves such as
assert(p, Persuader), assert(q, User), assert(r, User), assert(q1, Persuader), etc.
      </p>
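      <p>
        A gmove and its helper functions can be sketched as follows. This is an illustrative Python rendering; the concrete move values are hypothetical, while the speech acts and the tuple shape come from the definitions above.
      </p>
```python
from collections import namedtuple

# Illustrative sketch of a gamified move ⟨sa, cont, vis⟩ and the
# helper functions SA, CONT and VIS described in the text.
GMove = namedtuple("GMove", ["sa", "cont", "vis"])

SPEECH_ACTS = {"accept", "assert", "question", "reject", "ignore"}

def SA(gmove):
    return {gmove.sa}     # speech act of the move

def CONT(gmove):
    return {gmove.cont}   # persuasive content of the move

def VIS(gmove):
    return {gmove.vis}    # visual cue of the move

# Hypothetical move from the financial-literacy scenario.
move = GMove(sa="assert", cont="modest_consumption_helps_saving", vis="story_scenario")
```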
      <p>Results - Principles for autonomous gamified systems
We define these principles considering three assumptions: 1) a user establishes
communication with the agent using gamified affordances, 2) the agent has
information about the beliefs, intentions, and preferences of the user, and 3) the
user follows some protocols of communication with the agent.</p>
      <p>Principle 1 (Traceable gamified output) A persuader agent should be able
to provide traceable explanations for every gamified output. Formally, if S ⊢s
gmoveAg→U is the gamified move output, and S is the knowledge source of the
move, the following criteria should be fulfilled:</p>
    </sec>
    <sec id="sec-2">
      <title>Formalism</title>
      <p>S ⊆ (Σgf ∪ Σgm ∪ U∗)
S ≠ ∅</p>
    </sec>
    <sec id="sec-3">
      <title>Explanation</title>
      <p>A gamified output of an agent should be the
consequence of an inference process based on a
set of gamified mechanisms and the user model.</p>
      <p>Determinants of a gamified move should be
identifiable.</p>
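      <p>
        Read operationally, Principle 1 amounts to a non-emptiness and membership check. The following is a minimal sketch of our own, assuming the knowledge bases can be represented as sets; the example entries are hypothetical.
      </p>
```python
def traceable(S, sigma_gf, sigma_gm, user_model):
    # Principle 1, sketched: the knowledge source S of a gamified move
    # must be non-empty and drawn only from the gamification databases
    # (Σgf, Σgm) and the user model U∗.
    allowed = sigma_gf | sigma_gm | user_model
    return bool(S) and S.issubset(allowed)

# Hypothetical knowledge bases.
sigma_gf = {"reward_badge", "feedback_msg"}
sigma_gm = {"story", "avatar"}
user_model = {"belief:saving_is_useful"}

assert traceable({"story", "belief:saving_is_useful"}, sigma_gf, sigma_gm, user_model)
assert not traceable(set(), sigma_gf, sigma_gm, user_model)            # S must not be empty
assert not traceable({"unlogged_source"}, sigma_gf, sigma_gm, user_model)
```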
      <p>
        This first principle relies on the traceability of the semantic inference (⊢s). In
the AI literature, violations of these formalisms have been investigated to avoid
black-box-style algorithms [
        <xref ref-type="bibr" rid="ref19">18</xref>
        ] and handcrafted processes with no back-tracking
inference mechanism.
      </p>
      </p>
      <p>Focusing on the internal definition of the gamified output, the following set
of principles establishes basic conditions of consistency of such output.
Principle 2 (Internal consistency of a gamification move) A gamification
move is consistent if the following holds:</p>
    </sec>
    <sec id="sec-4">
      <title>Formalism</title>
      <p>CONT(gmoveAg→U ) ≡s
VIS(gmoveAg→U )</p>
      <sec id="sec-4-1">
        <title>CONT(gmoveAg→U ) ⊆ T</title>
      </sec>
      <sec id="sec-4-2">
        <title>VIS(gmoveAg→U ) ∈ ⪯gm ∪ ⪯gf</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Explanation</title>
      <sec id="sec-5-1">
        <title>The content and the visualization of a gamified move should be semantically coherent.</title>
      </sec>
      <sec id="sec-5-2">
        <title>The content of a gamified move should be within the agreed persuasive topic. The visualization should be part of an agreed set of gamified affordances.</title>
        <p>Principle 2 can be seen as a design principle for the visual and content
aspects of a gamification process, in which the information carried by a gamified
affordance has to be consistent with its visualization.</p>
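        <p>
          A minimal executable reading of Principle 2, in our own sketch: set-valued content, and a caller-supplied predicate standing in for the semantic equivalence ≡s. All names and example values are illustrative.
        </p>
```python
def consistent_move(cont, vis, topic, agreed_affordances, semantically_coherent):
    # Principle 2, sketched: 1) content/visual semantic coherence (delegated
    # to a caller-supplied predicate standing in for ≡s), 2) content within
    # the agreed topic T, 3) visualization among the agreed affordances.
    return (semantically_coherent(cont, vis)
            and cont.issubset(topic)
            and vis in agreed_affordances)

# Hypothetical topic and affordance sets.
topic = {"loan_cost", "modest_consumption"}
affordances = {"story_scenario", "reward_badge"}
coherent = lambda c, v: v == "story_scenario"  # toy stand-in for ≡s

assert consistent_move({"loan_cost"}, "story_scenario", topic, affordances, coherent)
assert not consistent_move({"celebrity_gossip"}, "story_scenario", topic, affordances, coherent)
```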
        <p>The next set of principles is related to the gamified interaction between agent
and user; specifically, these formalisms try to avoid design practices that work
against user engagement and against the agent's commitment to the user's goals.</p>
        <p>Principle 3 (Coherent gamified interaction) A gamified persuasive
interaction between U and Ag is coherent if the following conditions hold:</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Formalism Explanation</title>
      <p>Formalism: gmoveti ≠ gmoveti+1. Explanation: a gamified persuasive
output should not be repetitive, regardless of the state of U.</p>
      <p>Formalism: ∀ gmoveti,U→Ag, SA(gmoveti+1,Ag→U) \ {ignore}. Explanation: a
persuasive agent should not ignore petitions from a user.</p>
      <p>Formalism: if SA(gmoveU→Ag) = {ignore}, then UPD(⪯gf, ⪯gf). Explanation:
when the user ignores a gamification move, the agent must update the gamification
feedback preferences.</p>
      <p>Formalism: if SA(gmoveU→Ag) = {reject}, then UPD(⪯gf, ⪯gf), UPD(U).
Explanation: when a user rejects a gamification move, the agent must update the
gamification feedback preferences and the beliefs of the user model.</p>
      <p>The set of principles defined as coherent gamified interactions establishes a
guideline checklist of what an agent should do when a user communicates directly
through gamified moves.</p>
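      <p>
        The checklist can be sketched as a predicate over a dialogue history. This is our own illustration: moves are reduced to dictionaries, and the UPD calls are abstracted into a log of time points at which a preference update occurred.
      </p>
```python
def coherent_interaction(agent_moves, user_moves, update_log):
    # Principle 3, sketched over a dialogue history:
    # 1) consecutive agent moves must differ (no repetition);
    # 2) the agent never plays the speech act "ignore";
    # 3) a user "ignore" or "reject" at time t must appear in the
    #    preference-update log (i.e. UPD was triggered).
    for a, b in zip(agent_moves, agent_moves[1:]):
        if a == b:
            return False
    if any(m["sa"] == "ignore" for m in agent_moves):
        return False
    for m in user_moves:
        if m["sa"] in {"ignore", "reject"} and m["t"] not in update_log:
            return False
    return True

# Hypothetical two-move exchange.
agent = [{"sa": "assert", "cont": "p"}, {"sa": "question", "cont": "q"}]
user = [{"sa": "reject", "t": 1}]

assert coherent_interaction(agent, user, update_log={1})        # reject triggered an update
assert not coherent_interaction(agent, user, update_log=set())  # reject went unhandled
```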
      <p>
        A last set of principles establishes conditions to assess the rational actions
of a user. In the agents’ literature, the user’s intention to X (e.g. X = walk)
provides the agent with support to believe that s/he will do X, i.e. beliefs and
intentions are in the same “direction” [
        <xref ref-type="bibr" rid="ref3 ref4">3</xref>
        ]. The following formalisms capture the
notion of rationality considering an alignment between beliefs, intentions, and
actions.
      </p>
      <p>Principle 4 (Rational persuasive gamification) The persuasive
gamification of an agent Ag can be considered rational, incomplete, or irrational if the
following holds:</p>
    </sec>
    <sec id="sec-7">
      <title>Formalism Explanation</title>
      <p>⪯IfIC,OthNeTn(gsumcohvmeUo→veAgis) r∈atBioannadl itAsheimnhoitvehereairascghreyanttoi’ofsnpuraselefreifrmrietoddcoeinln,tteaannintdisoinatsi.bselpieafrtthoaft
If CONT(gmoveU→Ag) ∈⪯ I and The model of a user is
CisOiNntTe(ngtmionov-beeUli→efA-gin)c∈omBp,lethteen U ∗ iicnnolntientrneatrwiyoitntho-bttehhleeiepbfre-eliifneefcrromemdodpinelelt.etentiifonasmbuovte is
If CONT(gmoveU→Ag) ∈⪯ I and A move is irrational if the move content is part
¬(CONT(gmoveU→Ag)) ∈ B, then of the hierarchy of preferred intentions but at
such move is irrational the same time is against the set of beliefs.
4</p>
      <p>
        Discussion and future work
Current gamification literature (see reviews [
        <xref ref-type="bibr" rid="ref11 ref12">10,11</xref>
        ]) shows four main trends: 1)
the popularity of a limited number of affordances such as leaderboards, reward
points, and textual or visual feedback; 2) the goal-orientation of the theoretical
foundation of gamification; 3) the extended use of competitiveness and cooperativeness
mechanisms for gamification; and 4) a gradual generalization of tailored
gamified persuasion. However, most of these approaches do not consider the notion
of autonomy or proactiveness of the gamified software.
      </p>
      <p>
        On the AI side, persuasion is a well-established research track, especially in
argumentation theory [
        <xref ref-type="bibr" rid="ref10">9</xref>
        ]. Computational persuasion is a highly regulated
process that leads to the design of argumentation-based dialogue games, a
protocol-based exchange of information between two agents. Shortcomings
of these dialogues are well-identified (see [
        <xref ref-type="bibr" rid="ref10">9</xref>
        ]), such as the hardness of finding
optimal decisions (moves) at every game stage, and the use of only exocentric
persuasion [
        <xref ref-type="bibr" rid="ref8">7</xref>
        ], disregarding context information or mental states of a user.
      </p>
      </p>
      <p>
        We introduced a set of rules aiming to promote transparency of persuasive
attempts (Principle 1), which are in line with the current joint effort of the
European Union and other leading AI countries to develop guidelines for
trustworthy AI 3. In the human-computer interaction field, trustworthiness has been
highlighted as a fundamental principle for system credibility [
        <xref ref-type="bibr" rid="ref17">16</xref>
        ] and for explainable and
persuasive interfaces [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ]. Consistency between visual and content aspects is
relevant for most of the gamification used in persuasive attempts. Principle 2 is
linked with the work presented in [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ], where Némery et al. highlighted
the importance of visual consistency and introduced a set of principles for
persuasive interfaces. The third set of formalisms, Principle 3, establishes a minimum
set of rules for coherent interaction between an agent and a user. We
acknowledge that these types of interaction are linked to argumentation-based dialogues,
which limits the type of potential interactions that other gamification
mechanisms can produce. Nevertheless, Principle 3 is a generalization of different
proactive gamification mechanisms that have been investigated in the literature
(see [
        <xref ref-type="bibr" rid="ref6">5</xref>
        ]). Finally, our set of principles for evaluating rational inputs from the
user (Principle 4), which is based on the well-established theory of practical
reason of Bratman [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], establishes a guide for evaluating whether the user’s actions
are aligned with the information that an agent possesses about the user.
      </p>
      <p>
        A key limitation of this work is the lack of empirical validation. Our
immediate future work will be to evaluate these principles empirically in a
real-world scenario. Currently, we are designing the next version of our gamified
platform (details omitted for blind review) to support financial decisions. We
also want to further establish an axiomatization of these principles, considering
different types of gamification affordances and different types of software agents
(e.g. [
        <xref ref-type="bibr" rid="ref6">5</xref>
        ]).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bratman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Intention, plans, and practical reason</article-title>
          . Harvard University Press (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Dignum</surname>
          </string-name>
          , V.: Responsible Artificial Intelligence:
          <article-title>How to Develop and Use AI in a Responsible Way</article-title>
          . Springer International Publishing (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Fishbein</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ajzen</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          : Belief, Attitude, Intention, and
          <article-title>Behavior: An Introduction to Theory and Research</article-title>
          . Addison-Wesley series in social psychology, AddisonWesley Publishing Company (
          <year>1975</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <article-title>3. Ethics guidelines for trustworthy AI, see https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai</article-title>
          ,
          <source>last accessed November 29, 2021</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          4.
          <string-name>
            <surname>Guerrero</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lindgren</surname>
          </string-name>
          , H.:
          <article-title>Practical Reasoning About Complex Activities</article-title>
          . SpringerLink pp.
          <fpage>82</fpage>
          -
          <lpage>94</lpage>
          (
          <year>Jun 2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          5.
          <string-name>
            <surname>Guerrero</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lindgren</surname>
          </string-name>
          , H.:
          <article-title>Typologies of persuasive strategies and content: a formalization using argumentation</article-title>
          .
          <source>In: International Conference on Practical Applications of Agents and Multi-Agent Systems</source>
          . pp.
          <fpage>101</fpage>
          -
          <lpage>113</lpage>
          . Springer (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          6.
          <string-name>
            <surname>Hamari</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koivisto</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sarsa</surname>
          </string-name>
          , H.:
          <article-title>Does gamification work?-a literature review of empirical studies on gamification</article-title>
          .
          <source>In: 2014 47th Hawaii international conference on system sciences</source>
          . pp.
          <fpage>3025</fpage>
          -
          <lpage>3034</lpage>
          . IEEE (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          7.
          <string-name>
            <surname>de la Hera</surname>
            Conde-Pumpido,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Persuasive gaming: Identifying the diferent types of persuasion through games</article-title>
          .
          <source>International Journal of Serious Games</source>
          <volume>4</volume>
          (
          <issue>1</issue>
          ),
          <fpage>31</fpage>
          -
          <lpage>39</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          8.
          <string-name>
            <surname>Horvitz</surname>
          </string-name>
          , E.:
          <article-title>Principles of mixed-initiative user interfaces</article-title>
          .
          <source>In: Proceedings of the SIGCHI conference on Human Factors in Computing Systems</source>
          . pp.
          <fpage>159</fpage>
          -
          <lpage>166</lpage>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          9.
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Computational Persuasion with Applications in Behaviour Change</article-title>
          . In: COMMA. pp.
          <fpage>5</fpage>
          -
          <lpage>18</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          10.
          <string-name>
            <surname>Klock</surname>
            ,
            <given-names>A.C.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gasparini</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pimenta</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamari</surname>
          </string-name>
          , J.:
          <article-title>Tailored gamification: A review of literature</article-title>
          .
          <source>International Journal of Human-Computer Studies</source>
          <volume>144</volume>
          ,
          <issue>102495</issue>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          11.
          <string-name>
            <surname>Koivisto</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamari</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The rise of motivational information systems: A review of gamification research</article-title>
          .
          <source>International Journal of Information Management</source>
          <volume>45</volume>
          ,
          <fpage>191</fpage>
          -
          <lpage>210</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          12.
          <string-name>
            <surname>Luck</surname>
          </string-name>
          , M.,
          <string-name>
            <surname>d'Inverno</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>A formal framework for agency and autonomy</article-title>
          .
          <source>In: Icmas</source>
          . vol.
          <volume>95</volume>
          , pp.
          <fpage>254</fpage>
          -
          <lpage>260</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          13.
          <string-name>
            <surname>Methnani</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aler</surname>
            <given-names>Tubella</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Dignum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Theodorou</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          :
          <article-title>Let Me Take Over: Variable Autonomy for Meaningful Human Control</article-title>
          .
          <source>Front. Artif. Intell</source>
          .
          <volume>0</volume>
          (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          14. Némery,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Brangier</surname>
          </string-name>
          , E.:
          <article-title>Set of guidelines for persuasive interfaces: Organization and validation of the criteria</article-title>
          .
          <source>Journal of Usability Studies</source>
          <volume>9</volume>
          (
          <issue>3</issue>
          ) (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          15. Némery,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Brangier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Kopp</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          :
          <article-title>First Validation of Persuasive Criteria for Designing and Evaluating the Social Influence of User Interfaces: Justification of a Guideline. In: Design, User Experience, and Usability</article-title>
          . Theory, Methods,
          <source>Tools and Practice</source>
          , pp.
          <fpage>616</fpage>
          -
          <lpage>624</lpage>
          . Springer, Berlin, Germany (Jul
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          16.
          <string-name>
            <surname>Oinas-Kukkonen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harjumaa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Persuasive systems design: Key issues, process model, and system features</article-title>
          .
          <source>Communications of the Association for Information Systems</source>
          <volume>24</volume>
          (
          <issue>1</issue>
          ),
          <volume>28</volume>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          17.
          <string-name>
            <surname>Parsons</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wooldridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amgoud</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>An analysis of formal inter-agent dialogues</article-title>
          .
          <source>In: Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1</source>
          . pp.
          <fpage>394</fpage>
          -
          <lpage>401</lpage>
          . ACM (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          18.
          <string-name>
            <surname>Rudin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead - Nature Machine Intelligence</article-title>
          .
          <source>Nat. Mach. Intell</source>
          .
          <volume>1</volume>
          ,
          <fpage>206</fpage>
          -
          <lpage>215</lpage>
          (May
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          19.
          <string-name>
            <surname>Shoham</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Agent-oriented programming</article-title>
          .
          <source>Artificial intelligence</source>
          <volume>60</volume>
          (
          <issue>1</issue>
          ),
          <fpage>51</fpage>
          -
          <lpage>92</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          20.
          <string-name>
            <surname>Wooldridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jennings</surname>
            ,
            <given-names>N.R.</given-names>
          </string-name>
          :
          <article-title>Agent theories, architectures, and languages: A survey</article-title>
          .
          <source>SpringerLink</source>
          pp.
          <fpage>1</fpage>
          -
          <lpage>39</lpage>
          (
          <year>Aug 1994</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>