<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Probabilistic Model for Personality Trait Focused Explainability</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammed N. Alharbi</string-name>
          <email>malharbi2016@fau.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shihong Huang</string-name>
          <email>shihong@fau.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Garlan</string-name>
          <email>garlan@cs.cmu.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer &amp; Electrical Engineering and Computer Science</institution>
          ,
          <institution>Florida Atlantic University</institution>
          ,
          <addr-line>Boca Raton, FL</addr-line>
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute for Software Research, School of Computer Science, Carnegie Mellon University</institution>
          ,
          <addr-line>Pittsburgh PA</addr-line>
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Explainability refers to the degree to which a software system's actions or solutions can be understood by humans. Giving humans the right amount of explanation at the right time is an important factor in maximizing effective collaboration between an adaptive system and humans during interaction. However, explanations come with costs, such as the time required to explain and the humans' response time. Hence it is not always clear whether explanations will improve overall system utility and, if so, how the system should effectively provide explanations to humans, particularly given that different humans may benefit from different amounts and frequencies of explanation. To provide a partial basis for making such decisions, this paper defines a formal framework that incorporates human personality traits as one of the important elements in guiding automated decision-making about the proper amount of explanation that should be given to the human to improve the overall system utility. Specifically, we use probabilistic model analysis to determine how to utilize explanations in an effective way. To illustrate our approach, Grid, a virtual human and system interaction game, is developed to represent scenarios for human-system collaboration and to demonstrate how a human's personality traits can be used as a factor for systems to consider in providing appropriate explanations.</p>
      </abstract>
      <kwd-group>
        <kwd>explainability</kwd>
        <kwd>human system co-adaptation</kwd>
        <kwd>human computer interaction (HCI)</kwd>
        <kwd>personality traits</kwd>
        <kwd>self-adaptive systems</kwd>
        <kwd>human-in-the-loop</kwd>
        <kwd>model checking</kwd>
        <kwd>probabilistic model</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        As systems become more autonomous and intelligent
through the incorporation of AI techniques and self-adaptive
approaches, it becomes increasingly important for those
systems to be able to “explain” themselves to their human
users and collaborators [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In particular, there are four
main purposes of explainability: (1) explain to justify: use
explanations to justify some results to the human, particularly
when decisions are made suddenly; (2) explain to control:
explanations can help not only to justify, but also to prevent
systems from going wrong; (3) explain to improve: improving
the systems continuously through human involvement; (4)
explain to discover: discovering and gathering new facts that
help us to learn and to gain knowledge. In the context of this
paper, explainability refers to the degree to which a software
system’s actions or solutions can be understood by humans,
and explainability is used to improve a system’s overall utility.
      </p>
      <p>
        While explanation is an increasingly desirable – even,
essential – capability of a system, it is not at all obvious when
and how explanation should be given, particularly since
explanation comes with a cost on human attention and delays
in system-human interaction and the fact that different humans
may need different kinds of explanation. To partially address
this problem this paper defines a formal framework, as
illustrated in Figure 1, for reasoning about the proper amount
of explanation that a system should provide to the human
based on their personality traits. Specifically, leveraging
research in the psychology of human personality, this
framework incorporates two basic personality traits
(Openness and Need for Cognition) as important elements in
a human model that can be used to guide a system in deciding
the appropriate amount of explanation that should be given to
the human in order to improve overall system utility. The
effects of given explanations (which are determined based on
personality traits of the human) affect human-system
co-adaptation, represented through the
Opportunity-Willingness-Capability (OWC) model, a commonly used model for
adaptive systems’ reasoning about human-in-the-loop
behavior [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We incorporate our approach into the MAPE-K
architecture [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to formally model and analyze human
involvement at different stages of system management and
adaptation. To illustrate our approach, Grid – a virtual human
and system interaction game – is developed to represent
scenarios for human-systems collaboration and to demonstrate
how a human’s personality traits can be used as a factor to
consider for systems in providing appropriate explanations.
      </p>
      <p>The organization of the paper is as follows: Section II
describes the research problem and goals, Section III
represents background information and related work, Section
IV shows methodology, Section V shows the Stochastic
Multi-player Games (SMG) model while Section VI shows
results and analysis, Section VII represents discussion and
future work and the last section focuses on the conclusion.</p>
      <p>II. PROBLEM STATEMENT AND RESEARCH GOALS</p>
      <sec id="sec-1-1">
        <title>A. Problem Statement</title>
        <p>
          A co-adaptation system is symbiotic human-in-the-loop
system where human-system cooperation is required in
achieving shared goals, and system and human actions
mutually impact each other’s behavior in accomplishing
coordinated tasks [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. In this context, providing effective
explanations to humans is an important factor in maximizing
the co-adaptation outcomes between the system and the
human [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Maximizing co-adaptation outcomes implies that
the relationship between system and humans has become a
partnership, or collaborative relationship, in which humans
and systems act semi-autonomously – in contrast to traditional
systems that wait for the human's inputs and commands to take
action [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
        </p>
        <p>
          Given that different humans may benefit from different
amounts and frequency of explanation, in this paper we argue
that adapting the explanation to the particular human through
knowledge of their personality traits can help the system in
determining what are appropriate explanations and, therefore,
maximize the benefits of co-adaptation. In particular, given
that there are tradeoffs in determining what kind of
explanations to give, it is important to be able to tailor the
explanations to the user [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Providing longer and more
frequent explanations may increase the effectiveness of
collaboration between the system and the human; however,
this comes at the cost of taking more time for humans to
understand the explanations and respond accordingly. Thus,
key questions that must be answered by a system are: What
should the contents of an explanation be, and how frequently
should they be given? Further, how can we formalize and
mechanize the decision process that a system uses in
determining the answers to these questions?
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>B. Research Goals</title>
        <p>In this paper we attempt to answer these questions by
defining a formal framework for reasoning about how a
self-adaptive system should provide explanations based on its
knowledge of a person’s personality traits. This framework
uses probabilistic analysis to decide how explanations should
be given, based on a formal human model that includes
psychologically relevant aspects of personality. Specifically,
we focus on answering the following research question: How
to use knowledge about an individual’s personality traits to
improve the overall system utility?</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>The main contributions of this paper are:</title>
      <p>• A formal framework that incorporates human
personality traits and guides adaptive
human-in-the-loop systems to decide how much explanation should
be given in order to improve system utility.</p>
      <p>• An evaluation system based on a collaborative game,
to simulate the effects of decision making under
various scenarios.</p>
    </sec>
    <sec id="sec-3">
      <title>III. BACKGROUND AND RELATED WORK</title>
      <p>This section introduces some background on personality
traits, the OWC (Opportunity-Willingness-Capability) model,
model checking of stochastic multi-player games (SMG), and
some state-of-the-art studies that focus on explainability
and human-system co-adaptation. Section IV will then
illustrate how this background and related work relate to
what we do in this research.</p>
      <sec id="sec-3-1">
        <title>A. Personality Traits</title>
        <p>
          Psychological studies have demonstrated that human
personality traits play a strong role in determining human
behavior [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Personalities can be characterized in terms of
traits that are relatively stable characteristics of a human that
influence our behavior across many situations. An individual's
personality is the combination of traits and patterns that
influence his/her behavior, thought, motivation, and emotion.
It drives individuals to consistently think, feel, and behave in
specific ways.
        </p>
        <p>
          There are, of course, many differences between
individuals; however, personality traits are one of the more
important measurable characteristics that can be used to
distinguish one person from another. In the psychological
literature the Big Five (also called the Five Factor) model of
personality is one of the most widely accepted personality
taxonomies. In the Big Five model, the five dimensions of
personality include extraversion, neuroticism, openness to
experience, agreeableness, and conscientiousness [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>
          Openness to experience is one of the personality traits that
is used to describe individual personality in the Five Factor
Model. Open people tend to be intellectually curious, creative
and imaginative. Open people have a high openness to
embrace new things, fresh ideas, and novel experiences [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          In addition to the Five Factor Model, the psychological
literature also identifies Need for Cognition as an important
distinguishing characteristic of human personality
[
          <xref ref-type="bibr" rid="ref9">9</xref>
          ][
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          Need for Cognition (NFC) is defined as the "individual’s
tendency to engage in and enjoy effortful cognitive tasks.”
People with higher NFC levels typically prefer more detail,
while those with low levels of NFC want to quickly
understand the big picture and avoid engaging with more
detail. Based on the NFC 10-item testing instrument [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ][
          <xref ref-type="bibr" rid="ref11">11</xref>
          ],
a score above 80 is generally considered to be High NFC (or
high personality trait), and below 50 is Low NFC.
        </p>
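        <p>As a minimal illustration of this thresholding (a sketch under our assumptions: the instrument yields a 0-100 score, and we label the unnamed 50-80 band "Medium"), the classification can be written as:</p>

```python
def classify_nfc(score: float) -> str:
    """Classify a Need for Cognition score from the 10-item instrument.

    Per the text: above 80 is High NFC, below 50 is Low NFC. Labeling
    the band in between "Medium" is our assumption.
    """
    if score > 80:
        return "High"
    if score < 50:
        return "Low"
    return "Medium"

print(classify_nfc(90))  # High
print(classify_nfc(43))  # Low
```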
        <p>As we elaborate later, we adopt these two basic personality
traits (Openness to Experience and Need for Cognition) as
important elements in a human model that can be used to guide
a system in deciding the proper amount of explanation that
should be given to the human to improve overall system
utility.</p>
        <p>B. OWC (Opportunity-Willingness-Capability) Model</p>
        <p>
          Prior research in adaptive systems has investigated various
models of humans that can be used at run time to effectively
characterize humans when deciding how best to incorporate
them into a co-adaptive system. One of the more prominent
models is the OWC (Opportunity-Willingness-Capability)
model [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          OWC categorizes human attributes into: (1) Opportunity:
indicates whether a human is available to participate in a
cooperative task with the system (such as whether the human
is physically present). (2) Willingness: identifies the human’s
inclination to perform the task (affected by cognitive load,
human attention, stress level, and motivation). (3) Capability:
defines the human’s abilities and skills that are necessary to
execute the task successfully (affected by level of experience
or training, knowledge of the task, and cognitive or physical
skills) [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          This model has been used effectively in a number of
papers to determine, for example, whether to involve the user
in a task or to carry it out automatically [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ][
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], whether to
proactively gain the user’s attention [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], and when to provide
an explanation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. As we detail later in this paper we use
OWC to capture the co-adaptation attributes of the human (see
Section IV. B).
        </p>
        <p>C. Model Checking Stochastic Multi-player Games (SMG)
and PRISM</p>
        <p>
          Probabilistic model checking is a technique used to
analyze systems that exhibit stochastic behavior.
Stochastic Multi-player Games (SMG) is a form of
probabilistic modelling that allows us to reason quantitatively
about reward-based properties and probability such as time,
usage, and resources in a multi-agent system [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ][
          <xref ref-type="bibr" rid="ref16">16</xref>
          ][
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
Our approach is to use SMG models to reason about the
appropriate amount of explanation that should be given to the
humans based on their personality traits where we model the
system and humans as (cooperating) players in a game.
        </p>
        <p>
          PRISM is “a probabilistic model checker, a tool for formal
modelling and analysis of systems that exhibit random or
probabilistic behavior” [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. PRISM-games is an extension of
PRISM that is used to analyze probabilistic systems where
players can incorporate competitive or collaborative behavior,
modelled as stochastic multi-player games (SMG) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
Analyzing systems using PRISM has been carried out in
variety of application domains, including: security protocols,
communication and multimedia protocols, randomized
distributed algorithms, biological systems and many others.
PRISM can analyze a wide range of quantitative properties of
stochastic models automatically (e.g., "what is the probability
of a failure causing the system to shut down within 4 hours?”).
PRISM further supports the specification and analysis of
properties based on costs and rewards. These allow it to
reason, not only about the probability that a model behaves in
a certain way, but about a wide range of quantitative measures
related to the behavior of the model (e.g., "expected number
of lost messages", "expected time", or "expected power
consumption").
        </p>
        <p>In this paper we use PRISM to dynamically determine
appropriate levels of explanation to maximize expected utility
(expressed as a reward).</p>
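        <p>A hand-rolled analogue of such a reward-maximizing query, in plain Python rather than PRISM syntax, is choosing the explanation tactic with the highest expected reward; the probabilities and reward values below are invented for illustration only:</p>

```python
# Pick the explanation tactic with the highest expected reward, mimicking
# the kind of reward-maximizing property one would check in PRISM-games.
# NOT PRISM syntax; all numbers here are illustrative assumptions.
def expected_reward(outcomes):
    """outcomes: list of (probability, reward) pairs for one tactic."""
    return sum(p * r for p, r in outcomes)

tactics = {
    # tactic: [(probability of outcome, reward of outcome), ...]
    "lExp": [(0.6, 100), (0.4, 20)],  # short explanation: cheap but riskier
    "mExp": [(0.9, 90), (0.1, 10)],   # long explanation: costly but safer
}

best = max(tactics, key=lambda t: expected_reward(tactics[t]))
print(best)  # mExp
```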
        <p>D. Human-in-the-Loop Self-Adaptation and Explainability</p>
        <p>
          Human-system integration or human-system
co-adaptation is advancing the field of human-system
interaction. Integration here means that the relationship
between system and humans has become a partnership or
symbiotic relationship in which humans (i.e., users) and
systems act with autonomy instead of the system waiting for
the user's inputs and commands to take an action.
Self-adaptation refers to a process in which an interactive system
co-adapts its behavior to a human based on its internal model
of the human, dynamic information acquired about the human,
the context of use and its surrounding environment [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ][
          <xref ref-type="bibr" rid="ref5">5</xref>
          ][
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
        </p>
        <p>
          Several related works have studied explainability focused
on a human-system co-adaptation perspective. In [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] the
authors propose a method that generates verbal explanations
of multi-objective probabilistic planning. This method
explains why a particular behavior is chosen on the basis of
the optimization objectives. Their explainability method relies
on describing the values of the objective of a generated
behavior and, therefore, explaining tradeoffs that were made
to reconcile competing objectives.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], the authors define a formal framework to reason
about explainability of co-adaptive system behaviors and the
situations under which they are warranted. Specifically, they
characterized explainability in terms of explainability cost,
effect, and content. They propose a dynamic adaptation
approach that uses a probabilistic reasoning technique, similar
to ours, in order to determine when the explanations should be
used for the purpose of improving system utility.
        </p>
        <p>
          In another related work [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], the authors use a similar
framework of [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] to reason about explainability of adaptive
system behaviors and the conditions under which they are
warranted. They characterize explainability in terms of the
effects on a human operator’s ability to engage in co-adaptive
actions effectively. They present a decision-making
mechanism to plan in self-adaptation that provides a
probabilistic reasoning tool to determine when explanations
should be used in an adaptation.
        </p>
        <p>While this prior work shares with our research the goal of
reasoning about explanation in the context of human-system
co-adaptation, and also uses probabilistic reasoning to account
for inherent uncertainties in human models, none of these
studies takes into consideration specific personality traits of
humans – the main focus of our work.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. METHODOLOGY</title>
      <p>
        In this section, we illustrate how we use explanation as a
tactic (or action) that systems can use to improve the
efficiency and effectiveness of human-system co-adaptation
based on human personality traits. We describe also how we
utilize a probabilistic planner [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] to determine the optimal
amount of explanation according to those personality traits.
      </p>
      <sec id="sec-4-1">
        <title>A. Selection of Personality Traits</title>
        <p>An important question is which personality traits to
consider with respect to explanation. As noted earlier, the
psychological literature has classified a variety of important
distinguishing characteristics for human personality.</p>
        <p>
          However, not all traits are relevant to explainability. In this
work we have adopted two personality traits: Need for
Cognition (NFC) and Openness to experience, since there is a
direct relationship between NFC and explainability and
between Openness and capability in OWC [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ][
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] (see
Section IV. B).
        </p>
        <p>
          We use the “Openness to experience” trait as one factor
that affects the human’s capability to continue and complete a
task, since open people tend to be intellectually curious and
have a high level of capability to do creative tasks [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. We
consider the Openness level as an important human factor
since an individual’s Openness level reflects their capability
to engage in cognitive tasks.
        </p>
        <p>
          In our work we assume that the human’s personality traits
are known (for example, by using the NFC 10-item testing
instrument in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ][
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]) and do not change over the time horizon
of a particular set of interactions with the system. While the
traits are assumed to be known, there does, however, remain
some uncertainty about the impact of the amount of
explanation that should be provided to the human, which we
incorporate into our reasoning framework. We will further
assume for concreteness that both selected personality traits
are relevant, and that their weights are equally important
(although the relative importance can be adjusted in the
model).
        </p>
        <p>B. Incorporating the OWC (Opportunity-Willingness-Capability) Model</p>
        <p>We use the OWC model (described in Section III.B) to
capture the co-adaptation attributes of the human. In this
paper, the following indicators show the connection between
our model and the OWC model and how the OWC is
incorporated in the context of the collaborative Grid game: (1)
time and location represent the set of variables of the
Opportunity category. Is the player located at the correct
location? Has the timer expired? (2) Human satisfaction
represents the Willingness category. Is the human satisfied
with the given explanation? That category is applied through
the playerFeedback (pF) tactic. (3) Human performance
represents the Capability category. The Capability category
identifies the ability of the human to complete the Grid task.</p>
        <p>
          Giving an explanation increases the capability of the human
to successfully carry out that particular task [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
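        <p>A schematic sketch of how these three OWC indicators might gate the interaction follows; the state-field and predicate names are ours, introduced only for illustration:</p>

```python
from dataclasses import dataclass

@dataclass
class HumanState:
    at_correct_location: bool          # Opportunity: is the player where expected?
    timer_expired: bool                # Opportunity: has the timer run out?
    satisfied_with_explanation: bool   # Willingness: via playerFeedback (pF)
    performance_ok: bool               # Capability: can the human complete the task?

def can_cooperate(h: HumanState) -> bool:
    """All three OWC categories must hold for effective co-adaptation."""
    opportunity = h.at_correct_location and not h.timer_expired
    willingness = h.satisfied_with_explanation
    capability = h.performance_ok
    return opportunity and willingness and capability

print(can_cooperate(HumanState(True, False, True, True)))  # True
```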
        <p>C. Utilizing Model Checking Stochastic Multi-player Games
(SMG)</p>
        <p>The probabilistic model checker (PRISM-games) is
utilized to formally model our approach. PRISM-games is
particularly suitable for our study because it helps us to reason
quantitatively under unpredictability and uncertainty about
“how much” explanations should be given. The uncertainty
(or stochasticity) that is relevant in this context is about the
proper amount of explanations and the impact of different
amounts of explanations that should be given to the human.</p>
        <p>We model the system (the Grid game described in Section
IV.D below) as a turn-based SMG, meaning that in each state of
the modeled system exactly one player can choose an action,
and the outcome of that action is probabilistic.</p>
        <p>Players in a SMG may cooperate to achieve a common goal,
or compete to accomplish different goals. In our examples, we
model two players1, the human and the system, and we assume
that they share a common goal.</p>
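        <p>The turn-taking structure can be caricatured as follows; this is a toy sketch, not the PRISM-games model itself, and the move-success probability is an invented value:</p>

```python
import random

def play_round(rng, p_follow=0.8):
    """One turn of each player: the system issues an instruction (its
    action choice), then the human's turn resolves probabilistically --
    with probability p_follow the instruction is followed correctly.
    p_follow = 0.8 is an illustrative value, not from the paper."""
    system_action = "lExp"               # system player's chosen action
    followed = rng.random() < p_follow   # stochastic outcome of human's turn
    return system_action, followed

rng = random.Random(0)
rounds = [play_round(rng) for _ in range(1000)]
success_rate = sum(1 for _, f in rounds if f) / len(rounds)
print(round(success_rate, 2))  # roughly 0.8
```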
        <p>
          We use rPATL, a probabilistic temporal logic, to express
properties of stochastic multi-player games quantitatively.
rPATL helps us to reason about the collective ability of a
group of players to achieve a goal relating to the probability
of an occurring event [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
        <p>1 Note that multi-player here (i.e., two players) does not mean that the Grid game is a multi-user game. The concept
“multi-player” in PRISM refers to multiple agents, such as system, human, or environment. In our model, the system and the
human are the only two players and they work cooperatively (taking turns) to achieve the best possible outcome.</p>
        <p>To illustrate our approach, we defined the Grid game, a
virtual game shown in Figure 2, which embodies a
representative scenario for human-system co-adaptation.</p>
        <p>In the Grid game the system S instructs a player P verbally
to move on a 5×5 grid from the top right corner (start) to the
bottom left corner (end). The game is designed to rely on
explanations, at various levels of detail, to instruct the user on
what tasks to perform and how to perform them.</p>
        <sec id="sec-4-1-1">
          <title>Game objectives:</title>
          <p>• Follow the system instructions through a certain path
within a certain maximum amount of time (60
seconds).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Game objectives (continued)</title>
      <p>• Minimize the time t to complete the task.</p>
      <p>• Traverse an optimal number of blocks to complete the
end-to-end task, avoiding obstacles.</p>
      <sec id="sec-5-1">
        <title>Game rules:</title>
        <p>• The player can move either horizontally or vertically.</p>
        <p>• Game score (100 points): points are deducted for
traversing extra blocks or moving into or through
obstacle squares (e.g., in Figure 2 there are four
obstacles: a house, a traffic light, a mountain, and a
tree).</p>
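        <p>Under those rules, scoring might be sketched as follows; the per-infraction penalty magnitudes are assumptions of ours, since the text only states that points are deducted from the 100-point budget:</p>

```python
def grid_score(extra_blocks: int, obstacle_moves: int,
               extra_penalty: int = 5, obstacle_penalty: int = 10) -> int:
    """Start from the 100-point budget and deduct for extra blocks
    traversed and for moving into or through obstacle squares.
    The penalty values are illustrative assumptions, not from the paper."""
    score = 100 - extra_penalty * extra_blocks - obstacle_penalty * obstacle_moves
    return max(score, 0)

print(grid_score(0, 0))  # 100: optimal path, no obstacle squares
print(grid_score(2, 1))  # 80 under the assumed penalties
```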
        <p>The Grid game can involve the use of five tactics for
interacting with the player, as shown in Table 1. The system
provides two levels of explanation to command the human to
move from one point to another. The choice of level of
explanation is based on run-time calculation and
explanation generation based on the probabilistic model.</p>
        <p>The five tactics summarized in Table 1 are:</p>
        <p>• lessExplain (lExp): commands the human to carry out an
action with an abbreviated instruction (e.g., “Go 2 blocks
left”, “Move south 4 blocks”).</p>
        <p>• moreExplain (mExp): the system further explains
information when the human is confused and loses track
(e.g., “You will go between a house and traffic light”,
“You go straight, and you see a car on your left side”).</p>
        <p>• Check (Chk): the human requests the system to confirm
information that they are not entirely sure about (e.g.,
“Should I continue above the tree?”, “North?”).</p>
        <p>• Confirm (conf): the human confirms information and
follows the instructions (e.g., “Yeah”, “Thanks”, “Okay”).</p>
        <p>• playerFeedback (pF): human feedback is collected about
the user’s satisfaction with each given explanation:
Helpful, Not helpful, or Neutral.</p>
        <p>In this case “less explanation (lExp)” provides an
abbreviated command, while “more explanation (mExp)” provides a
command that contains additional details. The human may request
clarification about a given explanation if they are not entirely
sure about it (Chk), or can confirm the information and follow
the instructions (conf). The human also gives feedback (pF)
about the given explanation as to whether it was (a) helpful,
(b) not helpful, or (c) neutral. (This supports explanation
assessment in the framework – Figure 1.)</p>
        <sec id="sec-5-1-1">
          <title>1) Utility Attributes</title>
          <p>The four utility attributes of the game are: RequiredTime
(t), Blocks (B), LengthOfExplanations (xL), and
ExplainEfficiency (xE).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Utility attributes (continued)</title>
      <p>B and t are used for calculating the
game score, and xL and xE are used as explainability
attributes. Game score(s) depend on the time elapsed for
completing the game (t), associated with the optimal number
of blocks (B) that the player is supposed to end the task
with:</p>
      <p>• RequiredTime (t): the total elapsed time for
completing the game.</p>
      <p>• Blocks (B): the number of blocks traversed to
complete the task.</p>
      <p>• LengthOfExplanations (xL): the amount of delay (or
time) required to explain.</p>
      <p>• ExplainEfficiency (xE): a measurement that
determines how happy the player is with the given
explanations. xE is associated with the playerFeedback
(pF) tactic, which can be one of the following values:
Helpful, Not helpful, or Neutral.</p>
    </sec>
    <sec id="sec-7">
      <title>2) Tactics Cost/Benefit and Utility Dimensions</title>
      <p>Different tactics cause an increase in
Time (three seconds for lExp, Chk, and conf; six seconds for
mExp). The upward arrow ↑ or downward arrow ↓ reflects utility
increments and decrements, respectively. For example, the
lExp tactic increases both t and xL by three seconds, which is
associated with a smaller cost. Human feedback is
collected about the user’s satisfaction with the given
explanation (lExp), which can be:</p>
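      <p>The stated time costs can be tabulated directly; the mapping below encodes only what the text gives (three seconds for lExp, Chk, and conf; six for mExp), while assigning zero to pF is our assumption since no duration is stated for it:</p>

```python
# Time cost in seconds that each tactic adds to t, per the text; the zero
# cost for pF is our assumption, as no duration is stated for it.
TACTIC_TIME_COST = {"lExp": 3, "Chk": 3, "conf": 3, "mExp": 6, "pF": 0}

def elapsed_time(tactic_trace):
    """Accumulate the Time utility dimension t over a trace of tactics."""
    return sum(TACTIC_TIME_COST[t] for t in tactic_trace)

print(elapsed_time(["lExp", "conf", "mExp", "Chk"]))  # 3 + 3 + 6 + 3 = 15
```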
    </sec>
    <sec id="sec-8">
      <title>Possible feedback values</title>
      <p>a) Helpful (H) reflects utility increments (↑),</p>
      <p>b) Not helpful (NH) reflects utility decrements (↓),</p>
      <p>c) Neutral (N) reflects neither utility increments nor
decrements (-).</p>
      <sec id="sec-8-1">
        <title>3) Utility Functions</title>
        <p>To compare different explainability tactics (i.e., lengths of
explanation), we use probabilistic temporal logic with
rewards, rPATL, which enables us to analyze the utilities that
the system can influence.</p>
        <p>
          rPATL
(described in Section IV.C) is used to reason about the ability
of a group of players (system and human) to collectively
achieve a specific goal [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>In the formal model we define formulas that represent the
accrued
utility
(The</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Scores Function</title>
      <p>The Scores function, ∪S, as shown in Function (1), maps high scores to high utility. We define ∪S as the maximum real immediate utility that the human can achieve along the whole task. It is derived by dividing the number of blocks (B) by the maximum time allowed for the task (tmax = 60), where B must be greater than or equal to Bopt (the optimal number of blocks that the player is supposed to complete the task with):
∪S(B) = (1 − B/tmax) × 100, where B ≥ Bopt        (1)</p>
    </sec>
    <sec id="sec-10">
      <title>ExplainEfficiency Function</title>
      <p>The ExplainEfficiency function, ∪xE, as shown in Function (2), maps higher levels of ExplainEfficiency (xE) to higher utility. It is derived by dividing the accumulated player feedback (∑pF) by the total number of feedbacks (pfMAX), where pF ∈ [1, 0, −1] represents Helpful, Neutral, and Not helpful, respectively:
∪xE(pF) ≈ (∑pF / pfMAX) × 100        (2)</p>
      <p>Fig. 3 The strategy we use to model the SMG: the proper amount of explanation is determined based on the three personality levels of the human (represented in light blue). The two explanation amounts (less or more) are determined by using Function (3). Human feedback is collected that can be Helpful, Neutral, or Not helpful (represented in yellow). The human confirms information, meaning he moved successfully to the next point (conf), or checks/requests the system to clarify information that he is not entirely sure about (Chk).</p>
      <p>For example, if a human has 75 Openness and 90 NFC, the combined human traits value is 0.82, which means he has high personality traits (by using Function (3)). That means the system will explain less 18% of the time (i.e., lExp) and explain more 82% of the time (i.e., mExp) during the task. As another example, suppose a human with low personality traits has 43 Openness and 49 NFC. The combined human traits value is 0.46 (using Function (3)). That means the system will explain less 54% of the time (i.e., lExp) and explain more 46% of the time (i.e., mExp) while playing the Grid game.</p>
      <p>In the example scenario, the system took 27 seconds for explanations (xL). At the end of the task, the score of the player is 75 (by using Function (1), where B = 15 and tmax = 60), and the ExplainEfficiency (xE) is 43 (by using Function (2), where ∑pF is three and pfMAX is seven, meaning seven feedbacks were collected).</p>
      <p>Both personality trait variables (Openness and NFC) are initialized with constants (as inputs) that represent the human’s traits. Personality traits are directly mapped to the level of explanation (the amount), and are used to calculate the probability of getting explanations in that amount. Function (3) shows the combined personality traits, which will be 0 in case both traits are 0, or 1 in case the human has the highest personality trait levels:
human_traits = (Openness + NFC) / (Opennessmax + NFCmax)        (3)</p>
      <p>
Opennessmax and NFCmax are 100, representing the
highest personality trait levels. The values of the
personality traits are determined using the NFC
10-item testing instrument in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ][
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which produces scores
between 0 and 100.
      </p>
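      <p>To make the three utility functions concrete, the sketch below reproduces Functions (1)-(3) and the worked examples of this section in Python. This is our own illustration, not part of the PRISM model; the function and variable names are our invention.</p>
      <p>```python
# Hypothetical Python rendering of Functions (1)-(3); names are ours.

def scores(B, t_max=60):
    """Function (1): maps the blocks traversed (B) to a 0-100 score."""
    return (1 - B / t_max) * 100

def explain_efficiency(sum_pF, pf_max):
    """Function (2): accumulated feedback over the feedback count, as a percentage."""
    return (sum_pF / pf_max) * 100

def combined_traits(openness, nfc, openness_max=100, nfc_max=100):
    """Function (3): combined personality traits, ranging from 0 to 1."""
    return (openness + nfc) / (openness_max + nfc_max)

# Worked examples from the text:
high = combined_traits(75, 90)   # high-traits human
low = combined_traits(43, 49)    # low-traits human
score = scores(15)               # B = 15, tmax = 60
xE = explain_efficiency(3, 7)    # seven feedbacks summing to three
```</p>
      <p>Running these reproduces the numbers used in the examples: combined traits of 0.82 (rounded from 0.825) and 0.46, a score of 75, and an ExplainEfficiency of about 43.</p>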
      <p>V. THE STOCHASTIC MULTI-PLAYER GAMES (SMG) MODEL</p>
      <p>We model the Stochastic Multi-player Game (SMG) with
two players, where the players try to collaboratively
maximize accumulated reward(s): (1) Player SYS specifies
the actions that are controlled by the system (i.e., it represents
the Grid game). (2) Player HUMAN specifies the actions
belonging to the human (i.e., it represents the game player).</p>
      <p>
        The models represent the behavior of a set of agents (or
“players”) that take turns making moves, where the choice of
move is specified probabilistically or non-deterministically. A
game solver for such a system (such as PRISM-games [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ])
determines an optimal strategy for the players by resolving the
non-deterministic transitions in such a way that the expected
reward for each player is maximized (assuming rational play
by each). Figure 3 shows the strategy we use to model the
SMG. The proper amount of explanation is determined based
on the three personality levels of the human (i.e., Low,
Average, or High personality levels (represented in light
blue)). The two explanation amounts (less or more) are
determined by using Function (3) (described in the previous
section). Human feedback is collected in the form
Helpful, Neutral, or Not Helpful (represented in yellow). The
human confirms information, meaning he moved
successfully to the next point (conf), or checks/requests the
system to clarify information that he is not entirely sure about
(Chk).
      </p>
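      <p>The probabilistic choice at the heart of this strategy can be sketched in a few lines of Python. This is a simplified stand-in of our own for the PRISM-games model (the function names and the simulation loop are our invention): on each system turn, the strategy explains more with probability equal to the combined traits, and less otherwise.</p>
      <p>```python
import random

def choose_explanation(human_traits, rng):
    """One system turn: mExp with probability human_traits, else lExp."""
    return "mExp" if human_traits > rng.random() else "lExp"

def simulate(human_traits, turns=10000, seed=0):
    """Count how often each explanation amount is chosen over many turns."""
    rng = random.Random(seed)
    counts = {"lExp": 0, "mExp": 0}
    for _ in range(turns):
        counts[choose_explanation(human_traits, rng)] += 1
    return counts
```</p>
      <p>For a human with combined traits of 0.82, roughly 82% of the sampled turns come out as mExp, matching the example above.</p>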
      <p>The Stochastic Multi-player Games (SMG) model consists
of the following four parts:</p>
      <sec id="sec-10-1">
        <title>4) Example Scenario</title>
        <p>Scenario dialogue, with the tactic executed at each step:
S: Can you go 2 blocks down? (lExp)
H: Yeah (conf)
S: Then go 2 blocks left. (lExp)
H: Could you repeat that? (Chk)
S: Go west. You will go between a house and a traffic light. (mExp)
H: Okay (conf)
S: Go after that 2 blocks up. (lExp)
H: (The human is on the wrong track)
S: No, not south. You go north.
H: Okay
………
S: Go 2 blocks left.
S: Go south 4 blocks.
H: Okay, thanks a lot.</p>
        <p>Figure 2 and Table III show an example dialogue of a scenario between the system (S) and a human (H).</p>
        <p>The human spent 42 seconds (t) and used 15 blocks (B) to finish the task. However, the number of blocks B that the player is supposed to end the task with is 12 (Bopt).</p>
      </sec>
      <sec id="sec-10-2">
        <title>A. Player Definition</title>
        <p>Player definition includes the declaration of the two
players in the SMG and different modules that each player has
control of. The two players in our game are shown in Listing
1. Player SYS (lines 1-2) specifies the actions that are
controlled by the system (i.e., it represents the Grid game).
Player HUMAN (lines 3-4) specifies the actions belonging to
the human (i.e., it represents the game player). Our Grid game
is played in turns by the two players SYS and HUMAN. Turn
(line 5) is a global variable used as a controller to take turns
between different players, ensuring that only one player can
take an action at each state of the model execution. Tactics are
executed sequentially in our model.</p>
        <p>1. player SYS
     Game, [lExpLow], [lExpAvg], [lExpHigh],
     [mExpLow], [mExpAvg], [mExpHigh]
2. endplayer
3. player HUMAN
     Play, [conf], [Chk]
4. endplayer
5. global turn:[SYS..HUMAN] init SYS;
6. const SYS=1; const HUMAN=2;</p>
        <p>Listing. 1 Player definition includes the declaration of the two players in
the SMG and different modules that each player has control of</p>
      </sec>
      <sec id="sec-10-3">
        <title>B. Game Model</title>
        <p>Player SYS has control of the Game model, illustrated in
Listing 2. Opportunity elements are used as execution
conditions of different tactics such as: the human is at the
correct location ((x= 1)&amp;(y=1)) and is not involved in a crash,
and the time has not expired (t&lt;60). The Game module is
parameterized by the variables (lines 1-2), which indicate the
state of tactic execution, where false means this tactic is not
in use (i.e., lExp_state, and mExp_state).</p>
        <p>During the system’s turn, the system executes these
tactics sequentially: lExp (lines 5-13) and mExp (lines
15-23). For the sake of clarity, we will describe only the
lExpLow tactic to illustrate how tactic execution is modeled.
The other explainability tactics follow the same structure. The
system instructs the human with low personality traits
through executing the command labeled as lExpLow (line 5).
This tactic executes only if:
• It is the turn of the SYS.
• The human traits are low (&lt;0.50).
• The player position is on a certain block (x1,y1).
• The end time of the task has not been reached yet
(t&lt;60).</p>
        <p>If the guard is satisfied, the system will explain more by flagging the mExp_state tactic true with probability human_traits (line 6). Otherwise, the system will explain less by flagging the lExp_state tactic true with probability 1-human_traits (line 7), and the system will:
• Command the player to move to the position (x2,y2).
• Increase the time by 3 seconds (xL'=xL+3)&amp;(t'=t+3).
• Flag the lExp tactic as true (lExp_state'=true).
• Update the value of the variable turn, changing control to the human player (turn'=HUMAN).</p>
        <p>Similarly, the system instructs the human with average personality traits by executing the command labeled lExpAvg (lines 8-10), and the human with high personality traits by executing the command labeled lExpHigh (lines 11-13).</p>
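        <p>Paraphrased in Python (a hypothetical sketch of ours using a dictionary for the model state; the authoritative version is the PRISM code in Listing 2), the lExpLow command reads:</p>
        <p>```python
import random

def lexp_low(state, rng):
    """Sketch of the [lExpLow] command: check the guard, then branch probabilistically."""
    guard = (state["turn"] == "SYS"
             and 0.5 > state["human_traits"]        # low personality traits
             and state["x1"] == 5 and state["y1"] == 5
             and 60 > state["t"])                   # time not expired
    if not guard:
        return state
    if state["human_traits"] > rng.random():
        state["mExp_state"] = True                  # escalate: explain more
    else:
        state.update(x2=5, y2=3,                    # instruct the next position
                     xL=state["xL"] + 3,            # 3 seconds of explanation
                     t=state["t"] + 3,
                     lExp_state=True,
                     turn="HUMAN")                  # hand the turn to the human
    return state
```</p>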
        <p>The human wins by executing the command labeled win (line 25). That means the human (turn=SYS) has arrived at the bottom-left corner ((x1=1)&amp;(y1=1)) within the time limit (t&lt;60). However, the human loses the game by executing the command labeled lose (line 26) when the end time of the task has been reached (t=60).</p>
        <p>1. global lExp_state: bool init false;
2. global mExp_state: bool init false;
3. …
4. module Game
5. [lExpLow] (turn=SYS)&amp;(human_traits&lt;.5)&amp;(x1=5)&amp;(y1=5)&amp;(t&lt;60)
6.   -&gt; human_traits:(mExp_state'=true)
7.   + 1-human_traits:(x2'=5)&amp;(y2'=3)&amp;(xL'=xL+3)&amp;(t'=t+3)&amp;(lExp_state'=true)&amp;(turn'=HUMAN);
8. [lExpAvg] (turn=SYS)&amp;(x1=5)&amp;(y1=5)&amp;(t&lt;60)
9.   -&gt; 0.5:(x2'=5)&amp;(y2'=3)&amp;(xL'=xL+3)&amp;(t'=t+3)&amp;(lExp_state'=true)&amp;(turn'=HUMAN)
10.  + 0.5:(mExp_state'=true);
11. [lExpHigh] (turn=SYS)&amp;(human_traits&gt;.8)&amp;(x1=5)&amp;(y1=5)&amp;(t&lt;60)
12.   -&gt; human_traits:(mExp_state'=true)
13.   + 1-human_traits:(x2'=5)&amp;(y2'=3)&amp;(xL'=xL+3)&amp;(t'=t+3)&amp;(lExp_state'=true)&amp;(turn'=HUMAN);
14. …
15. [mExpHigh] (turn=SYS)&amp;(human_traits&gt;.8)&amp;(conf_state=false)&amp;(Chk_state=true)&amp;(t&lt;60)
16.   -&gt; human_traits:(mExp_state'=true)&amp;(xL'=xL+6)&amp;(t'=t+6)&amp;(Chk_state'=false)&amp;(turn'=HUMAN)
17.   + 1-human_traits:(lExp_state'=true);
18. [mExpAvg] (turn=SYS)&amp;(conf_state=false)&amp;(Chk_state=true)&amp;(t&lt;60)
19.   -&gt; 0.5:(mExp_state'=true)&amp;(xL'=xL+6)&amp;(t'=t+6)&amp;(Chk_state'=false)&amp;(turn'=HUMAN)
20.   + 0.5:(lExp_state'=true);
21. [mExpLow] (turn=SYS)&amp;(human_traits&lt;.5)&amp;(conf_state=false)&amp;(Chk_state=true)&amp;(t&lt;60)
22.   -&gt; human_traits:(lExp_state'=true)
23.   + 1-human_traits:(mExp_state'=true)&amp;(xL'=xL+6)&amp;(t'=t+6)&amp;(Chk_state'=false)&amp;(turn'=HUMAN);
24. …
25. [win] (turn=SYS)&amp;(x1=1)&amp;(y1=1)&amp;(t&lt;60) -&gt; (win'=true)&amp;(turn'=0);
26. [lose] ((turn=SYS)|(turn=HUMAN))&amp;(t=60) -&gt; (win'=false)&amp;(loser'=true)&amp;(turn'=0);
27. endmodule
28. …</p>
        <p>Listing. 2 Game Model</p>
      </sec>
      <sec id="sec-10-4">
        <title>C. Play Model</title>
        <p>Player HUMAN has control of the Play model, illustrated
in Listing 3. The encodings of the HUMAN module are similar
to those of the SYS module. The Play module is
parameterized by variables (lines 1-2), which indicate the state
of tactic execution, where false means this tactic is not in use
(e.g., Chk_state, and conf_state). Personality Traits are
initialized with values that represent the human’s personality
(lines 3-5).</p>
        <p>During the human’s turn, the human can execute one of these tactics: conf (line 8) and Chk (line 10). We explain only the conf tactic to illustrate how tactic execution is modeled. The human confirms (conf) and follows the system instructions (i.e., the human moves successfully from the first point to the second) by executing the command labeled conf. This tactic executes only if:
• It is the turn of the HUMAN.
• The system instructs the player to move to the position (x2,y2).
• The end time of the task has not been reached yet (t&lt;60).</p>
        <p>1. global Chk_state: bool init false;
2. global conf_state: bool init false;
3. const int INIT_OPN; const int INIT_NFC;
4. global human_Open: [1..100] init INIT_OPN;
5. global human_NFC: [1..100] init INIT_NFC;
6. …
7. module Play
8. [conf] (turn=HUMAN)&amp;(x2=5)&amp;(y2=3)&amp;(t&lt;60)
   -&gt; (x1'=5)&amp;(y1'=3)&amp;(t'=t+3)&amp;(B'=B+2)&amp;(pF'=pF+1)&amp;(pfMAX'=pfMAX+1)&amp;(conf_state'=true)&amp;(lExp_state'=false)&amp;(turn'=SYS);
9. …
10. [Chk] (turn=HUMAN)&amp;(conf_state=false)&amp;(x1=5)&amp;(y1=3)&amp;(t&lt;60)
   -&gt; (Chk_state'=true)&amp;(t'=t+3)&amp;(pF'=pF-1)&amp;(pfMAX'=pfMAX+1)&amp;(turn'=SYS);
11. [wrong] (turn=HUMAN)&amp;(x2=3)&amp;(y2=5)&amp;(loser=false)&amp;(t&lt;60)
   -&gt; (x1'=3)&amp;(y1'=1)&amp;(t'=t+3)&amp;(B'=B+2)&amp;(pF'=pF-1)&amp;(pfMAX'=pfMAX+1)&amp;(conf_state'=false)&amp;(turn'=SYS);
12. [crash] (turn=HUMAN)&amp;((x1=obj1x &amp; y1=obj1y)|(x1=obj2x &amp; y1=obj2y)|(x1=obj3x &amp; y1=obj3y)|(x1=obj4x &amp; y1=obj4y))
   -&gt; (turn'=SYS);
13. endmodule
14. …</p>
        <p>Listing. 3 Play Model</p>
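        <p>The feedback bookkeeping across Listing 3 can be summarized as follows (our own hypothetical sketch, not part of the model): pF accumulates +1 for conf and -1 for Chk and wrong, while pfMAX counts every feedback.</p>
        <p>```python
def apply_feedback(state, tactic):
    """Update the player-feedback counters as in Listing 3."""
    delta = {"conf": 1, "Chk": -1, "wrong": -1}[tactic]
    state["pF"] += delta       # accumulated feedback value
    state["pfMAX"] += 1        # every feedback, of any kind, is counted
    return state

# One mix of feedbacks consistent with the example scenario's totals
state = {"pF": 0, "pfMAX": 0}
for tactic in ["conf", "conf", "conf", "conf", "conf", "Chk", "wrong"]:
    apply_feedback(state, tactic)
```</p>
        <p>This yields pF = 3 over pfMAX = 7 feedbacks, the values plugged into Function (2) in the example.</p>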
      </sec>
    </sec>
    <sec id="sec-11">
      <title>Effects of the conf Tactic</title>
      <p>If the guard is satisfied, the player:
• Moves to the position (x1,y1).
• Increases the time by three seconds (t'=t+3).
• Increases the number of blocks by two (B'=B+2).
• Gives player feedback (pF'=pF+1, meaning the explanation was helpful).
• Increases the player feedback counter pfMAX by 1 (pfMAX'=pfMAX+1).
• Flags the conf tactic true (conf_state'=true).</p>
    </sec>
    <sec id="sec-12">
      <title>The wrong and crash Tactics</title>
      <p>Moreover, the tactic wrong (line 11) will be executed when the human moves in the wrong direction, and the tactic crash (line 12) will be executed when the human moves to one of the obstacle squares (the house, a traffic light, a mountain, or a tree in Figure 2).</p>
      <sec id="sec-12-1">
        <title>D. Utility Profile and Reward Structure</title>
        <p>Utility functions are described in Section IV.D and
illustrated in Listing 4. Formulas and reward structures are
used to encode the utility functions that allow us to quantify
the utilities of different task states.</p>
        <p>The Scores function, ∪S, as in lines (1-2), represents the encoded Function (1) as described in Section IV.D. The ExplainEfficiency function, ∪xE, as in lines (3-4), represents the encoded Function (2) described in Section IV.D. Line 5 shows the encoded combined-traits Function (3).</p>
        <p>1. rewards "Scores"
   [win] true:(1-(B/tMax))*100;
   [lose] true:0;
   [crash] true: -5;
2. endrewards
3. rewards "ExplainEfficiency"
   [win] true:(pF/pfMAX)*100;
   [lose] true:(pF/pfMAX)*100;
4. endrewards
5. formula human_traits = (human_Open+human_NFC)/(Max_Open+Max_NFC);</p>
        <p>Listing 4. Utility profile and reward structure: formulas and reward
structures are used to encode the utility functions that allow us to quantify the
utilities of different task states. Formulas calculate system utility of the
different states.</p>
      </sec>
    </sec>
    <sec id="sec-13">
      <title>VI. RESULTS AND ANALYSIS</title>
      <p>
        In this section, we illustrate how our modeling framework
can produce optimal decisions with respect to how adaptive
systems should explain to the human based on their
personality traits. Specifically, we use SMG models of
explainability to determine the expected outcome utilities of
using different explainability tactics (i.e., explanation
amounts) based on the personality traits of the human. Our
modeling is done as a simulation (or set of “experiments” in
PRISM terms). We use rPATL to ask PRISM a variety of
questions such as “what is the maximum/minimum
probability a human with high/low personality traits can
guarantee to win with high/low utilities?” [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <p>Table IV and Figure 4 show the analysis results of 44 rounds run on PRISM. All possible combinations of personality traits are taken into consideration, where high traits are (&gt;80) (represented in orange), average traits are (≥50 and ≤80) (represented in blue-gray), and low traits are (&lt;50) (represented in gray). Plot (a) shows the 44 simulations of different personality traits and the given amounts of explanation (LengthOfExplanations (xL)) needed to complete the task. The average amounts of explanation (xL) for the different personality traits are shown in Plot (b). In 39% of the iterations (17 rounds), humans with high personality traits (&gt;80) needed more explanation to finish the task, with an average of 21 seconds. In 32% of the iterations (14 rounds), humans with low personality traits (&lt;50) needed less explanation, with an average of 20 seconds. The remaining 30% of the iterations (13 rounds) belong to humans with average personality traits (≥50 and ≤80), who used average amounts of explanation, with an average of 19 seconds, to complete the task. Table V shows the averages of the different utilities for the three personality trait levels.</p>
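      <p>The percentages above follow directly from the round counts (a quick arithmetic check of our own, not an additional PRISM experiment):</p>
      <p>```python
rounds = {"high": 17, "low": 14, "average": 13}   # rounds per trait level
total = sum(rounds.values())                       # 44 rounds in all
shares = {level: round(100 * count / total) for level, count in rounds.items()}
# 17/44, 14/44, and 13/44 round to 39%, 32%, and 30%, respectively
```</p>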
      <p>
We can conclude from the results that a human with high
personality traits needs more detailed information (i.e.,
explanations), while a human with low personality traits needs
less detailed explanation. These conclusions are consistent
with the psychology studies (discussed in Section III.A)
showing that humans with higher personality trait levels
typically prefer more explanations, while those with lower
levels want to quickly understand the big picture and avoid
engaging with more explanations [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ][
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>In this research we presented an approach based on probabilistic model checking of SMGs to determine how much explanation should be given to the human based on their personality traits. Providing the right amount of explanation to the right human is an important factor in maximizing co-adaptation between the system and the human during their interaction.</p>
      <p>There are a number of limitations of this research that future work can address based on the foundations described in this paper. The explanation decisions produced by our models are idealized scenarios that have not yet been validated in practice. To address this, the most important next step is to conduct an empirical study that validates these models on actual real-world systems with humans in the loop.</p>
      <p>As we explained earlier, there are many reasons to use explainability, and improving a system’s overall utility is one of the main ones (see Section I). Explainability can help not only to improve systems continuously through human involvement, but also to justify information given to the human, particularly when decisions are made suddenly. Gaining more information improves the capability of the human to perform a task. Our results in this paper suggest that one of the next steps of research is to go beyond the length of explanations and examine in more detail questions such as how explanations should be presented: graphically, textually, or verbally. A further extension of this research is to build more detailed models that allow the system to determine, in a more nuanced way, the ideal contents of the explanations.</p>
      <p>Fig. 4 Results of 44 rounds run on PRISM show that humans with higher personality trait levels typically prefer more explanations, while those with lower levels prefer fewer. Plot (a): the 44 simulations of different personality traits and the given amounts of explanation (LengthOfExplanations (xL)) needed to complete the task. Plot (b): the average amounts of explanation (xL) for the different personality traits.</p>
      <p>VIII. CONCLUSION</p>
      <p>In this research we presented a formal framework that incorporates human personality traits as one of the important elements in guiding automated decision-making about the proper amount of explanation that should be given to the human to improve overall system utility. To accomplish the goal of this paper, we used probabilistic model analysis to determine how to utilize explanations effectively based on differences in humans’ personality traits. Grid – a virtual human and system interaction game – was developed to illustrate our approach, to represent scenarios for human-system co-adaptation, and to demonstrate through simulation how a human’s personality traits can be used as a factor for systems to consider in providing appropriate explanations.</p>
    </sec>
    <sec id="sec-14">
      <title>ACKNOWLEDGMENT</title>
      <p>This research was supported in part by the NSA under
Award No. H9823018D0008 and Award No. N00014172899
from the Office of Naval Research. Any views, opinions,
findings and conclusions or recommendations expressed in
this material are those of the author(s) and do not necessarily
reflect the views of the NSA or the Office of Naval Research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          , “
          <article-title>Explainable Artificial Intelligence: a Systematic Review</article-title>
          .” arXiv preprint arXiv:
          <year>2006</year>
          .00093,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adadi</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Berrada</surname>
          </string-name>
          , “
          <article-title>Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI</article-title>
          ),
          <source>” IEEE Access</source>
          , vol.
          <volume>6</volume>
          . pp.
          <fpage>52138</fpage>
          -
          <lpage>52160</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Eskins</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. H.</given-names>
            <surname>Sanders</surname>
          </string-name>
          , “
          <article-title>The multiple-asymmetric-utility system model: A framework for modeling cyber-human systems</article-title>
          ,
          <source>” Proc. 2011 8th Int. Conf. Quant. Eval. Syst. QEST</source>
          <year>2011</year>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>242</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. O.</given-names>
            <surname>Kephart</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Chess</surname>
          </string-name>
          , “
          <article-title>The vision of autonomic computing,” Computer (Long</article-title>
          . Beach. Calif).,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Lloyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          , and E. Tognoli, “
          <article-title>Improving Human-in-the-Loop Adaptive Systems Using Brain-Computer Interaction</article-title>
          ,” Proceedings - 2017
          <source>IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS</source>
          <year>2017</year>
          . pp.
          <fpage>163</fpage>
          -
          <lpage>174</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Alharbi</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          , “
          <article-title>A Survey of Incorporating Affective Computing for Human-System Co-adaptation,”</article-title>
          <source>in Proceedings of the 2020 The 2nd World Symposium on Software Engineering</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          , “Explaining explanations in AI,” FAT*
          <fpage>2019</fpage>
          -
          <lpage>Proc</lpage>
          .
          <year>2019</year>
          Conf. Fairness, Accountability, Transpar., pp.
          <fpage>279</fpage>
          -
          <lpage>288</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C. G. H.</given-names>
            <surname>Jung</surname>
          </string-name>
          , “
          <article-title>Psychological Factors Determining Human Behaviour,”</article-title>
          <string-name>
            <given-names>Collect. Work. C.G.</given-names>
            <surname>Jung</surname>
          </string-name>
          , Vol.
          <volume>8</volume>
          Struct. Dyn. Psyche, pp.
          <fpage>114</fpage>
          -
          <lpage>126</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Sadowski</surname>
          </string-name>
          and
          <string-name>
            <given-names>H. E.</given-names>
            <surname>Cogburn</surname>
          </string-name>
          , “
          <article-title>Need for cognition in the big-five factor structure</article-title>
          ,
          <source>” Journal of Psychology: Interdisciplinary and Applied</source>
          , vol.
          <volume>131</volume>
          , no.
          <issue>3</issue>
          . pp.
          <fpage>307</fpage>
          -
          <lpage>312</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>McCrae</surname>
          </string-name>
          , “
          <article-title>Openness to Experience as a Basic Dimension of Personality,”</article-title>
          <string-name>
            <surname>Imagin. Cogn. Pers.</surname>
          </string-name>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Petty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Cacioppo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Petty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Feinstein</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W. B. G.</given-names>
            <surname>Jarvis</surname>
          </string-name>
          , “
          <article-title>Dispositional Differences in Cognitive Motivation : The Life and Times of Individuals Varying in Need for Cognition Dispositional Differences in Cognitive Motivation : The Life and Times of Individuals Varying in Need for Cognition,” Psychol</article-title>
          . Bull., vol.
          <volume>119</volume>
          , no.
          <source>August</source>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>253</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cámara</surname>
          </string-name>
          , G. Moreno, and
          <string-name>
            <given-names>D.</given-names>
            <surname>Garlan</surname>
          </string-name>
          , “
          <article-title>Reasoning about Human Participation in Self-Adaptive Systems,”</article-title>
          <source>Proc. - 10th Int. Symp. Softw. Eng. Adapt. Self-Managing Syst. SEAMS</source>
          <year>2015</year>
          , no.
          <issue>i</issue>
          , pp.
          <fpage>146</fpage>
          -
          <lpage>156</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Javier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Garlan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Schmerl</surname>
          </string-name>
          , “Hey !
          <article-title>Preparing Humans to do Tasks in Self-adaptive Systems</article-title>
          .”
          <source>In Proceedings of the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems</source>
          , Virtual,
          <fpage>18</fpage>
          -
          <lpage>21</lpage>
          May
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cámara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Garlan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Schmerl</surname>
          </string-name>
          , “
          <article-title>Reasoning about When to Provide Explanation for Human-in-the-loop Self-Adaptive Systems,”</article-title>
          <source>In Proceedings of the 2020 IEEE Conference on Autonomic Computing and Self-organizing Systems (ACSOS)</source>
          , Washington, D.C., 19-23 August
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kwiatkowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Norman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Parker</surname>
          </string-name>
          , “
          <article-title>Probabilistic Model Checking: Advances and Applications,”</article-title>
          <source>in Formal System Verification, Springer, Cham</source>
          , pp.
          <fpage>73</fpage>
          -
          <lpage>121</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Baier</surname>
          </string-name>
          , “
          <article-title>Probabilistic model checking,”</article-title>
          <source>Dependable Softw. Syst. Eng.</source>
          , vol.
          <volume>45</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Cheng</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Garlan</surname>
          </string-name>
          , “
          <article-title>Stitch: A language for architecture-based self-adaptation,”</article-title>
          <source>J. Syst. Softw.</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kwiatkowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Norman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Parker</surname>
          </string-name>
          , “
          <article-title>PRISM 4.0: Verification of probabilistic real-time systems,”</article-title>
          <source>in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kwiatkowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Norman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Santos</surname>
          </string-name>
          , “
          <article-title>PRISM-games 3.0: Stochastic Game Verification with Concurrency, Equilibria and Time,”</article-title>
          <source>in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sukkerd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Simmons</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Garlan</surname>
          </string-name>
          , “
          <article-title>Towards explainable multi-objective probabilistic planning,”</article-title>
          <source>Proc. Int. Conf. Softw. Eng.</source>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>25</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>N.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Adepu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Garlan</surname>
          </string-name>
          , “
          <article-title>Explanations for human-on-the-loop: A probabilistic model checking approach,”</article-title>
          <source>Proc. 2020 IEEE/ACM 15th Int. Symp. Softw. Eng. Adapt. Self-Managing Syst. (SEAMS)</source>
          , pp.
          <fpage>181</fpage>
          -
          <lpage>187</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Forejt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kwiatkowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Simaitis</surname>
          </string-name>
          , “
          <article-title>Automatic verification of competitive stochastic systems,”</article-title>
          <source>Form. Methods Syst. Des.</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>61</fpage>
          -
          <lpage>92</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>