<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Explanation Systems for Influence Maximization Algorithms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amulya Yadav</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aida Rahmattalabi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ece Kamar</string-name>
          <email>eckamar@microsoft.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Phebe Vayanos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Milind Tambe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Venil Loyd Noronha</string-name>
          <email>vnoronha@usc.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Artificial Intelligence in Society, University of Southern California</institution>
          ,
          <addr-line>LA, CA, 90089</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Microsoft Research</institution>
          ,
          <addr-line>Redmond, WA 98052</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <fpage>8</fpage>
      <lpage>19</lpage>
      <abstract>
        <p>The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying “influential” members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of “influential” nodes needs to be explained to their users. This is a challenging open problem that needs to be solved before these algorithms can be deployed on a large scale. This paper attempts to tackle this open problem via four major contributions: (i) we propose a general paradigm for designing explanation systems for IM algorithms by exploiting the tradeoff between explanation accuracy and interpretability; our paradigm treats IM algorithms as black boxes, and is flexible enough to be used with any algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems; (iii) we illustrate the usability of XplainIM by explaining solutions of HEALER (a recent IM algorithm) among 200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The field of influence maximization (IM) deals with finding a set of “influential” nodes
in a social network, which can optimally spread influence throughout the network.
Significant progress in this field over the last few years has led to a variety of sophisticated
algorithms [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for finding influential nodes. Recently, [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] evidenced the real-world
usability of IM algorithms by deploying three such algorithms in the real world.
      </p>
      <p>
        As with any technology, evidence of real-world usability of IM algorithms brings
to the fore other important questions that need to be tackled before their widespread
adoption. One of the most important open questions for IM algorithms (which are used
as decision support tools by domain experts such as viral marketers, social workers,
etc.) is how to engender trust by explaining their recommendations to domain experts
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This is important because explaining the rationale behind these algorithms’
decisions aids the domain experts’ decision-making process: they can then choose to
either use the algorithm’s recommendation or discard it. Indeed, [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] noted the challenge of explaining
the solutions of HEALER (an IM algorithm) to homeless shelter officials, who wanted
explanations for why HEALER’s solutions were better than their preferred solutions.
This raises the following important question: how can solutions generated by IM
algorithms be explained to their intended users? To the best of our knowledge, this problem has not
received any attention in the literature so far.
      </p>
      <p>Designing such an explanation system for IM algorithms is a challenging task for
two reasons. First, instead of building different explanation systems for different IM
algorithms, we want to build a single explanation system which is generalizable, i.e.,
it should be flexible enough to explain the solutions of every possible IM algorithm.
This makes our task challenging because we can no longer rely on “explaining the
algorithm to explain the solution” (since there is no fixed algorithm). Second, most
human end-users have cognitive limitations and therefore, in order to avoid overloading
them with information, we need to ensure that our system’s explanations are relatively
interpretable. However, this involves trading off explanation accuracy (i.e., correctness
of explanation) with its interpretability (as we explain later).</p>
      <p>This paper attempts to address this open problem via four major contributions. First,
we propose a machine learning based paradigm for designing generalizable explanation
systems for IM algorithms. Our paradigm achieves generalizability by using an
algorithm independent classification dataset to build a machine learning classifier, which
generates natural language explanations (in terms of this dataset’s features) for IM
solutions. In order to aid interpretability of the generated explanations, our classification
dataset does not contain features for un-interpretable complicated terms (e.g., marginal
gain). This transforms the problem of explaining IM algorithm solutions into a problem
of explaining the predictions made by that machine learning classifier (as we explain
later). Further, our paradigm exploits a fundamental tradeoff between explanation
accuracy and interpretability (i.e., more accurate explanations are less interpretable, and vice
versa) to generate a Pareto frontier of machine learning classifiers, which is then used
to generate natural language explanations of varying accuracy and interpretability.
Second, we utilize our paradigm to build XplainIM, a suite of explanation systems. Third,
we illustrate the usability of XplainIM by explaining the solutions of HEALER (a recent
IM algorithm) to 200 human subjects on Amazon Mechanical Turk (AMT). Fourth,
we provide extensive evaluation of our AMT results, which shows the effectiveness of
XplainIM.</p>
      <p>
        Motivating IM Algorithm Our proposed explanation system is flexible enough to
explain the solutions of any IM algorithm. However, we motivate our discussion in
the rest of the paper by focusing on HEALER [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], a recent IM algorithm. We choose
HEALER primarily because its paper [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] was the first to pose the question of
explaining the solutions of an IM algorithm. HEALER was designed to select multiple sets
of “influential” nodes sequentially from a homeless youth social network, who could
optimally spread awareness about HIV in their social network. HEALER solves this
problem by modeling it as a Partially Observable Markov Decision Process (POMDP)
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and uses hierarchical ensembling techniques to scale up to real-world network sizes
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Thus, HEALER is a specialized algorithm for solving a more general problem than
standard influence maximization (defined below). However, in order to ensure the
generalizability of our explanation system to other IM algorithms, we ignore the POMDP
style sequential planning aspects of HEALER, and use it as an algorithm for solving the
standard single-shot influence maximization problem: Given a social network graph G,
select a set of K “influential” nodes which optimally spread influence in the network
after T time steps.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        The problem of explaining solutions of IM algorithms was first posed by [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] in the
context of HEALER, but their main focus was on justifying the need for such explanations,
as opposed to providing any practical solutions to this problem. While explanation
systems for POMDPs have been proposed [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], they cannot be applied to explain solutions
of every IM algorithm. To the best of our knowledge, no other previous work has looked
at this problem so far.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Machine Learning based Paradigm for Explanation Systems</title>
      <p>We now define an explanation system formally. Given a social network graph G as
input, an IM algorithm outputs an optimal set of K nodes OG. Moreover, the end-user
of the IM algorithm has a preferred set of K nodes UG, which he/she wrongly thinks is
the optimal solution. Then, an explanation system is defined as follows:
Definition 1. An explanation system for IM algorithms is a function E : (G, OG,
UG) → S which takes a graph G, an IM algorithm’s solution on G (i.e., OG), and an
end-user’s preferred solution on G (i.e., UG) as input, and produces as output a natural
language string S, which explains why OG is a better solution than UG on graph G.</p>
      <p>
        We believe that Definition 1 accounts for most practical use cases of explanation
systems. For example, [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] noted that homeless shelter officials (i.e., end-users) wanted
explanations for why HEALER’s solutions were better than their preferred solutions.
Next, we explain our novel paradigm for designing explanation systems (for IM
algorithms) in two stages.
      </p>
      <p>First, we transform our original problem of explaining IM solutions into an alternate
problem: “how to explain predictions made by ML classifiers?” Second, having made
this transformation, we provide novel solutions for explaining predictions made by ML
classifiers.</p>
      <p>Step 1: Transformation Consider the following two problems for any given IM
algorithm A:
Problem 1. Given as input an IM algorithm A, output an explanation system E which
explains why A’s solutions (OG) are better than UG (for all possible G and UG).</p>
      <p>Problem 2. Given as input (i) an IM algorithm A, and (ii) a binary classifier M, which
perfectly (i.e., without any misclassifications) differentiates A’s solutions OG from all
other possible solutions UG (for all possible UG and G), output an explanation system
EM which explains how classifier M differentiates OG from UG.</p>
      <p>We claim that solving Problem 2 is sufficient to solve Problem 1, as formally stated
below (proof in the online appendix at http://bit.ly/2kXDhE4):
Theorem 1. Problem 2 solution ⟹ Problem 1 solution</p>
      <p>As a result, we transform the problem of explaining IM solutions (Problem 1) into
the alternate problem: “how to explain predictions made by ML classifiers?” (Problem
2). This completes the transformation step of our approach. Next, we provide a novel
solution method for solving Problem 2.</p>
      <p>Step 2: Solution Method First, we specify the input to Problem 2: an ML classifier
M which can differentiate between OG and UG. Second, we need to use this classifier
M to generate output for Problem 2: the process of explaining the predictions made by
classifier M needs to be specified. We focus on the input specification in this section
(output generation is explained in Section 5).</p>
      <p>To train classifier M , each possible IM solution (i.e., set of K nodes) is mapped to
a set of network-centric feature values (e.g., degree, mutual connections, etc.), and is
labeled as either being “optimal” (if it is the IM algorithm’s solution) or “sub-optimal”
(for any other solution). These network-centric features generalize across all IM
algorithms, as they only depend on the network based characteristics of any possible IM
solution. This creates a labeled dataset (details in Section 6) which is then used to train
the ML classifier M which differentiates an optimal solution from a sub-optimal
solution.</p>
      <p>
        We use decision trees [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] as our ML classifier of choice. The key advantage of
decision trees over other classifiers is their interpretability, as their predictions can be
explained to end-users as a conjunction of decision rules, where each rule involves a
single feature in the dataset. In many applications, including our own, this
interpretability is preferred over other methods that may have higher accuracy but are relatively
un-interpretable [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Unfortunately, while decision trees are more interpretable than other ML classifiers
(e.g., neural nets), the explanations for their predictions (i.e., a conjunction of decision
rules) can overload human users with too much information to process (as we show
in our experiments). Thus, there is a need to ensure interpretability of the generated
explanations.</p>
      <p>We address this challenge by introducing “interpretable decision trees”, which
allows us to trade off the prediction accuracy of a decision tree with its interpretability
in a quantifiable manner. Further, we exploit the competing objectives of prediction
accuracy and interpretability to generate a Pareto frontier of decision trees. This Pareto
frontier allows us to correctly differentiate between OG and UG, while maintaining
different levels of interpretability. We then utilize trees from this Pareto frontier to
generate explanations for the IM algorithm’s solutions (see Section 4). The novelty of our
work lies in the versatility of our explanation system, which is able to provide
explanations of varying interpretability upon demand. Specifically, our explanation system is
flexible enough to dynamically increase the interpretability of the generated explanation
(by using more “interpretable” decision trees to generate the explanation), if the
end-user is unconvinced about previously supplied explanations. We now formally define
interpretable decision trees.</p>
    </sec>
    <sec id="sec-4">
      <title>Interpretable Decision Trees</title>
      <p>Decision trees take as input a labeled dataset with N datapoints (Xi, Yi) ∀ i ∈ {1, …, N},
where Xi is a p-dimensional feature vector for the ith datapoint, and Yi is the label
associated with the datapoint. In our domain, Xi represents the vector of
network-centric feature values corresponding to the ith IM solution (i.e., set of K nodes) in
our dataset. Similarly, Yi = 1 if the ith solution in our dataset is the IM algorithm’s
solution, and it is 0 otherwise.</p>
      <p>While a decision tree’s prediction for Xi can be explained as a conjunction of all
the feature splits that Xi traverses on its way to a leaf node, this explanation is
uninterpretable, as it overloads end-users with too much information (we validate this in
our experiments). Thus, in order to ensure interpretability of the generated explanations,
we introduce the notion of interpretable decision trees, which allows us to trade off the
explanation accuracy with its interpretability in a quantifiable manner. Next, we define
the interpretability score, which quantifies the interpretability of a decision tree.</p>
      <p>Interpretability Score We associate with each feature t ∈ {1, …, p} an
interpretability weight wt ∈ [0, 1] which quantifies the interpretability of feature t. We estimate
wt values ∀ t ∈ {1, …, p} by surveying AMT workers (see Section 6). Further, let xtD ∀ t ∈
{1, …, p} denote a binary variable which indicates whether a split with feature t is used
at any branch node in the decision tree D or not. Then, the interpretability score of
decision tree D is defined as: ID = (Σ_{t=1}^{p} xtD wt) / (Σ_{t=1}^{p} xtD).</p>
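As an illustrative sketch (not part of the original system), the score ID can be computed directly from the set of features a tree actually splits on; the Python below mirrors the definition above, with hypothetical inputs:

```python
def interpretability_score(used_features, weights):
    """ID = (sum of wt over features the tree splits on) / (number of such features).

    used_features: set of feature indices that appear at some branch node of tree D
    weights: dict mapping feature index -> interpretability weight wt in [0, 1]
    """
    if not used_features:
        return 1.0  # assumption: a tree with no splits has nothing to explain
    return sum(weights[t] for t in used_features) / len(used_features)
```

So a tree splitting only on "Degree" (wt = 1.0) scores 1.0, while one that also splits on "Betweenness Centrality" (wt = 0.0) scores 0.5.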
      <p>Thus, trees which use fewer features to split (the denominator in ID’s
definition) have higher ID scores (as it is easier to explain the tree’s predictions with
fewer features). Moreover, trees which use features with higher interpretability weights
have higher ID scores. Next, we define the A&amp;I score of a decision tree, which captures
the tradeoff between its prediction accuracy and interpretability.</p>
      <p>A&amp;I Score For decision trees (and most other classification algorithms),
interpretability and prediction accuracy (measured by ID and the F1 score = 2TP / (2TP + FP + FN),
respectively) are two competing objectives. Increasing the prediction accuracy of a
decision tree leads to a loss of its interpretability, and vice versa.</p>
      <p>The A&amp;I score of decision tree D captures the tradeoff between these competing
objectives and is defined as follows: A&amp;ID(λ) = F1D + λ · ID. Here, λ is a parameter
which controls the tradeoff between the F1 and ID scores. Higher λ values lead to
highly interpretable decision trees with low prediction accuracy, and vice versa. Next,
we explain an algorithm which finds the tree D which maximizes the A&amp;I(λ) score (i.e.,
it optimally trades off between the F1 and ID scores for the tradeoff level specified by λ).</p>
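A minimal sketch of these two scores (our own illustrative code, with tp/fp/fn counts as assumed inputs) makes the tradeoff concrete:

```python
def f1_score(tp, fp, fn):
    # F1 = 2TP / (2TP + FP + FN), as defined above
    return 2 * tp / (2 * tp + fp + fn)

def a_and_i(f1, i_d, lam):
    # The combined score F1D + lambda * ID; larger lambda favors interpretability
    return f1 + lam * i_d
```

For example, with λ = 2, a tree with F1 = 0.8 and ID = 0.5 scores 1.8, while a less interpretable tree with F1 = 0.9 and ID = 0.2 scores only 1.3.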
      <sec id="sec-4-1">
        <title>Algorithm 1: Maximize A&amp;I score</title>
        <p>Input: λ, dataset (Xi, Yi) ∀ i ∈ {1, …, N}</p>
        <p>Output: Dλ, the A&amp;I score maximizing decision tree
1 (Train, Validation, Test) = Split(Dataset);
2 Tree = CART(Train);
3 for level ∈ PruningLevels(Tree) do
4     PrunedTree = Prune(Tree, level);
5     r_level = A&amp;I(λ, PrunedTree, Validation);
6 OptLevel = argmax_level r_level;
7 Dλ = Prune(Tree, OptLevel);
8 return Dλ;</p>
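The selection loop (steps 3–7) can be sketched as follows; this is our own illustration, abstracting CART training and pruning behind two assumed callables that score a candidate pruning level on the validation set:

```python
def maximize_a_and_i(lam, pruning_levels, f1_at, i_at):
    """Return the pruning level maximizing F1 + lambda * ID (Algorithm 1, steps 3-7).

    pruning_levels: candidate pruning levels of the CART tree (hypothetical)
    f1_at(level):   validation F1 of the tree pruned to `level`
    i_at(level):    interpretability score ID of the tree pruned to `level`
    """
    scores = {lvl: f1_at(lvl) + lam * i_at(lvl) for lvl in pruning_levels}
    return max(scores, key=scores.get)
```

With λ = 0 the deepest, most accurate tree wins; as λ grows, shallower, more interpretable prunings are selected instead.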
        <p>
          Maximizing A&amp;I score For each λ value, we use Algorithm 1 to build a decision
tree (on the dataset from Section 3) which maximizes A&amp;I(λ). First, we randomly
split our dataset into a training, validation, and test set in Step 1 (in our experiments,
we use an 80:10:10 split). Next, we use CART, a state-of-the-art decision tree algorithm
[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], to learn a decision tree on our training set in Step 2. Normally, CART outputs
deep decision trees, which have high prediction accuracies on the training set. However,
deeper trees have lower ID scores, as they use more features. As a result,
we use our validation set to find the optimal pruning level of CART’s decision tree,
which maximizes A&amp;I(λ). Specifically, for each possible pruning level, we prune
CART’s decision tree to that level in Step 4. We use this pruned decision tree’s F1 score
on the validation set to calculate the pruned tree’s A&amp;I(λ) score
(Step 5). Finally, we select the optimal pruning level which maximizes A&amp;I(λ) on the
validation set (Step 6), and prune CART’s decision tree to this level (Step 7). To account
for randomization in splitting of the dataset into the training, validation and test set, we
average over 50 such splits.
        </p>
        <p>Decision Tree Pareto Frontier Instead of generating a single decision tree (using
Algorithm 1), we exploit the accuracy-interpretability tradeoff by varying the value of λ
to generate multiple decision trees. We do this for two reasons. First,
end-users may find explanations generated from one decision tree hard to understand, and
thus, we may need a more interpretable tree (measured by ID) to generate the
explanations. Second, explanations can only be generated using decision trees which correctly
differentiate between the IM algorithm’s optimal solution (OG) and the end-user’s
suboptimal solution (UG) (else we don’t have a valid classifier M in Problem 2). Thus, if
more interpretable (less accurate) trees fail to differentiate between OG and UG, we
need more accurate (less interpretable) trees to generate explanations.</p>
        <p>Therefore, for each λ value, we find the optimal tree Dλ using Algorithm 1. Thus,
we get a set of trees, each of which has different F1 and ID scores. Next, we remove
all trees D in this set which are dominated by other trees D' in terms of both F1 and ID
scores (i.e., there exists a tree D' for which F1D' ≥ F1D ∧ ID' ≥ ID holds, with at least
one inequality strict). This gives us our Pareto frontier of trees.</p>
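The dominance filter can be sketched as below (our own illustration; each tree is reduced to its (F1, ID) pair):

```python
def pareto_frontier(trees):
    """Keep trees not dominated in (F1, ID); trees is a list of (f1, i_d) pairs.

    A tree is dominated if some other tree is at least as good on both scores
    and strictly better on at least one.
    """
    return [
        d for d in trees
        if not any(
            o[0] >= d[0] and o[1] >= d[1] and (o[0] > d[0] or o[1] > d[1])
            for o in trees
        )
    ]
```

For instance, a tree at (0.7, 0.4) is dropped if another tree sits at (0.8, 0.5), while trees trading accuracy for interpretability in opposite directions all survive.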
        <p>Our strategy is to search this Pareto frontier for a decision tree D which can
correctly differentiate between OG and UG, and then use D to generate explanations.</p>
        <p>Next, we complete the discussion of our explanation system by proposing XplainIM,
a solution method for Problem 2 which utilizes our Pareto frontier (recall that solving
Problem 2 is sufficient to create an explanation system for IM algorithms).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Solution for Problem 2: XplainIM</title>
      <p>Problem 2 takes as input a machine learning classifier M (which can differentiate
between OG and UG), and produces as output an explanation system which explains how
classifier M differentiates OG from UG. We propose XplainIM as a solver for Problem
2. We begin by discussing XplainIM’s design.</p>
      <p>XplainIM builds an explanation system for a specific IM algorithm in two stages. In
the offline stage, it takes as input a classification dataset for that algorithm, and uses it
to store a Pareto-frontier of decision trees. This input dataset is created using a fixed set
of network-centric features (Figure 1), irrespective of the specific IM algorithm under
consideration. In the online stage, it takes as input a network G and user solution UG
and uses them to choose classifier M . We now explain how XplainIM chooses this
classifier M .</p>
      <p>Choosing Classifier Given a choice of OG and UG, XplainIM finds the set F of
“feasible trees” in its Pareto frontier which can successfully classify OG as optimal, and
UG as sub-optimal. Each tree in F is a valid classifier for Problem 2, as it can
differentiate between OG and UG. To choose M, XplainIM simply uses the most interpretable
tree in F, i.e., M = argmax_{D ∈ F} ID.</p>
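This online selection step can be sketched as follows (our own illustration; feasibility checking and ID lookup are abstracted behind assumed callables):

```python
def choose_classifier(frontier, is_feasible, i_score):
    """Among Pareto trees that classify OG as optimal and UG as sub-optimal,
    return the most interpretable one (argmax over ID).

    is_feasible(tree) and i_score(tree) are hypothetical callables; returns
    None when no tree is feasible (the failure case discussed in Section 6).
    """
    feasible = [d for d in frontier if is_feasible(d)]
    if not feasible:
        return None
    return max(feasible, key=i_score)
```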
      <p>Note that it is possible that for some choice of OG and UG, no tree in our Pareto
frontier is feasible. Then, solving Problem 2 is impossible, as we wouldn’t have an
appropriate classifier M . However, this case rarely arises in practice (see Section 6).</p>
      <p>Explanation System Next, having chosen a feasible decision tree as the classifier
M, we now discuss XplainIM’s explanation system, which generates explanations for
“how M differentiates OG from UG”. XplainIM relies on a natural language template
(described below) to generate explanations, which is populated at runtime (depending
on OG and UG).</p>
      <p>Template 1 Your solution UG had a ft feature value of UG(ft). On the other hand,
the optimal solution OG had a ft feature value of OG(ft). For solutions to be optimal,
the value of the ft feature should be {≤, &gt;} bt.</p>
      <p>Given OG and UG, we propose three different methods for populating Template 1.
Each of these methods relies on the observation that the paths taken by OG and UG
(to leaf nodes) in tree M diverge at some branch node t* (because tree M is
feasible). We use features ft and corresponding thresholds bt (recall Section 4) starting
from this point of divergence t* to populate Template 1 and generate explanations for the
end-user. Our three methods are explained below:</p>
      <p>1. XplainIM-A Template 1 is populated using only ft and bt for branch node t*,
which is the first point of divergence between the paths of OG and UG in tree M. This
populated template is presented as the explanation to the end-user.</p>
      <p>2. XplainIM-B Template 1 is populated using ft and bt for branch node t*
(similar to XplainIM-A). However, the choice of classifier M used to generate explanations
is different. Instead of using the feasible tree with the highest ID score as our classifier
M, we use the feasible tree in which the feature ft (present at the branch node t*)
has the highest interpretability weight wft. This is because only feature ft is used to
populate Template 1, and a high value of wft ensures the interpretability of the
generated explanation.</p>
      <p>
        3. XplainIM-Full Instead of providing explanations in terms of a single feature,
XplainIM-Full uses multiple features in its explanation. Starting from branch node t*,
XplainIM-Full populates Template 1 for all features along the two divergent paths of
OG and UG. This method produces explanations which are identical in structure to
ones generated by decision sets [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
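The divergence-finding step shared by all three variants, and the population of Template 1 at the diverging node, can be sketched as follows (our own illustration; the nested-dict tree representation and feature names are hypothetical):

```python
def first_divergence(tree, x_opt, x_user):
    """Trace the feature vectors of OG and UG down a decision tree and return
    the first branch node where their paths diverge (the node t* of XplainIM-A).

    tree: nested dicts {"feature": name, "threshold": b, "left": ..., "right": ...};
    leaves are plain labels. Returns None if both reach the same leaf
    (i.e., the tree is not feasible for this OG/UG pair).
    """
    node = tree
    while isinstance(node, dict):
        go_left_opt = x_opt[node["feature"]] <= node["threshold"]
        go_left_user = x_user[node["feature"]] <= node["threshold"]
        if go_left_opt != go_left_user:
            return node
        node = node["left"] if go_left_opt else node["right"]
    return None

def explain(node, x_opt, x_user):
    # Populate Template 1 with the diverging feature ft* and threshold bt*
    f, b = node["feature"], node["threshold"]
    op = "<=" if x_opt[f] <= b else ">"
    return (f"Your solution had a {f} value of {x_user[f]}. On the other hand, "
            f"the optimal solution had a {f} value of {x_opt[f]}. For solutions "
            f"to be optimal, the {f} value should be {op} {b}.")
```

XplainIM-A stops at this first node; XplainIM-Full would repeat the template for every branch node along the two divergent paths.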
    </sec>
    <sec id="sec-6">
      <title>Evaluation</title>
      <p>We evaluated XplainIM by using it to explain HEALER’s solutions to human
workers on AMT. We provide results from these experiments in three parts. First, we
describe the HEALER specific dataset (which XplainIM uses to build its Pareto-frontier
of trees). Second, we provide results from AMT surveys which were used to estimate
interpretability weights wt for features in our dataset. Third, we provide AMT game
results to evaluate XplainIM’s effectiveness in explaining HEALER’s solutions to AMT
workers.</p>
      <p>Dataset We generate 100 different Watts-Strogatz (WS) networks (p = 0.25, k =
7), each of size 20 nodes. The influence probabilities for network edges were
randomly sampled from a uniform distribution U[0, 1]. We set K = 2, i.e., every set of
two network nodes is a possible IM solution. For each of these 100 networks, we add
HEALER’s solution, along with 20 randomly chosen IM solutions to our dataset with
the labels “optimal” and “suboptimal”, respectively. Next, we map the ith IM solution
in our dataset to a unique feature vector Xi containing the network-centric feature
values (see Figure 1) for that IM solution. This gives us a labeled dataset (with 20 features
and 2100 datapoints), which serves as XplainIM’s input in our experiments.</p>
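The mapping from a K = 2 solution to feature values can be sketched as below; this is our own illustration using two of the paper's named features (degree and mutual connections, Section 3) and a plain adjacency-set graph representation, not the full 20-feature pipeline:

```python
def feature_vector(adj, solution):
    """Map a K=2 IM solution (pair of nodes) to a few network-centric features.

    adj: dict mapping node -> set of neighbours (hypothetical representation).
    'degree' here is the combined degree of the pair, 'mutual' the number of
    shared neighbours; the aggregation choice is our assumption.
    """
    a, b = solution
    return {
        "degree": len(adj[a]) + len(adj[b]),  # combined degree of the pair
        "mutual": len(adj[a] & adj[b]),       # neighbours shared by both nodes
    }
```

Each such vector is then labeled "optimal" or "suboptimal" depending on whether it corresponds to HEALER's solution.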
      <sec id="sec-6-1">
        <title>Feature Name wt</title>
        <p>Betweenness Centrality: 0.0;
Clustering Coefficient: 0.25;
Pagerank: 0.28;
Closeness Centrality: 0.71;
Degree: 1.0</p>
        <p>Fig. 1: Interpretability Weights for Features</p>
        <p>Interpretability Weights We deployed a survey among 35 AMT workers to
estimate interpretability weights wt for all 20 features in our dataset. Each worker is (i)
provided the definition of each feature; and (ii) given an example of how that feature’s
value is calculated for a sample IM solution. Then, the worker is asked to calculate each
feature’s value for a new IM solution on an unseen network.</p>
        <p>
          For each feature t, we compare the user response distribution Mt (i.e., histogram
of feature values calculated by the 21 workers who passed the validation game) with
the actual value distribution Pt in order to calculate wt. We assume that a feature’s
interpretability correlates heavily with ease of calculating that feature’s value. Thus,
if Mt is very similar to Pt, it implies that AMT workers find feature t to be highly
interpretable. However, significant differences between Mt and Pt imply that
feature t is uninterpretable. Specifically, we set wt = 1 − EarthMover(Mt, Pt), where
EarthMover(Mt, Pt) measures the distance between the distributions Mt and Pt [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
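For 1-D histograms over the same bins with unit spacing, the earth mover's distance reduces to the sum of absolute differences of the cumulative distributions, so wt can be sketched as follows (our own illustration; both histograms are assumed normalised to sum to 1, and the paper additionally normalises the resulting weights):

```python
def interpretability_weight(m_t, p_t):
    """wt = 1 - EarthMover(Mt, Pt) for two aligned 1-D histograms.

    m_t: user-response histogram for feature t; p_t: actual-value histogram.
    Uses the cumulative-difference form of 1-D EMD (unit bin spacing assumed).
    """
    emd, cum = 0.0, 0.0
    for m, p in zip(m_t, p_t):
        cum += m - p        # running difference of cumulative mass
        emd += abs(cum)     # mass that must still be moved past this bin
    return 1.0 - emd
```

Identical distributions give wt = 1 (fully interpretable); distributions concentrated in adjacent, disjoint bins give wt = 0.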
        <p>Due to a lack of space, Figure 1 shows normalized wt values for 5 out of the 20
features. This figure shows that “Degree” and “Betweenness Centrality” are the most
and least interpretable features, respectively.</p>
        <p>AMT Game Results We now present results from two different game types that
we deployed on AMT to evaluate XplainIM’s effectiveness at explaining HEALER’s
solutions. Both games begin by asking AMT workers to pick their preferred IM solution
UG on a randomly chosen WS network (p = 0.25, k = 7, n = 20). We begin by
explaining the design and motivations of both games.</p>
        <p>Game 1 This game has three different versions, one for each variant of XplainIM.
This game presents explanations generated by XplainIM to AMT workers, and
evaluates the effect of these explanations in improving the worker’s IM solutions UG. The
game proceeds as follows. After the worker picks solution UG, he/she is given an
explanation for why HEALER’s solution OG is better than UG (this explanation is generated
by XplainIM-{A, B or Full} depending on the game version). Then, the worker is asked
to improve his/her solution UG based on the explanation shown.</p>
        <p>Game 2 This game presents explanations generated by all three XplainIM variants
to the same AMT worker, and compares the self-assessed interpretabilities of these
three explanation schemes. The game proceeds as follows. After the worker picks
solution UG, he/she is given three different explanations (generated using XplainIM-{A,
B &amp; Full}) for why HEALER’s solution OG is better than UG. Then, the worker is
asked to rate these three explanations (each on a scale of 1-10) in terms of their
interpretability. Further, we also deploy a “No Explanation” game (as a baseline), where
workers are asked to improve their suboptimal solutions UG, without being given any
explanation for UG’s suboptimality. Each of these game versions was deployed with 50
unique AMT workers (to avoid learning bias), each of whom plays on five different WS
networks sequentially. We now discuss evaluation metrics used in this section.</p>
        <p>Evaluation Metrics For Game 1, we compare the interpretability of XplainIM-A,
B &amp; Full’s explanations by measuring the effect of these explanations in improving the
worker’s IM solutions UG. Specifically, we measure the percentage of workers who (in
response to the provided explanation) modified UG to either (i) match OG (Optimal
group); OR (ii) correctly followed the provided explanation (Correct group); OR (iii)
moved in the right direction (Close group).</p>
        <p>For example, assume that “Degree” (Figure 1) of UG is 4, and the provided
explanation is “For solutions to be optimal, the Degree should be greater than 9”. If the
worker modifies UG to match OG, he/she is counted in the Optimal group. Otherwise,
if his/her modified solution’s Degree is greater than 9, he/she is counted in the Correct
group. Finally, if his/her modified solution’s Degree is greater than 4 (UG’s original
Degree), but is less than 9, then he/she moved UG in the right direction and is counted
in the Close group.</p>
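        <p>The grouping rule above can be sketched in a few lines; the helper name, signature, and the “None” bucket for workers who made no improvement are our own illustrative assumptions, not the paper’s.</p>
```python
def classify_response(modified_degree, original_degree, threshold, matches_optimal):
    """Bucket a worker's modified solution into the paper's evaluation groups.

    Hypothetical helper illustrating the rule from the text: the explanation
    says "Degree should be greater than `threshold`" (9 in the running
    example), and the worker's original solution UG had `original_degree`
    (4 in the example).
    """
    if matches_optimal:
        # Modified UG matches HEALER's optimal solution OG.
        return "Optimal"
    if modified_degree > threshold:
        # Correctly followed the provided explanation.
        return "Correct"
    if modified_degree > original_degree:
        # Improved on UG's original Degree, but fell short of the threshold.
        return "Close"
    return "None"  # no improvement (bucket name is our assumption)
```
For instance, under the running example (original Degree 4, threshold 9), a modified solution with Degree 7 falls in the Close group.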
        <p>Also, we use the failure rate, which is defined as the fraction of times an explanation
system fails to produce an explanation. This happens when there are no feasible decision
trees for a given choice of OG and UG. We calculate an explanation system’s failure
rate on network G by measuring the fraction of possible user solutions UG for which no
tree is feasible in the Pareto frontier. We now present results comparing XplainIM-A, B
&amp; Full with the “No Explanation” treatment. All our comparison results are statistically
significant under a t-test.</p>
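        <p>The failure-rate definition reduces to a simple fraction; in this sketch, has_feasible_tree is a hypothetical stand-in for the paper’s feasibility check over the Pareto frontier of trees.</p>
```python
def failure_rate(user_solutions, has_feasible_tree):
    """Fraction of possible user solutions UG for which no tree is feasible.

    `user_solutions` enumerates the candidate UG solutions on network G;
    `has_feasible_tree(ug)` is a hypothetical predicate that reports whether
    some tree in the Pareto frontier yields a valid explanation for ug.
    """
    failures = sum(1 for ug in user_solutions if not has_feasible_tree(ug))
    return failures / len(user_solutions)
```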
        <p>Pareto Frontier First, we illustrate the accuracy-interpretability tradeoff exploited
by all XplainIM variants to find Pareto-optimal trees. Figure 2a shows all decision trees
(depicted by individual points) obtained by varying parameter values. The X and Y axes show
the ID and F1 scores, respectively. We prune this set of decision trees to find the set
of Pareto-optimal trees (shown in Figure 2b). These figures clearly depict the inverse
correlation between the two competing objectives of prediction accuracy (F1) and
interpretability (ID).</p>
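        <p>The pruning step can be sketched as a standard Pareto-dominance filter over (ID, F1) score pairs; for illustration we assume both scores are oriented so that higher is better, which is our assumption, not a detail given in the text.</p>
```python
def pareto_frontier(trees):
    """Prune a set of scored decision trees to the Pareto-optimal subset.

    Each tree is represented only by its (id_score, f1_score) pair, with
    both objectives assumed higher-is-better. A tree is dominated when
    another tree scores at least as well on both objectives and is not
    identical to it.
    """
    frontier = []
    for cand in trees:
        dominated = any(
            other[0] >= cand[0] and other[1] >= cand[1] and other != cand
            for other in trees
        )
        if not dominated:
            frontier.append(cand)
    return frontier
```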
        <p>Next, we validate our decision of maintaining this Pareto frontier of trees inside
XplainIM (instead of maintaining just a single Pareto-optimal tree). Figure 4a compares
the failure rate of XplainIM-A, B &amp; Full to that of single Pareto-optimal trees. The
X-axis shows trees of increasing prediction accuracy. The Y-axis shows the average failure
rate (across 100 WS nets). This figure shows that our Pareto-frontier based explanation
systems (XplainIM-A, B &amp; Full) have an average failure rate of 0.01, i.e., XplainIM
generates valid explanations in 99% of situations. However, the minimum failure rate
achieved by a single Pareto-optimal tree is 0.27. This shows that using single trees as our
classifier for Problem 2 (instead of maintaining a Pareto frontier) results in failures 27%
of the time. This signifies the importance of the Pareto frontier in making XplainIM
work.</p>
        <p>XplainIM Results Figure 3a shows Game 2 results. The X-axis shows the three
XplainIM variants. The Y-axis shows the average ratings (scale of 1-10) given by workers to
the different explanation schemes. This figure shows that XplainIM-A &amp; B’s explanations
received an average rating of 6.04, as compared to 4.84 for XplainIM-Full (p=0.034). This shows
that workers found XplainIM-A &amp; B’s explanations to be more interpretable than those
generated by XplainIM-Full.</p>
        <p>Fig. 3: (a) Subjective rating comparison across the different explanation systems (XplainIM-A, XplainIM-B, XplainIM-Full); (b) effect on modified solutions across the different groups of users (Optimal, Correct, Close).</p>
        <p>This paper solves the open problem of explaining the solutions of IM algorithms to their
users. It shows that the problem of designing an explanation system can be
equivalently viewed as the problem of explaining an ML classifier’s predictions. We
propose XplainIM, a suite of ML-based explanation systems which exploit the
accuracy-interpretability tradeoff to generate natural language explanations. Extensive human
subject experiments show the effectiveness of XplainIM in explaining IM algorithm
solutions to human users.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <sec id="sec-7-1">
        <title>This research was supported by MURI Grant W911NF-11-1-0332.</title>
      </sec>
    </sec>
  </body>
</article>