<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Explanation Systems for Influence Maximization Algorithms</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Amulya</forename><surname>Yadav</surname></persName>
							<email>amulyaya@usc.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Center for Artificial Intelligence in Society</orgName>
								<orgName type="institution">University of Southern California</orgName>
								<address>
									<postCode>90089</postCode>
									<settlement>LA</settlement>
									<region>CA</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Aida</forename><surname>Rahmattalabi</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Center for Artificial Intelligence in Society</orgName>
								<orgName type="institution">University of Southern California</orgName>
								<address>
									<postCode>90089</postCode>
									<settlement>LA</settlement>
									<region>CA</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ece</forename><surname>Kamar</surname></persName>
							<email>eckamar@microsoft.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Microsoft Research</orgName>
								<address>
									<postCode>98052</postCode>
									<settlement>Redmond</settlement>
									<region>WA</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Phebe</forename><surname>Vayanos</surname></persName>
							<email>phebe.vayanos@usc.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Center for Artificial Intelligence in Society</orgName>
								<orgName type="institution">University of Southern California</orgName>
								<address>
									<postCode>90089</postCode>
									<settlement>LA</settlement>
									<region>CA</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Milind</forename><surname>Tambe</surname></persName>
							<email>tambe@usc.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Center for Artificial Intelligence in Society</orgName>
								<orgName type="institution">University of Southern California</orgName>
								<address>
									<postCode>90089</postCode>
									<settlement>LA</settlement>
									<region>CA</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Venil</forename><forename type="middle">Loyd</forename><surname>Noronha</surname></persName>
							<email>vnoronha@usc.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Center for Artificial Intelligence in Society</orgName>
								<orgName type="institution">University of Southern California</orgName>
								<address>
									<postCode>90089</postCode>
									<settlement>LA</settlement>
									<region>CA</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Explanation Systems for Influence Maximization Algorithms</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">3C3ADD71843CB489763F7E6537E46F71</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T23:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying "influential" members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of "influential" nodes needs to be explained to their users. This is a challenging open problem that needs to be solved before these algorithms can be deployed on a large scale. This paper attempts to tackle this open problem via four major contributions: (i) we propose a general paradigm for designing explanation systems for IM algorithms by exploiting the tradeoff between explanation accuracy and interpretability; our paradigm treats IM algorithms as black boxes, and is flexible enough to be used with any algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems; (iii) we illustrate the usability of XplainIM by explaining solutions of HEALER (a recent IM algorithm) to ∼200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The field of influence maximization (IM) deals with finding a set of "influential" nodes in a social network, which can optimally spread influence throughout the network. Significant progress in this field over the last few years has led to a variety of sophisticated algorithms <ref type="bibr" target="#b0">[1]</ref> for finding influential nodes. Recently, <ref type="bibr" target="#b9">[10]</ref> demonstrated the real-world usability of IM algorithms by deploying three such algorithms in the real world.</p><p>As with any technology, evidence of real-world usability of IM algorithms brings to the fore other important questions that need to be tackled before their widespread adoption. One of the most important open questions for IM algorithms (which are used as decision support tools by domain experts such as viral marketers, social workers, etc.) is how to engender trust by explaining their recommendations to domain experts <ref type="bibr" target="#b6">[7]</ref>. This is important because explaining the rationale behind these algorithms' decisions aids the domain experts' decision-making process: they can then choose to either use the algorithm's recommendation or discard it. Indeed, <ref type="bibr" target="#b8">[9]</ref> noted the challenge of explaining the solutions of HEALER (an IM algorithm) to homeless shelter officials, who wanted explanations for why HEALER's solutions were better than their preferred solutions. This raises the following important question: how can solutions generated by IM algorithms be explained to their intended users? To the best of our knowledge, this problem has not received any attention in the literature so far.</p><p>Designing such an explanation system for IM algorithms is a challenging task for two reasons. 
First, instead of building different explanation systems for different IM algorithms, we want to build a single explanation system which is generalizable, i.e., it should be flexible enough to explain the solutions of every possible IM algorithm. This makes our task challenging because we can no longer rely on "explaining the algorithm to explain the solution" (since there is no fixed algorithm). Second, most human end-users have cognitive limitations and therefore, in order to avoid overloading them with information, we need to ensure that our system's explanations are relatively interpretable. However, this involves trading off explanation accuracy (i.e., correctness of explanation) with its interpretability (as we explain later). This paper attempts to address this open problem via four major contributions. First, we propose a machine learning based paradigm for designing generalizable explanation systems for IM algorithms. Our paradigm achieves generalizability by using an algorithm-independent classification dataset to build a machine learning classifier, which generates natural language explanations (in terms of this dataset's features) for IM solutions. In order to aid interpretability of the generated explanations, our classification dataset does not contain features for complicated, un-interpretable terms (e.g., marginal gain). This transforms the problem of explaining IM algorithm solutions into a problem of explaining the predictions made by that machine learning classifier (as we explain later). Further, our paradigm exploits a fundamental tradeoff between explanation accuracy and interpretability (i.e., more accurate explanations are less interpretable, and vice versa) to generate a Pareto frontier of machine learning classifiers, which is then used to generate natural language explanations of varying accuracy and interpretability. Second, we utilize our paradigm to build XplainIM, a suite of explanation systems. 
Third, we illustrate the usability of XplainIM by explaining the solutions of HEALER (a recent IM algorithm) to ∼200 human subjects on Amazon Mechanical Turk (AMT). Fourth, we provide extensive evaluation of our AMT results, which shows the effectiveness of XplainIM.</p><p>Motivating IM Algorithm Our proposed explanation system is flexible enough to explain the solutions of any IM algorithm. However, we motivate our discussion in the rest of the paper by focusing on HEALER <ref type="bibr" target="#b8">[9]</ref>, a recent IM algorithm. We choose HEALER primarily because its paper <ref type="bibr" target="#b8">[9]</ref> was the first to pose the question of explaining the solutions of an IM algorithm. HEALER was designed to select multiple sets of "influential" nodes sequentially from a homeless youth social network, who could optimally spread awareness about HIV in their social network. HEALER solves this problem by modeling it as a Partially Observable Markov Decision Process (POMDP) <ref type="bibr" target="#b4">[5]</ref> and uses hierarchical ensembling techniques to scale up to real-world network sizes <ref type="bibr" target="#b8">[9]</ref>.</p><p>Proceedings of the 3rd International Workshop on Social Influence Analysis (SocInf 2017), August 19th, 2017, Melbourne, Australia.</p><p>Thus, HEALER is a specialized algorithm for solving a more general problem than standard influence maximization (defined below). However, in order to ensure the generalizability of our explanation system to other IM algorithms, we ignore the POMDP-style sequential planning aspects of HEALER, and use it as an algorithm for solving the standard single-shot influence maximization problem: Given a social network graph G, select a set of K "influential" nodes which optimally spread influence in the network after T time steps.</p></div>
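To make the single-shot problem concrete, the following sketch estimates expected spread by Monte Carlo simulation of an independent-cascade style diffusion for T steps, and exhaustively picks the best K-node seed set. This is only an illustration of the problem statement, not HEALER itself (which uses POMDP planning and hierarchical ensembling); all function names are our own.

```python
import itertools
import random

def simulate_cascade(graph, probs, seeds, T, trials=300):
    """Estimate the expected number of influenced nodes after T time
    steps, under an independent-cascade style diffusion model."""
    total = 0
    for _ in range(trials):
        influenced = set(seeds)
        frontier = set(seeds)
        for _ in range(T):
            newly = set()
            for u in frontier:
                for v in graph[u]:
                    # edge (u, v) fires with its influence probability
                    if v not in influenced and probs[(u, v)] > random.random():
                        newly.add(v)
            influenced |= newly
            frontier = newly
        total += len(influenced)
    return total / trials

def best_seed_set(graph, probs, K, T):
    """Exhaustively pick the K-node set with the highest estimated spread
    (feasible here only because the example graphs are tiny)."""
    return max(itertools.combinations(graph, K),
               key=lambda seeds: simulate_cascade(graph, probs, seeds, T))
```

On a star graph with all edge probabilities equal to 1, the centre node is (deterministically) the best single seed, which makes the sketch easy to sanity-check.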
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>The problem of explaining solutions of IM algorithms was first posed by <ref type="bibr" target="#b8">[9]</ref> in the context of HEALER, but their main focus was on justifying the need for such explanations, as opposed to providing any practical solutions to this problem. While explanation systems for POMDPs have been proposed <ref type="bibr" target="#b7">[8]</ref>, they cannot be applied to explain solutions of every IM algorithm. To the best of our knowledge, no other previous work has looked at this problem so far.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Machine Learning based Paradigm for Explanation Systems</head><p>We now define an explanation system formally. Given a social network graph G as input, an IM algorithm outputs an optimal set of K nodes O G . Moreover, the end-user of the IM algorithm has a preferred set of K nodes U G , which he/she wrongly thinks is the optimal solution. Then, an explanation system is defined as follows: Definition 1. An explanation system for IM algorithms is a function E : G × O G × U G → S which takes a graph G, an IM algorithm's solution on G (i.e., O G ), and an end-user's preferred solution on G (i.e., U G ) as input, and produces as output a natural language string S, which explains why O G is a better solution than U G on graph G.</p><p>We believe that Definition 1 accounts for most practical use cases of explanation systems. For example, <ref type="bibr" target="#b8">[9]</ref> noted that homeless shelter officials (i.e., end-users) wanted explanations for why HEALER's solutions were better than their preferred solutions. Next, we explain our novel paradigm for designing explanation systems (for IM algorithms) in two stages.</p><p>First, we transform our original problem of explaining IM solutions into an alternate problem: "how to explain predictions made by ML classifiers?" Second, having made this transformation, we provide novel solutions for explaining predictions made by ML classifiers.</p><p>Step 1: Transformation Consider the following two problems for any given IM algorithm ∆: Problem 1. Given as input an IM algorithm ∆, output an explanation system E ∆ which explains why ∆'s solutions (O ∆ G ) are better than U G (for all possible G and U G ).</p><p>Problem 2. 
Given as input (i) an IM algorithm ∆, and (ii) a binary classifier M , which perfectly (i.e., without any misclassifications) differentiates ∆'s solutions O ∆ G from all other possible solutions U G (for all possible U G and G), output an explanation system E M which explains how classifier M differentiates O ∆ G from U G .</p><p>We claim that solving Problem 2 is sufficient to solve Problem 1, as formally stated below (proof in appendix<ref type="foot" target="#foot_0">1</ref> ):</p><formula xml:id="formula_1">Theorem 1. Problem 2 solution =⇒ Problem 1 solution</formula><p>As a result, we transform the problem of explaining IM solutions (Problem 1) into the alternate problem: "how to explain predictions made by ML classifiers?" (Problem 2). This completes the transformation step of our approach. Next, we provide a novel solution method for solving Problem 2.</p><p>Step 2: Solution Method First, we specify the input to Problem 2: an ML classifier M which can differentiate between O G and U G . Second, we need to use this classifier M to generate output for Problem 2: the process of explaining the predictions made by classifier M needs to be specified. We focus on the input specification in this section (output generation is explained in Section 5).</p><p>To train classifier M , each possible IM solution (i.e., set of K nodes) is mapped to a set of network-centric feature values (e.g., degree, mutual connections, etc.), and is labeled as either being "optimal" (if it is the IM algorithm's solution) or "sub-optimal" (for any other solution). These network-centric features generalize across all IM algorithms, as they only depend on the network-based characteristics of any possible IM solution. 
This creates a labeled dataset (details in Section 6) which is then used to train the ML classifier M that differentiates an optimal solution from a sub-optimal solution.</p><p>We use decision trees <ref type="bibr" target="#b5">[6]</ref> as our ML classifier of choice. The key advantage of decision trees over other classifiers is their interpretability, as their predictions can be explained to end-users as a conjunction of decision rules, where each rule involves a single feature in the dataset. In many applications, including our own, this interpretability is preferred over other methods that may have higher accuracy but are relatively un-interpretable <ref type="bibr" target="#b8">[9]</ref>.</p><p>Unfortunately, while decision trees are more interpretable than other ML classifiers (e.g., neural nets), the explanations for their predictions (i.e., a conjunction of decision rules) can overload human users with too much information to process (as we show in our experiments). Thus, there is a need to ensure interpretability of the generated explanations.</p><p>We address this challenge by introducing "interpretable decision trees", which allow us to trade off the prediction accuracy of a decision tree with its interpretability in a quantifiable manner. Further, we exploit the competing objectives of prediction accuracy and interpretability to generate a Pareto frontier of decision trees. This Pareto frontier allows us to correctly differentiate between O G and U G , while maintaining different levels of interpretability. We then utilize trees from this Pareto frontier to generate explanations for the IM algorithm's solutions (see Section 4). The novelty of our work lies in the versatility of our explanation system, which is able to provide explanations of varying interpretability upon demand. 
Specifically, our explanation system is flexible enough to dynamically increase the interpretability of the generated explanation (by using more "interpretable" decision trees to generate the explanation), if the end-user is unconvinced by previously supplied explanations. We now formally define interpretable decision trees.</p></div>
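The dataset construction and classifier training described above can be sketched as follows. The three network-centric features and the `im_algorithm` callback are illustrative stand-ins (the paper's actual feature set has 20 features, listed in Figure 1); the sketch uses networkx and scikit-learn.

```python
import itertools
import random

import networkx as nx
from sklearn.tree import DecisionTreeClassifier

def solution_features(G, nodes):
    """Map a candidate IM solution (a set of K nodes) to network-centric
    feature values. These three features are illustrative stand-ins."""
    bc = nx.betweenness_centrality(G)
    return [
        sum(G.degree(v) for v in nodes),                    # total degree
        len(set(itertools.chain(*(G[v] for v in nodes)))),  # combined neighbourhood
        sum(bc[v] for v in nodes),                          # total betweenness
    ]

def build_dataset(networks, im_algorithm, K, negatives_per_net=20):
    """Label the IM algorithm's solution as "optimal" (1) and randomly
    drawn alternative K-node sets as "sub-optimal" (0)."""
    X, y = [], []
    for G in networks:
        opt = im_algorithm(G, K)
        X.append(solution_features(G, opt))
        y.append(1)
        for _ in range(negatives_per_net):
            alt = random.sample(list(G.nodes), K)
            if set(alt) != set(opt):
                X.append(solution_features(G, alt))
                y.append(0)
    return X, y

def train_classifier(X, y, max_depth=3):
    """Fit the (interpretable) decision-tree classifier M."""
    return DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
```

Here `im_algorithm` would be HEALER (or any other IM algorithm) treated as a black box; because only network-based features of the solution are recorded, the same pipeline works unchanged for any algorithm.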
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Interpretable Decision Trees</head><p>Decision trees take as input a labeled dataset with N datapoints (X i , Y i ) ∀i ∈ {1, N }, where X i is a p-dimensional feature vector for the i th datapoint, and Y i is the label associated with the datapoint. In our domain, X i represents the vector of network-centric feature values corresponding to the i th IM solution (i.e., set of K nodes) in our dataset. Similarly, Y i = 1 if the i th solution in our dataset is the IM algorithm's solution, and it is 0 otherwise.</p><p>While a decision tree's prediction for X i can be explained as a conjunction of all the feature splits that X i traverses on its way to a leaf node, this explanation is uninterpretable, as it overloads end-users with too much information (we validate this in our experiments). Thus, in order to ensure interpretability of the generated explanations, we introduce the notion of interpretable decision trees, which allows us to trade off the explanation accuracy with its interpretability in a quantifiable manner. Next, we define the interpretability score, which quantifies the interpretability of a decision tree.</p><p>Interpretability Score We associate with each feature t ∈ {1, p} an interpretability weight w t ∈ [0, 1] which quantifies the interpretability of feature t. We estimate w t values ∀t ∈ {1, p} by surveying AMT workers (see Section 6). Further, let x D t ∀t ∈ {1, p} denote a binary variable which indicates whether a split with feature t is used at any branch node in the decision tree D or not. Then, the interpretability score of decision tree D is defined as:</p><formula xml:id="formula_2">I D = ( Σ p t=1 x D t w t ) / ( Σ p t=1 x D t )</formula><p>Thus, trees which use fewer features to split (the denominator in I D 's definition) have higher I D scores (as it is easier to explain the tree's predictions with fewer features). 
Moreover, trees which use features with higher interpretability weights have higher I D scores. Next, we define the A&amp;I score of a decision tree, which captures the tradeoff between its prediction accuracy and interpretability.</p><p>A&amp;I Score For decision trees (and most other classification algorithms), interpretability and prediction accuracy (measured by I D and F 1 score<ref type="foot" target="#foot_2">2</ref> , respectively) are two competing objectives. Increasing the prediction accuracy of a decision tree leads to a loss of its interpretability, and vice versa.</p><p>The A&amp;I score of decision tree D captures the tradeoff between these competing objectives and is defined as follows:</p><formula xml:id="formula_3">A&amp;I D (α) = F 1 D + αI D .</formula><p>Here, α is a parameter which controls the tradeoff between the F 1 and I D scores. Higher α values lead to highly interpretable decision trees with low prediction accuracy, and vice versa. Next, we explain an algorithm which finds the tree D α that maximizes the A&amp;I(α) score (i.e., it optimally trades off between F 1 and I D scores for the tradeoff level specified by α). Maximizing A&amp;I score For each α value, we use Algorithm 1 to build a decision tree (on the dataset from Section 3) which maximizes A&amp;I(α). First, we randomly split our dataset into a training, validation, and test set in Step 1 (in our experiments, we use an 80:10:10 split). Next, we use CART, a state-of-the-art decision tree algorithm <ref type="bibr" target="#b3">[4]</ref>, to learn a decision tree on our training set in Step 2. Normally, CART outputs deep decision trees, which have high prediction accuracies on the training set. However, deeper trees have lower I D scores, as they use more features. As a result, we use our validation set to find the optimal pruning level of CART's decision tree, which maximizes A&amp;I(α). 
Specifically, for each possible pruning level, we prune CART's decision tree to that level in Step 4. We use this pruned decision tree's F 1 score on the validation set to calculate the pruned tree's A&amp;I(α) score on the validation set (Step 5). Finally, we select the optimal pruning level which maximizes A&amp;I(α) on the validation set (Step 6), and prune CART's decision tree to this level (Step 7). To account for randomization in splitting of the dataset into the training, validation and test set, we average over 50 such splits.</p><p>Decision Tree Pareto Frontier Instead of generating a single decision tree (using Algorithm 1), we exploit the accuracy-interpretability tradeoff by varying the value of α to generate multiple decision trees. We do this for two reasons. First, end-users may find explanations generated from one decision tree hard to understand, and thus, we may need a more interpretable tree (measured by I D ) to generate the explanations. Second, explanations can only be generated using decision trees which correctly differentiate between the IM algorithm's optimal solution (O G ) and the end-user's suboptimal solution (U G ) (else we don't have a valid classifier M in Problem 2). Thus, if more interpretable (less accurate) trees fail to differentiate between O G and U G , we need more accurate (less interpretable) trees to generate explanations. Therefore, for each α value, we find the optimal D α tree using Algorithm 1. Thus, we get a set of trees Θ, each of which has different F 1 and I D scores. Next, we remove all trees D ∈ Θ which are dominated by other trees D′ in terms of both F 1 and I D scores (i.e., there exist trees D′ ∈ Θ for which either</p><formula xml:id="formula_4">{F D′ 1 ≥ F D 1 ∧ I D′ &gt; I D } or {F D′ 1 &gt; F D 1 ∧ I D′ ≥ I D }</formula><p>holds). 
This gives us our Pareto frontier of trees.</p><p>Our strategy is to search this Pareto frontier for a decision tree D * , which can correctly differentiate between O G and U G , and then use D * to generate explanations.</p><p>Next, we complete the discussion of our explanation system by proposing XplainIM, a solution method for Problem 2 which utilizes our Pareto frontier (recall that solving Problem 2 is sufficient to create an explanation system for IM algorithms).</p></div>
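The scores and the domination rule above can be computed directly. A minimal sketch, with each candidate tree abstracted to a (name, F 1, I D) tuple and all names hypothetical:

```python
def interpretability_score(split_features, weights):
    """I_D: average interpretability weight w_t over the distinct
    features that tree D splits on."""
    used = set(split_features)
    return sum(weights[t] for t in used) / len(used)

def ai_score(f1, i_d, alpha):
    """Accuracy-and-interpretability score: F1_D + alpha * I_D.
    A larger alpha trades prediction accuracy for interpretability."""
    return f1 + alpha * i_d

def pareto_frontier(trees):
    """Drop every tree dominated on (F1, I_D) by some other tree.
    `trees` is a list of (name, f1, i_d) tuples."""
    def dominated(candidate):
        _, fa, ia = candidate
        return any((fb >= fa and ib > ia) or (fb > fa and ib >= ia)
                   for _, fb, ib in trees)
    return [t for t in trees if not dominated(t)]
```

For example, a tree splitting on "degree" (weight 0.9) and "betweenness" (weight 0.2) gets I D = 0.55, and a tree that is both less accurate and less interpretable than some other tree is pruned from the frontier.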
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Solution for Problem 2: XplainIM</head><p>Problem 2 takes as input a machine learning classifier M (which can differentiate between O G and U G ), and produces as output an explanation system which explains how classifier M differentiates O G from U G . We propose XplainIM as a solver for Problem 2. We begin by discussing XplainIM's design.</p><p>XplainIM builds an explanation system for a specific IM algorithm in two stages. In the offline stage, it takes as input a classification dataset for that algorithm, and uses it to store a Pareto-frontier of decision trees. This input dataset is created using a fixed set of network-centric features (Figure <ref type="figure" target="#fig_1">1</ref>), irrespective of the specific IM algorithm under consideration. In the online stage, it takes as input a network G and user solution U G and uses them to choose classifier M . We now explain how XplainIM chooses this classifier M .</p><p>Choosing Classifier Given a choice of O G and U G , XplainIM finds the set F of "feasible trees" in its Pareto frontier which can successfully classify O G as optimal, and U G as sub-optimal. Each tree in F is a valid classifier for Problem 2, as it can differentiate between O G and U G . To choose M , XplainIM simply uses the most interpretable tree in F, i.e., M = argmax D∈F I D .</p><p>Note that it is possible that for some choice of O G and U G , no tree in our Pareto frontier is feasible. Then, solving Problem 2 is impossible, as we wouldn't have an appropriate classifier M . However, this case rarely arises in practice (see <ref type="bibr">Section 6)</ref>.</p><p>Explanation System Next, having chosen a feasible decision tree as the classifier M , we now discuss XplainIM's explanation system, which generates explanations for "how M differentiates O G from U G ". 
XplainIM relies on a natural language template (described below) to generate explanations, which is populated at runtime (depending on O G and U G ).</p><p>Template 1 Your solution U G had a f t feature value of U G (f t ). On the other hand, the optimal solution O G had a f t feature value of O G (f t ). For solutions to be optimal, the value of the f t feature should be {≤, &gt;} b t .</p><p>Given O G and U G , we propose three different methods for populating Template 1. Each of these methods relies on the observation that the paths taken by O G and U G (to leaf nodes) in tree M diverge at some branch node t * ∈ T B (because tree M is feasible). We use features f t and corresponding thresholds b t (recall Section 4) starting from this point of divergence t * to populate Template 1 to generate explanations for the end-user. Our three methods are explained below:</p><p>1. XplainIM-A Template 1 is populated using only f t * and b t * for branch node t * , which is the first point of divergence between paths of O G and U G in tree M . This populated template is presented as the explanation to the end-user.</p><p>2. XplainIM-B Template 1 is populated using f t * and b t * for branch node t * (similar to XplainIM-A). However, the choice of classifier M used to generate explanations is different. Instead of using the feasible tree with the highest I D score as our classifier M , we use the feasible tree in which the feature f t * (present at the branch node t * ) has the highest interpretability weight w f t * . This is because only feature f t * is used to populate Template 1, and a high value of w f t * would ensure the interpretability of the generated explanation.</p><p>3. XplainIM-Full Instead of providing explanations in terms of a single feature, XplainIM-Full uses multiple features in its explanation. Starting from branch node t * , XplainIM-Full populates Template 1 for all features along the two divergent paths of O G and U G . 
This method produces explanations which are identical in structure to those generated by decision sets <ref type="bibr" target="#b1">[2]</ref>.</p></div>
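The XplainIM-A scheme above amounts to walking O G and U G down the chosen tree and populating Template 1 at the first branch node where their paths split. A sketch over a toy nested-dict tree; the tree encoding, node names, and wording are our own assumptions, not the paper's implementation:

```python
TEMPLATE = ("Your solution had a {feat} value of {user_val}. "
            "On the other hand, the optimal solution had a {feat} value of "
            "{opt_val}. For solutions to be optimal, {feat} should be "
            "{op} {thresh}.")

def explain_first_divergence(tree, opt_x, user_x):
    """Walk both feature vectors down the tree; at the first branch node
    where their paths diverge (t* in the paper's notation), populate the
    template. `tree` is a toy nested dict: branch nodes carry 'feat',
    'thresh', 'left', 'right'; leaves carry only 'label'."""
    node = tree
    while "feat" in node:
        f, b = node["feat"], node["thresh"]
        opt_left = not (opt_x[f] > b)   # go left when the value is at most b
        usr_left = not (user_x[f] > b)
        if opt_left != usr_left:        # first point of divergence
            op = "no more than" if opt_left else "greater than"
            return TEMPLATE.format(feat=f, user_val=user_x[f],
                                   opt_val=opt_x[f], op=op, thresh=b)
        node = node["left"] if opt_left else node["right"]
    return None  # paths never diverged: tree is not feasible for this pair
```

XplainIM-Full would keep walking past the divergence point and emit one populated template per branch node along both divergent paths, which is exactly why its explanations grow long.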
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Evaluation</head><p>We evaluated XplainIM by using it to explain HEALER's solutions to human workers on AMT. We provide results from these experiments in three parts. First, we describe the HEALER-specific dataset (which XplainIM uses to build its Pareto frontier of trees). Second, we provide results from AMT surveys which were used to estimate interpretability weights w t for features in our dataset. Third, we provide AMT game results to evaluate XplainIM's effectiveness in explaining HEALER's solutions to AMT workers.</p><p>Dataset We generate 100 different Watts-Strogatz (WS) networks (p = 0.25, k = 7), each with 20 nodes. The influence probabilities for network edges were randomly sampled from a uniform distribution U[0, 1]. We set K = 2, i.e., every set of two network nodes is a possible IM solution. For each of these 100 networks, we add HEALER's solution, along with 20 randomly chosen IM solutions, to our dataset with the labels "optimal" and "suboptimal", respectively. Next, we map the i th IM solution in our dataset to a unique feature vector X i containing the network-centric feature values (see Figure <ref type="figure" target="#fig_1">1</ref>) for that IM solution. This gives us a labeled dataset (with 20 features and 2100 datapoints), which serves as XplainIM's input in our experiments. Interpretability Weights We deployed a survey among 35 AMT workers to estimate interpretability weights w t for all 20 features in our dataset. Each worker is (i) provided the definition of each feature; and (ii) given an example of how that feature's value is calculated for a sample IM solution. 
Then, the worker is asked to calculate each feature's value for a new IM solution on an unseen network.</p><p>For each feature t, we compare the user response distribution M t (i.e., histogram of feature values calculated by the 21 workers who passed the validation game) with the actual value distribution P t in order to calculate w t . We assume that a feature's interpretability correlates heavily with the ease of calculating that feature's value. Thus, if M t is very similar to P t , it implies that AMT workers find feature t to be highly interpretable. However, significant differences between M t and P t imply that feature t is uninterpretable. Specifically, we set w t = 1 − EarthMover(M t , P t ), where EarthMover(M t , P t ) measures the distance between the distributions M t and P t <ref type="bibr" target="#b2">[3]</ref>.</p><p>Due to a lack of space, Figure <ref type="figure" target="#fig_1">1</ref> shows normalized w t values for 5 out of the 20 features. This figure shows that "Degree" and "Betweenness Centrality" are the most and least interpretable features, respectively.</p><p>AMT Game Results We now present results from two different game types that we deployed on AMT to evaluate XplainIM's effectiveness at explaining HEALER's solutions. Both games begin by asking AMT workers to pick their preferred IM solution U G on a randomly chosen WS network (p = 0.25, k = 7, n = 20). We begin by explaining the design and motivations of both games.</p><p>Game 1 This game has three different versions, one for each variant of XplainIM. This game presents explanations generated by XplainIM to AMT workers, and evaluates the effect of these explanations in improving the worker's IM solutions U G . The game proceeds as follows. After the worker picks solution U G , he/she is given an explanation for why HEALER's solution O G is better than U G (this explanation is generated by XplainIM-{A, B or Full} depending on the game version). 
Then, the worker is asked to improve his/her solution U G based on the explanation shown.</p><p>Game 2 This game presents explanations generated by all three XplainIM variants to the same AMT worker, and compares the self-assessed interpretabilities of these three explanation schemes. The game proceeds as follows. After the worker picks solution U G , he/she is given three different explanations (generated using XplainIM-{A, B &amp; Full}) for why HEALER's solution O G is better than U G . Then, the worker is asked to rate these three explanations (each on a scale of 1-10) in terms of their interpretability. Further, we also deploy a "No Explanation" game (as a baseline), where workers are asked to improve their suboptimal solutions U G , without being given any explanation for U G 's suboptimality. Each of these game versions was deployed with 50 unique AMT workers (to avoid learning bias), each of whom plays on five different WS networks sequentially. We now discuss evaluation metrics used in this section.</p><p>Evaluation Metrics For Game 1, we compare the interpretability of XplainIM-A, B &amp; Full's explanations by measuring the effect of these explanations in improving the worker's IM solutions U G . Specifically, we measure the percentage of workers who (in response to the provided explanation) modified U G to either (i) match O G (Optimal group); OR (ii) correctly follow the provided explanation (Correct group); OR (iii) move in the right direction (Close group).</p><p>For example, assume that the "Degree" (Figure <ref type="figure" target="#fig_1">1</ref>) of U G is 4, and the provided explanation is "For solutions to be optimal, the Degree should be greater than 9". If the worker modifies U G to match O G , he/she is counted in the Optimal group. Otherwise, if the modified solution's Degree exceeds 9 (i.e., the worker correctly followed the explanation), he/she is counted in the Correct group; and if its Degree increases without exceeding 9, he/she is counted in the Close group. Also, we use the failure rate, which is defined as the fraction of times an explanation system fails to produce an explanation. 
This happens when there are no feasible decision trees for a given choice of O G and U G . We calculate an explanation system's failure rate on network G by measuring the fraction of possible user solutions U G for which no tree is feasible in the Pareto frontier. We now present results comparing XplainIM-A, B &amp; Full with the "No Explanation" treatment. All our comparison results are statistically significant under the t-test.</p><p>Pareto Frontier First, we illustrate the accuracy-interpretability tradeoff exploited by all XplainIM variants to find Pareto-optimal trees. Figure <ref type="figure">2a</ref> shows all decision trees (depicted by individual points) obtained by varying α values. The X and Y axes show the I D and F 1 scores, respectively. We prune this set of decision trees to find the set of Pareto-optimal trees (shown in Figure <ref type="figure">2b</ref>). These figures clearly depict the inverse correlation between the two competing objectives of prediction accuracy (F 1 ) and interpretability (I D ).</p><p>Next, we validate our decision to maintain this Pareto frontier of trees inside XplainIM (instead of maintaining just a single Pareto-optimal tree). Figure <ref type="figure" target="#fig_2">4a</ref> compares the failure rate of XplainIM-A, B &amp; Full to that of single Pareto-optimal trees. The X-axis shows trees of increasing prediction accuracy. The Y-axis shows the average failure rate (across 100 WS networks). This figure shows that our Pareto-frontier-based explanation systems (XplainIM-A, B &amp; Full) have an average failure rate of 0.01, i.e., XplainIM generates valid explanations in ∼99% of situations. However, the minimum failure rate achieved by a single Pareto-optimal tree is 0.27. This shows that using single trees as our classifier for Problem 2 (instead of maintaining a Pareto frontier) results in failures 27% of the time. 
This signifies the importance of the Pareto frontier in making XplainIM work.</p><p>XplainIM Results Figure <ref type="figure">3a</ref> shows Game 2 results. The X-axis shows the three XplainIM variants. The Y-axis shows the average ratings (scale of 1-10) given by workers to the different explanation schemes. This figure shows that XplainIM-A &amp; B's explanations received an average rating of 6.04, compared to 4.84 for XplainIM-Full (p=0.034). This shows that workers found XplainIM-A &amp; B's explanations to be more interpretable than those generated by XplainIM-Full. Figure <ref type="figure">3b</ref> shows Game 1 results for XplainIM-Full and the "No Explanation" treatment. The X-axis shows the Optimal, Correct, and Close groups. The Y-axis shows the percentage of workers belonging to these groups. This figure shows that only ∼4.6% of workers are in the Correct group after seeing XplainIM-Full's explanations. Surprisingly, even without any explanations, 24.6% of workers are in the Correct group (p=3e-9), which shows that users perform better without any explanations (compared to XplainIM-Full). This suggests that XplainIM-Full's longer explanations (which are based on <ref type="bibr" target="#b1">[2]</ref>) are completely uninterpretable to AMT workers (possibly because they overload workers with too much information). As a result, we now focus our attention on XplainIM-A &amp; B.</p><p>Figure <ref type="figure" target="#fig_2">4b</ref> shows Game 1 results for XplainIM-A &amp; B and the "No Explanation" treatment. This figure shows that with XplainIM-A &amp; B's explanations, ∼8.5% of users are in the Optimal group (∼4% with no explanation), 36% are in the Correct group (24% with no explanation), and ∼60% are in the Close group (46% with no explanation) (p=0.05). This establishes that XplainIM-A &amp; B's explanations are far more interpretable, as significantly more AMT workers follow these explanations to improve their solutions. 
Further, this shows the importance of exploiting the accuracy-interpretability tradeoff (as XplainIM does) in generating explanations for the solutions of IM algorithms.</p></div>
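The Optimal/Correct/Close grouping used in the Game 1 metrics can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function, its parameters, and the threshold follow the worked "Degree greater than 9" example from the text.

```python
def classify_response(modified_degree, original_degree, threshold, matches_optimal):
    """Assign a worker's modified solution U_G to an evaluation group.

    Follows the worked example in the text: U_G's original Degree is 4,
    and the explanation states the Degree should be greater than 9.
    All names here are illustrative, not from the paper's code.
    """
    if matches_optimal:
        return "Optimal"   # modified U_G matches HEALER's solution O_G
    if modified_degree > threshold:
        return "Correct"   # the explanation was followed exactly
    if modified_degree > original_degree:
        return "Close"     # moved U_G in the right direction
    return "None"          # no improvement over the original solution

print(classify_response(10, 4, 9, False))  # Correct
print(classify_response(7, 4, 9, False))   # Close
print(classify_response(3, 4, 9, False))   # None
```

The percentages reported for Figures 3b and 4b are then simply the fraction of workers landing in each group.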
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Conclusion</head><p>This paper solves the open problem of explaining the solutions of IM algorithms to their users. It shows that the problem of designing an explanation system can be equivalently viewed as the problem of explaining an ML classifier's predictions. We propose XplainIM, a suite of ML-based explanation systems which exploit the accuracy-interpretability tradeoff to generate natural language explanations. Extensive human subject experiments show the effectiveness of XplainIM in explaining IM algorithm solutions to human users.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Algorithm 1:</head><label>1</label><figDesc>Maximize A&amp;I score. Input: α, Dataset (X i , Y i ) ∀i ∈ {1, N }. Output: D α , the A&amp;I score maximizing decision tree. 1: (Train, Validation, Test) = Split(Dataset); 2: Tree = CART(Train); 3: for level ∈ PruningLevels(Tree) do 4: PrunedTree = Prune(Tree, level); 5: r level = A&amp;I(α, PrunedTree, Validation); 6: OptLevel = argmax j r j ; 7: D α = Prune(Tree, OptLevel); 8: return D α ;</figDesc></figure>
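The selection loop of Algorithm 1 can be sketched as follows. This is a minimal stand-in under stated assumptions: the CART training, pruning, and validation scoring are abstracted into a precomputed table mapping each pruning level to hypothetical (F1, interpretability) scores; a real implementation would train and prune an actual decision tree.

```python
def a_and_i(alpha, f1, interp):
    # A&I score: alpha trades prediction accuracy (F1) off against
    # interpretability, mirroring Algorithm 1's scoring step.
    return alpha * f1 + (1.0 - alpha) * interp

def maximize_a_and_i(alpha, level_scores):
    """Mirror of Algorithm 1's loop: score every pruning level on the
    validation split and keep the level maximizing the A&I score.

    level_scores maps a pruning level to the (f1, interpretability)
    scores its pruned tree achieves (hypothetical values here).
    """
    return max(level_scores, key=lambda lvl: a_and_i(alpha, *level_scores[lvl]))

# Deeper trees (lower pruning level): more accurate, less interpretable.
levels = {0: (0.71, 0.2), 1: (0.61, 0.5), 2: (0.41, 0.9)}
print(maximize_a_and_i(0.9, levels))  # accuracy-heavy alpha keeps the deep tree: 0
print(maximize_a_and_i(0.1, levels))  # interpretability-heavy alpha prunes hard: 2
```

Sweeping alpha over a grid and collecting the resulting trees is what produces the point cloud of Figure 2a.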
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: Interpretability Weights for Features</figDesc></figure>
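The interpretability weights w t = 1 − EarthMover(M t , P t ) behind Fig. 1 can be computed as in the following sketch. The histograms are made-up illustrations, and the 1-D earth mover's distance is implemented directly via cumulative differences (assuming unit-spaced bins) rather than with the algorithm of [3].

```python
def earth_mover(p, q):
    """1-D earth mover's distance between two histograms over the same
    ordered, unit-spaced bins (both assumed normalized to sum to 1)."""
    dist, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi      # running difference of the two CDFs
        dist += abs(cum)    # mass that must cross this bin boundary
    return dist

def interpretability_weight(m_t, p_t):
    # w_t = 1 - EarthMover(M_t, P_t): identical distributions yield w_t = 1.
    return 1.0 - earth_mover(m_t, p_t)

# Hypothetical "Degree" feature: worker responses closely track the truth,
# so the feature comes out as highly interpretable.
m_degree = [0.1, 0.6, 0.3]   # M_t: histogram of the 21 workers' answers
p_degree = [0.1, 0.5, 0.4]   # P_t: actual value distribution
print(round(interpretability_weight(m_degree, p_degree), 2))  # 0.9
```

An uninterpretable feature would show worker mass far from the true distribution, driving the earth mover's distance up and w t toward 0.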
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 :</head><label>4</label><figDesc>Fig. 3: Evaluating interpretability of XplainIM-Full</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Fig. 2:</head><label>2</label><figDesc>Fig. 2: Plotting the Pareto Frontier of Decision Trees. (a) Set of all decision trees; (b) Trees forming the Pareto frontier. Both panels plot Interpretability Score (X-axis, 0.95-1.25) against F1 Score (Y-axis, 0.31-0.71).</figDesc></figure>
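The pruning of the full tree set (Fig. 2a) down to the Pareto frontier (Fig. 2b) can be sketched as follows, treating each tree as an (interpretability score, F1) pair and keeping exactly the trees that no other tree dominates. The candidate values are made up for illustration, loosely echoing the figure's axis ranges.

```python
def pareto_frontier(trees):
    """Return the trees not dominated by any other tree, where both
    coordinates (interpretability score, F1 score) are to be maximized.
    A tree dominates another if it is at least as good on both objectives
    and strictly better on at least one."""
    frontier = []
    for i, (a_i, f_i) in enumerate(trees):
        dominated = any(
            a_j >= a_i and f_j >= f_i and (a_j > a_i or f_j > f_i)
            for j, (a_j, f_j) in enumerate(trees)
            if j != i
        )
        if not dominated:
            frontier.append((a_i, f_i))
    return frontier

# Illustrative (interpretability, F1) pairs for five candidate trees.
candidates = [(1.20, 0.35), (1.10, 0.50), (1.00, 0.70), (1.10, 0.40), (0.95, 0.60)]
print(pareto_frontier(candidates))  # [(1.2, 0.35), (1.1, 0.5), (1.0, 0.7)]
```

Maintaining this whole frontier, rather than a single tree, is what keeps XplainIM's failure rate near 0.01 in the experiments.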
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">See http://bit.ly/2kXDhE4 for proofs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">F1 score = 2TP/(2TP + FP + FN)</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_4">Proceedings of the 3rd International Workshop on Social Influence Analysis (SocInf 2017) August 19th, 2017 -Melbourne, Australia</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">Acknowledgements</head><p>This research was supported by MURI Grant W911NF-11-1-0332.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Maximizing Social Influence in Nearly Optimal Time</title>
		<author>
			<persName><forename type="first">C</forename><surname>Borgs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brautbar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chayes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lucier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms</title>
				<meeting>the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms</meeting>
		<imprint>
			<publisher>SIAM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="946" to="957" />
		</imprint>
	</monogr>
	<note>SODA &apos;14</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Interpretable decision sets: A joint framework for description and prediction</title>
		<author>
			<persName><forename type="first">H</forename><surname>Lakkaraju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Bach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Leskovec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1675" to="1684" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">An efficient earth mover&apos;s distance algorithm for robust histogram comparison</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Okada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="issue">5</biblScope>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Classification and regression trees</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">Y</forename><surname>Loh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="14" to="23" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Markov Decision Processes: Discrete Stochastic Dynamic Programming</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Puterman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Induction of decision trees</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Quinlan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="81" to="106" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Trust in man/machine security systems</title>
		<author>
			<persName><forename type="first">B</forename><surname>Schneier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Security Privacy</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="96" to="96" />
			<date type="published" when="2013-09">Sept 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The impact of pomdp-generated explanations on trust and performance in human-robot teams</title>
		<author>
			<persName><forename type="first">N</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Pynadath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Hill</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 International Conference on Autonomous Agents &amp; Multiagent Systems</title>
				<meeting>the 2016 International Conference on Autonomous Agents &amp; Multiagent Systems</meeting>
		<imprint>
			<publisher>International Foundation for Autonomous Agents and Multiagent Systems</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="997" to="1005" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">POMDPs for Assisting Homeless Shelters: Computational and Deployment Challenges</title>
		<author>
			<persName><forename type="first">A</forename><surname>Yadav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">X</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kamar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Grosz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tambe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Workshop on Issues with Deployment of Emerging Agent-based Systems (IDEAS)</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Influence maximization in the field: The arduous journey from emerging to deployed application</title>
		<author>
			<persName><forename type="first">A</forename><surname>Yadav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wilder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Petering</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tambe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Autonomous Agents and Multi-agent Systems (AAMAS)</title>
				<meeting>the International Conference on Autonomous Agents and Multi-agent Systems (AAMAS)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page">2017</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
