<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Designing Explanation Interfaces for Transparency and Beyond</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Chun-Hua</forename><surname>Tsai</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Pittsburgh</orgName>
								<address>
									<settlement>Pittsburgh</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
							<email>peterb@pitt.edu</email>
							<affiliation key="aff1">
								<orgName type="institution">University of Pittsburgh</orgName>
								<address>
									<settlement>Pittsburgh</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Designing Explanation Interfaces for Transparency and Beyond</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C372C4AFBD1E5D1DBB59D7F9AEACE40A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:57+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Social Recommendation</term>
					<term>Explanation</term>
					<term>Mental Model</term>
					<term>User Interface</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this work-in-progress paper, we presented a participatory process of designing explanation interfaces for a social recommender system with multiple explanatory goals. We went through four stages to identify the key components of the recommendation model, expert mental model, user mental model, and target mental model. We reported the results of an online survey of current system users (N=14) and a controlled user study with a group of target users (N=15). Based on the findings, we proposed five sets of explanation interfaces for five recommendation models (N=25) and discussed user preferences for the interface prototypes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CCS CONCEPTS</head><p>• Information systems → Recommender systems; • Human-centered computing → HCI design and evaluation methods.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Enhancing explainability in recommender systems has drawn increasing attention in the field of Human-Computer Interaction (HCI). Further, the newly initiated European Union General Data Protection Regulation (GDPR) requires the owner of any data-driven application to maintain a "right to explanation" of algorithmic decisions <ref type="bibr" target="#b0">[1]</ref>, which urges transparency in all existing intelligent systems. Self-explainable recommender systems have been shown to improve user-perceived system transparency <ref type="bibr" target="#b16">[17]</ref>, trust <ref type="bibr" target="#b12">[13]</ref>, and acceptance of system suggestions <ref type="bibr" target="#b6">[7]</ref>. Beyond offline performance improvements, a growing body of research has focused on evaluating systems from the user-experience perspective, i.e., what is the user's perception of the explanation interfaces?</p><p>Explaining recommendations (i.e., enhancing system explainability) can serve different explanatory goals, such as helping users make better decisions or persuading them to accept the suggestions from a system <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b15">16]</ref>. We followed the seven explanatory goals proposed by Tintarev and Masthoff <ref type="bibr" target="#b16">[17]</ref>: Transparency, Scrutability, Trust, Persuasiveness, Effectiveness, Efficiency, and Satisfaction. Since it is hard for a single explanation interface to achieve all these goals equally well, the designer needs to make trade-offs while choosing or designing the form of the interface <ref type="bibr" target="#b16">[17]</ref>. 
For instance, an interactive interface can be adopted to increase user trust and satisfaction but may prolong the decision and exploration process while using the system (i.e., decrease efficiency) <ref type="bibr" target="#b18">[19]</ref>.</p><p>Over the past few years, several approaches have been discussed to enhance explainability in recommender systems. These approaches can be categorized by styles, reasoning models, paradigms, and information <ref type="bibr" target="#b1">[2]</ref>. 1) Styles: Kouki et al. <ref type="bibr" target="#b7">[8]</ref> conducted an online user survey to explore user preferences across nine explanation styles. They found Venn diagrams outperformed all other visual and text-based interfaces. 2) Reasoning Models: Vig et al. <ref type="bibr" target="#b23">[24]</ref> used tags to explain the recommended item and the user's profile. The approach emphasized why a specific recommendation is plausible, instead of revealing the recommendation process or data. 3) Paradigms: Herlocker et al. <ref type="bibr" target="#b4">[5]</ref> presented a model for explanations based on the user's conceptual model of the collaborative filtering recommendation process. The results of the evaluation indicated that two interfaces -"Histogram with grouping" and "Presenting past performance" -improved the acceptance of recommendations. 4) Information: Pu and Chen <ref type="bibr" target="#b12">[13]</ref> proposed explanations tailored to the user and the recommendation, i.e., even when a recommendation is not the most popular one, the explanation justifies it by providing the reasons.</p><p>Although many approaches have been proposed to enhance recommender explainability, bringing explanation interfaces to an existing recommender system is still a challenging task. More recently, Eiband et al. 
<ref type="bibr" target="#b0">[1]</ref> suggested a different approach: improving the user mental model (UMM) while bringing transparency (explanations) to a recommender system. The model describes how a user builds an internal conceptualization of the system or interface through user-system interactions, i.e., builds knowledge of how to interact with the system. If the model is misguided or opaque, users will face difficulties in predicting or interpreting the system <ref type="bibr" target="#b0">[1]</ref>. Hence, the researchers suggested improving the mental model so that users can gain awareness while using the system as well as its explanation interfaces.</p><p>In this work-in-progress paper, we presented a stage-based participatory process <ref type="bibr" target="#b0">[1]</ref> for integrating seven explanatory goals into a real-world hybrid social recommender system. First, we introduced the Expert Mental Model to summarize the key components of each recommendation feature. Second, we conducted an online survey to identify the User Mental Model of seven explanatory goals from the current system users. Third, we conducted a user study with card-sorting and semi-structured interviews to determine the Target Mental Model. Fourth, we proposed a total of 25 explanation interfaces for five recommendation features and compared user perceptions across designs. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">BACKGROUND</head><p>We adopted the stage-based participatory framework from Eiband et al. <ref type="bibr" target="#b0">[1]</ref>, which intends to answer two key questions while designing an explainable user interface (UI): a) what to explain? and b) how to explain? The process can be summarized in four stages. 1) Expert Mental Model: What can be explained? We defined an expert as the recommender system developer. 2) User Mental Model: What is the user mental model of the system based on its current UI? This model should be built through the current recommender system users. 3) Target Mental Model: Which key components of the algorithm do users want to be made explainable in the UI? The target users are users who are new to the system. 4) Iterative Prototyping: How can the target mental model be reached through UI design? The key is to measure whether the proposed explanation interfaces achieved the explanatory goals.</p><p>In this work, we aimed to enhance the explainability of a conference support system -Conference Navigator 3 (CN3). The system had been used to support more than 45 conferences at the time of writing this paper and has data on approximately 7,045 articles presented at these conferences; 13,055 authors; 7,407 attendees; 32,461 bookmarks; and 1,565 social connections. Our work was informed by the results of a controlled user study where we explored an earlier version of the social recommender interface Relevance Tuner <ref type="bibr" target="#b18">[19]</ref> (shown in Figure <ref type="figure" target="#fig_0">1</ref>). It was a controllable interface that let the user fuse the weightings of multiple recommendation models and inspect the explanations.</p><p>A total of five recommendation models were introduced in this study: 1) Publication Similarity: the degree of cosine similarity of users' publication text. 2) Topic Similarity: the overlap of research interests (using topic modeling). 
3) Co-Authorship Similarity: the degree of connection, based on a shared network of co-authors. 4) Interest Similarity: the number of papers co-bookmarked, as well as the authors co-followed. 5) Geographic Distance: a measurement of the geographic distance between affiliations. Based on the stage-based participatory framework, we went through the same four stages for each recommendation model to identify the user-preferred interface design. We aimed to design explanation interfaces for each recommendation model with multiple explanatory goals.</p></div>
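The fusion of model weightings in a controllable interface like Relevance Tuner can be thought of as a weighted sum over the five per-model scores. The following is a minimal sketch under that assumption; the function name, model labels, weights, and score values are all illustrative, not taken from the system.

```python
# Hypothetical sketch of weighted-sum fusion for a controllable hybrid
# recommender; names and numbers are illustrative assumptions.

def fuse_scores(scores, weights):
    """Weighted sum of per-model relevance scores (each assumed in [0, 1])."""
    return sum(weights[m] * scores.get(m, 0.0) for m in weights)

# One candidate scholar scored by the five models (toy values).
scores = {"publication": 0.8, "topic": 0.6, "coauthor": 0.1,
          "interest": 0.4, "geography": 0.9}
# Slider weights the user has set (normalized to sum to 1).
weights = {"publication": 0.4, "topic": 0.3, "coauthor": 0.1,
           "interest": 0.1, "geography": 0.1}

fused = fuse_scores(scores, weights)  # 0.32 + 0.18 + 0.01 + 0.04 + 0.09 = 0.64
```

Re-ranking the candidate list by `fused` after each slider change would reproduce the controllable behavior described above.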
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">FIRST STAGE: EXPERT MENTAL MODEL</head><p>Instead of an interactive recommender <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b22">23]</ref>, we attached an explanation icon next to each social recommendation. Users can choose to request the explanations while exploring or browsing the recommendations. We adopted a hybrid explanation approach <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b11">12]</ref>, which mixes multiple visualizations to explain the details of the recommendation model. We would like to let users understand both a) the mutual relationship (similarity) between themselves and the recommended scholar and b) the key components of each recommendation model. We then discuss the Expert Mental Model through the development process of the five recommendation models.</p><p>1) Publication Similarity: This similarity was determined by the degree of text similarity between two scholars' publications using cosine similarity. We applied tf-idf to create the vectors, with a word frequency upper bound of 0.5 and a lower bound of 0.01 to eliminate both common and rarely used words. In this model, the key components were the terms of the paper titles and abstracts as well as their term frequencies.</p><p>2) Topic Similarity: This similarity was determined by matching research interests using topic modeling. We used latent Dirichlet allocation (LDA) to attribute terms collected from publications to one of the topics. We chose 30 topics to build the topic model for all scholars. Based on the model, we then calculated the topic similarity between any two scholars. The key components were the research topics and the topical words of each research topic <ref type="bibr" target="#b24">[25]</ref>.</p><p>3) Co-Authorship Similarity: This similarity approximated the network distance between the source and recommended users. 
For each pair of scholars, we tried to find six possible paths connecting them, based on their co-authorship relationships. The network distance is determined by the average distance of the six paths. The key components were the co-authors (as nodes), co-authorship relations (as edges), and the connection distance between the two scholars.</p><p>4) CN3 Interest Similarity: This similarity was determined by the number of co-bookmarked conference papers and co-connected authors in the experimental social system (CN3). We simply used the number of shared items as the CN3 interest similarity. The key components are the shared conference papers and authors.</p></div>
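The publication-similarity model above (tf-idf vectors bounded at 0.5 and 0.01, compared with cosine similarity) can be sketched in pure Python roughly as follows. The toy corpus, the raw-count tf, and the log idf weighting are illustrative assumptions; the system's actual vectorizer may weight terms differently.

```python
# Minimal sketch of tf-idf + cosine similarity with frequency bounds,
# assuming the 0.5 / 0.01 bounds are document-frequency fractions.
import math
from collections import Counter

def tfidf_vectors(docs, max_df=0.5, min_df=0.01):
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    # Keep only terms whose document frequency lies within the bounds,
    # dropping both overly common and rarely used words.
    vocab = {t for t, c in df.items() if min_df <= c / n <= max_df}
    vecs = []
    for toks in tokenized:
        tf = Counter(t for t in toks if t in vocab)
        vecs.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy "publications" for four scholars.
docs = ["user modeling recommender",
        "user modeling interfaces",
        "social network analysis",
        "deep learning vision"]
vecs = tfidf_vectors(docs)
```

Here `cosine(vecs[0], vecs[1])` is positive because the first two scholars share terms, while scholars with disjoint vocabularies score exactly zero.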
<div xmlns="http://www.tei-c.org/ns/1.0"><head>5) Geographic Distance:</head><p>This similarity was a measurement of the geographic distance between attendees. We retrieved longitude and latitude data based on attendees' affiliation information. We used the Haversine formula to compute the geographic distance between scholars. The key components are the geographic distance and affiliation information of the scholars.</p></div>
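The Haversine computation behind the Geographic Distance model can be sketched as below; the function name, the mean Earth radius, and the sample coordinates (roughly Pittsburgh and New York) are illustrative assumptions.

```python
# Sketch of the Haversine great-circle distance between two affiliations.
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Approximate Pittsburgh -> New York distance (illustrative coordinates).
d = haversine_km(40.4406, -79.9959, 40.7128, -74.0060)
```

In the system, each scholar's latitude and longitude would be geocoded from the affiliation string before this distance is computed.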
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">SECOND STAGE: USER MENTAL MODEL</head><p>As a first step towards understanding the design factors of explanatory interfaces, we deployed a survey through a social recommender system, Conference Navigator <ref type="bibr" target="#b17">[18]</ref>, and analyzed data from the respondents. We targeted users who had created an account and interacted with the system at a previous conference (i.e., had used the system for at least one conference). The survey was initiated by sending invitations to the qualified users in December 2017. We sent out 89 letters to the conference attendees of UMAP/HT 2016, and a total of 14 participants (7 female) replied, forming the pool of participants for the user study. The participants were from 13 different countries; their ages ranged from 20 to 40 (M=31.36, SE=5.04). We used an online survey to collect necessary demographic information and self-reflections on how to design an explanation function for the seven explanatory goals <ref type="bibr" target="#b16">[17]</ref>.</p><p>The proposed questions were: How can an explanation function help you to perceive system 1) Transparency -explain how the system works? 2) Scrutability -allow you to tell the system it is wrong? 3) Trust -increase your confidence in the system? 4) Persuasiveness -convince you to explore or to follow new friends? 5) Effectiveness -help you make good decisions? 6) Efficiency -help you to make decisions faster? 7) Satisfaction -make using the system fun and useful? We asked the participants to answer each question in 50-100 words, in particular reflecting on the explanatory goals of the social recommendation. 
The data was published in <ref type="bibr" target="#b19">[20]</ref>.</p><p>1) Transparency: 71% of respondents pointed to the reasons behind a generated social recommendation as what helped them perceive higher system transparency, i.e., personalized explanations, the linkage and data sources, the reasoning method, and understandability. We then summarized the feedback into five factors: 1) The visualization presents the similarity between my interest and the recommended person. 2) The visualization presents the relationship between the recommended person and me. 3) The visualization presents where the data was retrieved. 4) The visualization presents more in-depth information on how the scores sum up. 5) The visualization allows me to see the connections between people and understand how they are connected.</p><p>2) Scrutability: Half of the respondents mentioned they needed "inspectable details" to figure out a wrong recommendation. 35% of respondents suggested a mechanism for accepting user feedback to improve wrong recommendations, such as a space to submit user ratings or yes/no options. 14% of respondents preferred a dynamic exploration process to determine the recommendation quality. We then summarized the feedback into four factors: 6) The visualization allows me to understand whether the recommendation is good or not. 7) The visualization presents the data for making the recommendations. 8) The visualization allows me to compare and decide whether the system is correct or wrong. 9) The visualization allows me to explore and then determine the recommendation quality.</p><p>3) Trust: 28% of respondents mentioned that they trusted the system more when they perceived the benefits of using the system. 35% of respondents preferred to trust a system with reliable and informative explanations, more detailed information, or understandability. 35% of respondents mentioned they trust a system that is transparent or has passed their verification. 
We then summarized the feedback into three factors: 10) The visualization presents a convincing explanation to justify the recommendation. 11) The visualization presents the components (e.g., algorithm) that influenced the recommendation. 5) The visualization allows me to see the connections between people and understand how they are connected.</p><p>4) Persuasiveness: Half of the respondents mentioned that an explanation of social familiarity would persuade them to explore novel social connections, namely, when shown social context details or shared interests. 21% of respondents indicated that an informative interface could boost the exploration of new friendships. 28% of respondents preferred a design that inspired curiosity, e.g., one revealing implicit relationships. We then summarized the feedback into three factors: 12) The visualization shows me the shared interests, i.e., why my interests are aligned with the recommended person. 13) The visualization has a friendly, easy-to-use interface. 14) The visualization inspires my curiosity (to discover more information).</p><p>5) Effectiveness: 64% of respondents mentioned that aspects of social recommendation relevance helped them to make a good decision. These aspects included explaining the recommendation process and being understandable or more informative. 28% of respondents suggested that a reminder of historical or successful decisions could help them make a good decision, i.e., previously made user decisions and success stories. We then summarized the feedback into three factors: 15) The visualization presents the recommendation process. 5) The visualization allows me to see the connections between people and understand how they are connected. 11) The visualization presents the components (e.g., algorithm) that influenced the recommendation.</p><p>6) Efficiency: 28% of respondents mentioned that proper highlighting of the recommendations helped them to make decisions faster. 
For example, emphasizing the relatedness, identifying the top recommendations, or providing success stories. 28% of respondents preferred a tune-able or visualized interface to accelerate the decision process, such as tuning the recommendation features or visualizing the recommendations. However, the explanations may not always be useful. 21% of respondents argued that the explanation would prolong the decision process instead of speeding it up: the user may need to take extra time to examine the explanations. We then summarized the feedback into two factors: 16) The visualization presents highlighted items/information that is strongly related to me. 17) The visualization presents aggregated, non-obvious relations to me.</p><p>7) Satisfaction: The feedback on how an explanation can improve user satisfaction with the system varied. Three aspects received an equal 7% of respondents' preferences: users preferred to view feedback from the community, to be shown their historical interaction record, and to be provided with a personalized explanation. Two aspects received an equal 14% of respondents' preference, i.e., a focus on a friendly user interface and saved decision time. 21% of respondents reported higher satisfaction when using the explanation as a "small-talk topic", i.e., as an initial conversation at a conference. 28% of respondents preferred an interactive interface for perceiving the system to be fun, e.g., a controllable interface. We then summarized the feedback into four factors: 18) The visualization presents the feedback from other users, i.e., I can see how others rated the recommended person. 19) The visualization allows me to tell why the system recommends the person to me. 1) The visualization presents the similarity between my interest and the recommended person. 
13) The visualization has a friendly, easy-to-use interface.</p><p>Based on the results of the online survey, we concluded with a total of 19 factors in the second stage of building the user mental model.</p><p>(1) The visualization presents the similarity between my interest and the recommended person. (2) The visualization presents the relationship between the recommended person and me. (3) The visualization presents where the data was retrieved. (4) The visualization presents more in-depth information on how the scores sum up. (5) The visualization allows me to see the connections between people and understand how they are connected. (6) The visualization allows me to understand whether the recommendation is good or not. (7) The visualization presents the data for making the recommendations. (8) The visualization allows me to compare and decide whether the system is correct or wrong. (9) The visualization allows me to explore and then determine the recommendation quality. (10) The visualization presents a convincing explanation to justify the recommendation. (11) The visualization presents the components (e.g., algorithm) that influenced the recommendation. (12) The visualization shows me the shared interests, i.e., why my interests are aligned with the recommended person. (13) The visualization has a friendly, easy-to-use interface. (14) The visualization inspires my curiosity (to discover more information). (15) The visualization presents the recommendation process clearly. 
(16) The visualization presents highlighted items/information that is strongly related to me. (17) The visualization presents aggregated, non-obvious relations to me. (18) The visualization presents feedback from other users, i.e., I can see how others rated a recommended person. (19) The visualization allows me to tell why the system recommends the person to me.</p><p>We also found some factors shared across different explanatory goals. For example, Factor 1 was shared by the explanatory goals of Transparency and Satisfaction. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">THIRD STAGE: TARGET MENTAL MODEL</head><p>In this stage, we conducted a controlled lab study to create the Target Mental Model. This model is used to identify the key components of the recommendation model that users might want to be explainable in the UI. Since the goal is to identify the information needs of new users, we specifically selected subjects who had never used the CN3 system. A total of 15 (6 female) participants (N=15) were recruited for this study. They were first- or second-year graduate students (majoring in information sciences) at the University of Pittsburgh, with ages ranging from 20 to 30 (M=25.73, SE=2.89). All participants had no previous experience of using the CN3 system. Each participant received USD$20 compensation and signed an informed consent form.</p><p>We asked the subjects to complete a card-sorting task about their preferences for the 19 factors we identified in the second stage. We started by presenting the CN3 system (shown in Figure <ref type="figure" target="#fig_0">1</ref>) to the subjects and introducing the five recommendation models through the Expert Mental Model. After the tutorial, the subjects were asked to do a closed card-sorting that assigns cards to four predefined groups: 1) very important; 2) less important; 3) not important; and 4) not relevant.</p><p>The survey result is reported in Table <ref type="table" target="#tab_0">1</ref>. We found that for the target users, factors 1, 13, and 16 outperformed the other factors: more than ten subjects assigned these three factors to the "very important" group. Factors 2, 6, 10, 12, 14, 15, and 19 formed a secondary preference group, with at least ten subjects assigning them to the "very important" or "less important" groups. 
The least preferred factors were 3, 7, 11, and 18, with at least nine subjects assigning these factors to the "not important" or "not relevant" groups.</p><p>Based on the card-sorting result, we found that users preferred an explainable UI that presents the similarity between their interests and the recommended person (F1). The UI should be friendly and easy to use (F13) and highlight the items or information that are strongly related to the user (F16). Besides, some other factors were also liked by the subjects, for instance, presenting the mutual relationship (F2), shared interests (F12), and the recommendation process (F15). The UI should also allow the user to understand (F6) and justify (F10) the quality of a recommendation, as well as inspire curiosity for exploration (F14) and about the recommendation process (F19). Interestingly, we also found the users were less interested in a UI presenting the data source (F3) and raw data (F7), the details of the algorithm (F11), or recommendation feedback from other users in the same community (F18). Hence, we decided to filter out the factors that were less preferred by the subjects. We chose to keep the factors with more than ten votes in the groups "Very Important" and "Less Important", which are F1, F2, F6, F10, F12, F13, F14, F15, F16, and F19; the chosen factors are highlighted in red in Table <ref type="table" target="#tab_0">1</ref>. We can project the factors back onto the original explanatory goals. The percentage of each explanatory goal covered is listed below: Transparency (40%, 2 out of 5), Scrutability (0%, 0 out of 4), Trust (33%, 1 out of 3), Persuasiveness (67%, 2 out of 3), Effectiveness (33%, 1 out of 3), Efficiency (50%, 1 out of 2), and Satisfaction (75%, 3 out of 4). That is, the Target Mental Model was built through the explanatory goals of (ranked from high to low importance) Satisfaction, Persuasiveness, Efficiency, Transparency, Trust, and Effectiveness.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">FOURTH STAGE: ITERATIVE PROTOTYPING</head><p>The fourth stage, iterative prototyping, was performed within the same user study as the third stage. After the card-sorting task, we asked the subjects to identify the ten chosen factors across a set of UI prototypes. A total of 25 interfaces (five interfaces for each recommendation model) were shown in this stage. We used a within-subject design, i.e., all participants were required to do a card-sorting task. In each session, the participants were asked to sort the given five interfaces into groups 1 to 5 (1: Strongly Agree, 5: Strongly Disagree) for each explanatory factor. If an interface did not contribute to a factor, the participant could mark it as irrelevant (not applicable). We continued with a semi-structured interview after the subject completed each session to collect qualitative feedback. There were a total of five card-sorting sessions, one for each of the five recommendation models. At the beginning of each session, we introduced the recommendation model through the Expert Mental Model, i.e., told the participant how the similarity is calculated and what data were adopted in this process, to make sure the subject understood the details of each recommendation model. After that, we provided five interface printouts, a paper sheet with a table containing the 19 explanatory factors, and a pen -the subjects were expected to write down rankings on the paper sheet. All subjects took around 80-100 minutes to complete the study.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Explaining Publication Similarity</head><p>The key components of publication similarity are the terms and term frequencies of the publications as well as the mutual relationship (i.e., the common terms) between two scholars. We presented four visual interface prototypes (shown in Figure <ref type="figure" target="#fig_3">2</ref>) for explaining publication similarity and one text-based interface (E1-1), which simply says "You and [the scholar] have common words in [W1], [W2], [W3]." 6.1.1 E1-2: Two-way Bar Chart. The bar chart is a common approach to presenting text-mining outcomes <ref type="bibr" target="#b14">[15]</ref>, using a histogram of terms and term frequencies. We extended the design to a two-way bar chart to show the mutual relationship of two scholars' publication terms and term frequencies, i.e., one scholar on a positive and the other scholar on a negative scale. The design is shown in Figure <ref type="figure" target="#fig_3">2a.</ref> 6.1.2 E1-3: Word Clouds. The word cloud is a common design for explaining text similarity <ref type="bibr" target="#b17">[18]</ref>. We adopted the word cloud design from <ref type="bibr" target="#b25">[26]</ref>, which presents the terms in the cloud and encodes term frequency by font size. This interface provided two word clouds (one for each scholar) so the user can perceive the mutual relationship. The design is shown in Figure <ref type="figure" target="#fig_3">2b.</ref> 6.1.3 E1-4: Venn Word Cloud. The Venn diagram was recognized as an effective hybrid explanation interface by <ref type="bibr">Kouki et al. [8]</ref>. This interface can be considered a combination of a word cloud and a Venn diagram <ref type="bibr" target="#b21">[22]</ref>, which presents term frequency using the font size. The unique terms of each scholar are shown in different colors (green and blue), while the common terms are presented in the middle, in red, for determining the mutual relationship. 
The design is shown in Figure <ref type="figure" target="#fig_3">2c.</ref> 6.1.4 E1-5: Interactive Word Cloud. A word cloud can be interactive. We extended the idea from <ref type="bibr" target="#b17">[18]</ref> and used Zoomdata Wordcloud <ref type="bibr" target="#b26">[27]</ref>, which follows the common approach of visualizing term frequency with the font size. The font color was selected to distinguish the scholars' terms, i.e., a different term color for each scholar. A slider was attached to the bottom of the interface to provide real-time interactive functionality to increase or decrease the number of terms in the word cloud. The design is shown in Figure <ref type="figure" target="#fig_3">2d</ref>.</p><p>6.1.5 Results. The card-sorting result is presented in Table <ref type="table" target="#tab_1">2</ref>. We found the E1-4 Venn Word Cloud was preferred by the participants, receiving 76 votes in Rank 1 and outperforming the other four interfaces. According to the post-session interviews, 13 subjects agreed E1-4 was the best interface versus the other four interfaces.</p><p>The supporting reasons can be summarized as: 1) the Venn diagram provided common terms in the middle, which highlighted the common terms and the shared relationship; 2) it is useful to show non-overlapping terms on the sides (N=5); and 3) the design is simple, easy to understand, and requires less time to process (N=3). Two subjects mentioned they preferred E1-2 the most because the histograms gave them the "concrete numbers" for "calculating" the similarity, which was harder when using word clouds.</p></div>
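The data preparation behind the E1-4 Venn Word Cloud layout reduces to a three-way set partition of the two scholars' terms (unique to each side, common in the middle). A minimal sketch follows; the function name and the term lists are made up for illustration.

```python
# Sketch of partitioning two scholars' publication terms for a
# Venn word cloud: side-only terms vs. shared middle terms.

def venn_partition(terms_a, terms_b):
    """Split two term collections into unique and common sets."""
    a, b = set(terms_a), set(terms_b)
    return {"only_a": a - b, "common": a & b, "only_b": b - a}

# Illustrative term lists for two scholars.
parts = venn_partition(
    ["recommender", "explanation", "interface", "user"],
    ["explanation", "user", "topic", "model"],
)
```

In the actual interface, each set would then be rendered in its own region and color, with font size driven by term frequency.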
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Explaining Topic Similarity</head><p>The key components of topic similarity are the research topics and topical words of the scholars, as well as the mutual relationship (i.e., the common research topics) between the two scholars. We presented four visual interface prototypes (shown in Figure <ref type="figure" target="#fig_4">3</ref>) and one text-based prototype for explaining topic similarity. The text-based interface (E2-1) simply says "You and [the scholar] have common research topics on [T1], [T2], [T3]."</p><p>6.2.1 E2-2: Topical Words. This interface followed the approach of <ref type="bibr" target="#b9">[10]</ref>, which attempted to help users interpret the meaning of each topic by presenting topical words in a table. We adopted this idea as E2-2 Topical Words, which presents the topical words in two multi-column tables (each column contains the top 10 words of one topic). The design is shown in Figure <ref type="figure" target="#fig_4">3a</ref>.</p><p>6.2.2 E2-3: FLAME. This interface followed Wu and Ester <ref type="bibr" target="#b25">[26]</ref>, who combined a bar chart and two word clouds to display opinion-mining results. In their design, each bar represents a "sentiment"; the user can then interpret the model through the figure (for the beta value of each topic) and the table (for the topical words). We extended the idea as E2-3: FLAME, which shows two sets of research topics (top 5) and the relevant topical words in two word clouds (one for each scholar). The design is shown in Figure <ref type="figure" target="#fig_4">3b</ref>.</p><p>6.2.3 E2-4: Topical Radar. The E2-4 Topical Radar was used in Tsai and Brusilovsky <ref type="bibr" target="#b21">[22]</ref>. The radar chart is presented on the left. We picked the user's top 5 topics (ranked by beta value from a total of 30 topics) and compared them with those of the examined attendee through the overlay. 
A table with topical words is presented on the right so that the user can inspect the context of each research topic. The design is shown in Figure <ref type="figure" target="#fig_4">3c</ref>.</p><p>6.2.4 E2-5: Topical Bars. This interface, E2-5: Topical Bars, adopts several bar charts. It shows the top three topics of the two scholars (top row and second row) and the topical information (top eight topical words on the y-axis and topic beta values on the x-axis) using bar charts. The design is shown in Figure <ref type="figure" target="#fig_4">3d</ref>. 6.2.5 Results. The card-sorting results are presented in Table <ref type="table" target="#tab_1">2</ref>. We found that E2-4 Topical Radar received 86 votes in Rank 1, outperforming all other interfaces. E2-3 ended up second, with the most votes in the Rank 2 group. According to the post-session interview, 13 subjects agreed that E2-4 was the best of the examined interfaces. One subject preferred E2-3, and one subject suggested a mix of E2-3 and E2-4 as the best design. The supporting reasons for E2-4 can be summarized as follows: 1) it is easy to see the relevance through the overlapping area of the radar chart and the percentage numbers in the table (N=12); 2) it is informative to compare the shared research topics and topical words (N=9).</p></div>
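The overlay in the E2-4 Topical Radar pairs the user's top topics with the peer's weights on the same axes. A minimal sketch of that selection step follows; it assumes (hypothetically) that the topic model exposes each scholar's beta weights as a dict from topic id to weight, which is not specified in the paper.

```python
def radar_topic_data(beta_user, beta_peer, top_k=5):
    """Pick the user's top-k topics by beta value and pair each with the
    peer's beta on the same topic -- the two overlaid polygons of E2-4.
    beta_user / beta_peer: {topic_id: beta weight} over the topic model."""
    axes = sorted(beta_user, key=beta_user.get, reverse=True)[:top_k]
    # Topics the peer lacks get weight 0.0, collapsing that radar axis.
    return [(t, beta_user[t], beta_peer.get(t, 0.0)) for t in axes]
```

The returned triples map directly onto the radar axes; the overlapping area the subjects cited (N=12) is where both weights are non-zero.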
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Explaining Co-Authorship Similarity</head><p>The key components of co-authorship similarity are the coauthors, co-authorships, and connection distances of the scholars, as well as the mutual relationship (i.e., the connecting path) between the two scholars. We present five prototype interfaces (shown in Figure <ref type="figure" target="#fig_6">4</ref>; E3-1 presented in the text below) for explaining co-authorship similarity.</p><p>In addition to four visual interfaces, we also include one text-based interface (E3-1). That is, "You and [the scholar] have common co-authors, they are [A1], [A2], [A3]."</p><p>6.3.1 E3-2: Correlation Matrix. E3-2 Correlation Matrix was inspired by Heckel et al. <ref type="bibr" target="#b3">[4]</ref>, where it was used to present overlapping user-item co-clusters in a scalable and interpretable product recommendation model. We extended the interface to a user-to-user correlation matrix in which the user can inspect the scholars' co-authorship network. The design is shown in Figure <ref type="figure" target="#fig_6">4</ref>(a). 6.3.3 E3-4: Strength Graph. E3-4 Strength Graph was inspired by Tsai and Brusilovsky <ref type="bibr" target="#b17">[18]</ref>, which presented the co-authorship network using the D3plus network style <ref type="bibr" target="#b8">[9]</ref>. Nodes and edges represent authors and co-authorships, respectively. The edge thickness encodes the weight of the co-authorship (the number of co-authored papers).</p><p>Nodes are assigned different colors by group, i.e., the original scholar, the target scholar, and intermediary scholars. The design is shown in Figure <ref type="figure" target="#fig_6">4</ref>(c).</p><p>6.3.4 E3-5: Social Viz. The E3-5 Social Viz was used in <ref type="bibr" target="#b21">[22]</ref>. There are six possible paths (one shortest and five alternatives). The user is presented on the left as a yellow circle. 
The target user is presented on the right in red. The circle size represents the weight of a scholar, determined by how frequently the scholar appears across the six paths. For example, the scholar Peter is the only node through which scholar Chu can reach scholar Nav, so his circle size is the largest (size = 6). The design is shown in Figure <ref type="figure" target="#fig_6">4</ref>(d).</p><p>6.3.5 Results. The card-sorting results are presented in Table <ref type="table" target="#tab_1">2</ref>. We found that E3-4 Strength Graph was preferred by the participants, receiving 45 votes in Rank 1. However, the votes were close to those of E3-2 Correlation Matrix (37 votes) and E3-3 ForceAtlas2 (32 votes).</p><p>According to the post-session interview, four subjects agreed that E3-4 was the best of the five interfaces. The supporting reasons were that the interface highlighted the mutual relations and let the user understand the path between the two scholars; the arrow and edge thickness were also useful. Two subjects supported E3-2; they liked that the correlation matrix provided clear numbers and correlation information that was easier for them to process. Three subjects supported E3-3; they appreciated that the interface provided high-level information by giving a "big picture." E3-3 would also be good for exploring the co-authorship network beyond the connecting path, although it was reported to be too complicated as an explanation. Four subjects supported E3-5; they enjoyed the simple, clear, and "straightforward" connecting path as an explanation of the co-authorship network.</p></div>
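The two computations behind E3-5 Social Viz — finding a connecting path and sizing nodes by how often they appear across the candidate paths — can be sketched as follows. This is an illustrative reconstruction, not the system's code: it assumes the co-authorship network is an adjacency dict and uses plain BFS, whereas the actual system may weight edges differently.

```python
from collections import Counter, deque

def shortest_path(graph, src, dst):
    """Unweighted shortest connecting path via BFS over an
    adjacency dict {author: [coauthors]}; None if disconnected."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

def node_sizes(paths):
    """Circle size in E3-5: how often each scholar appears across the
    candidate paths (one shortest plus the alternatives)."""
    return Counter(s for p in paths for s in p)
```

With six paths that all pass through one intermediary, that intermediary's count is 6, matching the paper's Peter example.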
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.4">Explaining CN3 Interest Similarity</head><p>The key components of CN3 interest similarity are the papers and authors bookmarked in the system, as well as the mutual relationship (i.e., the common bookmarks) between the two scholars. We presented five prototype interfaces (shown in Figure <ref type="figure" target="#fig_7">5</ref>; E4-1 presented in the text below) for explaining CN3 interest similarity. In addition to four visual interfaces, we also include one text-based interface (E4-1). That is, "You and [the scholar] have common bookmarking, they are [P1], [P2], [P3]." 6.4.1 E4-2: Similar Keywords. E4-2 Similar Keywords was proposed and deployed in Conference Navigator <ref type="bibr" target="#b10">[11]</ref>. We extended the interface to explain shared bookmarks between two scholars. The interface presents the scholars on two sides and the common co-bookmarked items (e.g., the five common co-bookmarked papers or authors) in the middle. A strong (solid line) or weak (dashed line) tie connects an item bookmarked by one side or both sides. The design is shown in Figure <ref type="figure" target="#fig_7">5</ref>(a).</p><p>6.4.2 E4-3: Tagsplanations. E4-3 Tagsplanations was proposed by Vig et al. <ref type="bibr" target="#b23">[24]</ref>. The idea is to show the tags, user preference, and relevance used to recommend movies. We extended the interface to explain the co-bookmarking information. In our design, the co-bookmarked items are listed and ranked by their social popularity, i.e., how many users have followed or bookmarked the item. The design is shown in Figure <ref type="figure" target="#fig_7">5</ref>(b).</p><p>6.4.3 E4-4: Venn Tags. The study of <ref type="bibr" target="#b7">[8]</ref> pointed out that users preferred the Venn diagram as an explanation in a recommender system. In the E4-4: Venn Tags interface, we implemented the same idea with the bookmarked items. The idea is to present each bookmarked item, as an icon, in a Venn diagram. The two sides contain the items bookmarked by only one party. 
The co-bookmarked or co-followed items are placed in the middle. The user can hover over an icon for detailed information, i.e., the paper title or author name. The design is shown in Figure <ref type="figure" target="#fig_7">5</ref>(c).</p><p>6.4.4 E4-5: Itemized List. An itemized list has been adopted to explain bookmarks in <ref type="bibr" target="#b20">[21]</ref>. We proposed E4-5: Itemized List, which presents the bookmarked or followed items in two lists. The design is shown in Figure <ref type="figure" target="#fig_7">5</ref>(d).</p><p>6.4.5 Results. The card-sorting results are presented in Table <ref type="table" target="#tab_1">2</ref>. We found that E4-4 Venn Tags was preferred by the participants, receiving 64 votes in Rank 1 and outperforming the other four interfaces. E4-3 Tagsplanations was also favored by the subjects, receiving 49 votes in Rank 1. According to the post-session interview, eight subjects agreed that E4-4 was the best of the five interfaces. The supporting reasons can be summarized as follows: 1) the Venn diagram is more familiar or clearer than the other interfaces (N=4); 2) the Venn diagram is simple and easy to understand (N=4). Three subjects mentioned they preferred E4-3 the most because the interface provides extra attributes, does not require hovering for details, and is easy to use.</p></div>
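The bookkeeping behind E4-4 Venn Tags and the popularity ranking of E4-3 reduce to set operations and a sort. The sketch below is a hypothetical illustration under the assumption that bookmarks are item ids and that a follower-count map is available; the paper does not describe its data model.

```python
def venn_tag_regions(bookmarks_a, bookmarks_b):
    """Partition bookmarked items into the three regions of E4-4:
    left (only A), middle (co-bookmarked), right (only B)."""
    a, b = set(bookmarks_a), set(bookmarks_b)
    return a - b, a & b, b - a

def rank_by_popularity(items, followers):
    """E4-3-style ordering: items ranked by social popularity, i.e. how
    many users follow/bookmark them (followers: {item: count})."""
    return sorted(items, key=lambda i: followers.get(i, 0), reverse=True)
```

Hovering an icon in the Venn regions would then look up the item's title or author name from these ids.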
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.5">Explaining Geographic Similarity</head><p>The key components of geographic similarity are the locations and distance of the two scholars, as well as their mutual relationship (i.e., the geographic distance). We presented five prototype interfaces (shown in Figure <ref type="figure" target="#fig_8">6</ref>; E5-1 presented in the text below) for explaining geographic similarity. In addition to four visual interfaces, we also include one text-based interface (E5-1). That is, "From [Institution A] to [sample]'s affiliation ([Institution B]) = N miles."</p><p>6.5.1 E5-2: Earth Style. Using Google Maps <ref type="bibr" target="#b5">[6]</ref> to explain geographic distance in a social recommender system has been discussed in Tsai and Brusilovsky <ref type="bibr" target="#b20">[21]</ref>. We extended the interface to a different style. In E5-2 Earth Style, we "zoom out" the map to the earth's surface and place two connected icons (with the geographic distance) on the map. The design is shown in Figure <ref type="figure" target="#fig_8">6</ref>(a). 6.5.5 Results. The card-sorting results are presented in Table <ref type="table" target="#tab_1">2</ref>. We found that E5-3 Navigation Style was preferred by the participants, receiving 42 votes in Rank 1. However, the votes were close to those of E5-5 Label Style (40 votes). According to the post-session interview, six subjects agreed that E5-3 was the best of the five interfaces, but three subjects particularly mentioned that the navigation function was irrelevant to explaining or exploring the social recommendations. The supporting reasons for E5-3 can be summarized as follows: 1) the map is informative (N=2); 2) it is useful to see the navigation (N=5). Three subjects mentioned they preferred E5-5 the most because the label contains affiliation information, so they could understand the affiliation without extra actions. 
Although there is no geographic distance information, one subject pointed out that he could infer the distance after learning the affiliation name.</p></div>
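The "N miles" figure in the E5-1 text template and the map interfaces is a great-circle distance between the two affiliations. A minimal sketch using the standard haversine formula follows; it assumes affiliation coordinates are already geocoded, whereas the actual system relies on the Google Maps API cited above.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two points
    given in decimal degrees, e.g. two scholars' affiliations."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # mean Earth radius ~= 3958.8 miles
```

Note that the Directions API used by E5-3 Navigation Style would return driving distance instead, which is generally longer than this straight-line figure.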
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">DISCUSSION AND CONCLUSIONS</head><p>In this work-in-progress paper, we presented a participatory process for bringing explanation interfaces to a social recommender system. We proposed four stages to address the challenge of identifying the key components of the explanation models and mental models. In the first stage, we derived the Expert Mental Model by discussing the key components (based on the similarity algorithms) of each recommendation model. In the second stage, we reported an online survey of current system users (N=14) and identified 19 explanatory goals as the User Mental Model. In the third stage, we reported the card-sorting results of a controlled user study (N=15) that created the Target Mental Model from the target users' preferences for the explanatory factors.</p><p>In the fourth stage, we proposed a total of 25 explanation interfaces for five recommendation models and reported the card-sorting and semi-structured interview results. We found that, in general, the participants preferred the visual interfaces over the text-based interfaces. Based on the study, we found that E1-4: Venn Word Cloud, E2-4: Topical Radar, E3-4: Strength Graph, E4-4: Venn Tags, and E5-3: Navigation Style were preferred by the study participants. We further discussed the top-rated and second-rated explanation interfaces and the user feedback in each session. Based on the experimental results, we derived design guidelines for bringing explanation interfaces to a real-world social recommender system.</p><p>A further controlled study will be required to test whether the proposed explanation interfaces can achieve the target mental model we identified in this paper. In future work, we plan to implement the top-rated explanation interfaces and deploy them in the CN3 system. 
Moreover, we expect to provide the explanation interfaces with an information-seeking task, so that we can analyze how and why a user adopts the explanation interfaces in exploring the social recommendations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Relevance Tuner+: (A) relevance sliders; (B) stackable score bar; (C) explanation icon; (D) user profiles. The interface supports user-driven exploration of the recommended items in Section A and inspection of the fusion in Section B. The user can further inspect the explanation model by clicking Section C, and more profile detail is presented in Section D. Our goal is to provide an explanation interface for each explanation model. (The scholar names have been pixelated for privacy protection)</figDesc><graphic coords="2,56.32,83.69,499.36,120.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Factor 5 was shared by Transparency, Trust, and Effectiveness. Factor 11 was shared by Trust and Effectiveness. Factor 13 was shared by Persuasiveness and Satisfaction.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>(a) E1-2: Two-way Bar Chart (b) E1-3: Word Clouds (c) E1-4: Venn Word Cloud (d) E1-5: Interactive Word Cloud</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The interfaces used to explain the Publication Similarity in the fourth stage.</figDesc><graphic coords="6,58.91,236.91,242.11,115.61" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: The interfaces used to explain the Topic Similarity in the fourth stage.</figDesc><graphic coords="7,58.91,244.52,242.10,87.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>6.3.2 E3-3:</head><label>323</label><figDesc>ForceAtlas2. E3-3: ForceAtlas2 was inspired by Garnett et al. [3], which presented a co-authorship graph of NiMCS and related research with both high- and low-level network structure and information. Nodes and edges represent authors and co-authorships, respectively. The graph layout uses the ForceAtlas2 algorithm [3]. Clusters are calculated via Louvain modularity and delineated by color. The frequency of co-authorship is calculated via eigenvector centrality and represented by node size. The design is shown in Figure 4(b).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The interfaces used to explain the Co-Authorship Similarity in the fourth stage.</figDesc><graphic coords="8,58.91,266.16,242.12,183.19" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The interfaces used to explain the CN3 Interest Similarity in the fourth stage.</figDesc><graphic coords="9,58.91,271.01,242.11,143.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: The interfaces used to explain the Geographic Similarity in the fourth stage.</figDesc><graphic coords="10,58.91,205.21,242.12,101.29" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>The card-sorting results of the third stage.</figDesc><table><row><cell></cell><cell>Very</cell><cell>Less</cell><cell>Not</cell><cell>Not</cell></row><row><cell></cell><cell>Important</cell><cell>Important</cell><cell>Important</cell><cell>Relevant</cell></row><row><cell>Factor 1</cell><cell>11</cell><cell>1</cell><cell>3</cell><cell>0</cell></row><row><cell>Factor 2</cell><cell>9</cell><cell>5</cell><cell>1</cell><cell>0</cell></row><row><cell>Factor 3</cell><cell>0</cell><cell>2</cell><cell>10</cell><cell>3</cell></row><row><cell>Factor 4</cell><cell>1</cell><cell>8</cell><cell>3</cell><cell>3</cell></row><row><cell>Factor 5</cell><cell>5</cell><cell>4</cell><cell>6</cell><cell>0</cell></row><row><cell>Factor 6</cell><cell>7</cell><cell>6</cell><cell>2</cell><cell>0</cell></row><row><cell>Factor 7</cell><cell>3</cell><cell>2</cell><cell>9</cell><cell>1</cell></row><row><cell>Factor 8</cell><cell>4</cell><cell>3</cell><cell>3</cell><cell>5</cell></row><row><cell>Factor 9</cell><cell>7</cell><cell>2</cell><cell>4</cell><cell>2</cell></row><row><cell>Factor 10</cell><cell>3</cell><cell>9</cell><cell>2</cell><cell>1</cell></row><row><cell>Factor 11</cell><cell>0</cell><cell>6</cell><cell>6</cell><cell>3</cell></row><row><cell>Factor 12</cell><cell>4</cell><cell>6</cell><cell>5</cell><cell>0</cell></row><row><cell>Factor 13</cell><cell>13</cell><cell>2</cell><cell>0</cell><cell>0</cell></row><row><cell>Factor 14</cell><cell>0</cell><cell>13</cell><cell>2</cell><cell>0</cell></row><row><cell>Factor 15</cell><cell>4</cell><cell>7</cell><cell>3</cell><cell>1</cell></row><row><cell>Factor 16</cell><cell>10</cell><cell>5</cell><cell>0</cell><cell>0</cell></row><row><cell>Factor 17</cell><cell>3</cell><cell>6</cell><cell>3</cell><cell>3</cell></row><row><cell>Factor 18</cell><cell>1</cell><cell>5</cell><cell>5</cell><cell>4</cell></row><row><cell>Factor 
19</cell><cell>1</cell><cell>10</cell><cell>3</cell><cell>1</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>The card-sorting results of the fourth stage.</figDesc><table><row><cell cols="4">R1 R2 R3 R4 R5</cell><cell>Not Applicable</cell><cell>Total Votes</cell></row><row><cell cols="4">E1-1 19 25 21 19 44</cell><cell>22</cell><cell>150</cell></row><row><cell cols="4">E1-2 23 37 17 30 26</cell><cell>17</cell><cell>150</cell></row><row><cell cols="4">E1-3 7 16 42 44 19</cell><cell>22</cell><cell>150</cell></row><row><cell cols="3">E1-4 76 32 27 2</cell><cell>0</cell><cell>13</cell><cell>150</cell></row><row><cell cols="4">E1-5 19 31 33 28 20</cell><cell>19</cell><cell>150</cell></row><row><cell cols="4">E2-1 12 8 14 21 60</cell><cell>35</cell><cell>150</cell></row><row><cell>E2-2 6</cell><cell>2</cell><cell cols="2">9 73 36</cell><cell>24</cell><cell>150</cell></row><row><cell cols="3">E2-3 24 78 28 7</cell><cell>2</cell><cell>11</cell><cell>150</cell></row><row><cell cols="4">E2-4 86 31 13 11 0</cell><cell>9</cell><cell>150</cell></row><row><cell cols="4">E2-5 13 21 70 14 11</cell><cell>21</cell><cell>150</cell></row><row><cell cols="2">E3-1 13 5</cell><cell cols="2">9 18 69</cell><cell>36</cell><cell>150</cell></row><row><cell cols="4">E3-2 37 26 17 36 20</cell><cell>14</cell><cell>150</cell></row><row><cell cols="4">E3-3 32 38 29 28 11</cell><cell>12</cell><cell>150</cell></row><row><cell cols="4">E3-4 45 41 37 11 0</cell><cell>16</cell><cell>150</cell></row><row><cell cols="4">E3-5 15 32 41 36 11</cell><cell>15</cell><cell>150</cell></row><row><cell cols="4">E4-1 8 11 6 31 64</cell><cell>30</cell><cell>150</cell></row><row><cell cols="4">E4-2 17 61 48 16 2</cell><cell>6</cell><cell>150</cell></row><row><cell cols="4">E4-3 49 41 41 11 3</cell><cell>5</cell><cell>150</cell></row><row><cell cols="3">E4-4 64 28 41 7</cell><cell>1</cell><cell>9</cell><cell>150</cell></row><row><cell>E4-5 8</cell><cell>5</cell><cell cols="2">6 65 
46</cell><cell>20</cell><cell>150</cell></row><row><cell cols="4">E5-1 20 7 13 24 55</cell><cell>31</cell><cell>150</cell></row><row><cell cols="4">E5-2 16 22 6 45 36</cell><cell>25</cell><cell>150</cell></row><row><cell cols="4">E5-3 42 16 44 11 6</cell><cell>31</cell><cell>150</cell></row><row><cell cols="4">E5-4 15 49 36 18 4</cell><cell>28</cell><cell>150</cell></row><row><cell cols="4">E5-5 40 35 26 20 3</cell><cell>26</cell><cell>150</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Bringing Transparency Design into Practice</title>
		<author>
			<persName><forename type="first">Malin</forename><surname>Eiband</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hanna</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Bilandzic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julian</forename><surname>Fazekas-Con</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mareike</forename><surname>Haug</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heinrich</forename><surname>Hussmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">23rd International Conference on Intelligent User Interfaces</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="211" to="223" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A taxonomy for generating explanations in recommender systems</title>
		<author>
			<persName><forename type="first">Gerhard</forename><surname>Friedrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Zanker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI Magazine</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="90" to="98" />
			<date type="published" when="2011">2011. 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Publication trends in neuroimaging of minimally conscious states</title>
		<author>
			<persName><forename type="first">Alex</forename><surname>Garnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Grace</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Judy</forename><surname>Illes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PeerJ</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">e155</biblScope>
			<date type="published" when="2013">2013. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Scalable and interpretable product recommendations via overlapping co-clustering</title>
		<author>
			<persName><forename type="first">Reinhard</forename><surname>Heckel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michail</forename><surname>Vlachos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Parnell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Celestine</forename><surname>Dünner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Data Engineering (ICDE), 2017 IEEE 33rd International Conference on. IEEE</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1033" to="1044" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Explaining collaborative filtering recommendations</title>
		<author>
			<persName><forename type="first">Jonathan</forename><forename type="middle">L</forename><surname>Herlocker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Riedl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2000 ACM conference on Computer supported cooperative work</title>
				<meeting>the 2000 ACM conference on Computer supported cooperative work</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="241" to="250" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="https://developers.google.com/maps/documentation/directions/intro" />
		<title level="m">Google Maps Directions API</title>
				<imprint>
			<publisher>Google Inc</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Inspectability and Control in Social Recommenders</title>
		<author>
			<persName><forename type="first">Bart</forename><forename type="middle">P</forename><surname>Knijnenburg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Svetlin</forename><surname>Bostandjiev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>O'Donovan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alfred</forename><surname>Kobsa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">6th ACM Conference on Recommender System</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="43" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">User preferences for hybrid explanations</title>
		<author>
			<persName><forename type="first">Pigi</forename><surname>Kouki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><surname>Schaffer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jay</forename><surname>Pujara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>O'Donovan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lise</forename><surname>Getoor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eleventh ACM Conference on Recommender Systems</title>
				<meeting>the Eleventh ACM Conference on Recommender Systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="84" to="88" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><surname>Lawrence</surname></persName>
		</author>
		<ptr target="https://codepen.io/choznerol/pen/evaYyv" />
		<title level="m">Customize D3plus network style</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Hidden factors and hidden topics: understanding rating dimensions with review text</title>
		<author>
			<persName><forename type="first">Julian</forename><surname>Mcauley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jure</forename><surname>Leskovec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th ACM conference on Recommender systems</title>
				<meeting>the 7th ACM conference on Recommender systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="165" to="172" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="http://halley.exp.sis.pitt.edu/cn3/portalindex.php" />
		<title level="m">Conference Navigator</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>Paper Tuner</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A generalized taxonomy of explanations styles for traditional and social recommender systems</title>
		<author>
			<persName><forename type="first">Alexis</forename><surname>Papadimitriou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Panagiotis</forename><surname>Symeonidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yannis</forename><surname>Manolopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="555" to="583" />
			<date type="published" when="2012">2012. 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Trust-inspiring explanation interfaces for recommender systems</title>
		<author>
			<persName><forename type="first">Pearl</forename><surname>Pu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="542" to="556" />
			<date type="published" when="2007">2007. 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Do social explanations work?: studying and modeling the effects of social explanations in recommender systems</title>
		<author>
			<persName><forename type="first">Amit</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dan</forename><surname>Cosley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd international conference on World Wide Web</title>
				<meeting>the 22nd international conference on World Wide Web</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1133" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">tidytext: Text mining and analysis using tidy data principles in R</title>
		<author>
			<persName><forename type="first">Julia</forename><surname>Silge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Robinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of Open Source Software</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">37</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Evaluating the effectiveness of explanations for recommender systems</title>
		<author>
			<persName><forename type="first">Nava</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Judith</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">User Modeling and User-Adapted Interaction</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="399" to="439" />
			<date type="published" when="2012-10-01">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Explaining recommendations: Design and evaluation</title>
		<author>
			<persName><forename type="first">Nava</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Judith</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender systems handbook</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="353" to="382" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Providing Control and Transparency in a Social Recommender System for Academic Conferences</title>
		<author>
			<persName><forename type="first">Chun-Hua</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization</title>
				<meeting>the 25th Conference on User Modeling, Adaptation and Personalization</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="313" to="317" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Beyond the Ranked List: User-Driven Exploration and Diversification of Social Recommendation</title>
		<author>
			<persName><forename type="first">Chun-Hua</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">23rd International Conference on Intelligent User Interfaces</title>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="239" to="250" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Explaining Social Recommendations to Casual Users: Design Principles and Opportunities</title>
		<author>
			<persName><forename type="first">Chun-Hua</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion</title>
				<meeting>the 23rd International Conference on Intelligent User Interfaces Companion</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">59</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Exploring Social Recommendations with Visual Diversity-Promoting Interfaces</title>
		<author>
			<persName><forename type="first">Chun-Hua</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Interactive Intelligent Systems (TiiS)</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1" to="1" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Explaining Recommendations in an Interactive Hybrid Social Recommender</title>
		<author>
			<persName><forename type="first">Chun-Hua</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Intelligent User Interface</title>
				<meeting>the 2019 Conference on Intelligent User Interface</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Visualizing recommendations to support exploration, transparency and controllability</title>
		<author>
			<persName><forename type="first">Katrien</forename><surname>Verbert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Denis</forename><surname>Parra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brusilovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Erik</forename><surname>Duval</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2013 international conference on Intelligent user interfaces</title>
				<meeting>the 2013 international conference on Intelligent user interfaces</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="351" to="362" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Tagsplanations: explaining recommendations using tags</title>
		<author>
			<persName><forename type="first">Jesse</forename><surname>Vig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shilad</forename><surname>Sen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Riedl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th international conference on Intelligent user interfaces</title>
				<meeting>the 14th international conference on Intelligent user interfaces</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="47" to="56" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">FLAME: A Probabilistic Model Combining Aspect Based Opinion Mining and Collaborative Filtering</title>
		<author>
			<persName><forename type="first">Yao</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Ester</surname></persName>
		</author>
		<idno type="DOI">10.1145/2684822.2685291</idno>
		<ptr target="https://doi.org/10.1145/2684822.2685291" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighth ACM International Conference on Web Search and Data Mining (WSDM &apos;15)</title>
				<meeting>the Eighth ACM International Conference on Web Search and Data Mining (WSDM &apos;15)<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="199" to="208" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Flame: A probabilistic model combining aspect based opinion mining and collaborative filtering</title>
		<author>
			<persName><forename type="first">Yao</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Ester</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighth ACM International Conference on Web Search and Data Mining</title>
				<meeting>the Eighth ACM International Conference on Web Search and Data Mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="199" to="208" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><surname>Zoomdata</surname></persName>
		</author>
		<ptr target="https://visual.ly/community/interactive-graphic/social-media/real-time-interactive-zoomdata-wordcloud" />
		<title level="m">Real-time Interactive Zoomdata Wordcloud</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
