A Multi-Dimensional Conceptualization Framework for Personalized Explanations in Recommender Systems Qurat Ul Ain1 , Mohamed Amine Chatti1 , Mouadh Guesmi1 and Shoeb Joarder1 1 Social Computing Group, University of Duisburg-Essen, 47048, Duisburg, Germany Abstract Recommender systems (RS) have become an integral component of our daily lives by helping decision making easier for us. The use of recommendations has, however, increased the demand for explanations that are convincing enough to help users trust the provided recommendations. The recommendations are desired by the users to be understandable as well as personalized to their individual needs and preferences. Research on personalized explainable recommendation has emerged only recently. To help researchers quickly familiarize with this promising research field and recognize future research directions, we present a multi-dimensional conceptualization framework for personalized explanations in RS, based on five dimensions: WHAT to personalize?, TO WHOM to personalize?, WHO does the personalization?, WHY do we personalize?, and HOW to personalize?. Furthermore, we use this framework to systematically analyze and compare studies on personalized explainable recommendation. Keywords Recommender systems, Explainable recommendation, Personalized explanation 1. Introduction users’ ratings and likes or dislikes, as compared to the items consumed by similar users. Over the past few years, with the increased usage of However, the majority of RS still act as a black-box online services like social media, e-learning, and e- and users have no idea why and how items are being commerce, recommender systems (RS) have become an recommended to them. Therefore, it is increasingly integral part of our lives. These RS help in shaping the important to make RS more intelligible and investigate decisions of users and helping them choose what they methods to explain them to end-users. Explaining the want based on a number of relevant options presented reasoning behind a recommendation has become an ac- to them, called recommendations. However, with the tive area of research in the last few years. Researchers increased amount of available recommendations, there have argued that explanations in RS could be very ben- is a chance of creating mistrust among users about the eficial [1, 2, 3]. To “explain” means “to make known, presented information. A huge amount of available in- to make plain or understandable, to give the reason formation creates ‘information overload’ which might for or cause of” [4]. An explanation seeks to answer lead to users questioning the validity of the provided questions such as what, why, how, what if, why not, content and might think of it as misinformation. and how to [5]. Providing the reasoning behind why One way to overcome this challenge is to provide per- an item is recommended to the user or how the rec- sonalized recommendations to the users. The content ommendation process works, as an explanation, adds of these recommendations is adapted to users’ inter- to the system’s transparency [6] and can benefit user ests and only relevant items are recommended to them. experience and trust in the RS [1] . The goal of personalized recommendations is to pre- Explainable RS have traditionally followed a one- dict items considered attractive and interesting by the size-fits-all model, whereby the same explanation is user. 
This relevant item prediction is made by either provided to each user, without taking into considera- (1) content, i.e., items having similar content with the tion an individual user’s context, i.e., abilities, goals, items already used by the user are recommended or (2) needs, or preferences. In the explainable recommenda- past behavior, i.e., items are recommended based on tion field, research regarding personalized explanation has emerged only recently, showing that personal char- Joint Proceedings of the ACM IUI Workshops 2022, March 2022, Helsinki, Finland acteristics have an impact on the perception of expla- $ qurat.ain@stud.uni-due.de (Q.U. Ain); nations, and that there is potential for the development mohamed.chatti@uni-due.de (M.A. Chatti); of personalized explanations in RS [7, 8]. For example, mouadh.guesmi@stud.uni-due.de (M. Guesmi); researchers have focused on investigating what specific shoeb.joarder@uni-due.de (S. Joarder)  characteristics may play a role in a user’s interaction © 2022 Copyright © 2022 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). with an explainable RS [9, 10]. An analysis on existing CEUR Workshop Proceedings http://ceur-ws.org ISSN 1613-0073 CEUR Workshop Proceedings (CEUR-WS.org) explainable recommendation work focusing on person- with varying backgrounds and proposed an algorithm alized explanation is vital to help researchers quickly that allows constructing personalized explanations that familiarize with this promising topic, compare studies are optimal in an information-theoretic sense. Assum- in this field, and recognize future research directions. ing that, based on varying backgrounds like training, To fill this critical research gap, we present a timely domain knowledge and demographic characteristics, in- conceptualization framework for personalized expla- dividuals have different understandings and hence men- nations in RS and provide an overview of the current tal models about the learning algorithm, Kuhl et al. [15] state research in this emerging research area. To get investigated how personalized explanations of learning at this, we gathered, analyzed, and connected existing algorithms affect employees’ compliance behavior and concepts related to personalized explanations in the ar- task performance. On a conceptual level, Schneider and tificial intelligence (AI), machine learning (ML), and RS Handali [16] proposed a conceptualization of personal- domains. We then proposed a conceptualization frame- ized explanation in ML based on a framework covering work that can be used to systematically categorize and desiderata of personalized explanations, dimensions compare the literature on personalized explainable rec- that can be personalized, what and how information ommendation. Based on this framework, we analyzed can be obtained from individuals and how this infor- studies in this research domain. mation can be utilized to customize explanations. The paper is structured as follows. We first outline In the field of explainable recommendation, research the background for this research (Section 2). We then regarding personalized explanation is emerging, rec- present the details of the proposed conceptualization ognizing that it is increasingly important not only to framework (Section 3) and use the framework to ana- explain recommendations to the user but also to per- lyze the literature on personalized explainable recom- sonalize these explanations [7, 8]. 
Studies showed that mendation (Section 4). Finally, we summarize the work different users have different reactions to, and expec- and outline future research plans (Section 5). tations from explainable RS [17, 9] and that personal characteristics play a major role in the perception of, and interaction with these systems [10, 18, 19]. How- 2. Personalized Explanation ever, a comprehensive framework to categorize related work on personalized explanation in the RS field is In the field of explainable AI (XAI), Mohseni et al. [11] lacking. argue that different user groups will have other goals in mind while using such systems. For example, while machine learning experts might prefer highly-detailed 3. A Framework For Personalized visual explanations of deep models to help them opti- mize and diagnose algorithms, lay-users do not expect Explainable Recommendation fully detailed explanations for every query from a per- To dive deeper into the understanding of key concepts sonalized agent. Instead, systems with lay users as related to personalized explanation in RS and provide target groups aim to enhance the user experience with a systematic categorization of the literature in this the system through improving their understanding and area, we propose a multi-dimensional conceptualiza- trust. In the same direction, Miller [12] argues that tion framework for personalized explanations in RS providing the exact algorithm which generated the spe- (see Figure 1). To develop this framework, we gathered, cific decision is not necessarily the best explanation. utilized, and adapted ideas, concepts, and methods re- Therefore, the literature on AI/ML in recent years has lated to personalized explanations in the RS literature emphasized the need for explanations that are tailored and formulated a succinct and concise framework based to individuals, i.e., personalized explanations. For exam- on five dimensions: WHAT to personalize?, TO WHOM ple, Arya et al. [13] stressed that one explanation does to personalize?, WHO does the personalization?, WHY not fit all, as different AI stakeholders present different do we personalize?, and HOW to personalize?. Similar requirements for explanations and may desire differ- to the conceptualization of personalized explanation ent kinds of explanations (e.g., feature-based, instance- in ML presented in [16], we adopted and adapted the based, language-based). The authors presented an AI framework presented by Fan and Poole [20], and ex- toolkit, which contains eight state-of-the-art explain- tended it from "what, to whom, and who personalizes?" ability algorithms that can explain an AI model in dif- by adding two more dimensions, namely "why to per- ferent ways to a diverse set of users. Jung and Nardelli sonalize?" which describes the goals of personalized [14] pointed out that XAI is challenging since explana- explanation and "how to personalize?" which describes tions must be tailored (personalized) to individual users Figure 1: A Conceptualization Framework for Personalized Explainable Recommendation the methods for personalized explanations. Below, we tween the user profile and recommended item features. discuss in detail the five dimensions of our proposed To personalize an explanation, its content must be tai- conceptualization framework for personalized explana- lored to different user profiles and should be adapted tion in RS. according to the explanation context. 3.1. WHAT to Personalize? 3.1.2. 
Explanation Design Choices The "WHAT" dimension refers to the properties of an There is a large design space for explainable RS. Re- explanation that can be adjusted to the user profile searchers presented different ways to design explain- to provide personalized explanations. We identified able RS, referred to as explanation design choices [17, two main explanation properties that can be adapted 21]. Like the content of an explanation, these design based on explainee (for whom explanations are pro- choices represent further characteristics of an expla- vided) data, namely content and design choices of the nation that can be customized based on a user profile. explanation. Based on the literature on explainable recommendation, we identified five explanation design choices, namely 3.1.1. Explanation Content explanation style, explanation scope, explanation format, level of detail, and intelligibility types. Content of an explanation refers to the information Explanation Style: The explanation style refers presented in an explanation. This information repre- to the model or strategy used for generating explana- sents a description of details related to the recommen- tions [3]. In general, the explanation style is dependent dation process. These include, for example, user/item on the recommendation approach used in the RS, e.g., attributes contributing to the recommendation, inner a content-based RS produces content-based explana- workings of background algorithms, information re- tions [2]. In case of complex RS (e.g., deep learning lated to the user model used as input to the RS, simi- models), however, the explanation style for a given larities between users and/or items, and connection be- explanation may not reflect the underlying algorithm by which the recommendations are computed [2, 3]. image, a graph, or a chart (visual). Textual explana- Personalizing the explanation style means to present tions are usually in form of short or long sentences explanation in different styles to different users based using verbal elements, i.e., words, phrases or natural on their preferences. Explanation styles have been per- language describing the reasoning behind a recommen- ceived differently in different domains [22, 23]. In this dation. Visual explanations are usually in a graphical paper, we build on the taxonomy of explanation styles format using visual elements to explain a generated in RS used in [2, 9, 24]. recommendation. Personalizing the explanation for- mat means to present the explanation in the format • User-based Explanations: Explains similarity with preferred by the user. other users having same tastes. For example: Level of detail: The level of detail refers to the User A with whom you share similar tastes, likes amount of information exposed in an explanation that item B. should be presented to a user [17, 11]. Users are not • Item-based Explanations: Explains recommended always interested in all the information that is pro- items based on item (rating) similarity. For ex- duced in an explanation [12]. Different users demand ample: People who like item A in your profile different levels of explanation information and expla- also like item B. nations may cause negative effects if an explanation is difficult to understand [26]. Thus, it is important • Content-based Explanations: Explains similarity to provide explanations with enough details to allow between item features. 
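For illustration, the following sketch shows how the content of an explanation can be assembled from style-specific templates whose slots are filled with explainee data, in the spirit of the example sentences given for each explanation style. It is a minimal sketch under our own assumptions; the class and slot names (ExplaineeData, profile_item, etc.) are hypothetical and are not taken from any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class ExplaineeData:
    """Slots filled from the user model and the recommendation (hypothetical field names)."""
    recommended_item: str
    profile_item: str = ""
    similar_user: str = ""
    shared_feature: str = ""
    friend: str = ""

# One template per explanation style; personalizing the content means
# filling the bracketed slots with each explainee's own data.
STYLE_TEMPLATES = {
    "user-based": "{similar_user}, with whom you share similar tastes, likes {recommended_item}.",
    "item-based": "People who like {profile_item} in your profile also like {recommended_item}.",
    "content-based": "{recommended_item} has similar features ({shared_feature}) to {profile_item}, which you purchased.",
    "social-based": "Your friend {friend} likes {recommended_item}.",
}

def render_explanation(style: str, data: ExplaineeData) -> str:
    """Generate personalized explanation content in the requested style."""
    return STYLE_TEMPLATES[style].format(**vars(data))

data = ExplaineeData(recommended_item="Item B", profile_item="Item A", shared_feature="genre")
print(render_explanation("item-based", data))
# -> People who like Item A in your profile also like Item B.
```

Filling the slots personalizes the content of the explanation, while choosing among the templates is already a design-choice (explanation style) decision.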
For example: Item A has users to build accurate mental models of how the RS similar features as of item B purchased by you. operates without overwhelming them. For example, in an explanation, providing the exact algorithm that • Social-based Explanations: Explains similarity be- was used to generate the explanation may be a good tween users who know each other. For example: choice for a Machine Learning (ML) expert but not for Your friend User B likes item A. a lay user [12]. Furthermore, the demand for level of • Knowledge-based Explanations: Explains the de- detail in an explanation also varies with the cognitive scription of user’s needs or interests in the con- abilities of the user [26]. Only when the users have text of a recommendation. For example: This des- enough time to process the information and enough tination has higher average temperature, which ability to figure out the meaning of the information, is better for sunbathing. a higher level of detail in the explanation will lead to a better understanding. But as soon as the amount of • Demographic-based Explanations: Explains the information is beyond the users’ comprehension, the use of demographic data and its connection to explanation could lead to information overload and the recommendations. For example: This movie bring confusion [17]. was recommended to you according to your age. Intelligibility Types: When user encounters a com- plex system, she might demand different type of ex- Explanation Scope: The explanation scope refers planatory information based on the system and its con- to the part of RS that an explanation focuses on, i.e., text [11]. Lim and Dey [5] identified several queries input, process or output [25]. Explanation focusing on (called intelligibility types) that a user may ask of a input tries to explain the user model which is taken smart system. These include: as an input by the RS. Explanation focusing on process tries to explain the working and flow of the underlying • How Explanations: demonstrate how the under- algorithm used to generate a recommendation. Expla- lying system behind a recommendation works. nation focusing on output tries to explain why an item How explanations are useful when Users are in- was recommended. Different users demand explana- terested in knowing how the system generates tions with a different scope. Not every user is interested certain recommendations. How explanations in knowing the details of the user model or the inter- aim to answer the question: "How (under what nals of the underlying algorithm [11]. Personalizing conditions) does the system do Y?". the explanation scope means to present explanations that focus on the input, output or process, according • Why Explanations: demonstrate why a recom- to the user’s preferences and needs. mendation is made for a particular user. These Explanation Format: The explanation format refers explanations cover what was the input to the to how an explanation is displayed to the user. It can system (user model) and what logic was used to either be in form of a text description (textual) or an generate the recommendation. Why questions are very common by the user hence it is an es- 3.2.1. Target users sential intelligibility requirement. Some users The explanations can be personalized for target users might expect a very informative answer from at different levels of granularity. A target user for an this why explanation and a simple explanation explanation can be an individual user, category of indi- may not satisfy them [5]. 
Why explanations aim viduals (e.g., experts, lay users), or group of individuals to answer the question: "Why did the system do (in case of group RS). X?". • Why Not Explanations: (also called Contrastive 3.2.2. User model attributes Explanations) help the users in understanding why a specific item was not recommended to To personalize the explanations for a user or a group of them. Lim et al. [27] argued that these explana- users requires to tailor explanation based on user mod- tions are useful for high-risk circumstances and els. Schneider and Handali [16] summarize different when user might ask for alternative possibilities user model attributes that can be used to customize an from the RS. Why Not explanations aim to an- explanation. These include: swer the question: "Why did the system not do • Prior knowledge: What an explainee knows, e.g., Y?". knowledge about the RS domain, expertise re- • What Íf Explanations: deal with the manipulation garding the RS methods to be explained of inputs to the RS. These explanations illustrate • Preferences/Interests: What an explainee likes and how the manipulation of inputs affect the output prefers, e.g., preferred presentation format (vi- of the RS, i.e., recommendations. These explana- sual or textual), desired level of detail of the ex- tions involve users’ interaction with the system planation, the time or effort an explainee wants when they can change an input to the RS and to invest to understand the explanation. User want to know what will happen as a consequence. preference and interests are used interchange- What If explanations aim to answer the question: ably in the literature on RS "What If there is a change in conditions, what would happen?". • Decision information: What information is used by an explainee to make decisions • What Else Explanations: demonstrate different examples of the similar inputs to the RS that can • Purpose: What the explanation is used for produce similar outputs (recommendations). Lim and Dey [5] demonstrated that although the de- Recent studies on explainable recommendation showed mand for these explanations is low but these are that personal characteristics have an effect on the per- helpful in critical situations when users expect ception of explanations and that it is important to take that the RS is doing more than shown to them, personal characteristics into account when designing to handle a critical situation. What else explana- explanations. Examples of personal characteristics that tions aim to answer the question: "What else the can be used as further user model attributes accord- system is doing?". ing to which an explanation can be tailored include the Big Five personality traits: openness, conscien- • How To Explanations: (also called Counterfactual tiousness, extraversion, agreeableness, and neuroticism Explanations) are basically explanations about [28, 29, 30, 31, 32], need for cognition (NFC), [33, 10, 9], what hypothetically needs to change for the de- ease of satisfaction, visualization familiarity, domain sired outcome to happen. How To explanations experience [9], locus of control, need for cognition, aim to answer the question: "How to make the visualisation literacy, visual working memory, tech- system recommend X?". savviness [10, 18], and musical sophistication [19]. 3.2. TO WHOM to Personalize? 3.2.3. User data collection The "TO WHOM" dimension focuses on to which user(s) User data collection indicates how data is collected to to personalize. 
A crucial requirement in explainable generate a model of the user for whom an explanation RS is to build detailed user models that can be used by is personalized, in our case the explainee. This could the system to recommend items or to provide personal- be same as the data used to generate recommendations ized explanations. These user models are based on data or different. There are two ways to get explainee data collected from the user to generate different attributes. to generate user models: • Implicit data collection: refers to getting explainee There are possible refinements to these goals. For exam- data to generate a user model, implicitly through ple, in [1] satisfaction is not considered as a single goal, user’s past activity, browser search history, mouse but is split into ease to use, enjoyment, and usefulness. clicks, social media information, system usage Other explanation goals in RS include user engagement history, previously consumed items, etc. This re- resulting from more confidence and transparency in quires techniques like preference elicitation and the recommendations [34], compliance with legal regu- knowledge extraction from raw data. lations e.g., European Union’s GDPR [7], education by allowing users to learn from the system [8], debugging • Explicit data collection: refers to getting explainee to be able to identify wrong or unexpected recommen- data to generate a user model, explicitly by ask- dations and take control to make corrections [8]. This ing the user to write reviews and feedback to goal is closely related to scrutability [1]. The goal might items, giving ratings, filling out questionnaires, also be seen as obtaining an answer to different intelli- interviews and surveys, liking and disliking items gibility types of explanations, e.g., what, how and why etc. questions [27, 12]. We consider these goals as also important in rela- 3.3. WHO does the Personalization? tion to personalized explanation and we use them as possible candidates for the "WHY" dimension in our The "WHO" dimension focuses on who does the person- conceptualization. alization. The literature on personalized systems distin- guish between automatic personalization by the system providing explanations (i.e., system-driven personalized 3.5. HOW to Personalize? explanation) and manual personalization which is done The "HOW" dimension refers to the methods used to on-demand by the explainee, actively setting the expla- generate personalized explanations. In general, per- nation parameters, e.g., choosing the level of detail to sonalized explanations can be created using a two-step be shown in an explanation (i.e., user-driven personal- process, namely (1) adjusting explanation properties and ized explanation) [20, 17, 16]. (2) applying adaptation rules. 3.4. WHY to Personalize? 3.5.1. Adjusting explanation properties The "WHY" dimension refers to the intended goals A personalized explanation can be generated by ad- of personalizing an explanation. Different users have justing explanation properties, i.e., content and design different goals when they interact with an explainable choice, as illustrated in Table 1. A common task in RS. Thus, the explanation presented to a specific user explainable recommendation is to provide explanation should be personalized in a way to achieve the specific based on similarity of items or users. The content of goal(s) intended by the user. 
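As a rough illustration of the "TO WHOM" dimension, the sketch below represents an explainee profile with the user model attributes discussed above and contrasts the two data collection routes. All names and attribute encodings are our own illustrative choices rather than elements of any reviewed system.

```python
from dataclasses import dataclass, field

@dataclass
class ExplaineeProfile:
    """User model attributes an explanation can be tailored to (illustrative encoding)."""
    prior_knowledge: str = "lay user"                              # e.g. "lay user" vs. "ML expert"
    preferences: dict = field(default_factory=dict)                # e.g. preferred format, interests
    personal_characteristics: dict = field(default_factory=dict)  # e.g. need for cognition, openness

def update_implicitly(profile: ExplaineeProfile, clicked_genre: str) -> None:
    """Implicit data collection: infer interests from behaviour such as clicks or listening history."""
    genres = profile.preferences.setdefault("genres", {})
    genres[clicked_genre] = genres.get(clicked_genre, 0) + 1

def update_explicitly(profile: ExplaineeProfile, questionnaire_scores: dict) -> None:
    """Explicit data collection: take questionnaire or personality test scores as provided by the user."""
    profile.personal_characteristics.update(questionnaire_scores)

profile = ExplaineeProfile()
update_implicitly(profile, "jazz")                        # from listening history
update_explicitly(profile, {"need_for_cognition": 0.2})   # from a short NFC questionnaire
```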
Prior work on explainable an explanation can be personalized by highlighting recommendation has presented different explanation similarities, e.g., between users (user-based explana- goals. For example, Tintarev and Masthoff [2] identified tion), items related to user’s interests (item-based ex- seven goals, as follows: planation), item features (content-based explanation), and users in a social circle (social-based explanation). • Transparency: Explain how the system works The explanation scope can be personalized by chang- • Scrutability: Allow users to tell the system it is ing the explanation viewpoint to focus on the RS in- wrong put, process, or output. The explanation format can be personalized by adjusting the presentation (e.g., tex- • Trust: Increase user’s confidence in the system tual or visual), presentation state (e.g., permanent or on-demand), the visualization idiom (e.g., node-link • Effectiveness: Help users make good decisions diagram, bar chart, heatmap, tag cloud), and the inter- • Persuasiveness: Convince users to try or buy action method (e.g., select, zoom, filter, brushing and linking, overview+detail) [35]. The level of detail of • Efficiency: Help users make faster decisions an explanation is personalized by tailoring its sound- ness and completeness. Soundness refers to telling • Satisfaction: Increase the ease of use or enjoy- nothing but the truth, how truthful each element in ment an explanation is with respect to the underlying sys- tem. Completeness refers to telling the whole truth, Table 1 Exemplary use of different explanation styles and explanation properties to generate personalized explanations Explanation Style User-based explanation Item-based explanation Content-based explanation Social-based explanation Content Similarity with other users Similarity between items based on users’ preferences Similarity between item features Similar users in social circle Explanation Explanation Properties Choice of explanation scope (input, process, output) Scope Choices Design Explanation Choice of explanation format (Presentation, Presentation state, Visualization idiom, Interaction method Format Level of Choice of level of detail (Soundness, Completeness) Detail Intelligibility Choice of intelligibility type (how, why, why not, what if, what else, how to) Types the extent to which an explanation describes all of the 4. Categorization of Existing underlying system [36, 37]. The intelligibility type can be personalized by adjusting the query that a user can Literature ask from the RS (e.g. how, why, why not, what if, what We used our conceptualization framework to systemat- else, how to). Finally, the choice of the explanation ically categorize and compare the literature on person- style, e.g., user-based, item-based, content-based, socialalized explainable recommendation. Our aim was not explanation can also be adjusted. to conduct a systematic literature review, but rather to show how the framework can be applied to analyze 3.5.2. Applying adaptation rules studies in the this research area. To identify relevant A system-driven personalized explanation requires to works, we focused on recent studies that explicitly ad- adjust explanation properties by taking into account dressed personalized explanation in RS. The results of users’ preferences and personal characteristics. Thereby, the categorization of these studies are summarized in it is crucial to decide which adaptation to apply and Table 2. then apply that adaptation [38]. 
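Viewed as data, the adjustable explanation properties summarized in Table 1 reduce to a handful of fields that a system can set per user. The following schematic structure is only one possible rendering of that idea; the field names and defaults are assumptions on our part.

```python
from dataclasses import dataclass, field

@dataclass
class ExplanationProperties:
    """Explanation properties that can be adjusted per user (schematic)."""
    style: str = "content-based"       # user-based, item-based, content-based, social-based, ...
    scope: str = "output"              # input (user model), process (algorithm), or output (recommendation)
    format: str = "textual"            # textual or visual, plus presentation state, idiom, interaction method
    soundness: float = 1.0             # "nothing but the truth": fidelity of each explanation element
    completeness: float = 0.5          # "the whole truth": how much of the system is described
    intelligibility_type: str = "why"  # how, why, why not, what if, what else, how to
    content_slots: dict = field(default_factory=dict)  # user- and item-specific values for the explanation text
```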
This is a straightfor- ward task in case of personalizing the content as an 4.1. WHAT to Personalize? explanation property, where the adaptation is applied Starting with the "WHAT" dimension of our concep- by highlighting similarities between users or preferred tualization framework, the synthesis of existing litera- items. However, this is a challenging task in case of per- ture revealed that in most studies related to personal- sonalizing design choices as explanation property. The ized explanation, only the content of the explanation challenge here is to define adaptation rules in order to is personalized by keeping all the design choices (i.e., answer the question “which instance of a design choice explanation style, scope, format, level of detail, intelli- is good for which user type?”. To achieve this, it is im- gibility types) constant. The explanation content could portant to conduct user studies to evaluate explanations be of the form "Because [user] likes [genre]" [34] or designed for different levels of personal characteristics. "Last.fm’s data indicates that [U2] is similar to [Cold- The results of these studies would lead to design sugges- play] that is in your profile" [9]. In these examples, the tions and guidelines that help decide which explanation content of an explanation is personalized by varying should be provided to which user before adapting the the text in the square brackets according to each user’s explanation to different users. As examples of adapta- data content. As Schneider and Handali [16] noted, ex- tion rules, Martijn et al. [19] suggested that (1) for users planations for RS are often inherently personalized due low in need for cognition, displaying explanations must to the nature of the task. For example, users’ reviews, be optional, (2) provide brief explanations that do not tags, or preferred item features, serve as input for the require domain knowledge to support users with low recommendation algorithm as well as explainee data musical sophistication, and (3) provide explanations used for explanations. In general, the explanations are with a lower number of explanation elements for users personalized by marking certain parts of the recom- with low openness. mended item (e.g., item features) which are relevant to the explainee. All reviewed studies focused on personalizing the content of the provided explanation. For example, Gedikli Table 2 Categorization of existing literature based on our conceptualization framework. The following abbreviations are used: Ind. (Individual user), Gr. (Group of individuals), I (Implicit data collection), E (Explicit data collection), Pref. (Prefer- ences/Interests), PC (Personal characteristics), UB (User-based explanation), IB (Item-based explanation), CB (Content-based explanation), S (Social-based explanation) Reference WHAT TO WHOM WHO WHY HOW Design Choices Content Adjusting explanation properties Applying adaptation rules User model attributes Explanation Format User data collection Intelligibility Types Explanation Scope Explanation Style System-driven Level of detail Target users User-driven Goals Satisfaction, Persuasiveness, Kouki et al. [9] ✔ x x x x x Ind. Pref. I ✔ x Similarity (UB, IB, CB, S) x Transparency, Confidence Satisfaction, Trust, Chang et al. [39] ✔ x x x x x Ind. Pref. I ✔ x Similarity (CB) x Effectiveness, Efficiency Satisfaction, Persuasiveness, Efficiency Gedikli et al. [40] ✔ x x x x x Ind. Pref. E ✔ x Similarity (UB) x Transparency, Effectiveness Lu et al. [41] ✔ x x x x x Ind. Pref. 
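The three example rules by Martijn et al. [19] quoted above can be read as simple threshold checks on user model attributes. The sketch below encodes them that way purely for illustration; the thresholds, attribute names, and configuration keys are our assumptions, not values reported in that study.

```python
def apply_adaptation_rules(user: dict, config: dict) -> dict:
    """Adapt explanation design choices to personal characteristics (illustrative thresholds)."""
    adapted = dict(config)
    # (1) Low need for cognition: show explanations on demand instead of permanently.
    if user.get("need_for_cognition", 0.5) < 0.3:
        adapted["presentation_state"] = "on_demand"
    # (2) Low musical sophistication: keep the explanation brief and free of domain jargon.
    if user.get("musical_sophistication", 0.5) < 0.3:
        adapted["length"] = "brief"
        adapted["use_domain_terms"] = False
    # (3) Low openness: reduce the number of explanation elements.
    if user.get("openness", 0.5) < 0.3:
        adapted["max_elements"] = 3
    return adapted

user = {"need_for_cognition": 0.2, "openness": 0.8}
default_config = {"presentation_state": "permanent", "length": "full", "max_elements": 8}
print(apply_adaptation_rules(user, default_config))
# -> {'presentation_state': 'on_demand', 'length': 'full', 'max_elements': 8}
```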
I ✔ x x Similarity (IB) x McInerney et al. [34] ✔ x x x x x Ind. Pref. I ✔ x Satisfaction, User engagement Similarity (IB) x Persuasiveness,Trust, Efficiency Musto et al. [42] ✔ x x x x x Ind. Pref. I ✔ x Similarity (IB) x Transparency, User engagement Similarity (UB, CB), Svrcek et al. [24] ✔ ✔ x x x x Ind. Pref. I ✔ x Transparency, Trust ✔ Choice of explanation style Satisfaction, Trust, Similarity (IB), Millecamp et al. [10] ✔ x x x ✔ x Ind. Pref. I x ✔ x Confidence Choice of level of detail (Completeness) Chen et al. [43] ✔ x x x x x Ind. Pref. I ✔ x x Similarity (IB) x Zhang et al. [6] ✔ x x x x x Ind. Pref. I ✔ x x Similarity (CB) x Quijano-Sanchez et al. [44] ✔ x x x x x Gr. Pref., PC I, E ✔ x Satisfaction, Persuasiveness Similarity (S) x Tintarev and Masthoff [45] ✔ x x x x x Ind. Pref. E ✔ x Effectiveness, Satisfaction Similarity (CB) x Similarity (CB), Satisfaction, Transparency, Guesmi et al. [17] ✔ x x x ✔ x Ind. Pref. I x ✔ Choice of level of detail (Soundness, x Scrutability Completeness) et al. [40] presented and discussed the results of a user mation available in the Linked Open Data (LOD) cloud. study where recommendation systems were provided Their user study results revealed that their strategy out- with different types of explanation. The study revealed performed both a non-personalized explanation base- that the content-based tag cloud explanations were line and a popularity-based one. McInerney et al. [34] effective and well accepted by the majority of users. presented a method (Bart) that combines bandits and Tintarev and Masthoff [45] focused on personalized recommendation explanations. This method is able to feature-based explanations that described item features, jointly learn which explanations each user responds to tailored to the user’s interests. Musto et al. [42] pre- (personalized explanation), and learn the best content sented a framework for generating personalized natural to recommend for each user (personalized recommen- language explanations of the suggestions produced by a dation). The conducted experiments revealed that per- graph-based recommendation model based on the infor- sonalizing explanations and recommendations provides a significant increase in estimated user engagement. Lu textual explanation format by explaining the reason- et al. [41] presented a multi-task learning framework ing behind an explanation in natural language. Only that simultaneously learns to perform rating prediction few studies have used a visual explanation format. For and generate personalized recommendation explana- example, Kouki et al. [9] have used Venn diagrams and tion. They employed a matrix factorization model for static cluster dendrograms, Gedikli et al. [40] have used rating prediction, and a sequence-to-sequence learning tag clouds, and Quijano-Sanchez et al. [44] have used model for explanation generation by generating person- graphical representation of images to present expla- alized reviews for a given recommendation-user pair as nations. Finally, none of the studies have worked on they consider user-generated reviews as explanations personalizing intelligibility types of explanations de- of the ratings given by users. Inspired by how people pending on user profile. explain word-of-mouth recommendations, Chang et al. 
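To illustrate how the framework can serve as a coding scheme in the spirit of Table 2, the sketch below records a single reviewed study along the five dimensions. The structure and field names are ours; the example values follow the Table 2 entry for Svrcek et al. [24].

```python
from dataclasses import dataclass, field

@dataclass
class StudyCoding:
    """One framework-based coding of a reviewed study, analogous to a row of Table 2."""
    reference: str
    what_content: bool = True                                # WHAT: content personalized?
    what_design_choices: list = field(default_factory=list)  # WHAT: personalized design choices, if any
    to_whom_target: str = "individual"                       # TO WHOM: individual, category, or group
    to_whom_attributes: list = field(default_factory=list)   # TO WHOM: preferences, personal characteristics, ...
    to_whom_data: str = "implicit"                           # TO WHOM: implicit or explicit data collection
    who: str = "system-driven"                               # WHO: system-driven or user-driven
    why_goals: list = field(default_factory=list)            # WHY: explanation goals addressed
    how_adjusted: str = ""                                   # HOW: adjusted explanation properties
    how_adaptation_rules: bool = False                       # HOW: adaptation rules applied?

# Example coding, mirroring the Table 2 entry for Svrcek et al. [24]:
svrcek_2019 = StudyCoding(
    reference="Svrcek et al. [24]",
    what_design_choices=["explanation style"],
    to_whom_attributes=["preferences"],
    why_goals=["transparency", "trust"],
    how_adjusted="similarity (UB, CB), choice of explanation style",
    how_adaptation_rules=True,
)
```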
In summary, it can be observed that personalizing [39] designed a process, combining crowd-sourcing the content of an explanation to each user’s data and and computation, that generates personalized natural personality is dominant in the literature on explainable language explanations. And, Chen et al. [43] provided recommendation. By contrast, personalized explana- personalized explanations visually by highlighting dif- tions that focus on tailoring a certain explanation de- ferent parts of an item based on user preferences. sign choice, such as explanation scope, format, or level Considering the design choices (i.e., explanation style, of detail are under-explored and deserve more research scope, format, level of detail, intelligibility types) which in the future. could also be personalized based on user profiles, most of the studies have kept design choices fixed in ex- 4.2. TO WHOM to Personalize? planations. Only the work presented in [24] takes the personalized explanation to the explanation style design Related to the "To WHOM" dimension, which identifies choice level. The authors proposed a hybrid method of the target users of a personalized explanation (i.e., indi- personalized explanation of recommendations, which vidual user, category of individuals, or group of individ- combines basic explanation styles to provide the appro- uals), it has been observed that almost all the reviewed priate type of personalized explanation to each user. studies have personalized for individual users. Only the Based on this method, each user will be given an expla- study in [44] provided explanations targeting a group nation adapted to what most impressed her (i.e., expla- of users. The authors argued that adding a social com- nation style which she prefers). Furthermore, only the ponent to explanations in group recommenders can works in [10] and [17] reported personalizing the level enhance the impact that explanations have on users’ of detail in an explanation depending on how much de- likelihood to follow the recommendations and used ex- tail the user prefers to see in an explanation. Millecamp planations like: “Although we have detected that your et al. [10] developed a music RS that not only allows preference for this item is not very high, your friends users to choose whether or not to see the explanations X and Y really like it. Besides, we have detected that by using a "Why?" button but also to select the level they usually don’t give in". of detail by clicking on a "More/Hide" button. Guesmi In terms of user model attributes and user data collec- et al. [17] developed a transparent Recommendation tion, most of the studies have focused on user prefer- and Interest Modeling Application (RIMA) that pro- ences or interests to personalize the explanations and vides on-demand personalized explanations of both the collected data implicitly to generate user models. For interest models and the recommendations with three example, in the study by Kouki et al. [9], a user model different levels of of detail (i.e., basic, intermediate, ad- was created based on users’ music preferences, Chang vanced). et al. [39] generated a user model based on users’ pref- In all reviewed studies, there was no personalization erence of movies modeled from their activities with related to explanation scope, explanation format, or in- the system. Data was also collected implicitly through telligibility types. 
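The user-driven pattern shared by these two systems, where the user rather than the system decides how much explanation to see, can be sketched as follows. The level names and example texts are placeholders of our own making, not wording taken from the cited interfaces.

```python
from typing import Optional

# Illustrative levels of detail; the wording at each level is a placeholder.
EXPLANATION_LEVELS = {
    "basic": "Recommended because it matches your interest in 'recommender systems'.",
    "intermediate": "Recommended because 3 of its keyphrases overlap with your interest 'recommender systems'.",
    "advanced": "Overlapping keyphrases: explainability, user modeling, personalization (similarity score shown).",
}

def on_demand_explanation(requested_level: Optional[str]) -> Optional[str]:
    """User-driven personalization: nothing is shown until the user asks, then at the chosen level of detail."""
    if requested_level is None:     # the user did not press the "Why?" button
        return None
    return EXPLANATION_LEVELS.get(requested_level, EXPLANATION_LEVELS["basic"])

print(on_demand_explanation(None))            # -> None
print(on_demand_explanation("intermediate"))
```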
None of the studies has focused on users’ listening history to generate their music interests varying these design choices depending on the user in [34], through users’ likes to generate their movie profile. In terms of the explanation scope design choice, preferences [42], through users’ interactions, readings, all reviewed studies have focused only on explaining brought and clicked books, to generate user models the output of the RS (i.e., recommendation), none of based on preferences [24]. Furthermore, a user model them has tried to explain the input of the RS (i.e., user was created by [10] based on users’ music preferences model) or the process (i.e., algorithm used used to gen- generated implicitly based on listening history. Zhang erate a recommendation). Concerning the explanation et al. [6] generated a user model based on user pref- format design choice, most of the studies have used a erences collected implicitly through applying phrase- level sentiment analysis on user reviews and opinions. efficiency [39, 40], confidence [9, 10], and user engage- For visual explanations, Chen et al. [43] used users’ ment [34, 42]. Only the study in [25] aimed to pro- attention and users’ visual preferences to generate user vide personalized explanations to achieve scrutability. models, used to personalize visual explanations. Sim- Moreover, only two studies aimed at comparing per- ilarly, Gedikli et al. [40] created a used model based sonalized and non-personalized variants of an expla- on user’s preferences of movies, however, users were nation. The study in [40] found that content-based tag explicitly asked to provide overall rating for at least 15 cloud explanations were particularly helpful to increase items from a collection of 1000 movies, to record their user satisfaction as well as the user-perceived level of preferences. transparency thanks to its personalized variant. How- Only the work by Quijano-Sanchez et al. [44] gener- ever, they found that personalization was detrimental ated a user model based on user preferences collected to effectiveness. Similarly, Tintarev and Masthoff [45] implicitly from users’ activities in Facebook as well investigated the impact of personalizing simple feature- as personal characteristics (e.g., cooperative, assertive) based explanations on effectiveness and satisfaction. obtained explicitly through a personality evaluation They also reported that their personalization method test to get users’ behaviors, and social information re- hindered effectiveness, but on the other hand increased lated to friends and their preferences to generate per- the satisfaction with the explanations. More studies sonalized explanations where each user will receive are needed to investigate the benefits of personalized a different explanation of the group recommendation explanations compared to non-personalized variants presented by the system. In general, it can be observed related to different goals. Furthermore, only two stud- that there is less focus on personal characteristics as ies investigated the effects of personal characteristics a user model attribute that can be collected (explicitly on the perception of explanations in terms of persua- or implicitly) and used to personalize the explanations. siveness [9] and confidence [10]. As different design This represents an interesting future research direction. choices will be affected by the user type, more research seems required to understand the interaction effects 4.3. WHO does the Personalization? 
of design choice and user type on the perception of personalized explainable RS with regard to different Related to the "WHO" dimension, we have observed explanation goals. that only the works presented in [10] and [17] have fol- lowed a user-driven personalized explanation approach 4.5. HOW to Personalize? by providing on-demand explanations with varying level of details. All the other works have focused on In terms of adjusting explanation properties, the ma- system-driven personalized explanation, mainly to au- jority of the reviewed studies only focused on person- tomatically adapt the content of the explanation, based alizing the explanations by adjusting the content as on users’ preferences. This opens a new avenue of explanation property. In most cases, a personalized ex- research in the field of explainable recommendation, planation is generated by highlighting the similarities and researchers should try to fill in this gap. More re- between users (user-based explanation) [40] or items search is needed to focus on user-driven personalized (item-based explanation) [41, 34, 42, 43]. Following explanation in RS by having the users in the loop and a content-based explanation approach, the studies in giving them control to steer the explanation process. [39, 10, 6, 17] generated personalized explanations by Furthermore, there is a need to follow a system-driven highlighting feature similarities between items. And, personalized explanation approach, that not only fo- the study in [44] used a social-based explanation ap- cuses on adapting the content of an explanation, but proach to explain individual and group recommenda- also the design choice. tions, by highlighting similar users in a social circle. However, only a few studies have worked on personal- 4.4. WHY to Personalize? izing the explanations by adjusting the design choice as explanation property. Among these studies, Svrcek The next dimension is "WHY" to personalize?" which et al. [24] worked on peronalizing explanation style, refers to possible goals of providing an explanation. and Millecamp et al. [10] and Guesmi et al. [17] have The most common goals evaluated in the reviewed personalized level of detail as explanation property, studies are user satisfaction [9, 39, 40, 34, 10, 44, 25, 45], by varying only completeness (show/hide explanation) transparency [40, 24, 42, 9, 25], persuasiveness [9, 40, and both soundness and completeness (basic, interme- 42, 44], trust [39, 42, 24], effectiveness [39, 40, 42, 45], diate, advanced explanation), respectively. Referring to applying adaptation rules, only few stud- ies have worked on proposing and/or applying adap- future work, we will leverage the proposed framework tation rules to personalize an explanation. In order to to conduct a thorough systematic literature review to assign the appropriate explanation style for the spe- gain more insights into the domain of personalized cific user, Svrcek et al. [24] proposed and applied an explanations in recommender systems. adaptation rule, following a test-then-train approach that identifies if users prefer a certain explanation style, based on a continuous monitoring of the user clicks References on each item explained by different styles. As a re- [1] I. Nunes, D. Jannach, A systematic review and sult, the users obtain more explanations generated by taxonomy of explanations in decision support and their preferred style. Kouki et al. [9] and Millecamp recommender systems, User Modeling and User- et al. 
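A minimal reading of that test-then-train idea is sketched below: explanation styles are rotated during a test phase, clicks per style are counted, and the most clicked style is served afterwards. The phase length and tie handling are our own simplifications and are not details taken from [24].

```python
from collections import Counter

class TestThenTrainStyleSelector:
    """Rotate explanation styles during a test phase, then keep serving the most clicked style."""

    def __init__(self, styles, test_impressions: int = 30):
        self.styles = list(styles)
        self.test_impressions = test_impressions
        self.impressions = 0
        self.clicks_per_style = Counter()

    def next_style(self) -> str:
        if self.impressions < self.test_impressions:
            style = self.styles[self.impressions % len(self.styles)]  # test phase: cycle through styles
        elif self.clicks_per_style:
            style = self.clicks_per_style.most_common(1)[0][0]        # train phase: preferred style
        else:
            style = self.styles[0]                                    # no clicks observed: fall back
        self.impressions += 1
        return style

    def register_click(self, style: str) -> None:
        self.clicks_per_style[style] += 1

selector = TestThenTrainStyleSelector(["user-based", "content-based", "social-based"])
shown = selector.next_style()
selector.register_click(shown)   # the user clicked the item explained in this style
```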
[10] also proposed adaptation rules without ap- Adapted Interaction 27 (2017) 393–444. plying them. In [9], only the content of an explana- [2] N. Tintarev, J. Masthoff, Explaining recommenda- tion is personalized to each user’s data and personality, tions: Design and evaluation, in: Recommender while the explanation styles are kept fixed. The au- systems handbook, Springer, 2015, pp. 353–382. thors, however, investigated the effect of personality on [3] Y. Zhang, X. Chen, Explainable recommendation: the perception of explanations and found that (1) calm A survey and new perspectives, arXiv preprint participants (low neuroticism) preferred popularity- arXiv:1804.11192 (2018). based explanations, while anxious participants (high [4] M.-W. O. Dictionary, Explain, 2021. URL: https:// neuroticism) preferred item-based explanations. Like- www.merriam-webster.com/dictionary/explain. wise, neurotic participants, showed a slight preference [5] B. Y. Lim, A. K. Dey, Assessing demand for intel- for item-based explanations, (2) open participants were ligibility in context-aware applications, in: Pro- persuaded by many explanations, while conscientious ceedings of the 11th international conference on participants preferred fewer. The work in [10] did not Ubiquitous computing, 2009, pp. 195–204. follow a system-driven explanation approach. The au- [6] Y. Zhang, G. Lai, M. Zhang, Y. Zhang, Y. Liu, S. Ma, thors, however, investigated the effect of personal char- Explicit factor models for explainable recommen- acteristics on the perception of explanations and found dation based on phrase-level sentiment analysis, that participants with a low need for cognition (NFC) in: Proceedings of the 37th international ACM were more confident about their playlist when recom- SIGIR conference on Research & development in mendations were explained, as opposed to those with information retrieval, 2014, pp. 83–92. a high NFC. In general, there is a lack of research on [7] M. Naiseh, N. Jiang, J. Ma, R. Ali, Personalis- adaptation rules to tailor explanations in RS to users ing explainable recommendations: literature and with different preferences and personal characteristics. conceptualisation, in: World Conference on Infor- Thus, more user studies need to be conducted on the mation Systems and Technologies, Springer, 2020, same lines, to come up with concrete adaptation rules pp. 518–533. that can be used to personalize system-driven explana- [8] D. Jannach, M. Jugovac, I. Nunes, Explanations tions. and user control in recommender systems, in: Personalized Human-Computer Interaction, De 5. Conclusion Gruyter Oldenbourg, 2019, pp. 133–156. [9] P. Kouki, J. Schaffer, J. Pujara, J. O’Donovan, In this paper, we presented a multi-dimensional concep- L. Getoor, Personalized explanations for hybrid tualization framework for personalized explanations recommender systems, in: Proceedings of the in recommender systems, based on five dimensions: 24th International Conference on Intelligent User WHAT to personalize in an explanation, TO WHOM to Interfaces, 2019, pp. 379–390. personalize, WHO does the personalization, WHY an [10] M. Millecamp, N. N. Htun, C. Conati, K. Verbert, explanation should be personalized, and HOW to per- To explain or not to explain: the effects of personal sonalize an explanation. 
Through this work we aimed characteristics when explaining music recommen- to (1) provide researchers with a structured way to dations, in: Proceedings of the 24th International organize current and future research on personalized Conference on Intelligent User Interfaces, 2019, explainable recommendation, (2) provide an overview pp. 397–407. of what has been done in the domain of personalized ex- [11] S. Mohseni, N. Zarei, E. D. Ragan, A multidis- planations in RS so that more knowledge can be built on ciplinary survey and framework for design and top of it, and (3) identify research gaps in this area. As evaluation of explainable ai systems, ACM Trans- actions on Interactive Intelligent Systems (TiiS) chology 5 (2014) 373. 11 (2021) 1–45. [23] R. Larasati, A. De Liddo, E. Motta, The effect [12] T. Miller, Explanation in artificial intelligence: of explanation styles on user’s trust., in: ExSS- Insights from the social sciences, Artificial intelli- ATEC@ IUI, 2020. gence 267 (2019) 1–38. [24] M. Svrcek, M. Kompan, M. Bielikova, Towards [13] V. Arya, R. K. Bellamy, P.-Y. Chen, A. Dhurand- understandable personalized recommendations: har, M. Hind, S. C. Hoffman, S. Houde, Q. V. Hybrid explanations, Computer Science and In- Liao, R. Luss, A. Mojsilović, et al., One expla- formation Systems 16 (2019) 179–203. nation does not fit all: A toolkit and taxonomy [25] M. Guesmi, M. Chatti, Y. Sun, S. Zumor, F. Ji, of ai explainability techniques, arXiv preprint A. Muslim, L. Vorgerd, S. Joarder, Open, scrutable arXiv:1909.03012 (2019). and explainable interest models for transparent [14] A. Jung, P. H. Nardelli, An information-theoretic recommendation, in: IUI Workshops, 2021. approach to personalized explainable machine [26] R. Zhao, I. Benbasat, H. Cavusoglu, Do users learning, IEEE Signal Processing Letters 27 (2020) always want to know more? investigating the 825–829. relationship between system transparency and [15] N. Kuhl, J. Lobana, C. Meske, Do you comply user’s trust in advice-giving systems (2019). with ai?–personalized explanations of learning [27] B. Y. Lim, Q. Yang, A. M. Abdul, D. Wang, Why algorithms and their impact on employees’ com- these explanations? selecting intelligibility types pliance behavior, arXiv preprint arXiv:2002.08777 for explanation goals., in: IUI Workshops, 2019. (2020). [28] L. R. Goldberg, An alternative" description of [16] J. Schneider, J. Handali, Personalized explanation personality": the big-five factor structure., Journal in machine learning: A conceptualization, arXiv of personality and social psychology 59 (1990) preprint arXiv:1901.00770 (2019). 1216. [17] M. Guesmi, M. A. Chatti, L. Vorgerd, S. Joarder, [29] S. D. Gosling, P. J. Rentfrow, W. B. Swann Jr, A S. Zumor, Y. Sun, F. Ji, A. Muslim, On-demand very brief measure of the big-five personality do- personalized explanation for transparent recom- mains, Journal of Research in personality 37 (2003) mendation, in: Adjunct Proceedings of the 29th 504–528. ACM Conference on User Modeling, Adaptation [30] M. Tkalcic, L. Chen, Personality and recom- and Personalization, 2021, pp. 246–252. mender systems, in: Recommender systems hand- [18] M. Millecamp, N. N. Htun, C. Conati, K. Verbert, book, Springer, 2015, pp. 715–739. What’s in a user? towards personalising trans- [31] M. P. Graus, B. 
Ferwerda, Theory-grounded user parency for music recommender interfaces, in: modeling for personalized hci, in: Personalized Proceedings of the 28th ACM Conference on User human-computer interaction, De Gruyter Olden- Modeling, Adaptation and Personalization, 2020, bourg, 2019, pp. 1–30. pp. 173–182. [32] S. T. Völkel, R. Schödel, D. Buschek, C. Stachl, [19] M. Martijn, C. Conati, K. Verbert, “knowing me, Q. Au, B. Bischl, M. Bühner, H. Hussmann, Op- knowing you”: personalized explanations for a portunities and challenges of utilizing personality music recommender system, User Modeling and traits for personalization in hci, in: Personalized User-Adapted Interaction (2022) 1–38. Human-Computer Interaction, De Gruyter Olden- [20] H. Fan, M. S. Poole, What is personalization? bourg, 2019, pp. 31–64. perspectives on the design and implementation [33] S. Naveed, T. Donkers, J. Ziegler, Argumentation- of personalization in information systems, Jour- based explanations in recommender systems: nal of Organizational Computing and Electronic Conceptual framework and empirical results, in: Commerce 16 (2006) 179–202. Adjunct Publication of the 26th Conference on [21] M. Guesmi, M. A. Chatti, L. Vorgerd, S. Joarder, User Modeling, Adaptation and Personalization, Q. U. Ain, T. Ngo, S. Zumor, Y. Sun, F. Ji, A. Mus- 2018, pp. 293–298. lim, Input or output: Effects of explanation focus [34] J. McInerney, B. Lacker, S. Hansen, K. Higley, on the perception of explainable recommenda- H. Bouchard, A. Gruson, R. Mehrotra, Explore, tion with varying level of details, in: IntRS’21: exploit, and explain: personalizing explainable Joint Workshop on Interfaces and Human Deci- recommendations with bandits, in: Proceedings sion Making for Recommender Systems, 2021. of the 12th ACM conference on recommender sys- [22] S. Wilkinson, Levels and kinds of explanation: tems, 2018, pp. 31–39. lessons from neuropsychiatry, Frontiers in psy- [35] T. Munzner, Visualization analysis and design, CRC press, 2014. [36] T. Kulesza, M. Burnett, W.-K. Wong, S. Stumpf, Principles of explanatory debugging to personal- ize interactive machine learning, in: Proceedings of the 20th international conference on intelligent user interfaces, 2015, pp. 126–137. [37] T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, W.-K. Wong, Too much, too little, or just right? ways explanations impact end users’ mental mod- els, in: 2013 IEEE Symposium on visual languages and human centric computing, IEEE, 2013, pp. 3– 10. [38] A. Paramythis, S. Weibelzahl, J. Masthoff, Layered evaluation of interactive adaptive systems: frame- work and formative methods, User Modeling and User-Adapted Interaction 20 (2010) 383–453. [39] S. Chang, F. M. Harper, L. G. Terveen, Crowd- based personalized natural language explanations for recommendations, in: Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pp. 175–182. [40] F. Gedikli, D. Jannach, M. Ge, How should i ex- plain? a comparison of different explanation types for recommender systems, International Journal of Human-Computer Studies 72 (2014) 367–382. [41] Y. Lu, R. Dong, B. Smyth, Why i like it: multi-task learning for recommendation and explanation, in: Proceedings of the 12th ACM Conference on Recommender Systems, 2018, pp. 4–12. [42] C. Musto, F. Narducci, P. Lops, M. De Gemmis, G. Semeraro, Explod: a framework for explain- ing recommendations based on the linked open data cloud, in: Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pp. 151–154. [43] X. Chen, H. Chen, H. 
Xu, Y. Zhang, Y. Cao, Z. Qin, H. Zha, Personalized fashion recommendation with visual explanations based on multimodal attention network: Towards visually explainable recommendation, in: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2019, pp. 765–774.
[44] L. Quijano-Sanchez, C. Sauer, J. A. Recio-Garcia, B. Diaz-Agudo, Make it personal: a social explanation system applied to group recommendations, Expert Systems with Applications 76 (2017) 36–48.
[45] N. Tintarev, J. Masthoff, Evaluating the effectiveness of explanations for recommender systems, User Modeling and User-Adapted Interaction 22 (2012) 399–439.