<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Explanations in Proactive Recommender Systems in Automotive Scenarios</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roland Bader</string-name>
          <email>roland.bader@bmw.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Karitnig</string-name>
          <email>andreas.karitnig@gmx.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wolfgang Woerndl</string-name>
          <email>woerndl@in.tum.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gerhard Leitner</string-name>
          <email>Gerhard.Leitner@uni-klu.ac.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alpen-Adria Universitaet Klagenfurt</institution>
          ,
          <addr-line>9020 Klagenfurt</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>BMW Group Research and Technology</institution>
          ,
          <addr-line>80992 Munich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Technische Universitaet Muenchen</institution>
          ,
          <addr-line>85748 Garching</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recommender techniques are commonly used to ease selection and support decisions in the context of large quantities of items such as products, media or restaurants. Typically, recommender systems are used in contexts where users can give their full attention to the system. This is not the case in automotive scenarios; we therefore want to provide recommendations proactively to reduce driver distraction while searching for information. Our application scenario is a gas station recommender. Proactively delivered recommendations may not be accepted if the user does not understand why something was recommended to her. Therefore, our goal in this paper is to enhance the transparency of proactively delivered recommendations by means of explanations. We focus on explaining items to convince the user of the relevance of the items and to enable an efficient item selection during driving. We describe a method based on knowledge- and utility-based recommender systems to extract explanations automatically. Our evaluation shows that explanations enable fast decision making for items with reduced information provided to the user.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In recent years, more and more information has become digitally available. Thanks to the
Internet connections available in many state-of-the-art cars, this information can
be made accessible to drivers. Since searching for information is not the primary
task while driving, providing information as recommendations in a proactive
manner is a reasonable approach to reducing information overload and
driver distraction [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. As the user does not request the recommendations
herself, it is important to present them in a way that lets her quickly
recognize why the information is relevant to her.
      </p>
      <p>
        The goal of this paper is to investigate the applicability of explanation
techniques to make proactive recommendations comprehensible for drivers with
a limited amount of information. Explanations are already the focus of research in
other areas of recommender systems, e.g. product recommendations ([
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]).
To our knowledge, there is no existing work on explanations for mobile
proactive recommender systems. The challenge is to make proactive decisions
transparent with as little information as possible, avoiding information overload.
Our application scenario is a gas station recommender for drivers, already
presented in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The contribution of this paper is, first, an investigation of the
requirements on explanations in our application scenario; second, a method to generate
short explanations for items from the recommendation process
described in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], and third, an evaluation of the generated explanations. Note that the
scope of this paper is limited to an offline investigation that lays the groundwork
for an in-field study in a car.
      </p>
      <p>The remainder of the paper is organized as follows. In Section 2 we describe
fundamentals of explanations in recommender systems. Section 3 summarizes a
preliminary study. In Section 4 we describe how explanations are generated out
of the recommendation process and Section 5 includes a prototype evaluation of
the presented method. Section 6 closes with conclusions and future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Fundamentals and Related Work</title>
      <p>
        Recommender systems suggest items such as products or restaurants to an active
user. Delivered proactively, recommendations should have high relevance, be
non-intrusive, and the system should have a long-term memory [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We have already
developed methods for proactivity in recommender systems in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Based
on this work we observed that proactively delivered recommendations lack user
acceptance if the user does not know why something was recommended to her.
Transparency and comprehensibility are two aspects a proactive system should
fulfil to be accepted [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Our goal in this paper is to avoid loss of acceptance by
providing explanations in our existing proactive recommender for gas stations.
      </p>
      <p>An explanation is a set of arguments describing a certain aspect, e.g. an
item or a situation. An argument is a statement containing a piece of
information related to the aspect which should be explained, e.g. "The gas station is
inexpensive" or "Gas level is low". In an item explanation, arguments can be for
(positive) or against (negative) an item, or neutral.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] seven generalizable goals for explanations in recommender systems are
provided. Which goals an explanation accomplishes depends on the field of
application. Giving the user the chance to correct the system (scrutability) and
delivering effective recommendations are important for recommender systems
in general. For proactive recommender systems in a car, we think that especially
transparency (Why was this recommended to me?), persuasiveness (Are the
recommended items relevant for me?) and efficiency (Can I make a decision
with little interaction?) are the most important goals. If they are fulfilled,
trust and satisfaction can also be positively influenced.
      </p>
      <p>
        The work described in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] contains design principles for explanations in
recommender systems. The principles focus on categorizing alternative items
and explaining the categories. Due to the limited number of items presented in a
proactive recommendation, we think that categorization can hardly be applied
in our application domain. The same holds for many explanation methods created
for desktop systems, where the user can turn her full attention to the interface.
Hence, the challenge in proactive recommender systems is to convince the user
quickly of the usefulness of the recommended items.
      </p>
      <p>
        As we want to explain utility- and knowledge-based recommendations based
on [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], a utility-based approach for explanations seems reasonable. The work in
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] presents a method based on the utility of a whole explanation to select and
rank explanations. Instead of the utility of the whole explanation, [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] measures
the performance of a single argument and combines arguments into structured
explanations. We combine ideas from both works in our proposed method.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Preliminary Study</title>
      <p>Before implementing our methods for explanations in proactive recommender
systems, we conducted a user survey to find out the main requirements for
the generation of arguments in our application scenario of a gas station
recommender.</p>
      <p>
        The user survey was conducted by means of an online questionnaire. The
subjects had to rate different kinds of arguments and structures on a 5-point
Likert scale ranging from "very useful" to "not useful at all". We focused on
aspects we found in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The most important question was what kind of
arguments should be used for explaining items in our application domain. Arguments
are built on either context-based (e.g. gas level, opening times) or
preference-based (e.g. gas brand or price preference) criteria. Moreover, we wanted to know
how many arguments to use and how to combine and structure them
(independent vs. comparative to other items vs. comparative to an average). We also
asked the respondents about the usefulness of other types of information such as
situation explanations, status information and the reliability of item attributes and
context data. The survey had 81 respondents who completed the questions. The
group of participants consisted of 64 men and 17 women with an average age of
29 years.
      </p>
      <p>The most important aspects influencing the decision for a certain gas station
seem to be gas price, detour and gas level at the gas station. Following this
pattern, arguments including detour, price and gas level were mostly rated
very good. Ratings for gas station context data, such as opening times or a free soft
drink, varied depending on the content of an argument. Arguments more closely related
to the task of refilling, e.g. opening times, were rated better.</p>
      <p>The subjects showed no clear favourite for the structure of an explanation;
independent and comparative argumentation were rated equally. Two arguments
seem to be a good size for an explanation in the case of gas stations:
given the desired number of items in a gas station recommendation, which
ranges from 3 to 5, two arguments seem reasonable to distinguish them.
Arguments concerning situations leading to a recommendation were rated
differently. Situations which are directly connected to the task and have an impact on
the recommendation were rated best, e.g. "only gas stations along the route were
recommended because you do not have much time" or "just a few gas stations
are available in this area". Status information as well as data reliability were not
interesting for the subjects.</p>
    </sec>
    <sec id="sec-4">
      <title>Our Approach for Explanations in Proactive Recommender Systems</title>
      <p>Based on the results of the preliminary study, there are two major
aspects which should be explained to the user. First, we have to explain what
the crucial situation for a recommendation was. A low gas level is an obvious
situation for a gas station recommendation, but there are more situations
which may lead to one: a particularly good gas station along the route,
e.g. a very low-priced one, a deserted area with few gas stations, or an important
appointment which restricts the recommendation to gas stations on the
route. Without an explanation, a proactive recommendation in these situations may
result in misunderstandings.</p>
      <p>Second, it should be clear to the user why the recommended items are
relevant for her based on her user profile. In this paper we focus on explanations
for items. Our explanation method is designed for a small set of recommended
items because many items overwhelm the user if they are provided proactively.
There are two main goals we try to accomplish. First, we want to enable
efficiency, because item selection is not a primary task while driving and is much harder
than in situations where users can focus their attention on the system (e.g.
parking). Second, the user should be persuaded that the items are relevant.</p>
      <p>
        We use a ramping strategy like [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] to explain recommendations, i.e.
explanations are distributed over several levels of detail. The lowest level (first phase)
is provided automatically with the recommendations; gradually more and
more information is then accessible manually by the user. The elements in the first
phase are short explanations for the situation and for the items. More detailed
levels include a comparison of items, a list of all items, or item details. The first
phase is the most important one in the ramping strategy, as the user has to
recognize quickly why the recommendation is relevant for her. The following
description mainly covers this phase.
      </p>
      <p>The arguments for items in the first phase are structured independently, i.e.
no comparative explanations are used. The preliminary study showed that this
makes no difference for the user, but an independent structure allows for shorter
arguments. We use preference- as well as context-based arguments, starting with
a positive argument and adding a second one if necessary. A
maximum of two arguments is used for every item.</p>
      <p>The information in an argument can either be an interpreted
attribute value, e.g. "gas level is low", or a fact, e.g. "gas level is 32 liters". An
interpretation is a mapping from a specific value to a discrete interval. We used a generic
nominal scale with the values One, Very High, High, Medium, Low, Very Low and Null to
map values to discrete labels. Two kinds of values can be mapped. A utility
interpretation maps the utility of an item, e.g. a gas level of 32 liters at a gas
station can be mapped to Null, because most people do not refill at this level;
the utility is therefore 0 on that decision dimension. Interpreting the attribute
and context values directly leads to different results, e.g. a gas level of 32 liters is Medium
if the tank has a capacity of 65 liters. This is called an attribute interpretation.</p>
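      <p>As an illustration, such an attribute interpretation can be sketched as follows. This is only a sketch under our own assumptions: the paper does not prescribe an implementation, and the interval boundaries and the even split of the value range are hypothetical.</p>

```python
# Illustrative sketch of an attribute interpretation: map a fact
# (gas level in liters) to the generic nominal scale described above.
# The boundaries are hypothetical; the paper does not specify them.

NOMINAL = ["Null", "Very Low", "Low", "Medium", "High", "Very High", "One"]

def attribute_interpretation(value, maximum):
    """Map a raw attribute value to a discrete nominal label
    relative to its maximum (e.g. gas level vs. tank capacity)."""
    ratio = value / maximum
    if ratio <= 0.0:
        return "Null"
    if ratio >= 1.0:
        return "One"
    # Split the open interval (0, 1) evenly into the five middle labels.
    bins = ["Very Low", "Low", "Medium", "High", "Very High"]
    return bins[min(int(ratio * 5), 4)]

# A gas level of 32 liters in a 65-liter tank is interpreted as Medium:
print(attribute_interpretation(32, 65))  # Medium
```

A utility interpretation would apply the same discretization to the utility value of the item rather than to the raw attribute.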
      <sec id="sec-5-1">
        <title>Argument Assessment</title>
        <p>
          Our argument generation method for items is based on a context-aware
recommender system for gas stations presented in our previous work [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. It uses
Multi-Criteria Decision Making (MCDM) methods to assess items I on multiple
decision dimensions D by means of utility functions; example dimensions
are price and detour. First, all item attributes and context data (level 1) belonging
together are aggregated into local scores LS<sub>I,D</sub> in the range [0, 1] (level 2) on every
dimension D. On level 3, all dimensions are aggregated into a global score GS<sub>I</sub>.
Users are able to set their preferences for the item dimensions explicitly, which
results in a weight w<sub>D</sub> for every dimension D.
        </p>
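        <p>The three-level aggregation can be sketched as a weighted sum over the local scores; the utility values and weights below are illustrative, not taken from the paper.</p>

```python
# Sketch of the MCDM aggregation (hypothetical local scores and
# weights; the paper only fixes the structure):
# level 1: raw attributes/context -> level 2: local scores LS_{I,D}
# -> level 3: global score GS_I as the weighted sum over dimensions.

def global_score(local_scores, weights):
    """Aggregate local scores LS_{I,D} in [0, 1] into GS_I using
    the user's dimension weights w_D (assumed to sum to 1)."""
    return sum(weights[d] * ls for d, ls in local_scores.items())

# Example: one gas station assessed on price and detour.
ls = {"price": 0.9, "detour": 0.5}   # local scores per dimension
w = {"price": 0.7, "detour": 0.3}    # user preference weights
print(round(global_score(ls, w), 2))  # 0.78
```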
        <p>
          The argument assessment uses two additional scores. The explanation score
ES<sub>I,D</sub> describes the explaining performance of an item dimension, and the
information score IS<sub>D</sub> measures the amount of information in a dimension. The
explanation score is calculated by multiplying the weight of a dimension w<sub>D</sub>
with the performance of the item I in that dimension: ES<sub>I,D</sub> = LS<sub>I,D</sub> · w<sub>D</sub>.
This way, badly performing dimensions as well as aspects not important to the
user are neglected. The score corresponds to the product of the user's interest in a
dimension with the utility of an explanation for that dimension described in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
Instead of the utility of a whole explanation, we measure the performance of the dimension
directly. The problem with using only this score is that if every item performs
well on a dimension and this dimension is important for the user, every item
would be explained by the same information. This decreases the opportunity to
make an effective decision, as the items are not distinguishable. Therefore the
information score measures the amount of information in a dimension relative to
an item set. It is calculated as IS<sub>D</sub> = (R + I) / 2. The value R = max(x) - min(x)
is the range of x in the set. The information I can either be Shannon's entropy
I = -Σ<sub>i=1..n</sub> p(x<sub>i</sub>) log<sub>n</sub> p(x<sub>i</sub>) or simply I = (n - h) / (n - 1), where n is the number of items in
the set and h is the frequency of the most frequent x in the set. Taking x = LS<sub>I,D</sub>
is a good choice if the local scores have a small value range; otherwise the utility
interpretation of LS<sub>I,D</sub> performs better. The information score is low if either
all x are similar (R is low) or the same x appears frequently (I is low), e.g. all gas
stations are average priced.
        </p>
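        <p>A minimal sketch of the two scores, using the simple information measure I = (n - h) / (n - 1) and illustrative local scores:</p>

```python
# Sketch of the two assessment scores (illustrative values):
# ES_{I,D} = LS_{I,D} * w_D and IS_D = (R + I) / 2, with
# R = max(x) - min(x) and the simple measure I = (n - h) / (n - 1),
# where h is the frequency of the most frequent x in the set.
from collections import Counter

def explanation_score(ls, w):
    """Explaining performance of a dimension for one item."""
    return ls * w

def information_score(xs):
    """Amount of information in a dimension relative to an item set."""
    r = max(xs) - min(xs)                    # range R of the values
    n = len(xs)
    h = Counter(xs).most_common(1)[0][1]     # count of the most frequent x
    i = (n - h) / (n - 1)                    # simple information measure
    return (r + i) / 2

# All three items have the same local score on a dimension, so both
# R and I are 0 and the dimension carries no distinguishing information:
print(information_score([0.5, 0.5, 0.5]))  # 0.0
```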
      </sec>
      <sec id="sec-5-2">
        <title>Explanation Process</title>
        <p>
          Figure 1 shows the process to select arguments based on the scores we described
in the previous section. It follows the framework for explanation generation
described in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] by dividing the process into the selection and organization of the
explanation content and the transformation into human-understandable output.
        </p>
        <p>Fig. 1. The explanation process. In content selection, the explanation score ES<sub>I,D</sub> &gt; α selects the main argument (1), the overall assessment GS<sub>I</sub> &gt; β serves as a fallback (2), and a low information score IS<sub>D</sub> &lt; γ (3) triggers a second argument with ES<sub>I,D</sub> &gt; μ (4). Surface generation then maps each selected argument (attribute or context, interpretation or fact) via the explanation database (5) to a structured explanation consisting of argument 1 and an optional argument 2.</p>
        <sec id="sec-5-2-2">
          <title>Surface Generation</title>
          <p>In content selection our argumentation strategy selects arguments for
every item I separately. A positive argument is selected first to help the user
instantly recognize why this item is relevant. For this, the best performing
dimension D based on the explanation score ES<sub>I,D</sub> is compared to a threshold α
(1). Larger than α means the dimension is good enough for a first argument.
The threshold should be chosen so that the first argument is positive. If no
dimension is larger and thus no first argument can be selected, we look at the
global score GS<sub>I</sub> (2). If this score is larger than β, the item is a good average;
otherwise we suppose that the recommender could not find better alternatives.
With a first argument selected, we look at the information score of its dimension (3). A
small information score (lower than γ) means that this dimension provides little
information, therefore a second argument is selected by means of the explanation
score: the explanation score ES<sub>I,D</sub> of the second argument must be larger than μ
to make sure the second argument is meaningful enough (4). Generally, μ &lt; α
because the requirements on the second argument are lower. With the thresholds
γ and μ the amount of information can be controlled.</p>
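          <p>The selection steps (1) to (4) above can be sketched as follows; the concrete threshold values are hypothetical, the only stated constraint being μ &lt; α.</p>

```python
# Sketch of the content-selection steps (1)-(4) for one item.
# The threshold values alpha, beta, gamma and mu are hypothetical;
# the method only requires mu < alpha.

def select_arguments(es, gs, info, alpha=0.6, beta=0.5, gamma=0.4, mu=0.3):
    """es: ES_{I,D} per dimension; gs: global score GS_I;
    info: IS_D per dimension. Returns up to two argument
    dimensions, or a fallback explanation."""
    best = max(es, key=es.get)
    if es[best] <= alpha:                 # (1) no dimension good enough
        # (2) fall back to the overall assessment
        return ["good average"] if gs > beta else ["no better alternative"]
    args = [best]
    if info[best] < gamma:                # (3) first argument carries little info
        rest = {d: s for d, s in es.items() if d != best}
        if rest:
            second = max(rest, key=rest.get)
            if rest[second] > mu:         # (4) second argument meaningful enough
                args.append(second)
    return args

es = {"price": 0.8, "detour": 0.5}
info = {"price": 0.2, "detour": 0.9}
print(select_arguments(es, gs=0.7, info=info))  # ['price', 'detour']
```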
          <p>The result of the content selection is an abstract explanation, which needs to
be resolved into something the user understands. This is done in the surface
generation. We map a key-value pair, like (gaslevel, low), to human-understandable
information, e.g. textual phrases or icons (5). Either facts or attribute
interpretations can be used as values. The human-understandable explanation information
is stored uniquely in a database, e.g. in XML format. The structure of an
explanation (icon, independent phrase, comparative phrase, etc.) can also be defined
here.</p>
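          <p>Surface generation can be sketched as a lookup from key-value pairs to phrases; the table entries below are illustrative, whereas the paper stores this information in a database.</p>

```python
# Sketch of surface generation (5): map abstract (key, value) pairs
# to human-understandable phrases via a lookup table. The keys and
# phrases are illustrative placeholders for the explanation database.

PHRASES = {
    ("gaslevel", "low"): "Gas level is low",
    ("price", "very_low"): "very low priced",
    ("detour", "low"): "little detour",
}

def surface(argument):
    """Resolve an abstract argument into a textual phrase, falling
    back to a generic rendering for unknown pairs."""
    key_value = (argument["key"], argument["value"])
    return PHRASES.get(key_value, "%s: %s" % key_value)

print(surface({"key": "price", "value": "very_low"}))  # very low priced
```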
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Evaluation</title>
      <p>To evaluate our generated explanations, we set up a user study with a desktop
prototype. The prototype is a combination of a street map viewer and an
explanation view. The map view is based on a street map from OpenStreetMap.com
and is able to visualize a user's route, icons for recommended gas stations and
detour routes for the gas stations. The displayed content depends on the current
phase in the ramping strategy. The view for the first phase, which is shown to
the user automatically, provides a list of at most 3 gas station
recommendations, 1 or 2 arguments for every gas station, and a situation explanation. Due to
the shortness constraints of an explanation, negative arguments are avoided. From
here, the subject can access the views for the second phase with item details and
the third phase with a list of all gas stations prefiltered along the route.</p>
      <p>
        We conducted user interviews with 20 participants with an average age of
29, 17 male and 3 female. For that, we created 6 different scenarios (2 short, 3
average and 1 long route). In every phase, the subjects were asked about missing and
relevant information in the explanation as well as on the map. Persuasiveness
was measured by asking the subjects about their satisfaction with a selection in
the first phase and whether they needed more information. Looking at how often the
subjects needed to switch to deeper phases with more information accounts for
the efficiency. The explanations were all text-based. For example, a set of 3
gas stations could be explained by (1) very low priced, (2) on the route, (3) low
priced, little detour. Acoustic and tactile modalities are out of the scope of this
survey. The recommendations were generated by the methods presented in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
and every subject was asked to give her preferences for gas price, detour, brand
and preferred gas level at the gas station.
      </p>
      <sec id="sec-6-1">
        <title>Results</title>
        <p>The number of items provided by the recommender was rated as the right number
by 14 subjects on average. The number of arguments was rated as too few by 7
subjects and as exactly right by 8 subjects. Too few arguments were criticized
when two items could not be distinguished. Presenting the arguments either as facts
or interpreted was rated differently: 11 subjects preferred facts, 9 interpretations.
This may change in a real driving scenario, depending on which kind of argument
imposes more cognitive effort.</p>
        <p>Almost all information in the first phase was rated as useful by most of the
subjects. In regular scenarios, most subjects could make a satisfying decision with only
this information. Interestingly, the predicted gas level at the gas station was
useless for most subjects, although it is an important decision dimension for most
of them. This may indicate that the user's expectation also plays an important
role: in our case, users only expect to get gas station recommendations if their
gas level is low. The second phase contained only useful information and was
selected when special details were needed, e.g. an ATM or a shop. At the beginning
of the interviews some subjects used the second phase to check the matching of
interpreted values. The list of all items along the route was rarely selected, and
only when the recommendations did not correspond to user expectations. In 70%
of the cases the map played an important role in the decision process.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Conclusions and Future Work</title>
      <p>
        We conclude that the explained strategy worked well offline. Most of the
subjects were satisfied with the items based on the explanations provided in the first
phase. We therefore think that the amount of information was enough to
convince the subjects of the relevance of the items. Further phases were rarely used,
and when needed they were quickly accessible, so the selection could also
be made efficiently. At this stage of the project it could not be determined whether users
prefer interpreted or specific information in an argument. Next, we will investigate
whether the results are transferable to a driving scenario with real proactive
recommendations. In our further research, we will also adjust the parameters based
on the results of the study. Furthermore, we want to use Shannon's entropy on
the whole prefiltered set of items to meet user expectations better. To further
increase persuasiveness, we plan to integrate a dominance check like [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] over all
arguments presented to the user to better distinguish items.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bader</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neufeld</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woerndl</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prinz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Context-aware POI recommendations in an automotive scenario using multi-criteria decision making methods</article-title>
          .
          <source>In: Workshop on Context-awareness in Retrieval and Recommendation</source>
          . pp.
          <fpage>23</fpage>
          -
          <lpage>30</lpage>
          . ACM Press, Palo Alto, CA (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bader</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woerndl</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prinz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Situation Awareness for Proactive In-Car Recommendations of Points-Of-Interest (POI)</article-title>
          .
          <source>In: Workshop on Context Aware Intelligent Assistance</source>
          . Karlsruhe, Germany
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Carenini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moore</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          :
          <article-title>Generating and evaluating evaluative arguments</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>170</volume>
          (
          <issue>11</issue>
          ),
          <fpage>925</fpage>
          -
          <lpage>952</lpage>
          (Aug
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Felfernig</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gula</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leitner</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maier</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Melcher</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Teppan</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Persuasion in Knowledge-Based Recommendation</article-title>
          .
          <source>In: 3rd International Conference on Persuasive Technology</source>
          . pp.
          <fpage>71</fpage>
          -
          <lpage>82</lpage>
          . Springer, Oulu, Finland (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Myers</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yorke-Smith</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Proactive Behavior of a Personal Assistive Agent</article-title>
          .
          <source>In: Workshop on Metareasoning in Agent-Based Systems. Honolulu</source>
          , HI
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Pu</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Trust building with explanation interfaces</article-title>
          .
          <source>In: 11th International conference on Intelligent User Interfaces</source>
          . pp.
          <fpage>93</fpage>
          -
          <lpage>100</lpage>
          . ACM Press, Sydney, Australia (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Puerta Melguizo</surname>
            ,
            <given-names>M.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogers</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boves</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deshpande</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosch</surname>
            ,
            <given-names>A.V.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cardoso</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cordeiro</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Filipe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>What a Proactive Recommendation System Needs: Relevance, Non-Intrusiveness, and a New Long-Term Memory</article-title>
          .
          <source>In: 9th International Conference on Enterprise Information Systems</source>
          . vol.
          <volume>6</volume>
          , pp.
          <fpage>86</fpage>
          -
          <lpage>91</lpage>
          .
          Madeira, Portugal
          (Apr
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Rhodes</surname>
            ,
            <given-names>B.J.</given-names>
          </string-name>
          :
          <article-title>Just-In-Time Information Retrieval</article-title>
          .
          <source>PhD thesis</source>
          , MIT Media Lab (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Tintarev</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masthoff</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Designing and Evaluating Explanations for Recommender Systems</source>
          , pp.
          <fpage>479</fpage>
          -
          <lpage>510</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>