=Paper=
{{Paper
|id=None
|storemode=property
|title=Visual Nudges for Enhancing the Use and Produce of Reputation Information
|pdfUrl=https://ceur-ws.org/Vol-612/paper1.pdf
|volume=Vol-612
}}
==Visual Nudges for Enhancing the Use and Produce of Reputation Information==
FULL PAPER
Proceedings of the ACM RecSys 2010 Workshop on User-Centric Evaluation of Recommender Systems and Their Interfaces (UCERSTI),
Barcelona, Spain, Sep 30, 2010
Published by CEUR-WS.org, ISSN 1613-0073, online ceur-ws.org/Vol-612/paper1.pdf
Kristiina Karvonen¹, Sanna Shibasaki¹, Sofia Nunes¹, Puneet Kaur¹, Olli Immonen²
¹ Helsinki Institute for Information Technology HIIT, P.O. Box 19800, FIN-00076 Aalto, +358 9 470 28362, {kristiina.karvonen, sanna.shibasaki, sofia.nunes, puneet.kaur}@hiit.fi
² Nokia, P.O. Box 407, 00045 Nokia Group, Finland, +358 71 800 8000, olli.immonen@nokia.com
ABSTRACT
In this paper, we aim to analyse the current level of usability on ten popular online websites utilising some kind of reputation system. The conducted heuristic and expert evaluations reveal a number of deficiencies in the overall usability of these websites, and especially in how the reputation information is currently presented. The low level of usability has direct consequences on how accessible and understandable the reputation information is to the user. We also conducted user studies, consisting of test tasks and interviews, on two websites utilising reputation information. The results suggest why the currently provided information remains under-utilised and, to a great extent, goes undetected or gets misinterpreted. On basis of the work so far, we propose ways to overcome some of the current problems by changing, rearranging and grouping the visual elements and the visual layout of the reputation information offered on the sites. The enhanced visualisations create "visual nudges" by emphasising the key elements in order to make users notice and use the information available for better and more informed decisions.

Categories and Subject Descriptors
H.5.2 [Information Interfaces and Presentation]: User Interfaces – Evaluation/Methodology

General Terms
Design, Security, Human Factors

Keywords
Usability, heuristics, expert evaluation, user study, recommendation, reputation, visual nudge, user interface design

1. INTRODUCTION
As Internet services and peer-to-peer systems currently lack the traditional indicators of trustworthiness [3], differentiating between a good offer and a bad one in an easy manner is not trivial. In the peer-to-peer markets especially, information about the reputation of the various parties in the online transactions – the buyer, seller, and venue – can help to make good decisions and diminish the risks involved [5].

Reputation systems have grown into a prominent means to gather and provide such information about the quality of the offering and its seller for the end user. A reputation system operates by computing reputation scores for some set of objects, such as services or items on sale, within a certain community or domain. The scores are typically computed on basis of a collection of opinions – usually ratings – that other entities hold about the objects, by employing a reputation algorithm that calculates reputation scores from the received ratings; the scores are then published. Reputation information typically represents users' opinions about a particular product, service or peer [5].

Reputation information can be textual (e.g. descriptions, reviews) or visual (e.g. images, symbols, statistical visualisations), or, usually, a combination of the two. However, the reputation information is currently often presented in a way that makes it hard to notice and to interpret. To make things worse, according to our heuristic and expert evaluations, the overall level of usability on the sites offering reputation information is often bad enough to stop users from effectively having the reputation information at their disposal, as it goes undetected: if the user cannot find the functionality, the functionality is not really there [12]. The reputation information is not utilised as guidance in the way it could and should be.

Which parts of the reputation information are presented visually needs to be carefully selected: our user studies [9][16] evaluating websites that use reputation systems have shown that the visually prominent parts of the offered reputation information take center stage, regardless of their actual usefulness and relevance for the decision making. Furthermore, cohesion between the various reputation elements is often missing, and the reputation information is experienced as scattered, with unrelated pieces of information used in random combinations dictated by their visual prominence rather than by their actual importance for the decision-making.

To further investigate the described issues, we have evaluated ten more websites of different categories (news, shopping, social networking etc.) that employ some kind of reputation system. The main objective of the usability evaluations was to assess the current level of usability of these services, and to see how well the standard set of heuristics from Nielsen [13] works for sites with reputation information, or whether they need additional rules of thumb. In the expert evaluations, we focused on the reputation information and how it is visualised, in order to understand what works, what fails and how things could be improved.

As visual prominence seems key for better utilisation of the reputation information, we introduce the idea of visual nudging for improving the usage and production of reputation information to enable better and more informed decision-making.
"Nudging", a term introduced by Thaler et al. as a way to enhance decision-making [19], in this context means that by enhancing the key elements of the reputation information that the user should be looking at in order to reach a good decision, we aim to gently influence the users' behavior by focusing their attention in the relevant direction. The visually prominent elements are intended to serve as nudges. A nudge can alter the users' behavior in a predictable way without forbidding any options or significantly changing their economic incentives [19]. As indicated by our previous studies [9], nudging through visual means could be most effective, as visual elements gain the users' attention. Further, better visualisation may also help to create more interest in contributing to the reputation information (commenting and rating), as currently the ratio between all users of a site and those who actively add to the reputation information is often quite low.
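The paper does not specify any particular reputation algorithm; purely as a hypothetical sketch of the kind of computation described above – opinions collected from a community are aggregated into a score that is then published – the following Python fragment may help to fix the idea. The class name and the 1–5 star scale are illustrative assumptions.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ReputationRecord:
    """Ratings collected for one object (e.g. a widget, a seller, or a hotel)."""
    ratings: list[int] = field(default_factory=list)  # e.g. 1-5 stars

    def add_rating(self, stars: int) -> None:
        # Opinions held by other entities are collected as simple star ratings.
        if not 1 <= stars <= 5:
            raise ValueError("rating must be between 1 and 5 stars")
        self.ratings.append(stars)

    def published_score(self) -> tuple[float, int]:
        # The "reputation algorithm" here is simply the mean rating,
        # published together with the number of ratings behind it.
        if not self.ratings:
            return 0.0, 0
        return round(mean(self.ratings), 1), len(self.ratings)

widget = ReputationRecord()
for stars in (5, 4, 4, 3, 5):
    widget.add_rating(stars)
print(widget.published_score())  # (4.2, 5)
```

Real reputation systems naturally use richer inputs (reviewer reputation, recency, weighting), but even this toy version exposes the two numbers – score and count – that the visualisations discussed in this paper need to convey.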
We will first present the background for the current study, the previously conducted user studies together with the earlier work done in this area. We will then proceed with the usability evaluations for the additional websites and discuss the findings. We will conclude by summarising the lessons learned on what kind of usability issues we currently see as most pressing on the websites utilising reputation systems, and how they could be improved on, especially focusing on the key role of the visual elements and their prominence for the overall usability of such websites.

2. BACKGROUND
Reputation information is typically presented by both visual and textual means.

2.1 Visual reputation information
Currently, the most common way to present visual reputation information is to use star symbols to represent the current rating of the item under scrutiny (Figure 1). Other symbolic icons commonly used for visual reputation information include "thumbs up" or "thumbs down" and a scale consisting of circle symbols (Figure 2).

Figure 1. Examples of usage of the star symbols as reputation visualisation in some popular websites

Figure 2. Example of other commonly used symbolic icons for reputation information
Most common representations of reputation information are used to communicate the popularity rate of the product or service based on users' votes. Usually, the user is able to see the amount of votes given, describing the popularity or how much the product is "liked". However, this information does not reveal the scale of the information, and the user may be left with confusion: What is the difference between three and four stars? How many stars does a good product usually get? How many ratings can be considered "a lot of ratings" in this service? Because of this ambiguity, the quality of the reputation information is experienced as questionable: What do the ratings actually mean (to me)? How credible are the ratings? How are the ratings calculated? For the users, the transparency of the information [17][18] is missing.

2.2 Textual reputation information
Possibly partly due to all of these problems in the visually presented reputation information, the textual information is currently considered more important for the users: reliance on peer reviews has become everyday news. For example, USA Today has recently reported on the growing importance of peer reviews, stating that "customers are increasingly vocalising their experiences online for other travelers to read" [22]. In another article, online ratings and reviews were considered almost twice as significant as brand and reputation when choosing a hotel [21].

Online reviews have indeed become increasingly popular as a way to judge the quality of various products and services [4][8][11]. Even though popular and widely used, the textual reputation information has its own troubles. Basic usability problems related to how the information is presented hinder the efficient use of the reviews. The user faces the burden of finding the relevant information out of a sometimes excessive amount of textual feedback. Furthermore, a recent study by Jurca et al. [8] shows that reviewing behavior can also include a variety of biases.
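To make the ambiguity concrete: a bare average hides the number of ratings behind it. The sketch below contrasts a raw mean with a damped, Bayesian-style weighted average that pulls sparsely rated items towards a prior; the prior values are arbitrary assumptions, and the scheme is a generic illustration rather than the formula of any of the sites discussed here.

```python
def damped_average(ratings: list[int], prior_mean: float = 3.0, prior_weight: int = 10) -> float:
    """Weighted average that pulls sparsely rated items towards a prior,
    so that a 5.0 from two votes does not outrank a 4.3 from 900 votes."""
    n = len(ratings)
    if n == 0:
        return prior_mean
    raw = sum(ratings) / n
    return (prior_weight * prior_mean + n * raw) / (prior_weight + n)

few = [5, 5]                   # raw average 5.0, but only two ratings
many = [4] * 600 + [5] * 300   # raw average ~4.33 from 900 ratings

print(round(damped_average(few), 2))   # 3.33 -> few votes, the score stays near the prior
print(round(damped_average(many), 2))  # 4.32 -> the large sample dominates the prior
```

Weighted schemes of this kind are one way of addressing the "how many ratings is a lot?" question, at the cost of making transparency about how the score is calculated even more important.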
2.3 Trust and risk
In the context of downloading, trust and risk perception also become an issue. For the online user, the perceived credibility of a website or a service has a strong impact on the trust level and risk perception [5]. As has been shown before [1], visual or aesthetic factors are linked to a website's credibility – a good first impression, strongly based on the visual representation, can set the trust level towards the service in a matter of milliseconds [10]. Investing in a visually pleasing user interface (UI) has been found to enhance a positive user experience of web pages [7][14].

3. EARLIER WORK
In our earlier work [9][16], we have studied the actual usage, usability and ways of utilisation of reputation information in the context of websites that offer mobile applications for downloading. Our studies focused on two websites: 1) WidSets, a website for downloading and developing mobile applications ("widgets") launched in October 2006 by Nokia (www.widsets.com), and 2) Nokia Ovi Store (www.ovi.com), Nokia's Internet service offering services in various areas such as games, maps, music, and mobile applications. Ovi replaced the WidSets site in April 2009. Our study on Ovi focused on the part of the service offering downloadable mobile applications.

In the study of the WidSets website [9], we focused on the current usage of the reputation elements on the website. The results indicated that the visually prominent UI elements of the site acted as the main sources of information when making decisions about downloading widgets, while less prominent information was, for the most part, overlooked. Therefore, we were able to conclude that any information that is de facto important for the decision making should also be presented as visually prominent in order to gain the users' attention. The question of whether the elements should be presented as an aggregation of the different elements or separately, allowing users to utilise the information in a more independent fashion, could not be determined on basis of the studies and thus became one of the questions to be resolved by further studies.

As a direct continuation of the WidSets study, we conducted another study focusing on Ovi and on how the online reputation information currently offered in Ovi is understood and utilised by its users [16]. Our results again showed that the reputation information available was not efficiently utilised. According to our interpretation, the lack of cohesion between the reputation elements hinders the understandability and use of the information available. Users also reported that they found the credibility and quality of the reputation information questionable, which may be the result of the inconsistent and ambiguous way of presenting the information. Users were not able to find the relevant information and thus not able to form an overall view or an understanding of the content and the message of the reputation information.

Based on the results from these studies, we suggested [16] that, in order to help users make full use of the reputation information, a visually prominent aggregation of the various reputation elements would be helpful. According to our studies, the users also preferred the decision making process to be "quick and easy". Answering these demands requires efficient composition of information from different sources. As humans are experts in processing visual information, presenting the information visually, in graphical form, is also likely to ease and enhance the information processing.

4. RESEARCH QUESTIONS AND METHODOLOGY
The previous studies showed that there is a lack of visual prominence and cohesion between the different reputation elements, and that the reputation information was under-utilised. On basis of these findings, the following hypotheses were formulated:
• The websites offering reputation information had problems with usability;
• More specifically, the reputation information provided has bad usability;
• Visual prominence of the reputation elements is guiding the decision-making process on these sites;
• The visually prominent elements on the websites are "wrong";
• Visual nudging is not working on these websites to enhance the decision-making process.

The basic research question behind the study is: "Why is the reputation information underutilised?" By addressing this research question, and armed with an initial understanding about the importance of the visual elements, we aimed at analysing how the reputation information is currently displayed across the selected sites.

Among the various methods available in the field of Human-Computer Interaction (HCI), heuristic evaluation based on Nielsen's heuristics [12] was chosen as the basic method for analysing the sites offering reputation information. The heuristic evaluation was complemented with an expert evaluation focusing on the visual elements of the sites.

Heuristic evaluation is a form of usability inspection where usability specialists or other evaluators judge how the object of study, e.g. a website, passes an itemised list of established usability heuristics [12][15]. Preferably, the evaluators are experts in human factors or HCI, but less experienced evaluators can also follow the heuristics checklist and produce a report of valid problems. Expert evaluation is a more free-form analysis of a given object under observation, based on the expert's experience, often focusing on certain elements of the object [2]. With the evaluations, we aimed at gaining an understanding of the usability issues and at potentially formulating additional heuristics for reputation information.
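Purely as a hypothetical illustration of the procedure – and of the three-level summary (√, X, or √ / X) used later in Table 1 – findings could be logged per heuristic and tallied as follows. The heuristics named are Nielsen's; the example findings and the data layout are illustrative assumptions, not the instrument actually used in this study.

```python
# Hypothetical log of findings for one site: (Nielsen heuristic, is_positive).
findings = [
    ("Visibility of system status", True),   # e.g. rating counts update immediately
    ("Consistency and standards", False),    # e.g. two competing rating symbols
    ("User control and freedom", False),     # e.g. a given rating cannot be undone
]

def verdict(findings: list[tuple[str, bool]]) -> str:
    """Collapse findings into the three-level summary used in Table 1:
    "√" = more good aspects than problems, "X" = more problems, "√ / X" = balanced."""
    positives = sum(1 for _, positive in findings if positive)
    negatives = len(findings) - positives
    if positives > negatives:
        return "√"
    if negatives > positives:
        return "X"
    return "√ / X"

print(verdict(findings))  # X: the problems outweigh the good aspects
```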
5. THE STUDY
The websites chosen for the usability evaluation were well-known sites, selected on basis of their general popularity (see, e.g., http://www.google.com/adplanner/static/top1000/#, http://www.alexa.com/topsites):
• Amazon (shopping), www.amazon.com
• eBay (shopping), www.ebay.com
• TripAdvisor (hotel and vacation reviews), www.tripadvisor.com
• LinkedIn (networking tool), www.linkedin.com
• YouTube (video sharing), www.youtube.com
• Yelp (reviews and recommendations for local businesses), www.yelp.com
• Digg (social news website), digg.com
• IMDb (movie and serial reviews), www.imdb.com
• NowPublic (social news website), www.nowpublic.com
• AppStore (Apple's store for iPhone applications), www.apple.com/iphone/apps-for-iphone/

The evaluations were performed by four evaluators: one senior HCI expert (more than 10 years of experience), two experts (more than 2 years of experience) and one non-expert (less than 1 year of experience). The expert evaluation focused on how the reputation information was presented on the selected sites.

6. ANALYSIS OF THE USABILITY EVALUATIONS
Table 1 summarises the outcomes of the usability evaluations against Nielsen's heuristics. We will now present the findings of the expert evaluations on the reputation information website by website, focusing on the main findings. The findings are marked either with (–) (negative) or (+) (positive).

Table 1. Overall outcomes of the heuristic evaluation. The symbol √ was used when there were more good aspects than problems, the symbol X when there were more problems than good aspects, and √ / X when the number of problems and good aspects was balanced.

Amazon
(–) The different pieces of information are presented similarly, as if having the same value (e.g. product details and important information). This makes retrieving information for the decision-making a hard task (Figure 3).
Figure 3. Different types of information similarly presented
(+) The website presents the rating information through a chart with detailed information about how many users rated the item and how, as well as direct access to their reviews.
(+) Information about the seller is presented clearly.
(+) Users can access the list of top reviewers, i.e. the ones with the most useful reviews.

eBay
(–) Information about the overall purpose of the website is hard to find, even when registering (statement of purpose).
(–) The user cannot sort other users' reviews about a seller by any category other than "date", the default category. In case a seller has both positive and negative reviews, the user will have to scroll through all the reviews to find the negative ones. This might be very time-consuming (Figure 4).
(+) Both the ratings about the seller and the way the feedback is calculated are clearly presented to the user.
Figure 4. Sort reviews

TripAdvisor
(–) The visualisation of the rating system is ambiguous. A novice user might be confused by the two different ways of showing the ratings: 1) thumbs and 2) circles. The actual meaning of the symbols becomes clear only by the time the user writes a review: thumbs are associated with a separate question – "would you recommend this to a friend?" (Figure 5); circles represent the rating.
Figure 5. Confusing information
(–) The number of reviews is not consistent. The addition of all the ratings provides a number which is different from the one presented along with the written reviews, and still different from the one obtained when the user clicks the "clear filters" option. This might jeopardise trust in the reputation system.
(–) The information provided is not clear. For example, the rating information provided for hotels consists of three different ratings (Figure 6).
(–) The different elements of information are presented as having the same value, and without a clear structure to guide the user, which makes retrieving information a time-consuming task.
(–) The target of the reputation and the reputation elements were not easily distinguishable.
(+) While reading the reviews, the user can see the reviewer profile with just a mouse hover, which provides easy access to the information, prevents the disruption of the task and adds quality to the user experience.
Figure 6. Confusing rating information

LinkedIn
(–) The UI does not provide clear guidance on what the goals of the website are, how it should be used, and what the order of importance of the content is. This information is hidden behind an unnoticeable link, which makes it hard for the novice user to detect.
(+) The users' own recommendations are listed, enabling comparison between recommendations and adding transparency to the system.

YouTube
(–) After having rated a video as negative or positive, the user is not allowed to undo the action. This adds unreliability to the system, especially as it is possible to click on the rating accidentally.
(–) The user is not allowed to delete a video previously rated as "Liked" from the "liked videos" view (Figure 7). The only actions allowed are adding it to a playlist or to a list of favorites. In order to delete a video previously rated as "liked", the user has to perform too many steps: first, the user has to open the "liked videos" view, add the selected video to a playlist or to favorites, and only then remove the video. This is time-consuming and counter-intuitive, as the user has to perform a contradictory operation – "add to favorites" – to the one they actually intend to perform.
(–) The system does not provide a confirmation or an option to undo the action of reporting another user. This might generate unreliability in the reputation information, as users can report and be reported by accident.
(+) There is specific statistical information about the history, popularity and spread of the videos, which contributes to the transparency of the website.
(+) The information provided under "views" shows a detailed pictorial and statistical representation of activity frequency over time and per location.
Figure 7. No delete option

Yelp
(–) The users have access to the amount of reviews for a specific place but cannot see the relationship between other reviewed places. Even if all the reviews are positive and the place has a certain number of stars, this does not provide information about its quality when compared to other places in the same area.
(+) After rating a review as useful, funny or cool, the user is provided with feedback and the number of ratings is immediately updated, which evokes reliability in the system.
(+) The system provides the option to undo the ratings given to other users' reviews, which allows the user to correct potential mistakes and adds more trustworthiness to the ratings.
(+) The website provides a graphical and clear explanation of ratings and ratings over time. It clearly details how the overall ratings are obtained.
(+) The basic review contains plenty of information about the reviewers' reputation, making the relevant information immediately available to the user; the reputation of the review itself can also be seen.
(+) By presenting diverse information about the reviewed target and the reviewer community on the first page, the website guides novice users and keeps their interest in exploring the website.

Digg
(–) The main page does not provide information about what "Digg" is or how it works. The lack of directions might make the novice user confused about the purpose of the website.
(–) Advertisements were presented as having the same value as the information the user was looking for.
(–) The system does not allow the user to delete a previously provided comment.
(–) The scale of the "Top" lists is ambiguous. The user is not able to distinguish the timeframe of the "tops" and might get confused.
(–) When clicking the icon corresponding to the number of "diggs", the user is directed to a page presenting the comments. This is counter-intuitive, since the user expects to see a list related to the number of "diggs" instead of the comments regarding the news. The "how many diggs" icon is the most prominent element of the page, hence it should provide the expected information.
(+) After digging an article, the system provides good feedback and updates the results immediately, which contributes to the overall reliability of the system.
(+) The site enables users to evaluate one another's comments, which might contribute to establishing or strengthening the community feeling.

IMDb
(–) If the user rates the same movie more than once, the system provides a feedback message saying the vote was counted, which might be misleading.
(–) The user profile, accessed through the username link, only contains a list of the reviews that the user has made. The more informative user profile is accessible through an additional link on the page presenting the user's reviews. This jeopardises the system's consistency.
(–) The reputation information and the links to reputation information are presented among the general information about the movie. The information is mainly presented in the form of text. The first link on the page dedicated to the reviews is blended among the general textual information and links, which requires an extra effort from the user in order to find relevant information and to differentiate between the different types of information provided.
(–) The user cannot distinguish the relationship between the popularity and the rating of the movies. The info button on MOVIEmeter (question mark) gives some additional information but does not resolve the issue, as users may have a hard time understanding how the percentages are formed and how to interpret them.
(+) The website provides detailed user ratings and allows the user to access information about the voting trends for specific categories.
(+) The website uses a weighted average for unbiased ratings, which eliminates ratings that are only intended to change the overall rating in someone's benefit, adding reliability to the reputation information.
(+) The website also provides links to external reviews, which contributes to the feeling of transparency.

NowPublic
(–) Information elements and advertisements are hard to tell apart. The small boxes of information and advertisements create a cluttered look for the UI, and the vertical page structure does not support a natural flow of information retrieval.
(–) The "recommend" icon does not provide clear information about whether the user is recommending the other member or their posts. This might affect the results in case users do not understand what is being recommended (Figure 8).
Figure 8. Misleading icon
(+) The website provides a guidance pop-up window for novice users as a starting page, which gives immediate information about the purpose and usage of the website.
(+) The website provides detailed and clear information about getting promoted by points, and an explanation of the meaning of the user ranking.
(+) The members are given points according to different categories of posts. This motivates contribution, as it might be seen as recognition.
(+) The ranking status of the members, based on their individual points, is presented visually and in a clear way.

AppStore
(–) An option to read more information in the reviews – expand text – is provided, but the user cannot go back to the condensed text, which can make the page cluttered.
(–) The site does not offer access to more details about the star ratings or to all customer reviews unless the user uses the iTunes software to view applications.
(–) The user has no information about the way the ratings are formed, except for the fact that they are based on the reviews.
(+) The user can easily sort the reviews by several categories that are provided in the left column. This adds efficiency and transparency to the presented information, as the user is able to easily find both positive and negative reviews.
(+) The website provides a list of accessories rated and suggested by staff, which makes it easy for a first-time user to navigate through what is available in the store.
(+) When the user clicks on a product, all information is provided in three sections: 1) a description with snapshots, 2) ratings and reviews by users, and 3) a Q&A section, with questions asked and answered by other users. This provides a complete and detailed overview of the products, contributing to transparency.
(+) The website offers visibility for the developer, which may enhance both the willingness to contribute and the trustworthiness of the contributions.

7. DISCUSSION
A general problem found in most of the analysed websites was a cluttered UI and the fact that all the available information was presented in a similar fashion, as if having the same value, which may cause confusion and mislead the user: the nudge to look at the information that is relevant is missing. The elements available are presented in a way that does not guide the users' attention to the relevant information while making decisions. Another main problem was related to the lack of interrelation between the different reputation elements. This has a negative effect on the credibility of the information provided by these elements. It may also affect the users' willingness to contribute, as it is unclear how the contribution will affect the offering.

On basis of the usability evaluations, the studied websites have general usability problems that are big enough to jeopardise the use of the sites altogether. Moreover, when it comes to how reputation information is currently offered, the level of usability can be described as remarkably low. Improvements in distinguishing and understanding the different types of information available, and visual nudges for how they should be utilised by the user in the decision-making process, can easily be suggested:
• Clearly distinguish between distinct sources of information: the service provider, the reputation system, advertisements, other users and what is actually meaningful – highlight the relevant information and guide the user's task-flow;
• Tie together the different instances of reputation information to form a coherent set of information where the different elements support each other;
• Promote transparency: clearly show where the reputation information comes from and how it is formed.

There are also social aspects related to understanding, or accepting, the information. The results of our earlier studies and those by others have indicated that reputation information available in textual format, in the form of peer reviews in writing, has a big importance in online decision-making [9][8][11][16]. Although the quality of the reviews is sometimes seen as questionable, as already discussed, reading peer reviews or comments is undeniably the most commonly reported element used to make decisions online, when available. However, a closer look may reveal that users report reviews as the main information source more readily than visual impressions, as users may not be able to reflect on their visual impressions, which not only are hard to put into words but are also to a great extent formed automatically and unconsciously [10]. Because of this, users may over-report the importance of the textual information and under-report the importance of the visual impressions, as they may not be fully aware of them.

Some ways to take all the above-mentioned aspects into account and to enhance the utilisation of all reputation elements conjointly are likely to include creating visually prominent, real-time links between the users. When users are exposed to an appropriate amount of social data about one another, it tends to increase the activity of giving contributions [6]. The user profiles should also be presented in a visually attractive and motivational way in order to promote participation and contributions [20]. By visual nudges – making the relevant information visually prominent – users can be helped towards more sound and informed decisions in risky online situations.
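Returning to the three recommendations listed above, one minimal and purely hypothetical way of tying scattered reputation elements together is to collect them, with their provenance, into a single summary structure that a UI could render as one visually prominent block. The class and field names below are illustrative assumptions, not a design proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class ReputationSummary:
    """One coherent bundle of reputation elements for a single item,
    kept separate from advertisements and editorial content."""
    item: str
    average_rating: float   # aggregated score
    rating_count: int       # the scale behind the score
    review_count: int       # textual reputation information
    score_source: str       # transparency: who computed the score and how

    def headline(self) -> str:
        # A single line a UI could render prominently as the "nudge".
        return (f"{self.item}: {self.average_rating:.1f}/5 "
                f"from {self.rating_count} ratings, {self.review_count} reviews "
                f"(score by {self.score_source})")

summary = ReputationSummary("Example widget", 4.2, 128, 37, "community ratings, plain average")
print(summary.headline())
```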
8. REFERENCES
[1] Alsudani, F. and Casey, M. 2009. The effect of aesthetics on web credibility. In Proceedings of the 2009 British Computer Society Conference on Human-Computer Interaction (Cambridge, United Kingdom, September 01-05, 2009). British Computer Society, Swinton, UK, 512-519.
[2] Baauw, E., Bekker, M. M., and Markopoulos, P. 2006. Assessing the applicability of the structured expert evaluation method (SEEM) for a wider age group. In Proceedings of the 2006 Conference on Interaction Design and Children (Tampere, Finland, June 07-09, 2006). IDC '06. ACM, New York, NY, 73-80.
[3] Bhattacharjee, R. and Goel, A. 2005. Avoiding ballot stuffing in eBay-like reputation systems. In Proceedings of the 2005 ACM SIGCOMM Workshop on Economics of Peer-to-Peer Systems (Philadelphia, Pennsylvania, USA, August 22, 2005). P2PECON '05. ACM, New York, NY, 133-137.
[4] Cheung, M.Y., Luo, C., Sia, C.L., Chen, H. Credibility of Electronic Word-of-Mouth: Informational and Normative Determinants of On-line Consumer Recommendations. International Journal of Electronic Commerce 13, 4 (2009), 9-38.
[5] Egger, F. N. Affective Design of E-Commerce User Interfaces: How to Maximize Perceived Trustworthiness? In Proceedings of the International Conference on Affective Human Factors Design. London: Academic Press (2001), 317-324.
[6] Harper, F. M. The Impact of Social Design on User Contributions to Online Communities. Doctoral Thesis, University of Minnesota, 2009. UMI Order Number: AAI3358616.
[7] Hartmann, J., Sutcliffe, A., Angeli, A. D. Towards a Theory of User Judgment of Aesthetics and User Interface Quality. ACM Transactions on Computer-Human Interaction (TOCHI) 15, 4, Article 15 (2008). ACM, New York, NY, USA.
[8] Jurca, R., Garcin, F., Talwar, A., and Faltings, B. 2010. Reporting incentives and biases in online review forums. ACM Transactions on the Web 4, 2 (Apr. 2010), 1-27.
[9] Karvonen, K., Kilinkaridis, T., Immonen, O. WidSets: A Usability Study of Widget Sharing. In T. Gross et al. (Eds.): INTERACT 2009 (12th IFIP TC13 Conference on Human-Computer Interaction, August 24-28, 2009, Uppsala, Sweden), Part II, LNCS 5727, pp. 461-464, 2009.
[10] Lindgaard, G., Fernandes, G., Dudek, C., Brown, J. Attention Web Designers: You Have 50 Milliseconds to Make a Good First Impression! Behaviour & Information Technology 25, 2 (2006), 115-126.
[11] Park, D.H., Lee, J., Han, I. The Effect of On-line Consumer Reviews on Consumer Purchasing Intentions. International Journal of Electronic Commerce 11, 4 (2007), 125-148.
[12] Nielsen, J. Designing Web Usability: The Practice of Simplicity. New Riders Publishing, Indianapolis, 1999.
[13] Nielsen, J. Usability Engineering. Academic Press, 1993.
[14] Robins, D., Holmes, J. Aesthetics and Credibility in Web Site Design. Information Processing and Management 44, 1 (2008), 386-399.
[15] Sears, A. Heuristic Walkthroughs: Finding the Problems Without the Noise. International Journal of Human-Computer Interaction 9, 3 (1997), 213-234.
[16] Shibasaki, S., Nunes, S., Immonen, O., Karvonen, K. Understanding Online Reputation Information (unpublished manuscript under submission).
[17] Sinha, R., Swearingen, K. The Role of Transparency in Recommender Systems. In CHI '02 Extended Abstracts on Human Factors in Computing Systems. ACM Press (2002).
[18] Suh, B., Chi, E. H., Kittur, A., and Pendleton, B. A. 2008. Lifting the veil: improving accountability and social transparency in Wikipedia with WikiDashboard. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05-10, 2008). CHI '08. ACM, New York, NY, 1037-1040.
[19] Thaler, R. H., Sunstein, C. R. Nudge: Improving Decisions About Health, Wealth and Happiness. Yale University Press (2008).
[20] Vassileva, J. and Sun, L. 2007. An improved design and a case study of a social visualization encouraging participation in online communities. In Proceedings of the 13th International Conference on Groupware: Design, Implementation, and Use (Bariloche, Argentina, September 16-20, 2007). J. M. Haake, S. F. Ochoa, and A. Cechich, Eds. Lecture Notes in Computer Science, Springer-Verlag, Berlin, Heidelberg, 72-86.
[21] Ye, Q., Law, R., Gu, B. The Impact of Online User Reviews on Hotel Room Sales. International Journal of Hospitality Management 28 (2009), 180-183.
[22] Yu, R. Hearing Online Critiques. USA Today, 22.3.2010. http://www.usatoday.com/MONEY/usaedition/2010-03-23-businesstravel23_ST_U.htm