<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Investigating the Usefulness of Methods for Evaluating User Experience of Social Media Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amela Karahasanović</string-name>
          <email>amela@sintef.no</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marianna Obrist</string-name>
          <email>marianna.obrist@sbg.ac.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ICT&amp;S Center, University of Salzburg</institution>
          ,
          <addr-line>Sigmund-Haffner-Gasse 18, 5020 Salzburg, Austria, +43 662 8044 4814</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SINTEF ICT and University of Oslo</institution>
          ,
          <addr-line>Postbox 124, Blindern, 0314 Oslo, Norway, +47 48 10 88 95</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>The usage and importance of social media or Web 2.0 applications such as YouTube, Flickr, and Facebook has increased rapidly in recent years. These applications build on user communities, provide networking opportunities for their members, and are strongly related to audio-visual user-generated content (UGC). Providing users a good experience is a central success factor for such applications. Beyond standard usability principles, the much broader concept of user experience (UX), including aspects such as fun, enjoyment, emotion, sociability, and other factors, has become relevant in the design of interactive systems. However, little is known about the usefulness of different UX evaluation methods in the context of social media applications. We need to understand what new requirements arise when applying UX evaluation methods to these applications, and how to choose which of the existing methods are suitable for capturing different aspects of UX. This paper reports results and lessons learned on the usefulness of seven UX evaluation methods that were applied to evaluate ten different applications supporting non-professional users in sharing and co-creating user-generated content. The results may be useful for practitioners and researchers developing social media applications when planning UX evaluation studies.</p>
      </abstract>
      <kwd-group>
        <kwd>User experience</kwd>
        <kwd>Evaluation methods</kwd>
        <kwd>Social media</kwd>
        <kwd>Web 2.0</kwd>
        <kwd>Communities</kwd>
        <kwd>User generated content</kwd>
        <kwd>Audio-Visual content</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        For many applications, such as social media and social
network sites, applications for sharing and co-creating
audio-visual content, and, for instance, games, it is
important that people enjoy using them. Consequently, providing
people a good experience, and evaluating their UX, is
becoming increasingly essential [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
      <p>
        Hassenzahl [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] states that a “good UX is the consequence
of fulfilling the human needs for autonomy, competency,
stimulation (self - oriented), relatedness, and popularity
(others - oriented) through interacting with the product or
service (i.e. hedonic quality)”. Pragmatic quality, such as
the usability of a system, is also contributing to a positive
experience, but only through facilitating the pursuit of
meaningful hedonic needs. The most important
characteristics of UX are its normative nature
(differentiating between a positive, desired experience and
a negative, undesired experience that a user can have when
interacting with an application) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] as well as its dynamic
nature [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        Besides the lack of a common understanding of UX, there is
still a lack of research on UX evaluation methods in general
(see for instance an overview on UXEM in [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]) and on
their usefulness in particular. Research papers and
textbooks such as [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] provide surveys of different
evaluation methods according to their appropriateness in
different evaluation phases, their objectivity, reactivity and
needed resources. However, little is known about
the usefulness of these methods for evaluating social media
applications. This paper presents results and lessons
learned of a case study we conducted to investigate the
usefulness of both traditional and new methods for
evaluating UX.
      </p>
      <p>This work was carried out in the framework of the
European research project CITIZEN MEDIA
(http://www.ist-citizenmedia.org/) which aimed to develop
social media applications supporting non-professional users
in sharing and co-creating user-generated content (UGC).</p>
      <p>Several applications have been developed and evaluated at
three testbeds, namely in Germany, Norway and Austria.</p>
      <p>
        The evaluation activities for all three testbeds were guided
by a common evaluation framework consisting of
preselected UX factors (e.g. fun/enjoyment, motivation,
emotion, sociability, as well as usability) and a set of
evaluation methods considered as relevant for the context
of the project (see [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]). Well known methods were
combined or adapted in order to capture UX.
      </p>
    </sec>
    <sec id="sec-2">
      <title>BACKGROUND</title>
      <p>
        Over the years many usability evaluation methods have
been proposed and evaluated. The research of Gray and
Salzman [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], Hartson et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and Blandford et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
established a basis for critical evaluation and selection of
usability evaluation methods.
      </p>
      <p>
        Blandford et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] propose a comprehensive list of ten
criteria for evaluating UEMs. Reliability, also called
internal validity, is the extent to which different analyses of
the same system, using the same UEM, yield the same
insight. External validity is the ability to apply the findings
in a real-world context. Thoroughness is the proportion of
real problems identified by a method. Effectiveness is the
product of reliability and thoroughness. Productivity is the
number of problems a UEM identifies. The practicalities
criterion is concerned with what is needed to integrate a
method within design practice. The analyst activities
criterion describes what analysts do when applying a UEM.
      </p>
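      <p>Several of these criteria are quantitative. The following sketch is our own illustration (not from Blandford et al.) of how thoroughness and effectiveness compose; the problem identifiers and the reliability value are hypothetical.</p>

```python
# Illustration of the quantitative UEM criteria described above.
# Problem identifiers and the reliability value are hypothetical.

def thoroughness(problems_found, real_problems):
    """Proportion of the real problems that the method identified."""
    found_real = set(problems_found).intersection(real_problems)
    return len(found_real) / len(real_problems)

def effectiveness(reliability, thoroughness_value):
    """Effectiveness is the product of reliability and thoroughness."""
    return reliability * thoroughness_value

# Hypothetical study: 10 real problems; the UEM reports 7 findings,
# 6 of which are real; analysts agree on 75% of their findings.
real = [f"P{i}" for i in range(10)]
found = ["P0", "P1", "P2", "P3", "P4", "P5", "X1"]  # X1: false positive
t = thoroughness(found, real)   # 0.6
e = effectiveness(0.75, t)      # 0.45
```

Productivity, in the same vocabulary, would simply be the number of findings a method reports, regardless of whether they are real.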
      <p>Persuasive power is concerned with the ability of an
analyst working with a UEM to persuade developers to
change the system. Downstream utility is usefulness of the
findings in informing design. Scope describes what kind of
problems a method is useful and not useful for finding.</p>
      <p>
        When comparing usability engineering methods, Holzinger
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] considers the following criteria: applicability in phase,
required time, needed users, required evaluators, required
equipment, required expertise, and intrusiveness.
      </p>
      <p>
        Recently there has been growing interest in UX evaluation
methods [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. Several workshops have been organised to
focus on the methods, techniques, and tools for evaluating
UX such as CHI 2008 [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], CHI 2009 [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], INTERACT
2009 [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] and COST294-MAUSE workshops ([
        <xref ref-type="bibr" rid="ref16">16</xref>
        ][
        <xref ref-type="bibr" rid="ref29">29</xref>
        ])
and special issues of HCI journals (e.g. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ][
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]).
      </p>
      <p>
        Väänänen-Vainio-Mattila et al. [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] identified a set of
requirements for practical UX evaluation methods.
      </p>
      <p>
        Requirements for UX evaluation in an industrial context
have been identified by Ketola et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Roto and
colleagues investigated 30 UX evaluation methods during a
SIG session at the CHI’09 conference ([
        <xref ref-type="bibr" rid="ref25">25</xref>
        ][
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]). They
found differences in requirements on UX evaluation
methods in academia and industry. Industry needs methods
that are lightweight, fast, and relatively simple to use.
      </p>
      <p>Academia emphasizes the importance of scientific rigor in
the methods. Common requirements for industry and
academia are: including experimental aspects and allowing
repeatable and comparative studies in an iterative manner.</p>
      <p>
        Although a majority of UX evaluation methods originates
from usability [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], knowledge about UEMs is not completely
transferable to UX. A clear understanding of the
differences between usability and UX evaluation methods
and measurement models is still missing. There is a need
for systematic knowledge on UX methods. Furthermore,
there is a need for UX evaluation methods targeting
community oriented applications [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>By investigating the usefulness of seven methods used for
evaluating user experience in the context of applications
for sharing and co-creating user-generated content, this
paper aims to increase our knowledge of UX methods.</p>
    </sec>
    <sec id="sec-3">
      <title>UX EVALUATION FRAMEWORK APPLIED WITHIN THE PROJECT</title>
      <p>
        We have developed a common framework for evaluating
and addressing users’ experiences. Based on the previous
work and the needs of the project we have identified eight
central factors considered as relevant for investigating
users’ experiences with audio-visual networked
applications. UX is investigated from an individual
perspective, and is further influenced by the social context
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] of the evaluated applications [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Thus, we included
co-experience (UX6) and sociability (UX7) as relevant
factors addressing these social influences on the individual
experience in our UX evaluation framework [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. The
co-experience approach [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] was considered as relevant for the
testbeds – urban and rural communities – within the
CITIZEN MEDIA project, as it focuses on the sharing of
an experience and provides the basis for building
relationships. From a methodological point of view we
tried to investigate UX as social by applying group-based
evaluation methods, which still need to be extended in the
future [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Table 1 lists these factors together with the main
questions (further sub-questions were defined) they
address. These UX factors were applied to collect user
feedback from all three testbeds and to detect common UX
problems or demands (see resulted UX patterns in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]).
      </p>
      <p>
        [Table 1, flattened during extraction; recoverable entries include UX1 Fun/enjoyment, UX2 Emotion, UX3 Motivation, and UX4 User engagement, each with its main question.] A detailed description of the UX
factors and evaluation methods we used can be found in
        [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        [Table 2, flattened during extraction; the method-name column was partly lost. Recoverable method descriptions follow.]
      </p>
      <p>
        User studies: (1) user study with think-aloud and eye tracking [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]; IPTV (Internet Protocol TV); early prototype. (2) User study with bio-physiological measurements; IPTV; working product (after 3 months of usage).
      </p>
      <p>
        Group interviews with a facilitator [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]: (1) less structured than usual focus groups; combined with a short questionnaire; IPTV; early in the process [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. (2) Focus group with a free exploration session integrated into a workshop; two web-based applications; during the design phase.
      </p>
      <p>
        ESM implemented as a part of the application [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]; answering by clicking on smiley faces; web-based application for collaborative storytelling; non-public alpha version of the application.
      </p>
      <p>
        Web-based survey with closed questions [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]; web-based application for sharing user-generated content (video); shortly after the application went online; use case (content-based communication); both early and later in the evaluation.
      </p>
      <p>
        Group-based expert walkthrough: (1) scenario-based usability inspection method [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]; web-based application for sharing user-generated content (video); after the application went online. (2) A variation of the method combining elements from focus groups and usability evaluation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]; web-based application for sharing music; prior to redesign; in combination with focus group elements used for evaluation of the beta version. (3) In combination with focus group elements; hands-on sessions also included; web-based application for collaborative storytelling; non-public alpha version of the application. (4) In combination with discussion and free exploration; unified Electronic Program Guide [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
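      <p>As one concrete illustration of the methods listed above, an in-application ESM prompt answered by clicking smiley faces could look roughly like the following sketch. The class, scale, and field names are our own hypothetical choices; the actual implementation is described in [21] and is not reproduced here.</p>

```python
# Hypothetical sketch of an in-application ESM prompt where users
# answer by clicking smiley faces; names and scale are illustrative.
import datetime

SMILEY_SCALE = {":-(": 1, ":-|": 2, ":-)": 3, ":-D": 4}

class EsmPrompt:
    """Collects one experience sample per smiley click."""

    def __init__(self, question):
        self.question = question
        self.samples = []

    def click(self, smiley, user_id):
        """Record a sample and return its numeric score."""
        score = SMILEY_SCALE[smiley]
        self.samples.append({
            "user": user_id,
            "score": score,
            "time": datetime.datetime.now().isoformat(),
        })
        return score

prompt = EsmPrompt("How much fun was writing this story together?")
prompt.click(":-D", user_id="u1")  # records a score of 4
```

Embedding the prompt in the application itself, rather than sending a separate questionnaire, is what lets the sample be captured in the moment of use.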
      <p>
        Extended heuristic evaluation: a variation of the standard heuristic evaluation in which the test leader moderated the evaluation and provided additional explanations; web-based application for sharing UGC (photos and texts) on a city map; evaluation of the paper prototype. Interviews with application domain experts, preceded by hands-on sessions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]; web-based application for sharing user-generated content (video); finished product.
      </p>
    </sec>
    <sec id="sec-4">
      <title>OUR APPROACH</title>
      <p>
        As a step towards building and consolidating knowledge on
UX methods, we wanted to explore the usefulness of the
UX methods in a real-life context from the perspective of
the researchers and developers working in the project. We
thus focused our research on the scope and downstream
utility. In the context of this research, scope describes what
kind of user experience factors a method is good and not
good in finding. Both downstream utility and scope are
subjectively evaluated by the researchers in the project. To
collect the data we developed two open-ended
questionnaires. The questionnaires evolved through several
iterations for optimal clarity and accuracy. We sent the
survey to eight researchers involved in the evaluation
activities. Six of the researchers were experts in HCI and
usability, and two were master students focusing their
studies on HCI and user experience research. All of them
had relevant methodological expertise and were provided
training if needed.
      </p>
      <p>The first questionnaire collected the following information
about the evaluation method: description of the method, the
resources used on data collection and analysis, description
of the amount and the type of the collected data, and the
rationale for using this method. The second questionnaire
collected background information about a researcher, the
researcher’s general opinion on the method, the
researcher’s experience with the method in this project,
including usefulness and drawbacks of the method and
lessons learned. The analysis was done by one researcher.</p>
      <p>
        To reduce the threat to validity that might be introduced by
this, and to facilitate the analysis of the qualitative data, we
used the coding process described by Seaman [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. The
collected answers were categorized according to the
criteria described above.
      </p>
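      <p>The categorization step can be pictured with a minimal sketch; the criteria keywords and answers below are hypothetical, and Seaman's coding process involves considerably more, such as iterative refinement of the codes.</p>

```python
# Minimal sketch of categorizing open-ended questionnaire answers
# under predefined criteria (scope, downstream utility).
# Keywords and answers are hypothetical illustrations.
from collections import defaultdict

CRITERIA = {
    "scope": ["captures", "factor", "useful for finding"],
    "downstream_utility": ["redesign", "informed design", "changed"],
}

def categorize(answers):
    """Assign each answer to every criterion whose keywords it mentions."""
    by_criterion = defaultdict(list)
    for answer in answers:
        text = answer.lower()
        for criterion, keywords in CRITERIA.items():
            if any(kw in text for kw in keywords):
                by_criterion[criterion].append(answer)
    return dict(by_criterion)

groups = categorize([
    "The method captures emotion well as a UX factor.",
    "Feedback from the walkthrough led to a complete redesign.",
])
```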
    </sec>
    <sec id="sec-5">
      <title>USEFULNESS OF THE EVALUATION METHODS AND RECOMMENDATIONS</title>
      <p>This section reports our findings on the usefulness of
the evaluation methods used for capturing UX and
provides relevant recommendations. Table 3 gives an
overview of the methods we used for capturing UX factors.</p>
      <p>When describing the usefulness of a method for capturing
different UX factors, the researchers also reflected on the
costs and benefits of a method.
Lab-based user studies with bio-physiological
measurements were reported to be useful for capturing fun,
emotions, and usability, particularly when reaching the
users in a real life environment was difficult. However, the
method is complex in terms of data collection and analysis.</p>
      <p>Hands-on sessions preceded interviews, were integrated in
workshops with focus groups, and used in an adapted
version of group-based expert walkthrough. The
importance of these sessions for capturing UX was
emphasised by all researchers. Common experience in
exploring applications made it easier for the participants to
talk about non-functional aspects of the applications,
particularly about enjoyment, emotions, motivation,
co-experience, and sociability. One could compare the enjoyment
and emotions of users when using different functions of the
old and the new version of an application. When evaluating
another application, the participants worked together on a
common collaborative task (writing a story together). This
common experience made it easier to discuss feelings
related to use of these applications such as emotional
response when a co-author has deleted a paragraph. A
common task has been very useful for initiating discussions
on sociability and co-experience.</p>
      <p>Recommendation 1: Encourage collaboration.</p>
      <p>Investigating motivation, user engagement, user
involvement, co-experience and sociability at the level of
communities and families is essential for applications
aiming to support sharing and co-creation of UGC. Both
tasks and evaluation methods should reflect this priority.</p>
      <p>Extending well known methods such as interviews, focus
groups, and group-based expert walkthroughs with
hands-on sessions and usage of collaborative tasks has
been very useful for capturing these factors.</p>
      <p>The researchers also reflected on the importance of
different UX factors in the different project phases and in
the relation to the availability of other UX evaluation
methods. For example, a researcher said: “Especially in this
early phase in the evaluation process, issues concerning
motivation need to be investigated in detail. The online
questionnaire was valuable in doing so.... Since no logging
data was available at this point of time in the evaluation
process, it was good to receive any information about the
usage of the platform.”</p>
      <p>Recommendation 2: Start to evaluate UX as early as
possible. Early feedback is very valuable to the
developers. In particular, feedback on motivation,
emotions, and anticipated engagement is valuable.</p>
      <p>However, one should adapt both the methods and the
measurement to the evaluation phase. As the project
progresses, one can move towards finer granularity
evaluation. For example, one can measure the emotions
related to a general idea of a tool for collaborative
writing early in a project and emotions related to a
particular function of the tool later in the project.</p>
      <p>Not surprisingly, usability was easiest to measure, as it is
the most standardized factor. When describing the
usefulness of a method for capturing usability, our
respondents used the term “very useful” without exception.</p>
      <p>On the other hand, fun, emotions and co-experience were
reported as difficult to measure. Furthermore, they pointed
out the centrality of usability and its effects on other user
experience factors: “In my opinion, a usability test is an
essential part of a user experience evaluation, because if
the usability of an application is bad, this has further
effects on other UX factors like motivation or user
engagement among the users.”
Recommendation 3: Evaluate usability and its
influence on UX. Evaluating usability together with
other UX factors is beneficial particularly early in the
project. Other factors often might be affected by usability
(e.g. motivation). Capturing several factors together thus
makes it easier to understand the results and to organise
the studies. On the other hand, one should not explore
too many factors in the same study.</p>
      <p>A summary of downstream utility (Table 4) is based on
self-reported usage of the evaluation results for the further
design and the development of the application. All the
methods have been reported as useful for the subsequent
project phases. When describing the usefulness of the UX
feedback collected by a method, the researchers always
related usefulness to the complexity of the analysis (simple
analysis was appreciated), the phase of the project (early
feedback was appreciated), and the necessary effort. For
example, feedback from expert interviews was directly
used to inform design, but the researcher reported that a lot
of interviews were needed in order to capture feedback
from different stakeholders.
Collected feedback influenced design by capturing users’
past behaviour and trends, identifying specific problems,
identifying solutions, providing better user experience,
providing new solutions or ideas for improvements, and
providing rationale and ideas for complete redesign. In one
case, a negative user experience collected by an expert
group walkthrough led to a complete redesign of the
application. In particular, feedback on motivation and
emotions had a great persuasive effect on the design team.</p>
      <p>The participants stated clearly that they could not see the
purpose of an application for collaborative writing and that
writing is something very private for them. For evaluating
the next version of the same application, ESM was used
together with group-based expert walkthrough for
collecting feedback on enjoyment, emotions, and
sociability. The feedback was very positive, and only some
minor changes of the applications were proposed.</p>
      <p>Recommendation 4: Evaluation should be playful and
provide added value for the participants. One cannot
overemphasize the importance of providing a safe,
comfortable and playful evaluation environment, and
giving ‘something extra’ to the study participants. The
opportunity to learn and try something completely new
and to affect the development of new applications is not
only very stimulating and rewarding for the communities
of users and experts participating in the evaluation, but
also positively affects usefulness of the evaluation
methods. When working with communities it is very
important to build a trustful relationship for ensuring a
successful long term relationship.</p>
      <p>Although they are commonplace in usability evaluation,
simple recommendations such as “Conduct evaluation in
nice and familiar environment”, “Prepare playful tasks”,
“Use original and playful ways for studies promotion”,
were repeatedly reported by the researchers as very
important for the usefulness of the methods used.</p>
      <p>Recommendation 5: Prepare for diversity. In-depth
knowledge of your communities (the different groups of
users and non-users) is essential for successful data
collection. Different versions of questionnaires and focus
group guidelines should be prepared for different user
groups (e.g., professional cabaret artists, amateur artists,
and theatres) and evaluators/moderators should be able
to speak ‘different languages’ (e.g., to talk to children,
teenagers, and elderly people) at the same time.</p>
      <p>When describing the usefulness of the evaluation results,
the researchers emphasised the importance of good knowledge of
the communities and the relationships among them. Questionnaires
tailored to different communities were more useful
than general ones. The researchers also reported that good
collaboration with the designer and developer teams was
important for the uptake of the evaluation results. Good
knowledge of the application, including ideas of the
designers that might not yet be implemented or only presented in
a paper prototype, was very useful, as was the ability to
clearly and quickly report the results on user experience.</p>
      <p>Quotations typical of users’ emotions and
motivations were highly appreciated by the designers and
developers.</p>
      <p>Recommendation 6: Be best friends with the
developer. Good knowledge of the application under
development is very important for the success of the
evaluation. Evaluators/moderators should be able to
explain ideas behind paper prototypes and screenshots.</p>
      <p>Communicating the results of the evaluation clearly and
in formats understandable to the developers is extremely
important for the uptake of the evaluation results.</p>
    </sec>
    <sec id="sec-6">
      <title>CONCLUSIONS AND FUTURE WORK</title>
      <p>We conducted a survey among the researchers involved in
the evaluation activities of the CITIZEN MEDIA research
project, which developed numerous applications supporting
non-professional users in sharing and co-creating
user-generated content. Combinations of well-known evaluation
methods and home-grown adaptations of them were used, as no
clearly defined UX evaluation methods fitting the needs of
the project were available yet. Our results indicate that
group-based evaluation methods (group-based expert
walkthrough and focus groups) were useful for measuring a
broad spectrum of the pre-defined UX factors.</p>
      <p>Some factors, such as emotions, fun, and co-experience,
were difficult to measure, and there is an urgent need for
the development of methods capturing them. Furthermore,
methods addressing individual experience have to be extended
to capture the shared experience of a community of users.</p>
      <p>
        Collaborative, playful methods and collaborative tasks
supported well the move from individual user evaluation
methods to community evaluation methods (e.g. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]).
      </p>
      <p>Within this paper, we summarized our results and lessons
learned from the evaluation activities in several
recommendations, which might be useful for practitioners
working in the area of UX in general, and UX of social
media applications in particular. Furthermore, our
experience might be a useful input for the ongoing
discussions on UX evaluation methods and measurement
within the HCI research community, with special
attention to how to support the design and development
process of new applications, software, or systems.</p>
      <p>
        As pointed out by Blandford et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], comparison of
evaluation methods is very complex and cannot be done by
one study. Although our study covers a broad range of
evaluation methods, UX factors and social media
applications, it does not draw on a large data collection
from numerous subjects with different background,
experiences, and contexts. Furthermore, usefulness was
subjectively evaluated by the researchers in the project
while the development process was still in progress. We
plan to extend our work with mail-based interviews of the
developers, investigating downstream utility in more
detail, and with an objective evaluation of usefulness based
on the inspection of the project’s documentation and
tracing of actual design changes. We also encourage other
researchers to validate and complement our
recommendations by further studies.
      </p>
    </sec>
    <sec id="sec-7">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work was supported by the CITIZEN MEDIA
research project (funded by FP6-2005-IST-41 038312) and
by the R2D2 Networks research project (CELTIC and
VERDIKT programs). We thank the partners involved in
the development and evaluation activities. We also thank
the participants in our studies.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Battarbee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Defining co-experience</article-title>
          .
          <source>In Proceedings of the 2003 international Conference on Designing Pleasurable Products and interfaces (Pittsburgh</source>
          , PA, USA, June 23 - 26,
          <year>2003</year>
          ).
          <source>DPPI '03</source>
          . ACM, NY,
          <year>2003</year>
          ,
          <fpage>109</fpage>
          -
          <lpage>113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Blandford</surname>
            ,
            <given-names>A.E.</given-names>
          </string-name>
          , et al.
          <article-title>Scoping Analytical Usability Evaluation Methods: A Case Study</article-title>
          .
          <source>Human-Computer Interaction</source>
          ,
          <year>2008</year>
          .
          <volume>23</volume>
          (
          <issue>3</issue>
          ): p.
          <fpage>278</fpage>
          -
          <lpage>327</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Brandtzaeg</surname>
            ,
            <given-names>P.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Følstad</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geerts</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Berg</surname>
          </string-name>
          . R. (
          <year>2009</year>
          ).
          <article-title>Innovation in online communities - Towards community-centric design</article-title>
          .
          <source>In Proceedings of the First International Conference, UCMedia</source>
          <year>2009</year>
          , Venice, Italy, December 9-
          <issue>11</issue>
          ,
          <year>2009</year>
          . In P. Daras Ibarra and
          <string-name>
            <given-names>O.</given-names>
            <surname>Mayora</surname>
          </string-name>
          (Eds.), LNICST, Springer, Vol.
          <volume>40</volume>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Dix</surname>
            <given-names>A.</given-names>
          </string-name>
          , et al.,
          <source>Human-Computer Interaction</source>
          . 3rd ed.
          <year>2003</year>
          : Prentice Hall.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Følstad</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Group-based Expert Walkthrough</article-title>
          . in
          <source>R3UEMs: Review, Report and Refine Usability Evaluation Methods, COST924-MAUSE 3rd International Workshop</source>
          .
          <year>2007</year>
          . Athens, Greece.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Følstad</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Work-Domain Experts as Evaluators: Usability Inspection of Domain-Specific Work-Support Systems</article-title>
          .
          <source>International Journal of Human-Computer Interaction</source>
          ,
          <year>2007</year>
          .
          <volume>22</volume>
          (
          <issue>3</issue>
          ): p.
          <fpage>217</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Gray</surname>
            ,
            <given-names>W.D.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>M.C.</given-names>
            <surname>Salzman</surname>
          </string-name>
          ,
          <article-title>Damaged merchandise? A review of experiments that compare usability evaluation methods</article-title>
          .
          <source>Human-Computer Interaction</source>
          ,
          <year>1998</year>
          .
          <volume>13</volume>
          (
          <issue>3</issue>
          ): p.
          <fpage>203</fpage>
          -
          <lpage>261</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Hartson</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>T.S.</given-names>
            <surname>Andre</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.C.</given-names>
            <surname>Williges</surname>
          </string-name>
          ,
          <article-title>Criteria for evaluating usability evaluation methods</article-title>
          .
          <source>International Journal of Human-Computer Interaction</source>
          ,
          <year>2001</year>
          .
          <volume>13</volume>
          (
          <issue>4</issue>
          ): p.
          <fpage>373</fpage>
          -
          <lpage>410</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Hassenzahl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>User experience (ux): towards an experiential perspective on product quality</article-title>
          .
          <source>In IHM'08: Proc. of the 20th International Conference of the Association Francophone d'Interaction Homme-Machine</source>
          , pages
          <fpage>11</fpage>
          -
          <lpage>15</lpage>
          , New York, NY, USA,
          <year>2008</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Hassenzahl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Tractinsky</surname>
          </string-name>
          ,
          <article-title>User experience - a research agenda</article-title>
          .
          <source>Behaviour &amp; Information Technology</source>
          , Empirical Studies of the User Experience,
          <year>2006</year>
          .
          <volume>25</volume>
          (
          <issue>2</issue>
          ): p.
          <fpage>91</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Holzinger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Usability engineering methods for software developers</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <year>2005</year>
          .
          <volume>48</volume>
          (
          <issue>1</issue>
          ): p.
          <fpage>71</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Ketola</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Roto</surname>
          </string-name>
          ,
          <article-title>On User Experience Measurement Needs - Case Nokia</article-title>
          .
          <source>International Journal on Technology and Human Interaction (IJTHI)</source>
          ,
          <year>2009</year>
          .
          <volume>5</volume>
          (
          <issue>3</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Kuniavsky</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <source>Observing the User Experience: A Practitioner's Guide to User Research</source>
          .
          <year>2003</year>
          , San Francisco, USA: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Krueger</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>M.A.</given-names>
            <surname>Casey</surname>
          </string-name>
          ,
          <source>Focus Groups: A Practical Guide for Applied Research</source>
          .
          <year>2000</year>
          , Thousand Oaks, Calif: Sage Publications Inc.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>E. L.-C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roto</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hassenzahl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vermeeren</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kort</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Understanding, scoping and defining user experience: a survey approach</article-title>
          .
          <source>In CHI'09: Proc. of the 27th international conference on Human factors in computing systems</source>
          , pages
          <fpage>719</fpage>
          -
          <lpage>728</lpage>
          , New York, NY, USA,
          <year>2009</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>E.L.-C.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>E.T.</given-names>
            <surname>Hvannberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hassenzahl</surname>
          </string-name>
          .
          <article-title>COST294-MAUSE Workshop on User Experience - Towards a Unified View</article-title>
          . Workshop at NordiCHI'06 conference. 2006; Available from: http://nordichi.net.dynamicweb.dk/Workshops/W2-User-Experience_.aspx.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>E.L.-C.</given-names>
          </string-name>
          and
          <string-name>
            <surname>van Schaik</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <article-title>Modelling UX - an agenda for research and practice</article-title>
          .
          <source>Interacting with Computers</source>
          ,
          <year>2010</year>
          .
          <volume>22</volume>
          (
          <issue>5</issue>
          ): p.
          <fpage>313</fpage>
          -
          <lpage>438</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wurhofer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beck</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karahasanovic</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>User Experience (UX) Patterns for Audio-Visual Networked Applications: Inspirations for Design</article-title>
          . Accepted full paper to NordiCHI2010.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Roto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Väänänen-Vainio-Mattila</surname>
          </string-name>
          .
          <article-title>User experience evaluation: do you know which method to use?</article-title>
          <source>CHI Extended Abstracts</source>
          ,
          <year>2009</year>
          :
          <fpage>2763</fpage>
          -
          <lpage>2766</lpage>
          . SIG session at CHI 2009, Boston, USA: ACM.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weiss</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Evaluating user-generated content creation across contexts and cultures</article-title>
          .
          <source>In IAMCR2007: Proceedings of the 50th Anniversary Conference of the International Association for Media and Communication Research</source>
          (Paris,
          <year>July 2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meschtscherjakov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>User experience evaluation in the mobile context</article-title>
          . In Marcus, Aaron, Cereijo Roibás, Anxo, and
          <string-name>
            <surname>Sala</surname>
            ,
            <given-names>Riccardo</given-names>
          </string-name>
          (Eds.).
          <source>Mobile TV: Customizing Content and Experience. Human-Computer Interaction Series</source>
          , Springer London,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miletich</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holocher</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beck</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kepplinger</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muzak</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernhaupt</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Local communities and IPTV: Lessons learned in an early design and development phase</article-title>
          .
          <source>In Computers in Entertainment 7</source>
          ,
          <issue>3</issue>
          (Sep.
          <year>2009</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moser</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alliez</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holocher</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Connecting TV &amp; PC: An In-Situ Field Evaluation of a Unified Electronic Program Guide Concept</article-title>
          .
          <source>In EuroITV2009: Proceedings of 7th European Conference on Interactive Television</source>
          (New York, NY, USA,
          <year>2009</year>
          ), ACM,
          <fpage>91</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Preece</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rogers</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <source>Interaction Design: Beyond Human-Computer Interaction</source>
          .
          <year>2002</year>
          , New York: John Wiley &amp; Sons.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Roto</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , et al.
          <source>User Experience Evaluation Methods in Product Development (UXEM'09)</source>
          , Workshop at INTERACT 2009; Available from: http://wiki.research.nokia.com/index.php/UXEM09.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Roto</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Obrist</surname>
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Väänänen-Vainio-Mattila</surname>
          </string-name>
          ,
          <article-title>User experience evaluation methods in academic and industrial contexts</article-title>
          ,
          <source>in Workshop on User Experience Evaluation Methods</source>
          , in conjunction with Interact'09 conference.
          <year>2009</year>
          : Uppsala, Sweden.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Seaman</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <article-title>Qualitative Methods in Empirical Studies of Software Engineering</article-title>
          .
          <source>IEEE Transactions on Software Engineering</source>
          ,
          <year>1999</year>
          .
          <volume>25</volume>
          (
          <issue>4</issue>
          ), July/August
          <year>1999</year>
          : p.
          <fpage>557</fpage>
          -
          <lpage>572</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Väänänen-Vainio-Mattila</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Roto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hassenzahl</surname>
          </string-name>
          .
          <article-title>Now let's do it in practice: User eXperience Evaluation Methods in product development (UXEM)</article-title>
          . CHI 2008 Workshop; Available from: http://www.cs.tut.fi/ihte/CHI08_workshop/.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Väänänen-Vainio-Mattila</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Roto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hassenzahl.</surname>
          </string-name>
          <article-title>COST294-MAUSE Workshop on Meaningful Measures: Valid Useful User Experience Measurement (VUUM)</article-title>
          .
          <source>in Proceedings of the COST294-MAUSE, 18th June</source>
          ,
          <year>2008</year>
          , Reykjavik, Iceland.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Vermeeren</surname>
            ,
            <given-names>A.P.O.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>E.L-C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roto</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Obrist</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoonhout</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Väänänen-Vainio-Mattila</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <source>User Experience Evaluation Methods: Current State and Development Needs</source>
          . NordiCHI2010, Reykjavik, Iceland.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>