=Paper=
{{Paper
|id=None
|storemode=property
|title=Identifying User eXperiencing Factors along the Development Process: A Case Study
|pdfUrl=https://ceur-ws.org/Vol-922/paper8.pdf
|volume=Vol-922
|dblpUrl=https://dblp.org/rec/conf/iused/WincklerBB12
}}
==Identifying User eXperiencing Factors along the Development Process: A Case Study==
Identifying User eXperiencing factors along the development process: a case study

Marco Winckler (ICS-IRIT, Université Paul Sabatier, winckler@irit.fr), Cédric Bach (ICS-IRIT, Université Paul Sabatier, cedric.bach@irit.fr), Regina Bernhaupt (ICS-IRIT, Université Paul Sabatier & RUWIDO, Regina.Bernhaupt@ruwido.com)

ABSTRACT
Currently there are many evaluation methods that can be used to assess the user interface at different phases of the development process. However, the comparison of results obtained from methods employed in early phases (e.g. requirements engineering) and late phases (e.g. user testing) of the development process is not straightforward. This paper reports how we have treated this problem during the development of a mobile application called Ubiloop, aimed at supporting incident reporting in cities. For that purpose we have employed semi-directive requirement interviews, model-based task analysis, a survey of existing systems, and user testing with high-fidelity prototypes. This paper describes how we have articulated the results obtained from these different methods. Our aim is to discuss how the triangulation of methods might provide insights into the identification of UX factors.

Author Keywords
Incident reporting systems, UX factors, development process.

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

General Terms
Human Factors; Design; Measurement.

1 INTRODUCTION
Incident reporting is a very well-known technique in application domains such as air traffic management and health, where specialized users are trained to provide detailed information about problems. More recently, this kind of technique has been used for crisis management, for example during hurricane Katrina [1]. Such applications are intended to be accessible to the general public with minimum or no training. In the context of the Ubiloop project, we are investigating the use of mobile technology to allow citizens to report urban incidents in their neighborhood that might affect the quality of their environment. We consider urban incidents to be any (micro)events, perceived by a citizen, that might affect the quality of his or her urban environment (e.g. a hornet nest, potholes, a broken bench, tags, ...). By reporting incidents, citizens can improve their quality of life by influencing the quality of their environment. Figure 1 illustrates the overall scenario of our case study.

Figure 1. Overview of incident reporting with Ubiloop: users report incidents like potholes, tagging, or broken street lamps to the local government using a mobile phone application.

Despite the fact that incident reporting systems using mobile technology are becoming more common, little is known about their actual use by the general population and about which factors affect the user experience when using such systems. In order to investigate which user experience factors must be taken into account when designing the interface of a mobile application for incident reporting, we employed several evaluation methods (including semi-directive requirement interviews, model-based task analysis, a survey of existing systems, and user testing with high-fidelity prototypes) along the development process of the application Ubiloop (developed in the context of the eponymous project). Hereafter we report how, using these evaluation methods, it was possible to:

• Identify which UX factors affect mobile incident reporting systems, and to what extent;
• Associate UX factors with the artifacts that support the design and implementation of such systems;
• Determine how users value incident reporting systems (in terms of UX factors) in both early and late phases of the development process.
The first two sections of this paper provide an overview of the development process (section 2) and of the methods employed (section 3) in the Ubiloop project. Then, in section 4, we describe how we articulated the results in order to provide a bigger picture of the UX factors and artifacts used during the development process. Finally, we discuss the results and lessons learned.

2 OVERVIEW OF THE PROCESS
We have followed a user-centered design approach. Our first goal was to identify how important user experience factors are to users when they perform tasks such as reporting, monitoring, and sharing information about urban incidents with other citizens. We first addressed the following dimensions: perceived quality of service, awareness of perceived user involvement with reported incidents, perceived effects of mobile technology for reporting incidents, trust, privacy, perceived usefulness, usability, and satisfaction with incident reporting systems in urban contexts. These dimensions are articulated around four main research questions:

• How do citizens perceive and describe urban incidents as part of their quality of life?
• How does the choice of communication means to digitally report incidents in a mobile context influence the overall user experience? If so, what dimensions of user experience are important for such an incident reporting application?
• How does social awareness affect the user experience when interacting with incident reporting systems?
• What contextual factors are important for incident reporting, and which interaction techniques better assist users in reporting incidents?

These questions were investigated along the development process by means of different evaluation methods, as shown by Table 1.

Table 1. Methods employed during the development process of the application Ubiloop.
Requirement analysis: survey of existing applications; semi-directive requirement interviews; model-based task analysis.
Design: prototyping.
Evaluation: user testing.

Figure 2. Articulation between artifacts and methods employed. Thick lines indicate artifacts produced; thinner lines indicate input for a method; dashed lines show compatibility checking between artifacts.

In more general terms, this design process started by (1) benchmarking existing applications in order to cover the application domain. From this step we extracted (2) generic and representative scenarios that were used to organize (2.1) interviews with 18 potential future end-users of the Ubiloop application. These requirement interviews allowed us to identify new scenarios (some of them not covered by existing applications), expectations (which we call early requirements here), and (2.2) UX factors associated with the scenarios. By (2.3) analyzing a set of 120 scenarios, it was possible to identify a task pattern that was then specified using a task-model notation. This (3) task model was used to check the coherence of the design with respect to the previously identified scenarios. Then, design options supported by the task model were (4, 5) prototyped and subsequently tested with end users.
During (6) user testing, we assessed (7) UX factors that were then compared with those collected earlier during the (2.2) interviews. Figure 2 shows the articulation between the methods and the artifacts produced. Notice that the dashed arrows indicate the relationships ensuring cross-consistency between the artifacts and the results obtained from the methods employed.

3 METHODS EMPLOYED AND MAIN FINDINGS
This section presents the methods and the key findings.

3.1 Survey of Existing Systems
In order to analyze the actual support provided by existing applications, we conducted an analysis of existing services for incident reporting in urban contexts. This study focused on the front office (i.e. reporter tools). Applications for incident reporting were first identified from the set of tools ranked by Web search engines (i.e. google.com). Then, only those that were available for remote testing were selected for further analysis.

Fifteen applications were selected, covering international reporting services. What we found to be specific to the area of incident reporting is the broad diversity of features for reporting urban incidents (more than 340). Nonetheless, these incident reports seem to share similar characteristics, which can be used to help users locate on the user interface the service that best suits the type of incident they want to report in a given context of use. Despite the fact that these applications address the same problem of reporting incidents in urban contexts using mobile technology, none of them was implemented following the same scenario; this might be explained by cultural differences that affect the user experience with this kind of application. For example, in some countries the identity of the citizen reporting the incident is always mandatory, whilst in other countries it is mostly optional or only requested for specific types of incidents (which could be perceived as denunciation).

From the analysis of existing systems we extracted a set of generic and representative scenarios that should be supported by our application. We could not find in the literature any work describing UX factors for this specific application domain.

3.2 Semi-directive requirement interviews
In order to understand users' expectations and requirements for the future system, two series of semi-directive interviews were conducted. The first one, called the general interview, focused on how users perceive their environment and how they formulate general requirements for reporting incidents using a smartphone. The second one, called the scenario-based interview, was designed to investigate how users react to different situations that would be the subject of an incident report. Each series of interviews involved nine participants.

During the general interview, participants were prompted to report on: how they perceive places and their environment; negative experiences in terms of environmental quality; personal involvement with problems; preferred system design; and the dimensions they consider important.

In the scenario-based interview, participants were introduced to 7 scenarios (one at a time, in random order) and then asked to explain how they would envisage reporting the incidents using a smartphone. The scenarios included reporting a broken street lamp, a pothole, a missing road sign, bulky waste, a hornet nest, a tag/graffiti, and a broken bench in a park. These incidents were selected from the set of scenarios supported by existing applications. Moreover, each scenario was designed to highlight a specific point; for example, a broken lamp points to an incident that is difficult to illustrate with a picture, whilst a hornet nest focuses on perceived danger. Every interview included a short questionnaire on demographics and technology usage. All sessions were recorded and then transcribed by a French native speaker. The transcriptions were analyzed according to the grounded theory approach [3][6]. A corpus of 92,240 words was analyzed, resulting in 11 classes/codes with 1125 segments of text. The coding was supported by the MaxQDA 10 software [8].

The interviews provided two key pieces of information: i) scenarios for reporting incidents, each of which can be associated with a task that must be supported by the system; and ii) qualitative attributes that can be interpreted as UX factors associated with a given scenario. For example, consider the following segment given by participant P2: "...Besides going to report your [own] idea, you could ask if there are other ideas [proposed by others]... [that are] close to your home...". In this passage, the participant clearly states a UX factor (stimulation, as described by Hassenzahl [4]) that could influence him to perform the task (report [an incident]). The two series of requirement interviews provided evidence for identifying the following UX dimensions: visual & aesthetic experience; emotions (related to the negative experience of the incident and the positive experience of reporting it – joy/pride); stimulation; identification (through one's personality, one's own smartphone, one's sensibility to specific incidents); meaning and value; and social relatedness/co-experience.
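To make the coding and counting step concrete, the following is a minimal sketch in Python of how coded interview segments can be tallied per UX dimension. The segment data and code names are hypothetical placeholders; this is not the authors' MaxQDA workflow, merely the kind of frequency count used later for triangulation.

<syntaxhighlight lang="python">
from collections import Counter
from dataclasses import dataclass

@dataclass
class Segment:
    """One coded segment of transcribed interview text."""
    participant: str
    text: str
    code: str  # UX dimension assigned during grounded-theory coding

# Hypothetical excerpt of a coded corpus (the actual study had 1125 segments).
segments = [
    Segment("P2", "...you could ask if there are other ideas close to your home...", "stimulation"),
    Segment("P2", "...I want to be a good citizen...", "identification"),
    Segment("P5", "...a photo is a proof that the incident really exists...", "meaning and value"),
]

# Frequency of each code across the corpus.
counts = Counter(segment.code for segment in segments)
for code, n in counts.most_common():
    print(f"{code}: {n} segment(s)")
</syntaxhighlight>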
visual & aesthetic experience, emotions (related to negative experience of the incident and positive experience to report From the analysis of existing systems we have extracted a it – joy / pride), stimulation, identification (through their set of generic and representative scenarios that should be personality, their own smartphone, their sensibility to supported by our application. We could not find in the specific incidents), meaning and value, and social literature any work describing UX factors addressing this relatedness/co-experience. specific application domain. 3.3 Model-Based Task Analysis 3.2 Semi-directive requirement interviews From the analysis of existing applications and interviews In order to understand users expectations and requirements we have identified 120 possible scenarios that could be for the future system, two series of semi-directed interviews generalized as a user task pattern consisting of: (1) to detect were conducted. The first one, called general interview, the incident, (2) to submit an incident report and (3) to focused on how users perceive their environment and how follow up on an incident report. This pattern was modeled they formulate general requirements for reporting incidents using the task notation HAMSTERS [6] which feature a using a smartphone. The second one, called scenario-based hierarchical graph decomposing complex tasks into more interview was designed to investigate how users react to simple ones as shown by Figure 4. Tasks are depicted different situations that would be subject of an incident accordingly to the actors involved in the task execution (i.e. report. Each series of interviews involved nine participants. user, system or both). It also integrates operators for During the general interview, participants were prompted to describing dependencies between tasks (i.e. order of report about: how they perceive places and their execution). As this task model does not impose any environment; negative experiences in terms of particular design for the system it can accommodate all the environmental quality; personal involvement with scenarios identified during the analysis of existing problems; preferred system design; and dimensions they applications. By modeling user tasks it was possible to think important. identify aspects such as optional/mandatory tasks associated to incident reporting, inner dependencies between tasks, as In the scenario-based interview, participants were well as pre- and post-conditions associated to tasks introduced to 7 scenarios (one at once, in random order) execution. and then asked to explain how they would envisage reporting incidents using a smartphone. The scenarios 3.4 Prototyping included to report a broken street lamp, a pothole, a missing In previous work [2] we have found that information related road sign, a bulky waste, a hornet nest, a tag/graffiti, and a to incidents includes: what the incident is about, when it broken bench in a park. These incidents were selected from occurs, where it is located, who identifies the incident and the set of scenarios supported by existing applications. the expected outcomes leading to its solution. These Moreover, each scenario was designed to highlight a dimensions include optional and mandatory elements that characterize incidents. For example, the dimension what report incidents found in the way. 
3.4 Prototyping
In previous work [2] we found that the information related to incidents includes: what the incident is about, when it occurs, where it is located, who identifies it, and the expected outcomes leading to its solution. These dimensions include optional and mandatory elements that characterize incidents. For example, the dimension what can include a combination of a textual description, a picture of the incident, or just an indication of the incident category. Based on these early findings and the generic task model described above, we developed a low-fidelity and then a high-fidelity prototype (see Figure 3). The prototype takes full advantage of the technology currently embedded in smartphones, such as the video camera and the global positioning system (GPS). GPS makes the user's task of locating incidents easier, and photos attached to the description of incidents provide contextual information and in some situations might be used as evidence of their occurrence.

Figure 3. Ubiloop prototype featuring: a) main menu page; b) textual description of an incident; c) location on an interactive map.

The user interface of the Ubiloop prototype supports all the user tasks previously identified. The prototype was also designed to support the early requirements expressed by users. Moreover, the prototype was designed to create a positive user experience, as could be inferred from the results of the semi-directive requirement interviews. For example, to enhance the UX factor stimulation we deployed the prototype on a smartphone (whose technology is in itself perceived as a stimulation for using the application), we included categories of incidents (as users said they would be more likely to report an incident if they could see examples of incident categories), and we allowed users to see the incidents reported in their neighborhood (as suggested by participant P2, see section 3.2).
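As an illustration of the optional and mandatory elements of these dimensions, here is a minimal sketch of an incident report structure in which the dimension what accepts any combination of category, textual description, or photo. The field names and the validation rule are assumptions made for the example, not the actual Ubiloop data model.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IncidentReport:
    # "what": at least one of these three elements should be provided.
    category: Optional[str] = None
    description: Optional[str] = None
    photo_path: Optional[str] = None
    # "when" / "where": typically filled in automatically (phone clock, GPS).
    reported_at: Optional[datetime] = None
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    # "who": mandatory in some countries, optional in others (see section 3.1).
    reporter_id: Optional[str] = None

    def is_complete(self) -> bool:
        """A report needs a location and at least one 'what' element."""
        has_what = any([self.category, self.description, self.photo_path])
        has_where = self.latitude is not None and self.longitude is not None
        return has_what and has_where

# A category plus GPS coordinates is already a valid minimal report.
print(IncidentReport(category="pothole", latitude=43.56, longitude=1.47).is_complete())  # True
</syntaxhighlight>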
3.5 User testing
A user test with the high-fidelity prototype was designed to explore how users report urban incidents with Ubiloop. The study was held on the campus of the University of Toulouse during the summer of 2012. A thinking-aloud protocol was used during the experiment. Users were asked to wear glasses embedding a video recording system, so that it was possible to determine where they were looking whilst using the prototype. The recording apparatus also included a logging system and a screen recorder embedded in the smartphone.

Users were trained for 5 minutes on how to report a simple incident (i.e. a broken street lamp) with a smartphone running Ubiloop. Participants were then asked to follow a predefined route on the campus and to report any incidents found along the way. The route was populated with tags prompting users to report fake incidents corresponding to the scenarios presented in section 3.2. In addition to these predefined tags, users were free to report any other incident they could see on the campus (and the route featured many real incidents, such as potholes, tags, and public lights switched on during the day...). In addition to these tasks, users were asked to fill in a demographic questionnaire and an AttrakDiff questionnaire [5], and to take part in a debriefing interview.

Nineteen participants, ranging from 21 to 52 years old, took part in the experiment. All participants successfully completed the tasks. The analysis of the data concerning UX factors took into account the answers provided by the AttrakDiff questionnaire, the user tasks, and the comments made by users whilst performing the tasks. Again, users' comments were transcribed and analyzed according to the grounded theory approach. This time the segments were coded according to the actual tasks performed by the users during the experiment.

One of the findings is that all the UX factors previously identified during the semi-directive requirement interviews (see section 3.2) were reported again during the user testing. Nonetheless, for space reasons, we limit the description of the findings to two factors, stimulation and identification with the incident, which we found to be key UX factors for engaging users in the reporting process (i.e. for the moment when the user decides to report the incident s/he has identified in the environment):

• Stimulation was evaluated during the user testing through a question in the post-test interview: "Did you discover some incidents on the University campus that you could report with the prototype?" This UX factor was also detected through the thinking-aloud technique and the AttrakDiff questionnaire.
• Identification with the incident was evaluated through another question in the post-test interview: "Are the incidents you declared during the experiment candidates to be really reported by you to the Ubiloop service?"

Furthermore, the evaluation of identification with the incident reveals that a strong proportion of participants declared themselves ready to report some of the mandatory incidents (90% for the broken bench and the hornet nest; 75% for the broken street lamp; and 45% for the heap of rubble). Individuals were mainly ready to declare the incidents they spontaneously discovered during the experiment (provided that the declaration is easy to perform and useful). In other words, the application seems able to increase both stimulation and identification with the incident.
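For readers unfamiliar with AttrakDiff [5], each item is a word pair rated on a 7-point scale, and items are averaged per dimension: pragmatic quality (PQ), hedonic quality–identification (HQ-I), hedonic quality–stimulation (HQ-S), and attractiveness (ATT). The sketch below shows this aggregation; the item-to-scale mapping and the ratings are illustrative assumptions, not the official questionnaire or the study's data.

<syntaxhighlight lang="python">
from statistics import mean

# Illustrative mapping of a few word-pair items to AttrakDiff scales.
SCALE_OF_ITEM = {
    "complicated-simple": "PQ",
    "unpredictable-predictable": "PQ",
    "isolating-connective": "HQ-I",
    "unprofessional-professional": "HQ-I",
    "dull-captivating": "HQ-S",
    "conventional-inventive": "HQ-S",
    "ugly-attractive": "ATT",
    "bad-good": "ATT",
}

def attrakdiff_scores(answers):
    """Average 7-point item ratings (1..7) per AttrakDiff scale."""
    by_scale = {}
    for item, rating in answers.items():
        by_scale.setdefault(SCALE_OF_ITEM[item], []).append(rating)
    return {scale: mean(ratings) for scale, ratings in by_scale.items()}

# One participant's hypothetical answers.
print(attrakdiff_scores({
    "complicated-simple": 6, "unpredictable-predictable": 5,
    "isolating-connective": 4, "unprofessional-professional": 5,
    "dull-captivating": 6, "conventional-inventive": 6,
    "ugly-attractive": 5, "bad-good": 6,
}))
</syntaxhighlight>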
4 TRIANGULATION OF METHODS
To answer the research questions on which user experience (UX) dimensions should be taken into account when designing incident reporting systems for urban contexts, we triangulated the results of the different methods used in this work, as follows:

• During the semi-directive requirement interviews, users expressed requirements and expectations for reporting incidents by means of personal stories that were interpreted as possible scenarios of use. These scenarios were then used to revise our original task model for incident reporting systems.
• By using model-based task analysis, it was possible to remove ambiguities present in the participants' discourse and thus to formalize the users' requirements. Moreover, the model-based task analysis provided an accurate description of the user tasks. This step is extremely important for the future development of incident reporting systems. As described in [7], task models not only improve the understanding of user tasks, but they can also be used to assess whether an incident reporting system was effectively implemented to support the specified set of user tasks.
• In order to make sure that the tasks identified in the semi-directive requirement interviews and the model-based task analysis are representative, we compared them with the survey of existing systems. The results confirm that our analysis is exhaustive: our task model covers all the tasks supported by the surveyed systems, and these systems do not implement any task that is not described in the task model.
• The analysis of the transcripts of the semi-directive requirement interviews also supported the identification of the UX dimensions associated with user scenarios. By combining UX dimensions and user scenarios it was possible to consolidate the results in a single task model, as shown by Figure 4, where user tasks are decorated with UX dimensions (e.g. [ID] for identification), so that the model can be read as follows: "I am passing by this park every Sunday and this bench has not been repaired for weeks [ID]. It is time now to report that, so it will get fixed. It is not really a problem or unsafe, but the bench is simply not usable in its current state [MV]. [:detect/recognize the incident:] It seems important now to make sure that the appropriate person is informed about that bench [CX]; I think I should use the application to report the incident, because I want to be a good citizen [ID]. I think it is a good idea to send them a photo so they can see that the bench is really broken and that the wood has to be replaced. And when they see the photo they will see that it is really there, and so they will not need my contact information as proof that the broken bench really exists [MV]. [:describe the incident:]". This example shows how user tasks are interrelated with UX dimensions.
• The prototypes were built according to the task models. Once implemented, the prototype was cross-checked in order to make sure that it could effectively support the scenarios identified earlier. Thus, every presentation unit (e.g. screens and widgets) can easily be associated with an element of the task model. By extrapolating from the results of the requirement interviews, we could derive tuples consisting of user interface elements + user tasks + UX factors.

Figure 4. Generic task model and the most important UX dimensions for each sub-task.

• During user testing it was possible to identify UX factors during the execution of the tasks with the prototype. It is interesting to notice that the scenarios supported by the prototype were the same as those used during the interviews, so it was possible to correlate the results found in early and later phases of the development process. Thus, we found that the UX factor stimulation, reported during the interviews for the task of finding incidents, occurred again when users used the prototype to complete the same task. This confirms the value of the early identification of UX factors through requirement interviews. Moreover, when counting the number of user-testing segments reporting the UX factor stimulation, we found that this factor is more frequent and evenly distributed across tasks (a minimal sketch of this counting is given after this list). We also compared the categories of incidents reported by users during the thinking aloud and during the debriefing; we found that the distribution of incidents across categories is broader in the requirement phase (72 citations/42 categories) than in user testing (80 citations/19 categories). Indeed, during the requirement interviews participants had difficulties identifying/remembering urban incidents, whilst during user testing participants found it easier to identify incidents along the route of the experiment.
• Previously, the participants in the requirement interviews had strong difficulties identifying, remembering, or imagining urban incidents. This is not the case (or is less the case) when users can interact with the mobile application.
• Other examples come from the responses to the post-test interview question about the stimulation factor:
"I never thought of reporting this kind of incident [a public garbage bin with a broken top] before [using the application], but it is true that this would quickly become a serious problem of squalor."
"That's funny, because this application gives me the opportunity to discover my own environment with a new eye."
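The counting mentioned in the user-testing bullet above can be sketched as follows: each coded segment carries the phase, the task during which it was uttered, and the UX factor it expresses, and we compare how mentions of a factor are distributed across tasks in each phase. The triples below are placeholders, not the study's data.

<syntaxhighlight lang="python">
from collections import Counter

# (phase, task, ux_factor) triples extracted from coded segments; placeholder values.
coded_segments = [
    ("interview", "detect incident", "stimulation"),
    ("interview", "describe incident", "meaning and value"),
    ("user testing", "detect incident", "stimulation"),
    ("user testing", "detect incident", "stimulation"),
    ("user testing", "describe incident", "stimulation"),
    ("user testing", "follow up", "identification"),
]

def distribution(phase: str, factor: str) -> Counter:
    """How segments mentioning `factor` spread across tasks within `phase`."""
    return Counter(task for p, task, f in coded_segments if p == phase and f == factor)

print("interviews  :", distribution("interview", "stimulation"))
print("user testing:", distribution("user testing", "stimulation"))
</syntaxhighlight>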
5 DISCUSSION AND LESSONS LEARNED
Unfortunately, we do not have room to provide a comprehensive description of the results collected with the different methods. Nonetheless, the results given in this paper illustrate that UX factors can be detected both in early and in later phases of the development process. Moreover, to some extent, such results can be correlated.

One of the challenges was to determine the importance of UX factors when they are collected in different phases of the development process. In the present work we used a simple counting method (the number of segments) and the distribution of UX factors across users' tasks. Using this simple method we found some differences that require further analysis. Nonetheless, this points to a case where it would be interesting to have quantitative UX metrics for comparing them.

It is important to associate the identified UX factors with the artifacts used in the design. In our study, we found that scenarios and task models work as a lingua franca for mapping user requirements and UX factors. However, it is worth noticing that this might be specific to certain types of interactive systems, namely those that can be successfully described by task models. We can only wonder whether this approach would work in application domains such as games, where user activity is harder to represent by means of task models. Further work is required to determine whether other design artifacts and evaluation methods can also be used to provide such an articulation.

We deliberately performed the user testing with high-fidelity prototypes. We found in the requirement interviews that the use of a smartphone is in itself a stimulating element. For the purposes of the project, it was more important to test the high-fidelity prototype in a situation of mobility than to test a paper-based mockup. However, it would be interesting to assess the impact of mockups on the identification of UX factors.

Acknowledgement
This work is part of the Ubiloop project, partly funded by the European Union. Europe is moving in France Midi-Pyrénées with the European Regional Development Fund (ERDF). Genigraph and e-Citiz are partners in this work.

REFERENCES
1. Moynihan, D. P. (2007). From Forest Fires to Hurricane Katrina: Case Studies of Incident Command Systems. IBM Center for the Business of Government.
2. Bach, C., Bernhaupt, R., Winckler, M. (2011). Mobile Incident Reporting in Urban Contexts: Towards the Identification of Emerging User Interface Patterns. In 5th IFIP WG 13.2 Workshop PUX, Lisbon, Portugal, September 5th, 2011.
3. Glaser, B. G., Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine: Chicago.
4. Hassenzahl, M. (2003). The thing and I: understanding the relationship between user and product. In M. Blythe, C. Overbeeke, A. F. Monk, & P. C. Wright (Eds.), Funology: From Usability to Enjoyment. Dordrecht: Kluwer, pp. 31-42.
5. Hassenzahl, M. (2002). The effect of perceived hedonic quality on product appealingness. International Journal of Human-Computer Interaction, 13, 479-497.
6. Lazar, J., Feng, J. H., Hochheiser, H. (2010). Research Methods in Human-Computer Interaction. John Wiley & Sons: UK.
7. Martinie, C., Palanque, P., Winckler, M. (2011). Structuring and Composition Mechanisms to Address Scalability Issues in Task Models. In Proc. of INTERACT 2011, Part III, pp. 589-609. Springer LNCS.
8. MaxQDA [online]. Software available at: www.maxqda.com.
9. Matyas, S., Kiefer, P., Schlieder, C., Kleyer, S. (2011). Wisdom about the Crowd: Assuring Geospatial Data Quality Collected in Location-Based Games. In Entertainment Computing – ICEC 2011, Lecture Notes in Computer Science, Vol. 6972, pp. 331-336.
10. Goodchild, M. F. (2007). Citizens as sensors: the world of volunteered geography. GeoJournal, 69(4), 211-221. doi:10.1007/s10708-007-9111-y.