Learning System User Interface Preferences: An Exploratory Survey

Timo Hynninen, Antti Knutas, Arash Hajikhani
Lappeenranta University of Technology, Lappeenranta, Finland
timo.hynninen@lut.fi, antti.knutas@lut.fi, arash.hajikhani@lut.fi

Jussi Kasurinen
South-Eastern Finland University of Applied Sciences, Kotka, Finland
jussi.kasurinen@xamk.fi

Copyright is held by the author/owner(s). CHItaly '17, September 18-20, 2017, Cagliari, Italy.

Abstract
User experience is a key aspect when designing a software product. This applies especially when use of the service places a cognitive load on the user, as in online learning systems. In this paper, we present initial results that can serve as source material for creating preference profiles for users, based on their personal information and teamwork preferences, to enhance the usability of software systems. Based on our studies of student behavior during a learning experience, we present a plan for a solution that combines these two approaches.

Author Keywords
User interface; adaptive systems; survey; Belbin; Yee; human factors; learning system; profiling; clustering.

ACM Classification Keywords
H.1.2. User/Machine Systems: Human factors. H.5.2. User Interfaces: Evaluation/Methodology. H.5.2. User Interfaces: Benchmarking.
Introduction
User experience is a key aspect when designing a product or a service aimed at the general population. In this context, user experience means both the usability and learnability of the user interface, and the satisfaction and emotional impact provided by exposure to the contents of the service [11]. Often the primary aim in design is to maximize the usability and user experience aspects to the best possible degree. However, the problem with this approach is that people tend to like different things, and generally behave differently from each other. To mitigate this problem in user interface design, many services allow users to modify the interface to their liking. However, in many services the customers may not even be aware of all the possible modifications that can be made [3]. Some users find usability tutorials, mentors, or mandatory visits to the help system irritating, or simply lack the computer skills to independently learn to use anything more complicated than simple web services [4].

At the same time, studies show that adaptive UI generation is possible [9], and various approaches towards this objective have been studied [10]. Previous research has also established that the users of online systems can be profiled [1], and these profiles can be used to create customized approaches to enhance user experience [8] in computer-supported learning. The possibility of creating an adaptive interface raises questions: Which kinds of interface elements should designers try to present to users? Which kinds of approaches should be presented to users from the multitude of options?

To summarize, the main research questions in this study are:

    1.  Are there distinct preferences for certain interface types, and
    2.  Are there distinct clusters of interaction preferences amongst the users?

To realize these research goals we conducted an exploratory empirical usability survey with several different basic user interface components that could be featured in online learning systems, such as Moodle (https://moodle.org/). We formed a focus group consisting of volunteer university students, and asked the participants to individually evaluate the different user interface elements according to their own preferences. During the same study, the participants were also asked to fill out teamwork and online game preference profile questionnaires, to examine whether their preferences could be matched to the motivational aspects of using an online system, or to preferred working styles.

Research Setup
To understand what fundamental user interface solutions our test group liked and used, we created a test with 153 test cases covering different types of user interface solutions, layouts, input devices, elements and color schemes.

Figure 1: A set of examples of the UI elements: Editor with several support views (top), and Editor with one support view in window (bottom).

The concept was to measure the usefulness, efficiency and likeability of the different user interface schemes, following the usability categories by Rubin [11]. The participants were asked to evaluate the user interface elements and grade them for efficiency, likeability and usefulness. After this, the participants were shown two user interface elements from which to choose the one they preferred. Learnability was not measured, since our test set was composed of images depicting different elements and layouts, as illustrated in Figure 1. The test image set and
survey materials can be downloaded from the online appendix (http://www2.it.lut.fi/GRIP/datatools/UI-images/UIelements_Cyberlab.zip).

The data was collected from the volunteers in controlled sessions. In total, we collected profiles from 31 participants. The goal in analyzing the data was to establish whether there are varying user interface preferences for different types of users. To establish user profiles for the participating volunteers, they were asked to fill out questionnaires for two profiling methods: the Belbin teamwork profile [5] and Yee's online game motivations [13]. The results were analyzed to discover any recurring patterns between the respondents with the k-means clustering algorithm, a statistical analysis method for automatically partitioning a dataset into a specified number of groups [7,12].

Results and Implications
First, the results were sorted to discover universal high scores in the data. The usability aspects of the different layouts, input devices and designs did not include any surprises; Table 1 presents the results that were high on average. Results are presented on a range of 0 to 5. For using any system, the most universal high scores were given to the mouse and keyboard, and the laptop system. No other UI arrangement (different touch screen layouts, pens, tablets, OS styles) reached a high average without significant deviation.

Table 1: Items where perceived usability was high.

    Item                    Avg     Std
    Mouse and Keyboard      4.55    0.66
    Mouse                   4.32    0.86
    Keyboard                4.19    0.86
    Laptop                  3.97    0.93

The respondents were also queried on the perceived efficiency of the input methods. Table 2 presents the results that were high on average.

Table 2: Items where perceived efficiency was high.

    Item                    Avg     Std
    Mouse and Keyboard      4.65    0.60
    Keyboard                4.61    0.61
    Desktop                 4.58    0.71
    Mouse                   4.35    0.74
    Hyperlinks              4.23    0.91
    Simple login-screen     4.23    1.01
    Desktop icons           4.26    1.29
    Simple search           4.06    1.13

The results of the two tables correlate highly. The respondents seem to consider traditional input methods such as the mouse and keyboard both usable and efficient. The desktop computer was perceived as more efficient, but on the other hand the laptop was considered more usable overall. In the UI design selections, traditional tools such as hyperlinked text, desktop icons and simplified versions of tools (login, search tool) reached universally high scores. Table 3 presents a list of the detected clusters and their centroids. The table cells have been colored for clarity: red indicates large values and blue indicates low values. Column C1 presents the values of the first cluster, column C2 those of the second cluster, and column Dev is the calculated difference between the two columns.

When investigating the cluster values, the clearest line of division is between respondents who rated themselves as motivated in play and respondents who did not feel highly motivated to play in any category. The division can be seen clearly in the three last rows of Table 3. Respondents who had high motivations regarding gameplay (C1) were also biased towards the Belbin coordinator, shaper and resource investigator profiles compared to the other cluster. On the other hand, respondents who were not motivated to play games (C2) had higher Belbin profile preferences in implementer, team worker and complete finisher. It should be noted that cluster C2 was still slightly interested in the social aspects of gameplay.
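The clustering step described above can be sketched as follows. Note that the profile vectors below are synthetic stand-ins (nine Belbin role scores plus three Yee motivation scores, normalized to [0, 1]), not the study's actual questionnaire data; only the overall shape (31 respondents, k = 2) mirrors the setup.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then move every centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Keep a centroid in place if its cluster happens to be empty
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical normalized profile vectors: two loosely separated groups
# standing in for the 31 surveyed respondents.
rng = np.random.default_rng(1)
group_a = rng.normal(0.6, 0.1, size=(15, 12)).clip(0, 1)
group_b = rng.normal(0.3, 0.1, size=(16, 12)).clip(0, 1)
X = np.vstack([group_a, group_b])

labels, centroids = kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels, minlength=2))
```

In practice a library implementation with multiple restarts would be preferable; this sketch only illustrates the partitioning idea behind the analysis.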
Table 3: List of detected clusters and their centroids.

    Profile category           C1      C2      Dev
    Implementer                0.35    0.46    0.11
    Coordinator                0.45    0.29    0.16
    Shaper                     0.54    0.48    0.06
    Plant                      0.41    0.42    0.01
    Resource Investigator      0.46    0.30    0.16
    Monitor Evaluator          0.41    0.40    0.01
    Team Worker                0.34    0.43    0.09
    Complete Finisher          0.34    0.44    0.10
    Specialist                 0.39    0.37    0.02
    Yee: Achievement           0.70    0.07    0.62
    Yee: Social                0.58    0.20    0.37
    Yee: Immersion             0.58    0.06    0.52

Cluster Analysis Results and Validity Evaluation
K-means clustering analysis resulted in two clusters of student profiles that share the same preference sets in the Belbin and Yee tests. The average silhouette coefficient for the combined Yee and Belbin profile clusters is 0.45. This is at the same level of clustering as the individual Yee (0.46) or individual Belbin (0.46) profile clusters. A silhouette value of 0.45 means that the cluster structure is medium-weak, with a value of 0.51 being the limit for a medium-coverage cluster [7].

Discussion and Conclusion
In this paper we discussed the applicability of the Belbin [5] and Yee [13] profiles to usability preferences and performed an exploratory survey of learning system user preferences. Our study indicates that users can be profiled and that these profiles can be used to tailor user interfaces. Based on the survey, we also observed that there are clear differences in user preferences between the different types of interfaces, with traditional input methods being preferred the most.

The analysis of the data showed meaningful patterns that can be used to divide users into distinct types. There is also a clear order of respondent preferences in the perceived efficiency and usability of inputs. Additionally, the k-means clustering produced two geometrically distinct groups in the Belbin and Yee profile results. Especially in mobile platform layouts, the respondents seemed to prefer simple, systematic layouts with a minimal amount of added views or elements. Plain presentation of the data was also usually considered more likeable and efficient; users strongly preferred simple dropdown menus over context-classified ones, and simple text over hypertext. In general, most of the universal low scores in usability went to complex, clustered or item-saturated user interface views.

A threat to the validity of this study is overfitting to the target population. Our method of collecting answers steered the sample towards 20-35 year old, educated and technologically savvy audiences, since we collected most of our data from the university and college. Acknowledging this limitation, we intend to diversify our sample population in further studies with actual test scenarios, where learnability and intuitiveness have a larger impact on the results.

The presented empirical results provide valuable data for interface design decision-making, for example in model-based approaches [10] or adaptive recommender systems [6]. The presented approach has the potential to provide datasets for adaptive interface systems, like the one presented by Ahmed et al. [2]. The initial analysis and small-scale tests indicate that this approach could be feasible, since there are differences in the preferences between the different user groups, especially when observing the motivational aspects.

As for future work, our intention is to continue with this concept and extend the profiling test into a complete test setting with usability-related case activities and more detailed user profiling.
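The silhouette coefficient used in the validity evaluation measures, for each point, how close it lies to its own cluster relative to the nearest other cluster (+1 is ideal, 0 means the point sits on a cluster boundary). A minimal numpy sketch of the computation, using illustrative data rather than the study's profiles, could look like:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: for each point i,
    s(i) = (b - a) / max(a, b), where a is the mean distance to the other
    points of its own cluster and b the mean distance to the nearest
    other cluster."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    scores = []
    for i in range(n):
        same = labels == labels[i]
        same[i] = False                 # exclude the point itself
        if not same.any():              # singleton cluster: 0 by convention
            scores.append(0.0)
            continue
        a = D[i, same].mean()
        b = min(D[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated illustrative clusters score close to +1; a value
# around 0.45, as reported above, indicates a much weaker structure.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.05, (10, 2)),
               rng.normal(1.0, 0.05, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)
print(round(silhouette(X, labels), 2))
```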
References
1.  Gediminas Adomavicius and Alexander Tuzhilin. 1999. User profiling in personalization applications through rule discovery and validation. In Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, 377–381.
2.  Ejaz Ahmed, Nik Bessis, and Yong Yue. 2010. Customizing interactive patient's diagnosis user interface. In Digital Information Management (ICDIM), 2010 Fifth International Conference on, 536–539.
3.  Pierre A. Akiki, Arosha K. Bandara, and Yijun Yu. 2013. RBUIS: simplifying enterprise application user interfaces through engineering role-based adaptive behavior. In Proceedings of the 5th ACM SIGCHI symposium on Engineering interactive computing systems, 3–12.
4.  Vânia Paula de Almeida Neris and M. Cecília C. Baranauskas. 2010. Making interactive systems more flexible: an approach based on users' participation and norms. In Proceedings of the IX Symposium on Human Factors in Computing Systems, 101–110.
5.  R. Meredith Belbin. 2012. Team roles at work. Routledge.
6.  Catherine Inibhunu, Scott Langevin, Scott Ralph, Nathan Kronefeld, Harold Soh, Greg A. Jamieson, Scott Sanner, Sean W. Kortschot, Chelsea Carrasco, and Madeleine White. 2016. Adapting level of detail in user interfaces for Cybersecurity operations. In Resilience Week (RWS), 2016, 13–16.
7.  Leonard Kaufman and Peter J. Rousseeuw. 2009. Finding groups in data: an introduction to cluster analysis. John Wiley & Sons.
8.  Antti Knutas, Jouni Ikonen, Dario Maggiorini, Laura Ripamonti, and Jari Porras. 2016. Creating Student Interaction Profiles for Adaptive Collaboration Gamification Design. International Journal of Human Capital and Information Technology Professionals (IJHCITP) 7, 3: 47–62.
9.  Víctor López-Jaquero, Francisco Montero, Antonio Fernández-Caballero, and María D. Lozano. 2004. Towards Adaptive User Interfaces Generation. In Enterprise Information Systems V, Olivier Camp, Joaquim B. L. Filipe, Slimane Hammoudi and Mario Piattini (eds.). Springer Netherlands, 226–232.
10. Vivian Genaro Motti, Dave Raggett, and Jean Vanderdonckt. 2013. Current Practices on Model-based Context-aware Adaptation. In CASFE, 17–23.
11. Jeffrey Rubin and Dana Chisnell. 2008. Handbook of usability testing: how to plan, design and conduct effective tests. John Wiley & Sons.
12. Kiri Wagstaff, Claire Cardie, Seth Rogers, Stefan Schrödl, and others. 2001. Constrained k-means clustering with background knowledge. In ICML, 577–584.
13. Nick Yee. 2006. Motivations for play in online games. CyberPsychology & Behavior 9, 6: 772–775.