                                       Copyright © 2020 for this paper by its authors.
                  Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).




                           Boss fights in lectures! – A longitudinal study on a
                           gamified application for testing factual knowledge

                          Henrik Wesseloh1, Felix M. Stein1, Phillip Szelat1, and Matthias Schumann1
                         1 University of Goettingen, Platz der Goettinger Sieben 5, 37073 Goettingen, Germany

                                            henrik.wesseloh@uni-goettingen.de



Abstract. Gamification is used to influence the motivation and behavior of users. In research, the effect of gamification on motivation and other psychological outcomes has been confirmed in various application contexts. Critics of gamification argue that sustained success is unlikely because of the implementation of extrinsic motivation drivers such as rewards. Recent studies, however, have shown that gamification possesses the potential to support intrinsic motivation. In this article, we introduce a gamified app for testing factual knowledge, which we developed based on current empirical findings and recommendations and analyzed for novelty effects in a longitudinal study. In contrast to other contributions, our app takes up the boss fight concept to support a gameful framing and uses various game elements to provide feedback to students in lectures. Overall, students evaluated the app as very useful and fun, and additionally reported positive outcomes concerning the experiences of autonomy and competence.


                            Keywords: Gamification, Education, Motivation, Design Science Research,
                            DSR, Self-Determination Theory, SDT, MDA Framework


                     1      Introduction and background

Gamification refers to the use of game elements in a non-game context [2] and is commonly used in app development to increase the motivation and engagement of users and provide a gameful experience [7]. In the past, many short-term studies have demonstrated the motivational effect of gamification in different – mostly educational – application scenarios [4]. The longitudinal study by Hanus and Fox, however, showed a negative outcome when using gamification in the classroom [5]. Since then, the share of publications studying the long-term effects of gamification has grown, but it remains scarce compared to cross-sectional studies. Yet, recent studies, for example by Mekler et al. [10], Lieberoth [9], Forde et al. [3] and Sailer et al. [16], suggest that gamification can also support intrinsic and thus long-term motivation [15].
Over the past two years, we have used the published knowledge of the gamification community to develop a gamified application for testing factual knowledge in lectures. We chose the educational context because, on the one hand, a lot of literature in this context provides recommendations for building a successful application. On the other hand, it allows us to collect large amounts of data from field experiments in our lectures, as the students are usually quite interested in, and critical of, innovative teaching formats. Ultimately, however, the research project is not intended to be an end in itself, but should rather provide a meaningful utilization of gamification from which both students and lecturers can benefit.
   That is why, in contrast to many other gamification projects, we started by identifying requirements for our app and grounded them in current literature on motivational theory and game design models, following the design science research approach for information systems [13]. We did not follow the one-size-fits-all approach [9, 12] of simply adding points, badges and leaderboards, but instead picked up a well-known concept from role-playing games: the boss fight – a particularly challenging type of quest in which players need to overcome a boss character. To support this concept, we framed the activity with a gameful narrative to foster an actual game-like perception and thus increase enjoyment [9]. Additionally, players get to pick an avatar, which represents them during the boss fight [16]. The app provides evaluative and comparative feedback that is informational, but not controlling [3]. Furthermore, the concept is based on both collaboration and competition and picks up different features from existing gamified learning apps such as Classcraft, Kahoot! or Quizizz. However, compared to other systems, our app provides the functionality of deactivating game mechanics in order to analyze individual effects of chosen design elements, which we will do in future research. To sum up, the gamified app helps lecturers to run fully customizable, gameful question sessions to test factual knowledge, in which students collaboratively quiz against a virtual boss in a narrative setting and receive individual feedback. This way, we want to contribute to the current research on the goal-oriented use of gamification.
                         In order to confirm the motivational effect of our app empirically, we evaluated it.
                     However, instead of measuring the difference to a similar "non-gamified" application,
                     which has already been done throughout various gamification studies [4], we chose to
                     study the long-term effect of using our gamified knowledge testing app in lectures. By
                     doing so, we first want to contribute to closing the current research gap of longitudinal
                     studies and second want to find out whether the phenomenon of novelty effects stated
                     by Koivisto and Hamari [8] also applies to our case study in the educational domain.
Thus, we want to share our insights on the following two research questions:

─ RQ1: How can a gamified application for testing factual knowledge in lectures be designed to foster students' motivation?
                     ─ RQ2: How do students evaluate the use of the gamified application after first-time
                       and long-term usage?

To answer these questions, we first briefly introduce the research design and methodology of our research project. Then, we describe the prototype artifact, considering the identified requirements and the resulting design. After describing the artifact, we present the results of our first-time and long-term usage evaluation and compare them using a statistical mean value comparison. Furthermore, we discuss the significant differences between the two groups and interpret the results regarding the effectiveness of our gamified app. In the end, we summarize our findings and the underlying limitations in a short conclusion and provide a brief outlook on our future research endeavors.




                     2            Research design

                     In order to address the research questions, we used a mixed-method approach in the
                     manner of the Design Science Research Method (DSRM) according to Peffers et al.
                     [13]. This problem-oriented approach describes a structured procedure in the field of
                     information systems and behavioral science to generate knowledge. In particular, the
                     method includes the well-founded development of IT artifacts and their evaluation in
                     order to solve the identified problems. In our research design, we pursued the following
                     research process (see Fig. 1).
[Figure 1 depicts the problem-centered DSRM process with five steps and an iteration loop: (1) Problem identification: traditional lecture formats do not motivate students to actively prepare, participate or learn; monitoring the level of knowledge in courses is costly and time-consuming (foundation: literature review). (2) Objectives of a solution: encourage student motivation with gamification in lectures; effectively and efficiently determine the level of knowledge with an application. (3) Design & development: implementation of a mobile, gamified app for question sessions in lectures to measure the level of knowledge (prototyping; foundation: SDT & MDA). (4) Demonstration: demonstration of the gamified app in lectures and tutorials with economic science students (field experiment; foundation: playtests). (5) Evaluation: evaluation of the artefact and the resulting user experience by students in online surveys (quantitative study; foundation: IMI & TAM).]

Fig. 1. DSRM Process Model
                     First, we identified the problem based on academic literature (Step 1). Then, we used
                     current findings from gamification research in education to derive the requirements for
                     a gamified application for testing factual knowledge (Step 2). Subsequently, the derived
prototype artifact was implemented based on the concepts of Self-Determination Theory (SDT) [15] and the MDA framework [6] (Step 3). The demonstration took place in multiple question sessions (each 3 minutes long) throughout different playtesting periods (Step 4).
                     We tested the app with economic science students who attended our lectures or tutorials
                     (see Table 1). In the winter term, we demonstrated the app in four different lectures and
                     asked the students to evaluate their first-time user experience in a short survey (Step 5).
                     In the summer term, we regularly used the app in our tutorials after completing a large
                     topic and conducted the survey after the third use at the end of the semester. The online
                     survey included items from the Technology Acceptance Model (TAM) [18] to measure
                     acceptance and the Intrinsic Motivation Inventory (IMI) [1] to measure motivational
effects of the artifact. To check for novelty effects of gamification, we compared the means of the data collected from the two usage groups (first-time vs. long-term).
                         Table 1. Numbers of participants in gamified question sessions during the field experiments

Term      Use   SESS   Type        Playtesting Period     PART   SURV   Gr.
Winter    1st     4    Lectures    18.12.18 – 25.01.19    209*   153    a
Summer    1st    15    Tutorials   21.05.19 – 25.05.19    264     /     /
          2nd    15    Tutorials   24.06.19 – 28.06.19    183     /     /
          3rd     4    Tutorials   16.07.19 – 21.07.19     93*    65    b
Note. SESS: Number of Sessions; PART: Number of Participants; SURV: Completed Surveys; Gr.: Comparison groups




                     3      Artifact description

                     3.1    Requirements
For a structured approach in system development, we identified the requirements for a gamified application for testing factual knowledge in the lecture context and grounded them in scientific literature. We differentiated these requirements into four categories: (1) contextual, (2) motivational, (3) game and (4) research-based requirements.
   Contextual requirements. The application case foresees that lecturers prepare questions, which are answered by the students (RC1). Since lectures follow a fixed schedule, it should also be possible to schedule the question sessions accordingly (RC2). In principle,
                     it should be as easy as possible to test as much knowledge as possible in a short time
                     (RC3). The questions should be evaluated automatically and directly after the question
                     session to provide users with instant feedback (RC4). Students should receive individual
                     feedback on their answers in order to benefit from active participation (RC5). Moreover,
                     the lecturer should be provided with aggregated data on the proficiency of the students
                     so that possible gaps in knowledge can be addressed specifically in the lecture (RC6).
   Motivational requirements. In the current literature, Self-Determination Theory and Cognitive Evaluation Theory in particular explain the motivational effects of gamification [14].
                     Therefore, current study results and theory-based assumptions should be integrated in
                     the system development process. From the perspective of learning psychology, intrinsic
                     motivation (resulting from the inherent interest in an activity) seems to be valuable in
                     education. According to SDT, the basic psychological needs for competence, autonomy
                     and social relatedness are considered as prerequisites for intrinsic motivation [15]. To
                     ensure that these three needs are satisfied by a gamified app, three motivational require-
ments arise: First, use of the app should be voluntary [5, 12] and anonymous to support students'
                     experience of autonomy and to inhibit the feeling of an examination situation (RM1).
                     Second, the app needs to provide informative (non-controlling) and meaningful feed-
                     back to strengthen the users' experience of competence (RM2) [3, 15]. Third, the app
                     needs to support group activities to strengthen feelings of social relatedness (RM3) [15].
   Game requirements. The motivational effect of gamification is determined by the
                     implemented game design elements. Thus, current study results should be considered
                     in the implementation of the different elements. On the one hand, the different prefer-
                     ences of the users need to be considered (RG1). Different user type or player trait models
                     assume that users have different preferences with regard to the implemented design
                     elements (e. g. socializers prefer collaboration over competition) [17]. According to the
MDA framework [6], which categorizes game elements into mechanics, dynamics and aesthetics, users decide to play a game based on the emerging aesthetics (kinds of fun, e.g. challenge) that result from the implemented mechanics. Therefore, to address a broad audience, the gamified app should pick up different mechanics (RG2) [11]. Moreover, a variety of elements could also help to satisfy different psychological needs (e.g.
                     badges for competence, avatars for autonomy & teams for relatedness), as empirical
                     studies suggest [10, 16]. Furthermore, a gameful frame should be created because the
                     app’s perception as an actual game supports enjoyment and thus motivation (RG3) [9].




   Research-based requirements. In order to address the current gaps in gamification research, the prototype needs to be able to isolate the motivational effects of individual game design elements [16]. Thus, options to deactivate mechanics become mandatory (RR1). Another constraint of many studies is the use of self-reported data from surveys. Objective measurements, e.g. with regard to performance, need to be taken to allow more precise and rigorous statements on motivational effects (RR2). Combining self-reported
                     and objective data (e.g. question answers), for example, could show comprehensibly to
                     what extent a poor rank on a leaderboard might mitigate motivational effects. Further-
                     more, the gamified app should not be evaluated directly after the first use in order to
                     avoid possible novelty effects [4]. Therefore, a regular use of the application in the field
                     should be considered in order to focus on the long-term impact of gamification (RR3).


                     3.2    Design
                     The gamified knowledge testing in lectures is based on a responsive web application
                     that provides ubiquitous access, does not require user-side installation, and supports
                     various mobile devices such as smartphones, tablets, and laptops. Lecturers can use an
                     authoring tool to create single and multiple-choice questions on lecture content and thus
                     prepare sessions (→RC1). The maximum character length per question is limited, as
                     students will have only limited time to answer the questions during the session (→RC2).
Existing questions from previous sessions can be imported (→RR3). The ques-
                     tion sessions can also be individually customized with regard to duration, difficulty and
                     the feedback elements displayed (e.g. badges or rankings) (→RR1 & RG1). In the lecture,
                     students can anonymously join the gamified question session without a login procedure
                     via an automatically generated QR code, short link or session number (→RM1). This
                     way, the lecturer only knows how many students have joined the session, but not who.
   At the start, the question session is contextualized by a story (→RG3) set in a fictitious, comic-like medieval world in which the students act as knights. The students can choose between two avatars (→RG2) to represent them during the sessions: attacker or defender. The avatars differ in their characteristics. Attackers have fewer lives, but cause more damage per correct answer. Defenders are the more risk-averse option and therefore have more lives to allow for some mistakes. This way, students can choose an individual, meaningful play style based on their own estimated level of knowledge (→RG1), which also supports the experience of autonomy (→RM1).
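   For illustration, this trade-off can be read as two parameter sets; the following sketch uses hypothetical values for lives and damage, since we do not report the concrete numbers used in the deployed app:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Avatar:
        """A playable role in the boss fight (values below are illustrative)."""
        name: str
        lives: int               # wrong answers a student can survive
        damage_per_answer: int   # boss life points removed per correct answer

    # Hypothetical parameterization: attackers trade lives for damage,
    # defenders allow for more mistakes but hit the boss less hard.
    ATTACKER = Avatar("Attacker", lives=2, damage_per_answer=3)
    DEFENDER = Avatar("Defender", lives=4, damage_per_answer=2)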
                        After character selection, the lecturer can start the question session in quiz format
                     (→RC3). The students will then receive randomized questions from the prepared ques-
                     tion pool on their mobile device within a set time limit (→RC2). For each question, 30
seconds are available to select and confirm one or more of the four possible answers (→RC3). After confirming, the app gives direct feedback on the question by flashing
                     either red (wrong answer) or green (correct answer) and updating the current winning
                     streak correspondingly (→RC5 & RM2). Then, the next question is given out and the 30
                     second timer resets. For each correct answer, the participants receive points, which can
                     be increased through quick responses, low error rates or winning streaks (→RG2 & RM2).
                     The final score determines the user’s placement on the leaderboard (→RG2).
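   The scoring rule can be read as a base score per correct answer plus bonuses for remaining answer time and the current winning streak; the weights in the following sketch are illustrative assumptions, as the exact formula is not reported here:

    def answer_score(correct: bool, seconds_remaining: int, streak: int,
                     base_points: int = 100, speed_bonus: int = 2,
                     streak_bonus: int = 10) -> int:
        """Points for a single answer: a base score plus bonuses for quick
        responses and the current winning streak (weights are assumptions)."""
        if not correct:
            return 0
        return base_points + speed_bonus * seconds_remaining + streak_bonus * streak

    # Example: a correct answer given with 12 s left on a streak of 3 scores
    # 100 + 2 * 12 + 10 * 3 = 154 points under these assumed weights.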




                        Moreover, the question sessions take up the boss fight game concept as a challenging
                     quest mechanic (→RG2,3). All participating students (or knights) collaboratively quiz as
                     a group against the question pool of the lecturer, which is visualized as a boss character
                     (a dragon) with a life bar (→RM3). Each correct answer takes life points from the boss.
However, if the answer is wrong, the answering student loses one life. In order to win the boss fight, the students must correctly answer a minimum number of questions in time. The system calculates the required quantity based on the number of participants and the lecturer's chosen duration and level of difficulty. If the time runs out or all students are eliminated, the boss fight is lost (→RG3). Overall, the boss fight is displayed on the lecturer's screen, so that eliminated students can continue to follow the group activity and possibly help their fellow students (→RM3).
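   The required number of correct answers might, for example, be derived from an upper bound on how many answers the group can give within the session, scaled by the difficulty setting; the scaling in the following sketch is an assumption for illustration, not the calibration of the deployed app:

    def required_correct_answers(participants: int, duration_seconds: int,
                                 difficulty: float, seconds_per_question: int = 30,
                                 base_share: float = 0.5) -> int:
        """Estimate the boss's life points, i.e. how many correct answers the
        group needs to win. Scales with group size and session length and is
        adjusted by the lecturer's difficulty setting (e.g. 0.5 easy .. 1.5 hard).
        The concrete factors are illustrative assumptions."""
        # Upper bound: every participant answers one question per 30-second round.
        max_answers = participants * (duration_seconds // seconds_per_question)
        # Require a difficulty-dependent share of that theoretical maximum.
        return max(1, round(max_answers * base_share * difficulty))

    # Example: 50 students, a 3-minute session and normal difficulty yield a
    # boss with 150 life points under these assumptions.
    print(required_correct_answers(50, 180, 1.0))  # -> 150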
   At the end of the question session, the students are assigned a pseudonym (→RM1) and receive individual feedback comprising their points, their ranking and up to three badges for their greatest achievements during the session (→RC5, RM2 & RG2). The badges are collected in different categories, e.g. "winning streak" or "correct answers", and are colored based on difficulty. White badges serve as "consolation prizes", while bronze, silver and gold represent higher levels of a category and are therefore harder to reach. To provide a meaningful achievement, only one student per question session can obtain the diamond-level "winner" badge (→RM2). Furthermore, the pseudonymized leaderboard and the three best students with their respective results are presented to honor their performance (→RM2). In addition to the gamified feedback, students receive the solutions for their individual questions, while lecturers receive aggregated results of the question session (→RC4,5). Moreover, statistical diagrams and performance graphs are provided for lecturers to determine the level of proficiency (→RC6).
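   One way to read the badge tiers is as per-category thresholds that map an achieved value to the highest tier it reaches; the category names follow the text above, while the thresholds in this sketch are illustrative assumptions (the unique diamond "winner" badge is awarded separately):

    from typing import Optional

    # Tiers in ascending difficulty; white serves as a "consolation prize".
    TIERS = ["white", "bronze", "silver", "gold"]

    # Hypothetical thresholds: the value needed to reach each tier per category.
    THRESHOLDS = {
        "correct answers": [1, 3, 5, 8],
        "winning streak": [1, 2, 4, 6],
    }

    def badge_tier(category: str, value: int) -> Optional[str]:
        """Return the highest tier whose threshold the achieved value meets."""
        best = None
        for tier, threshold in zip(TIERS, THRESHOLDS[category]):
            if value >= threshold:
                best = tier
        return best

    print(badge_tier("winning streak", 5))  # -> "silver" under these thresholds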
                        Overall, the different requirements lead to 15 key functionalities. Fig. 2 summarizes
                     how most functionalities resulted from multiple requirements. Moreover, it shows how
complex the design and development of a gamified learning app is. Therefore, to ensure a comprehensible communication of the artifact design, we share prepared screenshots of the functionalities with their respective requirements in the online appendix.

[Figure 2 maps the requirements to the 15 key functionalities: F1 Boss Fight, F2 Startpage, F3 Sessions, F4 Authoring Tool, F5 Question Pool, F6 Options, F7 Lobby, F8 Character Selection, F9 Quiz Question, F10 Game Over, F11 Gameful Feedback, F12 Learning Feedback, F13 Leaderboard, F14 Statistics and F15 Question Results. The requirements comprise the contextual requirements RC1 (prepare questions), RC2 (schedule question session), RC3 (test students quickly), RC4 (evaluate answers automatically), RC5 (provide individual feedback for students) and RC6 (provide aggregated data for lecturers); the motivational requirements RM1 (voluntary and anonymous usage), RM2 (informative and meaningful feedback) and RM3 (enable group activities); the game requirements RG1 (consider different user preferences), RG2 (implement a variety of game mechanics) and RG3 (create a gameful frame); and the research requirements RR1 (implement deactivatable mechanics), RR2 (use objective measurements) and RR3 (support regular use).]

Fig. 2. Implementation of requirements in key functionalities




                     4       Results from evaluation

                     Due to our trend study design, we did an independent-samples t-test with a 95 % con-
                     fidence interval to compare the means of the survey results from the first-time use
                     (group a; n = 153) with the results of the long-term use (group b; n = 65). The answers
                     were based on a 7-point Likert scale [completely disagree (1) to completely agree (7)].
Beforehand, we conducted Levene’s test to check for equality of variances for the different items.
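   The analysis itself was carried out in IBM SPSS Statistics 26 (see Tables 2 and 3 below); an equivalent per-item comparison could be reproduced with SciPy roughly as sketched here, where the two arrays stand for hypothetical 7-point Likert ratings of the first-time and long-term groups, not our survey data:

    import numpy as np
    from scipy import stats

    def compare_groups(group_a: np.ndarray, group_b: np.ndarray, alpha: float = 0.05) -> dict:
        """Levene's test for equality of variances, followed by an
        independent-samples t-test (Welch's correction if variances differ)."""
        _, levene_p = stats.levene(group_a, group_b)
        equal_var = levene_p > alpha  # pool variances only if Levene's test is n.s.
        t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
        return {"levene_p": levene_p, "t": t_stat, "p": p_value, "equal_var": equal_var}

    # Synthetic example shaped like our samples (n_a = 153, n_b = 65).
    rng = np.random.default_rng(0)
    group_a = np.clip(np.round(rng.normal(6.2, 1.0, 153)), 1, 7)
    group_b = np.clip(np.round(rng.normal(5.5, 1.1, 65)), 1, 7)
    print(compare_groups(group_a, group_b))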
                        The overall concept of the gamified app for knowledge testing receives a good (2)
                     to very good (1) rating from respondents in both groups (Ma = 1.59; Mb = 1.89). The
                     perceived usefulness (Ma = 6.1; Mb = 5.4) and the intention to use (Ma = 6.1; Mb = 5.6)
are also high. However, this result also shows that there is a significant difference between the two groups [t(216) = 4.407, p < .001]. While the perceived usefulness as well as the intention to use decrease after long-term usage, the low feeling of control (Ma = 2.01; Mb = 2.58) increases significantly [t(216) = -2.99, p = .003]. Nevertheless, both groups perceived participation in the question sessions as voluntary (Ma = 6.61; Mb = 6.27), even though a significant difference between first-time and long-term usage
                     [t(216) = 2.46, p = .014] was measured. Overall, it is not possible to confirm a negative
                     effect of the gamified application on the experience of autonomy.
                        Regarding the experience of competence, however, the gamified application shows
a mixed result. Though the students mostly agree (Ma = 5.61; Mb = 5.30) that the results of the boss fight are informative, they were only moderately satisfied (Ma = 4.10; Mb = 4.43)
                     with their own performance during the question session. One reason for this finding
                     could be the implemented leaderboard, which ranks all students based on their achieved
                     score. Thus, we additionally did a mean value comparison with two groups based on
                     the students’ ranking, which the app tracked during the question sessions of the long-
                     term usage group (n = 65). The ranking is based on the points the students received for
correct answers and was linked to the survey answers. As a result, the Top 10 students of the leaderboard reported a significantly [t(63) = -2.495, p = .015] higher satisfaction with regard to their performance (n = 31; MR<10 = 4.94) than students who were ranked
                     worse (n = 34; MR>=10 = 3.97). Additionally, we could not determine any other effect
                     of the leaderboard within the scope of this survey. Interestingly, this means the place-
                     ment had no significant [t(63) = -0.502, p = .618] effect on enjoyment (M<10 = 5.92;
                     M>=10 = 6.04). However, we were able to measure a significant difference in enjoyment
                     between the first-time (Ma = 6.333) and long-term use (Mb = 5.954), which decreased
                     over time [t(216) = 2.723, p = .007].
   Tables 2 and 3 show the results from our two independent-samples t-tests, which we
                     carried out in IBM SPSS Statistics 26.
                         Table 2. Results of independent-samples t-test for high (>=10) and low (<10) rankings
Construct              Item                                                         M (SD) high   M (SD) low    Sig. (2-tail.)   Difference 95 % CI
Competence (IMI) [1]   I am satisfied with my performance during the boss fight.    3.97 (1.33)   4.94 (1.76)   .015*            [-1.738, -.192]
Enjoyment (IMI) [1]    I enjoyed the boss fight.                                    5.79 (1.00)   6.13 (0.84)   .154 ns          [-.799, .129]
Note. IMI: Intrinsic Motivation Inventory; M: Mean; SD: Standard Deviation; CI: Confidence Interval; high: ranking >= 10; low: ranking < 10; *: p ≤ 0.05; ns: p > 0.05




                          Table 3. Results of independent-samples t-test for first-time (a) and long-term (b) use
Construct                         Item                                                         M (SD) a       M (SD) b       Sig. (2-tail.)   Difference 95 % CI
Usefulness (TAM) [18]             I think the app is useful.                                   6.167 (1.00)   5.492 (1.10)   .000***          [.372, .976]
Intention (TAM) [18]              I would use the app in lectures.                             6.157 (1.22)   5.662 (1.33)   .009**           [.127, .863]
Perc. Choice (IMI) [1]            I took part in the boss fight because I wanted to.           6.618 (0.99)   6.277 (0.76)   .014*            [.068, .612]
Perc. Control (IMI) [1]           I felt like I was being controlled during the boss fight.    2.010 (1.26)   2.585 (1.36)   .003**           [-.952, -.197]
Competence (IMI) [1]              I am satisfied with my performance during the boss fight.    4.108 (2.05)   4.431 (1.61)   .218 ns          [-.838, .192]
Competence (IMI) [1]              I find the results of the boss fight informative.            5.618 (1.45)   5.308 (1.14)   .128 ns          [-.090, .709]
Enjoyment (IMI) [1]               I enjoyed the boss fight.                                    6.333 (0.94)   5.954 (0.94)   .007**           [.104, .654]
Rating (1: very good ↔ 5: poor)   How do you rate the overall concept of the gamified app?     1.59 (0.59)    1.89 (0.64)    .001***          [-.488, -.120]
Note. a: first-time use; b: long-term use; IMI: Intrinsic Motivation Inventory; TAM: Technology Acceptance Model; M: Mean; SD: Standard Deviation; CI: Confidence Interval; ***: p ≤ 0.001; **: p ≤ 0.01; *: p ≤ 0.05; ns: p > 0.05



                     5       Discussion and future research

                     From the perspective of acceptance according to TAM [18], the students regard the
gamified app for knowledge testing as useful and intend to use it in the future. Thus, the basic prerequisite for successful use of the app is given. In addition, the study provides insights into the gamified app’s positive influence on the motivation of students, as the self-reported measures based on the IMI [1] indicate enjoyment (as an indicator of intrinsic motivation), high perceived choice and low perceived control (as indicators of a feeling of autonomy) as well as feelings of competence. The causes of these motivational effects and the underlying limitations need to be discussed in order to determine the role of gamification.
                        First, with regard to autonomy, the students wanted to take part in the gamified ques-
                     tion sessions and thus participate voluntarily. Moreover, our case shows that the stu-
                     dents do not feel that they are in a control or examination situation, even though this is
actually the case. One might question, though, whether the voluntary participation was actually based on gamification; since we cannot prove this with certainty, it needs to be taken with a grain of salt. However, from our personal observations and experience using different non-gamified tools, the gamified app was the most successful so far, which is
                     why we will continue analyzing this aspect in our future studies. Nevertheless, we were
                     able to show that the gamified app supports the students’ experience of autonomy dur-
                     ing question sessions.
                        Second, with regard to experiencing competence, the use of the app showed positive
effects, since the students perceived the app as informative and helpful. However, in the case of performance feedback, the app might act as a double-edged sword due to the integrated leaderboard. We found that students who ranked higher in the leaderboard (in the Top 10) felt significantly more competent. In contrast, students with a lower ranking reported less experience of competence. This partly supports Hanus and Fox’s suggestion of a negative outcome from leaderboards [5], as the rank was especially highlighted in our gameful feedback. In our case, however, a bad performance can also be associated with elimination from the boss fight. Therefore, the motivational impact of the leaderboard will need further investigation.
                        Third, with regard to enjoyment, the students reported that they had fun using the
gamified app. The rather lively atmosphere in the lecture and the fact that this enjoyment did not result from their performance let us assume that students actually felt an inher-
                     ent pleasure during the activity. In combination with the reported experience of auton-
                     omy and competence we conclude that students were self-determined and thus intrinsi-
                     cally motivated [1, 15] to participate in and (hopefully) learn from our question ses-
                     sions.
                        Fourth, the results of our study allow a short interpretation regarding the long-term
                     effect of gamification. In particular, the significant decrease of the measured items be-
                     tween the first-time and long-term usage group can be considered as an indicator for a
                     novelty effect, as already suspected in the literature [4, 8]. Even though in both groups
                     the evaluation of the gamified application was positive, the effect was already mitigated
after a few months of regular use. We therefore suggest studying whether implementing new gameful features on a regular basis helps to take advantage of the novelty effect.
                     We will address this question for example by adding other game modes to our app.
                        In conclusion, our project showed how a literature-based concept for a gamified app
                     to test factual knowledge was successfully realized and led to positive motivational
outcomes – even though the effectiveness decreased due to the proposed novelty effect. However, our results are subject to some limitations, which do not allow generalization. Our biggest constraint is that mostly first-year economics students were involved in our case, who might be more competitive in general. It still needs to be determined to what
                     extent the application will appeal to others. Therefore, we plan to do comparative field
                     studies with other faculties in the future. In terms of our experimental app design, we
                     will focus on analyzing motivational effects of individual mechanics, as it will help to
                     design successful, personalized and goal-oriented gamified applications. In the future
                     we will also consider social relatedness [15] and current user type approaches [17], as
we haven’t yet covered these aspects of motivation. For now, we will be able to investigate the motivational fabric by deactivating points, badges, leaderboards as well as avatars, quests, story and teams – hopefully without harming the gameful experience.


                     Online appendix

                     In-app screenshots: https://owncloud.gwdg.de/index.php/s/o1ifGN80ttoeqJz


                     References
                     1. Deci, E.L., Eghrari, H., Patrick, B.C., Leone, D.R.: Facilitating Internalization. The
                       Self-Determination Theory Perspective. Journal of Personality 62(1), 119–142
                       (1994).




                      2. Deterding, S., Dixon, D., Khaled, R., Nacke, L.E.: From game design elements to
                          gamefulness. Defining "Gamification". Proceedings of the 15th International Ac-
                          ademic MindTrek Conference: Envisioning Future Media Environments, 1–7
                          (2011).
                      3. Forde, S.F., Mekler, E.D., Opwis, K.: Informational, but not Intrinsically Moti-
                          vating Gamification? Preliminary Findings. Proceedings of the CHI PLAY 2016
                          Extended Abstracts, 157–163 (2016).
                      4. Hamari, J., Koivisto, J., Sarsa, H.: Does gamification work? - A literature review
                          of empirical studies on gamification. Proceedings of the 49th Annual Hawaii In-
                          ternational Conference on System Sciences (HICSS), 3025–3034 (2014).
                      5. Hanus, M.D., Fox, J.: Assessing the effects of gamification in the classroom. A
                          longitudinal study on intrinsic motivation, social comparison, satisfaction, effort,
                          and academic performance. Computers and Education 80, 152–161 (2015).
                      6. Hunicke, R., LeBlanc, M., Zubek, R.: MDA. A Formal Approach to Game Design
and Game Research. Workshop on Challenges in Game AI, 1–4 (2004).
                      7. Huotari, K., Hamari, J.: A definition for gamification. Anchoring gamification in
                          the service marketing literature. Electronic Markets 27(1), 21–31 (2017).
                      8. Koivisto, J., Hamari, J.: Demographic differences in perceived benefits from gam-
                          ification. Computers in Human Behavior 35, 179–188 (2014).
                      9. Lieberoth, A.: Shallow Gamification Testing Psychological Effects of Framing an
                          Activity as a Game. Games and Culture 10(3), 229–248 (2015).
                      10. Mekler, E.D., Brühlmann, F., Opwis, K., Tuch, A.N.: Do points, levels and lead-
                          erboards harm intrinsic motivation? Proceedings of the 1st International Confer-
                          ence on Gameful Design, Research, and Applications, 66–73 (2013).
                      11. Mora, A., Tondello, G.F., Nacke, L.E., Arnedo-Moreno, J.: Effect of personalized
                          gameful design on student engagement. Proceedings of the IEEE Global Engi-
                          neering Education Conference (EDUCON), 1–9 (2018).
                      12. Nicholson, S.: A recipe for meaningful gamification. In: Wood, L.C., Reiners, T.
(eds.) Gamification in Education and Business, pp. 1–20 (2015).
                      13. Peffers, K., Tuunanen, T., Rothenberger, M.A., Chatterjee, S.: A design science
                          research methodology for information systems research. Journal of Management
                          Information Systems 24(3), 45–77 (2007).
                      14. Putz, L.-M., Treiblmaier, H.: Creating a Theory-Based Research Agenda for
                          Gamification. Proceedings of the 20th Americas Conference on Information Sys-
tems (AMCIS), 1–13 (2015).
                      15. Ryan, R., Deci, E.: Self-Determination Theory and the facilitation of intrinsic mo-
                          tivation. American Psychologist 55(1), 68–78 (2000).
                      16. Sailer, M., Hense, J.U., Mayr, S.K., Mandl, H.: How gamification motivates. An
                          experimental study of the effects of specific game design elements on psycholog-
                          ical need satisfaction. Computers in Human Behavior 69, 371–380 (2017).
                      17. Tondello, G.F., Wehbe, R.R., Diamond, L., Busch, M., Marczewski, A., Nacke,
                          L.E.: The Gamification User Types Hexad Scale. Proceedings of the 2016 Annual
                          Symposium on Computer-Human Interaction in Play (CHI PLAY), 229–243
                          (2016).
18. Venkatesh, V., Davis, F.D.: A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science 46(2), 186–204 (2000).



