<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Affective Games Provide Controllable Context. Proposal of an Experimental Framework</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Żuchowska</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Grzegorz J. Nalepa</string-name>
          <email>grzegorz.j.nalepa@uj.edu.pl</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>45</fpage>
      <lpage>50</lpage>
      <abstract>
        <p>We propose an experimental framework for Affective Computing based on video games. We developed a set of specially designed mini-games, based on carefully selected game mechanics, to evoke emotions in participants of a larger experiment. We believe that games provide a controllable yet overall ecological environment for studying emotions. We discuss how we used our mini-games as an important counterpart to classical visual and auditory stimuli. Furthermore, we present a software tool supporting the execution and evaluation of experiments of this kind.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 INTRODUCTION</title>
      <p>
        Emotions constitute an important context for the interpretation of human
behavior. Affective computing (AfC) is a field of study devoted to the
computer-based analysis, modeling, and synthesis of emotions [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
In our work in this area, we focus on the use of wearable and mobile
devices to support the acquisition and interpretation of bodily signals
in order to detect changes in affective states and possibly
recognize the corresponding emotional states of subjects. We believe that
the context-aware systems paradigm considered in computer science
should take into account the affective dimension [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
Furthermore, the computer models should be personalized, i.e., they should
take into account individual differences in human behavior, as well as
personality traits [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        One of the principal challenges in AfC experiments is the
actual process of evoking individual emotions for the training and
calibration of computer models. In the psychological literature,
typical experimental procedures assume the use of standardized
visual and auditory stimuli that are supposed to evoke specific
emotions. From our perspective, such an approach is not sufficient, as
the experimental situation very often does not seem natural to the
participant; furthermore, it is not personalized. To tackle this challenge,
in our work we employ computer games as a source of specific,
rich, natural, yet controllable context for evoking emotions [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>In this paper we present an experimental setup using affective
games to evoke emotions in the participants. The principal
contributions include the design of original video games aimed at AfC
experiments, a framework for the configuration of experiments using such
games, and putting these two in the context of the BIRAFFE experiments
we conducted.</p>
      <p>The rest of the paper is organized as follows. In Section 2 we
discuss the detailed motivation of our work. Then in Section 3 we
describe an experiment in AfC that we conducted to acquire data on
individual affective reactions. In this experiment we used a set of
affective games we developed specifically for this task, as described
in Section 4. Furthermore, we realized that in order to make such
experiments flexible, we needed a framework supporting the
reconfiguration of such experiments for a range of game levels. We
developed a prototype of such a framework, as described in Section 5.
A short comparison with other solutions is provided in Section 6. In
Section 7 we describe the evaluation of our work. We conclude the
paper in Section 8.</p>
    </sec>
    <sec id="sec-2">
      <title>2 MOTIVATION</title>
      <p>
        Research on emotions requires, on the one hand, a controllable
experimental environment to evoke and detect emotions and, on the
other, natural conditions for experiments in order to minimize
possible discomfort for the participants. Video games seem to be a good
trade-off between these two extreme requirements. Games make it possible
to control the appearing stimuli and to log everything that happens,
especially reaction times. Moreover, the environment is rich in stimuli
and allows users to interact with objects, including emotionally
related interaction framed in the so-called Affective Loop [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>“Regular” games, available on the market, do not meet the
requirements of the experimental environment. First of all, they provide a
(too) rich environment in which the player may do (too) many things.
In such an environment, a very large sample size is needed to get the
right statistical power to draw conclusions, which makes experiments
difficult to conduct. Also, the use of machine learning methods will
not be trivial, as there are many variables in such case, some of which
will only be disruptive noise. Secondly, “regular” games do not allow
for the evaluation of emotions too often. The player is constantly
engaged in the game and interrupting it to complete the questionnaire
will reduce the immersion of the game.</p>
      <p>
        These issues have been observed in our previous experiments [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
including the BIRAFFE1 experiment [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. To address them, a set of
mini-games with restricted experimental conditions was created.
Each of them is built on a very limited set of stimuli, with the
aim of evoking a limited set of emotional reactions. The following
sections describe an experiment called BIRAFFE2 (see Section 3) in
which three such games were used (see Section 4).
      </p>
      <p>The BIRAFFE2 experiment led to the observation of further issues
that need to be addressed when conducting game experiments. In
particular, attention was drawn to the fact that all mini-games
should generate event logs in a uniform format, to avoid additional
pre-processing steps when analysing the collected data. It is equally
important to implement questionnaires directly in the games, at the
end of each mini-game: filling out the questionnaires at the end of the whole
gaming session makes the impressions fuzzy, and the self-description
may not be accurate enough.</p>
      <p>Therefore, in parallel with the BIRAFFE experiments, a dedicated
framework was developed to automate the preparation of game-based
affective experiments. It makes it possible to generate an experiment
template with questionnaires between different levels, and provides a
database-based logging interface. A detailed description of the framework can
be found in Section 5.</p>
    </sec>
    <sec id="sec-3">
      <title>3 THE BIRAFFE2 EXPERIMENT</title>
      <p>The BIRAFFE2 study included 103 participants (33% female)
aged between 18 and 26 (M = 21.63, SD = 1.32), recruited among students
of the Artificial Intelligence Basics course at the AGH University of
Science and Technology, Kraków, Poland, and their friends.</p>
      <p>
        It is a revised version of a previous experiment called BIRAFFE1
(Bio-Reactions and Faces for Emotion-based Personalization)
described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The aim of the study was to collect physiological data
paired with behavioral data, which can then be used to develop
models for the prediction of emotions.
      </p>
      <p>
        Behavioral data were twofold: from the part in which the subjects
played three games (for details see Section 4) and from the
classical experiment, in which sound and visual stimuli (from IADS [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
and IAPS [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] datasets, respectively) were presented, and subjects
were then asked to assess what emotions they evoked. Specifically, each
stimulus was presented for 6 seconds, followed by 6 seconds
for an affective rating using a custom widget with a 2-dimensional
(valence and arousal) space. All behavioral data were collected
as a set of logs in comma-separated values (CSV) files.
      </p>
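      <p>The trial flow above can be sketched as a simple logging routine. The following Python sketch is illustrative only: the field names and the stimulus ID are our assumptions, not the actual BIRAFFE2 log schema.

```python
import csv
import io
import time

# Illustrative sketch of a trial log for the procedure above: each
# stimulus is shown for 6 s, then rated for 6 s on a 2-dimensional
# valence/arousal widget. Field names and the stimulus ID below are
# assumptions, not the actual BIRAFFE2 column names.
FIELDS = ["timestamp", "stimulus_id", "phase", "valence", "arousal"]

def log_trial(writer, stimulus_id, valence, arousal, clock=time.time):
    """Write one presentation row and one rating row for a stimulus."""
    t0 = clock()
    writer.writerow({"timestamp": t0, "stimulus_id": stimulus_id,
                     "phase": "presentation", "valence": "", "arousal": ""})
    # ... 6 s of presentation and 6 s of rating would elapse here ...
    writer.writerow({"timestamp": t0 + 12.0, "stimulus_id": stimulus_id,
                     "phase": "rating", "valence": valence, "arousal": arousal})

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_trial(writer, "iaps_0001", valence=0.4, arousal=-0.2, clock=lambda: 100.0)
```

Each trial thus contributes two timestamped CSV rows, which keeps the behavioral logs easy to merge with the physiological recordings.</p>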
      <p>
        Physiological signals – electrocardiogram (ECG) and
electrodermal activity (EDA) – were gathered using the BITalino (r)evolution kit,
as it is the most promising of the cheap mobile hardware platforms (for
a comparison see [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]). Besides ECG and EDA, the following signals were also
collected during the experiment: accelerometer and
gyroscope readings from the gamepad, facial images taken by a webcam (every 250
milliseconds), and a screencast of the whole game session.
      </p>
      <p>
        The whole protocol consisted of several phases:
1. NEO-FFI paper-and-pen questionnaire [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] for personality
measurement (approx. 10 minutes),
2. Physiological devices setup (approx. 2 minutes),
3. Baseline signals recording (1 minute),
4. Instructions and training (approx. 5 minutes),
5. First part of stimuli presentation and rating (17.5 minutes),
6. Games session (up to 15 minutes in total),
7. Second part of stimuli presentation and rating (17.5 minutes),
8. Three paper-and-pen GEQ questionnaires [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] (one for each game)
and gaming experience questionnaire (approx. 10 minutes).
The whole protocol lasted up to 75 minutes. Steps 3-7 were done on
a PC; all of them were controlled by Python 3.8 with the use of the
PsychoPy 3.2.4 library [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Participants interacted with the procedure
only via a gamepad.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4 EVOKING EMOTIONS WITH AFFECTIVE GAMES</title>
      <p>
        In order to support the game sessions of the experiment, three specific
affective mini-games were created [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. The aim of all the games
was to evoke intense emotions in a short time. The
main obstacle was the inability to create an intriguing story; therefore,
narrative elements were discarded entirely. The only way
of building an affective project was to make a set of different games
with a variety of mechanics and audiovisuals.
      </p>
      <p>The simplest way to create an emotion-changing
environment was to vary the overall difficulty of the games. While
a neutral, peaceful stage can relieve stress for the player,
a loud and hard level can intensify rage and increase the heartbeat.
Therefore, three genres were selected: roguelike, platformer, and
maze. The first level is balanced to be an easy stage, supposed to
develop energetic, happy emotions. On the contrary, the second level is
extremely hard to beat and filled with traps, to give a sensation of
injustice and fury. This juxtaposition is important to the study, given the
sudden change. The last phase is neutral, without any emotion-boosting
elements; it exists to check the player's decision-making, behavior,
and bodily changes due to the previous irritation. Additionally, a proper
collection of game patterns was implemented; for every stage,
separate elements were chosen from that collection, depending on what
emotions the stage should boost.</p>
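      <p>The per-stage pattern selection can be sketched as a lookup over a shared collection. All names in this Python sketch are ours, invented for illustration; they do not come from the games' source.

```python
# Illustrative sketch of how game patterns could be selected per stage
# from a shared collection, depending on the emotions each stage
# should boost. All names are assumptions made for this example.
PATTERN_COLLECTION = {
    "score_tracking": {"happy", "energetic"},
    "weapons": {"happy", "energetic"},
    "looting": {"happy"},
    "traps": {"fury", "stress"},
    "time_limit": {"stress"},
    "plain_maze": set(),           # neutral pattern, boosts nothing
}

STAGE_EMOTIONS = {
    1: {"happy", "energetic"},     # easy roguelike stage
    2: {"fury", "stress"},         # unfairly hard platformer
    3: set(),                      # neutral maze
}

def patterns_for_stage(stage):
    """Pick every pattern whose boosted emotions match the stage target."""
    target = STAGE_EMOTIONS[stage]
    return sorted(
        name for name, boosts in PATTERN_COLLECTION.items()
        if boosts.intersection(target) or (not boosts and not target)
    )
```

With such a mapping, the easy stage receives only positive-emotion patterns, the hard stage only stressful ones, and the neutral stage the plain maze.</p>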
      <p>Stage one contains elements such as score tracking, weapons,
enemies, and looting. The finishing condition is the elimination of all
antagonists – no stress-inducing time limit was implemented. The
difficulty of this level was balanced by setting the damage per second
of the protagonist much higher than that of the antagonists. While
players can shoot up to 5 projectiles per second, enemies can shoot
only one attack per second. Moreover, the speed of the player's
projectiles is 2.5 times higher for the default weapon. An additional blaster
was placed on the map, making it even easier for the user to eliminate
the enemies. Furthermore, in case the subject is not
used to playing games, health points can be increased by picking up
heart-shaped objects. In order to provide more fun and some
sense of achievement, a score tracker is incremented when
picking up money bags from the floor or from a killed enemy (the
number of bags dropped by an antagonist is random).</p>
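      <p>The balance argument above can be made explicit with a back-of-the-envelope calculation. Only the rates (5 player projectiles per second versus 1 enemy attack per second) and the 2.5x projectile-speed factor come from the text; the per-shot damage of 1.0 is an assumed placeholder.

```python
# Back-of-the-envelope check of the stage-1 balance described above.
# The per-shot damage of 1.0 is an assumed placeholder; only the fire
# rates and the speed factor come from the text.
PLAYER_SHOTS_PER_SECOND = 5
ENEMY_SHOTS_PER_SECOND = 1
PROJECTILE_SPEED_FACTOR = 2.5  # player vs. enemy, default weapon

def damage_per_second(shots_per_second, damage_per_shot):
    return shots_per_second * damage_per_shot

# With equal per-shot damage, the player outputs five times the enemy DPS.
player_dps = damage_per_second(PLAYER_SHOTS_PER_SECOND, 1.0)
enemy_dps = damage_per_second(ENEMY_SHOTS_PER_SECOND, 1.0)
```

Under these assumptions, the protagonist's damage output is five times that of a single enemy, which matches the intended "easy stage" design.</p>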
      <p>In the platform game (stage 2), traps and a time limit were
implemented. Both are crucial in order to induce stress and rage.
To finish the game, the player has to get through the whole
level. However, when the player dies, they respawn at the last
checkpoint – a yellow flag with the letter 'C' that, when touched, plays a happy
although very distorted sound. There are two possible ways for the
protagonist to lose: falling off the stage, or stepping into a spike
trap. Considering that this level is supposed to be insanely
hard to get through, two additional trap types were implemented on top of the basic
blocks. The first type is the invisible block – before the protagonist
collides with such blocks, they cannot be seen in any way by the user; if
the player dies after triggering their visibility, they are once again set to
invisible. A similar mechanic is used for the second type of trap –
falling blocks, which start to fall down once a collision with the user
happens.</p>
      <p>For the last game, in stage 3, memorizing the way through a maze is
the only important part. Neither time nor score tracking is implemented,
and the visuals are very simple, with no distracting elements. The
choices made by the player are saved into logs, which will be
discussed later on.</p>
      <p>The size and shape of colliders were also adjusted to the game genre
and the intended difficulty. For the first scene, the collider of the
protagonist is smaller than its visible model; this removes the feeling of being
hit before the projectile actually reaches the player. On the contrary, in the
second game colliders are deliberately too big: the player can get hit by a trap before
the model touches it. This decision was made to enhance the
irritation and the feeling of injustice. For the last level, colliders were
adjusted not to hit the walls too often, so that the movement is
pleasant and smooth. Another intentional difference between stage two and the
others is the protagonist's movement. It was designed similarly to
the jumping mechanics, but it does not stop at a certain speed –
the player's model is constantly given acceleration. This is a
perfect example of deliberately poorly made mechanics, which are incredibly hard
to control.</p>
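      <p>The collider-scaling idea can be sketched with a simple box-overlap test. The numeric factors below are illustrative guesses, not the games' real values, and for simplicity the scale is applied to the player's box only.

```python
# Sketch of the per-stage collider scaling described above: stage 1
# shrinks the protagonist's collider below the visible model, stage 2
# enlarges colliders so traps hit early, stage 3 keeps them slightly
# forgiving. The factors are illustrative guesses.
COLLIDER_SCALE = {1: 0.8, 2: 1.2, 3: 0.9}

def aabb_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned box overlap test (boxes given as center and size)."""
    return not (abs(ax - bx) * 2 > aw + bw or abs(ay - by) * 2 > ah + bh)

def hit(stage, player, hazard):
    """Apply the stage's collider scale to the player box, then test."""
    px, py, pw, ph = player
    scale = COLLIDER_SCALE[stage]
    return aabb_overlap(px, py, pw * scale, ph * scale, *hazard)
```

With these factors, two boxes that visually overlap may miss in stage 1, while in stage 2 a trap registers a hit shortly before the models touch – exactly the fairness asymmetry described above.</p>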
      <p>
        To boost the affective part of gameplay, sounds provided by NIMH
Center for the Study of Emotion and Attention [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] were added to every stage. These sounds were proven to change the user's
emotional state according to their degree of affectiveness, described by
two values – the intensity of the feeling (arousal) and the pleasantness of the
sound. Depending on these two values, proper sounds were chosen
and included in the games.
      </p>
      <p>Furthermore, music themes and in-game sounds were recorded.
The design was created with a view to the expected emotions. The first
game's theme consists of electronic/rock music, the sounds of picking up
items are clear, and echo has been added to each sound. To keep the
second level unbalanced and irritating, the time signature of the background
music was disturbed – the last eighth note was erased. This gives
an unsettling feeling, as if someone had been playing off tempo.
Additionally, each death of the player increases the pitch and
distortion effects of the background theme. The protagonist has a high-pitched
voice, which gets more infuriating with every death. The sound of
winning (which is hard to achieve, given the difficulty of the game)
has a very disappointing and unsatisfying tone. The last level has a
pleasant theme, edited to sound like old arcade, 8-bit music.</p>
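      <p>The death-driven audio escalation can be sketched as a simple scaling rule. The step size and the cap in this Python sketch are our assumptions, not the games' actual values.

```python
# Sketch of the death-driven audio escalation described above: each
# death raises the pitch (and, analogously, distortion) of the
# background theme. Step size and cap are assumed values.
BASE_PITCH = 1.0
PITCH_STEP = 0.1   # +10% per death (assumed)
MAX_PITCH = 2.0    # assumed cap to keep the theme playable

def theme_pitch(deaths):
    return min(BASE_PITCH + PITCH_STEP * deaths, MAX_PITCH)
```
</p>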
      <p>In order to get as much information as possible about the state of
the players' emotions from a single gameplay session, in addition to their
bodily signals, a proper context-gathering mechanism is required. It is implemented as
a set of different event logs that are saved for each stage. Some
information is saved constantly, no matter the level: the current
player position, and the ID and timestamp of the affective sound played
in the background. For stage one, events such as killing an enemy,
death, the number of objects picked up, and the current state of health and
points are saved with the proper timestamp. Additionally, the number
of projectiles shot and their accuracy are recorded – this gives more
insight into aggressiveness and gaming experience. There are no
enemies or pickable items in the second stage, therefore the distortion
rate of the music, the number of deaths, and data about triggered traps are
saved for every iteration. In the last stage, the number of dead ends
encountered and data about leaving the correct path are
saved.</p>
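      <p>A uniform event record, as motivated in Section 2, could look like the following Python sketch. Field and event names are illustrative, not the actual BIRAFFE2 log schema.

```python
import time

# Sketch of a uniform per-stage event record, motivated by the earlier
# observation that all mini-games should log in one format. Field and
# event names are assumptions made for this example.
def make_event(stage, event_type, payload, clock=time.time):
    return {
        "timestamp": clock(),
        "stage": stage,
        "event": event_type,
        "payload": payload,  # event-specific data, e.g. a position
    }

log = [
    make_event(1, "enemy_killed", {"enemy_id": 7}, clock=lambda: 1.0),
    make_event(2, "trap_triggered", {"trap": "invisible_block"},
               clock=lambda: 2.0),
    make_event(3, "dead_end", {"position": [4, 9]}, clock=lambda: 3.0),
]
```

Because every stage emits the same four fields, analysis scripts can concatenate the logs of all three games without per-game pre-processing.</p>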
      <p>All of the games were developed using the Unity Engine. It is a
powerful environment with many possibilities. One of the
features used is the previously mentioned colliders: the engine contains a
variety of collider shapes, components, and traits. For instance, Box
Colliders were used not only as physical objects, but also as triggers
in rooms for the first stage, for logging in the third level, etc. Animations
in all games were handled through the Animator Controller feature.
Another remarkable example of the power of the Unity Engine
is the camera – just a simple change in view can drastically change the
player's perception. While first-person cameras can increase the
immersion with the protagonist, third-person cameras can give
insight into what is going on around the character, escalating the feeling
of stress for the subject. Moreover, sounds and music have separate
components – Audio Listener and Audio Source – both with
a simple mixer included inside the engine. These components create
impressive opportunities for the manipulation of emotion. The simplest
example used in the games is the distortion attribute, applied on each death in
the second stage.</p>
    </sec>
    <sec id="sec-6">
      <title>5 FRAMEWORK FOR GAME CONFIGURATION IN EXPERIMENTS</title>
      <p>
        In order to get information from subjects about their feelings towards
the games, the GEQ questionnaire was used [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This survey consists
of three parts: the Core Questionnaire, the Social Presence Module,
and the Post-game Module. All of them contain important
information about different aspects of the study, and all involve questions
about feelings, with possible answers ranging from 0 to 4: zero
means 'not at all', one 'slightly', two 'moderately', three
'fairly', and four 'extremely'. The first part of the survey has
33 questions about emotions and sensations felt during the game, for
example: 'I was good at it' and 'I felt frustrated'. The Social Presence
Module contains 17 questions; however, it should only be taken when
any form of social interaction took place in the game, whether with
another person or with a simple non-playable character. The last
section involves 17 questions about the overall feeling of a subject
after the game has been played, for instance: 'I felt satisfied' and 'I
found it a waste of time'.
      </p>
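      <p>The GEQ's 0-4 answer scale and per-factor aggregation can be sketched in a few lines of Python. The two-item factor mapping below is made up for illustration; it is not the real GEQ scoring key.

```python
# Sketch of GEQ-style scoring: answers on the 0-4 scale ('not at all'
# to 'extremely') are averaged per factor. The item-to-factor mapping
# is a made-up two-item example, not the real GEQ key.
SCALE = {0: "not at all", 1: "slightly", 2: "moderately",
         3: "fairly", 4: "extremely"}

FACTOR_ITEMS = {          # hypothetical item-to-factor mapping
    "Competence": [0, 1],
    "Negativity": [2, 3],
}

def factor_scores(answers):
    """Average the 0-4 answers belonging to each factor."""
    return {factor: sum(answers[i] for i in items) / len(items)
            for factor, items in FACTOR_ITEMS.items()}
```
</p>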
      <p>To make this questionnaire a part of the study, and also to provide a
unified context-logging mechanism, a software framework has been
written. It is responsible for starting all mini-games and presenting the
survey after each game. To install the plugin, one needs to copy the .dll
files and prefabs into a Unity project. After restarting the editor, a
“Feedback” menu appears in the menu bar, along with a configuration
file in the “/Resources” directory. In order to start using the plugin, one
needs to create an SQL database with a table for each survey form to
be included in the game. By default the plugin saves answers
as integers, so each question should have a separate column of this
type.</p>
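      <p>The expected database layout can be illustrated as follows. The table and column names are our own; SQLite (one of the backends the framework was tested with) stands in here for whichever SQL provider is configured.

```python
import sqlite3

# Minimal sketch of the layout the plugin expects: one table per
# survey form, one integer column per question. Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE geq_core (
        id INTEGER PRIMARY KEY,
        q1 INTEGER, q2 INTEGER, q3 INTEGER
    )
""")
# Answers arrive from the survey scene as integers on the 0-4 scale.
conn.execute("INSERT INTO geq_core (q1, q2, q3) VALUES (?, ?, ?)", (3, 1, 4))
conn.commit()
row = conn.execute("SELECT q1, q2, q3 FROM geq_core").fetchone()
```
</p>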
      <p>Everything is connected through a proper configuration file, in which
the correct database provider and connection string must be set.
After that, the model classes can be generated by choosing the “Generate
model classes” button from the “Feedback” menu. Pressing the “Create
survey form” button opens a wizard that allows one to choose different
types of questions (radio buttons, slider, or dropdown). The plugin
then creates a new scene with questions based on the table in the database.
After that, a request pops up to manually add the generated script handling data
persistence to an empty object in the scene. At the end,
there is a possibility to change the text and position of the questions in
the scene. After building the project, a game is started and the survey
pops up next; when all questions are answered, the framework sends
all data to the local SQL database.</p>
      <p>This framework has high potential for further studies.
Firstly, it makes it possible to create a multi-platform study, so that more
computers would be available for use in a study. Furthermore, a
mobile version could be implemented; this way, even more subjects
could take part in a study. Another possible future use is the
adjustment of the level difficulty of every game, depending on the answers
in the survey. This would increase the affective part of the study, as
a personalized change in the games would take place.</p>
    </sec>
    <sec id="sec-8">
      <title>6 RELATED WORK</title>
      <p>
        There are quite a few different frameworks for affective research.
On the one hand, one can point out the tools used to build classical
psychological experiments, like PsychoPy [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], OpenSesame [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
or E-Prime (see https://pstnet.com/products/e-prime/), and on the other hand,
the tools used for affective
experiments with games, e.g. FILTWAM [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], iHEARu-PLAY [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], or
emoCook [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The tools in the first group offer various widgets for collecting
information from users, making it possible to transfer virtually any
paper questionnaire to the electronic version. However, they do not
allow one to control a stimulus-rich game environment. This problem
has been addressed in the second group of tools, where affective
interaction is carried out in games (e.g. educational game “emoCook”).
Nevertheless, these solutions are prepared for specific applications
and do not provide a general solution for affective experiments.</p>
      <p>The framework described in this paper combines the advantages
of these two groups. It both allows for the use of games as a research
environment and is a general solution, allowing for the inclusion of
any games (written in Unity) and any questionnaires (the application
is not limited to the GEQ described in this article).</p>
    </sec>
    <sec id="sec-9">
      <title>7 EVALUATION</title>
      <p>The motivation for introducing a few short mini-games was better
control over the emotions evoked during the experiment. The assumption
was that each game aims to evoke specific emotions using a small
number of stimuli. These assumptions were confirmed by the results
of the GEQ questionnaire.</p>
      <p>
        Revised list of GEQ factors [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] was used for the analysis (the factor values range
from 0, 'not at all', to 4, 'extremely'). A series
of one-way ANOVAs was conducted to evaluate the differences
between games. Post-hoc comparisons were done using the Tukey HSD
test. The analysis was performed in Python with the scipy
(https://www.scipy.org/) and statsmodels (https://www.statsmodels.org/)
libraries.
      </p>
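      <p>For reference, the one-way ANOVA statistic used here can be written out in pure Python (the actual analysis used scipy and statsmodels; the three groups of per-game ratings below are made up for the example).

```python
# Pure-Python illustration of the one-way ANOVA F statistic. The three
# groups of per-game ratings below are invented example data.
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over the groups."""
    all_values = [x for g in groups for x in g]
    n, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

A large F indicates that the between-game variance dominates the within-game variance, which is then followed up with the pairwise Tukey HSD comparisons.</p>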
      <p>The strongest effects can be observed for the second level, which
was supposed to give a sensation of injustice and fury. It was connected with
significantly higher Negativity (M = 2.85), significantly lower
Positive Affect (M = 1.14), and significantly lower Competence
(M = 0.86) than the two other stages (Negativity: M = 1.11 and
M = 0.70; Positive Affect: M = 2.53 and M = 2.49; Competence:
M = 2.42 and M = 2.78, for Stage 1 and Stage 3, respectively).
Here, Competence reflects how well players judged their own performance
against the game's goals.</p>
      <p>Stage 1, designed as an easy stage connected with positive
emotions, and Stage 3, designed as emotionally neutral, were both
evaluated as the stages with the higher Positive Affect (there were no
significant differences between them). The neutrality of the third level is
revealed by the lowest Negativity (M = 0.70; significantly lower
than in the first level, M = 1.11).</p>
      <p>Flow, indicating whether or not players lost track of their
time in the game, was significantly lower in the third level (M =
1.35) than in the other two levels (M = 1.99 and M = 2.02),
indicating that for this factor what matters most is that emotions
are evoked at all, no matter whether they are positive or negative. Finally,
Immersion, the subjective connection to the game, was low for all
levels (M = 1.75, M = 1.23, and M = 1.66 for levels 1-3,
respectively), which is also consistent with the assumptions: the games
were too short for the players to get fully involved.</p>
      <p>We tested the framework on several platforms, including Windows,
Linux, and mobile devices running the Android operating system; the only
requirement is to use Unity build 2019.2.19f1. It ran correctly on all of
these platforms, proving its portability between the most popular operating
systems. We also tested it with different
databases, including remote MySQL databases and an SQLite database
for Android systems, and in both cases it worked correctly. While the
experiments presented in this paper did not use the framework, they
will be used by us as a baseline for its future evaluation.</p>
    </sec>
    <sec id="sec-10">
      <title>8 FUTURE WORK AND SUMMARY</title>
      <p>In this paper we presented our recent work conducted as a part of the
BIRAFFE2 experiment in Affective Computing. As a novel part of
the experiment, we developed three specially designed mini-games
based on carefully selected game mechanics. We believe that games
provide a controllable yet overall ecological environment for
studying emotions. We used these games during the experiment as an
important counterpart to classical visual and auditory stimuli for evoking
emotions in the participants. Moreover, we presented a software tool,
with a built-in context-logging mechanism, supporting the execution,
automation, and evaluation of experiments of this kind.</p>
      <p>In the future, we would like to develop our work in several
directions. First of all, based on the analysis of the results of the
experiment, we will continue the development of new games with
improved mechanics to fine-tune the evocation of emotions. Ultimately,
we expect games to help us in developing computer-based
personalized models of emotions to be used in different applications.
Furthermore, based on future findings, we would like to study the
aspects of emotional adaptation and personalization in games using
machine learning methods. Finally, our current setup is ready to
be used not just in desktop games, but also on mobile devices. We
will explore this direction, as mobile games not only constitute a
very important market for games, but also offer new opportunities
for interaction.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Kiavash</given-names>
            <surname>Bahreini</surname>
          </string-name>
          , Rob Nadolski, and Wim Westera, '
          <article-title>FILTWAM - A framework for online affective computing in serious games'</article-title>
          , in
          <source>Fourth International Conference on Games and Virtual Worlds for Serious Applications, VS-GAMES 2012, Genoa, Italy, October 29-31, 2012</source>
          , eds., Alessandro De Gloria and Sara de Freitas, volume
          <volume>15</volume>
          of Procedia Computer Science, pp.
          <fpage>45</fpage>
          -
          <lpage>52</lpage>
          . Elsevier, (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>Margaret M. Bradley</string-name>
          and
          <string-name>Peter J. Lang</string-name>
          , '
          <article-title>The international affective digitized sounds (2nd edition; IADS-2): Affective ratings of sounds and instruction manual</article-title>
          ',
          <source>Technical Report B-3</source>
          , University of Florida, Gainesville, FL, (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>Jose Maria Garcia-Garcia</string-name>
          , Victor M. Ruiz Penichet, María Dolores Lozano, Juan Enrique Garrido, and
          <string-name>Effie Lai-Chong Law</string-name>
          , '
          <article-title>Multimodal affective computing to enhance the user experience of educational software applications'</article-title>
          ,
          <source>Mobile Information Systems</source>
          ,
          <year>2018</year>
          , article ID
          <volume>8751426</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          , (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Simone</given-names>
            <surname>Hantke</surname>
          </string-name>
          , Florian Eyben, Tobias Appel, and Björn W. Schuller, '
          <article-title>ihearu-play: Introducing a game for crowdsourced data collection for affective computing'</article-title>
          ,
          <source>in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII 2015), Xi'an, China, September 21-24, 2015</source>
          , pp.
          <fpage>891</fpage>
          -
          <lpage>897</lpage>
          . IEEE Computer Society, (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Wijnand A.</given-names>
            <surname>IJsselsteijn</surname>
          </string-name>
          , Yvonne A. W. de Kort, and Karolien Poels,
          <source>The Game Experience Questionnaire, Technische Universiteit Eindhoven</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Daniel M.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , M. John Gardner, and Ryan Perry, '
          <article-title>Validation of two game experience scales: The player experience of need satisfaction (PENS) and game experience questionnaire (GEQ)'</article-title>
          ,
          <source>Int. J. Hum. Comput. Stud.</source>
          ,
          <volume>118</volume>
          ,
          <fpage>38</fpage>
          -
          <lpage>46</lpage>
          , (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Krzysztof</given-names>
            <surname>Kutt</surname>
          </string-name>
          , Wojciech Binek, Piotr Misiak,
          <string-name>
            <given-names>Grzegorz J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          , and Szymon Bobek, '
          <article-title>Towards the development of sensor platform for processing physiological data from wearable sensors'</article-title>
          ,
          <source>in Artificial Intelligence and Soft Computing - 17th International Conference, ICAISC 2018, Zakopane, Poland, June 3-7, 2018, Proceedings, Part II</source>
          , pp.
          <fpage>168</fpage>
          -
          <lpage>178</lpage>
          , (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Krzysztof</given-names>
            <surname>Kutt</surname>
          </string-name>
          , Dominika Drążyk, Paweł Jemioło, Szymon Bobek, Barbara Giżycka, Víctor Rodríguez Fernández, and
          <string-name>
            <given-names>Grzegorz J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          , '
          <article-title>BIRAFFE: Bio-reactions and faces for emotion-based personalization'</article-title>
          ,
          <source>in AfCAI 2019: Workshop on Affective Computing and Context Awareness in Ambient Intelligence</source>
          , volume
          <volume>2609</volume>
          <source>of CEUR Workshop Proceedings. CEUR-WS.org</source>
          , (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Peter J.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Margaret M.</given-names>
            <surname>Bradley</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. N.</given-names>
            <surname>Cuthbert</surname>
          </string-name>
          , '
          <article-title>International affective picture system (IAPS): Affective ratings of pictures and instruction manual</article-title>
          ,
          <source>Technical Report B-3</source>
          , The Center for Research in Psychophysiology, University of Florida, Gainesville, FL
          , (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Sebastiaan</given-names>
            <surname>Mathôt</surname>
          </string-name>
          , Daniel Schreij, and Jan Theeuwes, '
          <article-title>OpenSesame: An open-source, graphical experiment builder for the social sciences'</article-title>
          ,
          <source>Behavior Research Methods</source>
          ,
          <volume>44</volume>
          (
          <issue>2</issue>
          ),
          <fpage>314</fpage>
          -
          <lpage>324</lpage>
          , (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Grzegorz J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          , Krzysztof Kutt, and Szymon Bobek, '
          <article-title>Mobile platform for affective context-aware systems'</article-title>
          ,
          <source>Future Generation Computer Systems</source>
          ,
          <volume>92</volume>
          ,
          <fpage>490</fpage>
          -
          <lpage>503</lpage>
          , (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Grzegorz J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          , Krzysztof Kutt, Barbara Giżycka, Paweł Jemioło, and Szymon Bobek, '
          <article-title>Analysis and use of the emotional context with wearable devices for games and intelligent assistants'</article-title>
          ,
          <source>Sensors</source>
          ,
          <volume>19</volume>
          (
          <issue>11</issue>
          ),
          <elocation-id>2509</elocation-id>
          , (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Jonathan</given-names>
            <surname>Peirce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jeremy R.</given-names>
            <surname>Gray</surname>
          </string-name>
          , Sol Simpson, Michael MacAskill, Richard Höchenberger, Hiroyuki Sogo, Erik Kastman, and Jonas Kristoffer Lindeløv, '
          <article-title>PsychoPy2: Experiments in behavior made easy'</article-title>
          ,
          <source>Behavior Research Methods</source>
          ,
          <volume>51</volume>
          (
          <issue>1</issue>
          ),
          <fpage>195</fpage>
          -
          <lpage>203</lpage>
          , (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Rosalind W.</given-names>
            <surname>Picard</surname>
          </string-name>
          ,
          <source>Affective Computing</source>
          , MIT Press, Cambridge, MA,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Bogdan</given-names>
            <surname>Zawadzki</surname>
          </string-name>
          , Jan Strelau, Piotr Szczepaniak, and Magdalena Śliwińska,
          <source>Inwentarz osobowości NEO-FFI Costy i McCrae. Polska adaptacja</source>
          ,
          <source>Pracownia Testów Psychologicznych</source>
          , Warszawa,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Laura</given-names>
            <surname>Żuchowska</surname>
          </string-name>
          ,
          <source>Game Design with Unity for Affective Games</source>
          ,
          <source>BSc thesis</source>
          , AGH University of Science and Technology,
          <year>2020</year>
          . Supervisor:
          <string-name>
            <given-names>G.J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>