<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Affective Games Provide Controllable Context. Proposal of an Experimental Framework</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Laura</forename><surname>Żuchowska</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Krzysztof</forename><surname>Kutt</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Krzysztof</forename><surname>Geleta</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Szymon</forename><surname>Bobek</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Grzegorz</forename><forename type="middle">J</forename><surname>Nalepa</surname></persName>
						</author>
						<title level="a" type="main">Affective Games Provide Controllable Context. Proposal of an Experimental Framework</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D74EBE4CE6F25EBA92B38017E240D9E6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T05:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We propose an experimental framework for Affective Computing based on video games. We developed a set of specially designed mini-games, based on carefully selected game mechanics, to evoke emotions in participants of a larger experiment. We believe that games provide a controllable yet ecologically valid environment for studying emotions. We discuss how we used our mini-games as an important counterpart to classical visual and auditory stimuli. Furthermore, we present a software tool supporting the execution and evaluation of experiments of this kind.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Emotions constitute an important context for the interpretation of human behavior. Affective computing (AfC) is a field of study devoted to the computer-based analysis, modeling and synthesis of emotions <ref type="bibr" target="#b13">[14]</ref>. In our work in this area, we focus on the use of wearable and mobile devices to support the acquisition and interpretation of bodily signals in order to detect changes in affective states and possibly recognize the corresponding emotional states of subjects. We believe that the context-aware systems paradigm considered in computer science should take into account the affective dimension <ref type="bibr" target="#b10">[11]</ref>. Furthermore, the computer models should be personalized, i.e. take into account individual differences in human behavior as well as personality traits <ref type="bibr" target="#b7">[8]</ref>.</p><p>One of the principal challenges in AfC experiments is the procedure used to evoke individual emotions for the training and calibration of computer models. In the psychological literature, typical experimental procedures assume the use of standardized visual and auditory stimuli that are supposed to evoke specific emotions. From our perspective, such an approach is not sufficient: the experimental situation often does not seem natural to the participant and, furthermore, it is not personalized. To tackle this challenge, in our work we employ computer games as a source of specific, rich, natural, yet controllable context for evoking emotions <ref type="bibr" target="#b11">[12]</ref>.</p><p>In this paper we present an experimental setup using affective games to evoke emotions in the participants. The principal contributions include: the design of original video games aimed at AfC experiments, a framework for the configuration of experiments using such games, and the placement of both in the context of the BIRAFFE experiments we conducted.</p><p>The rest of the paper is organized as follows: In Section 2 we discuss the detailed motivation of our work. In Section 3 we describe an AfC experiment we conducted to acquire data on individual affective reactions. In this experiment we used a set of affective games developed specifically for this task, as described in Section 4. Furthermore, we realized that, in order to make such experiments flexible, we need a framework supporting their reconfiguration for a range of game levels; we developed a prototype of such a framework, as described in Section 5. A short comparison with other solutions is provided in Section 6. In Section 7 we describe the evaluation of our work. We conclude the paper in Section 8.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">MOTIVATION</head><p>Research on emotions requires, on the one hand, a controllable experimental environment to evoke and detect emotions and, on the other, natural conditions in order to minimize possible discomfort for the participants. Video games seem to be a good trade-off between these two extreme requirements. Games allow one to control the presented stimuli and to log everything that happens, especially reaction times. Moreover, the environment is rich in stimuli and allows for user interaction with objects, including emotionally relevant interaction framed in the so-called Affective Loop <ref type="bibr" target="#b11">[12]</ref>.</p><p>"Regular" games, available on the market, do not meet the requirements of such an experimental environment. First of all, they provide a (too) rich environment in which the player may do (too) many things. In such an environment, a very large sample size is needed to obtain the statistical power required to draw conclusions, which makes experiments difficult to conduct. The use of machine learning methods is also not trivial, as there are many variables in such a case, some of which are only disruptive noise. Secondly, "regular" games do not allow emotions to be evaluated frequently: the player is constantly engaged in the game, and interrupting it to complete a questionnaire reduces immersion.</p><p>These issues were observed in our previous experiments <ref type="bibr" target="#b10">[11]</ref>, including the BIRAFFE1 experiment <ref type="bibr" target="#b7">[8]</ref>. To address them, a set of mini-games with restricted experimental conditions was created. Each of them is built on a very limited set of stimuli, with the aim of evoking a limited set of emotional reactions. The following sections describe an experiment called BIRAFFE2 (see Section 3) in which three such games were used (see <ref type="bibr">Section 4</ref>).</p><p>The BIRAFFE2 experiment led to the observation of further issues that need to be addressed when conducting game-based experiments. In particular, attention was drawn to the fact that all mini-games should generate event logs in a uniform format, to avoid additional pre-processing steps when analysing the collected data. It is equally important to implement the questionnaires directly in the games, at the end of each mini-game: filling out the questionnaires only at the end of the gaming session makes the impressions fuzzy, and the self-description may not be accurate enough.</p><p>Therefore, in parallel with the BIRAFFE experiments, a dedicated framework was developed to automate the preparation of game-based affective experiments. It allows one to generate an experiment template with questionnaires between different levels and provides a database-based logging interface. A detailed description of the framework can be found in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">THE BIRAFFE2 EXPERIMENT</head><p>The BIRAFFE2 study included 103 participants (33% female) aged between 18 and 26 (M = 21.63, SD = 1.32), recruited among students of the Artificial Intelligence Basics course at AGH University of Science and Technology, Kraków, Poland, and their friends.</p><p>It is a revised version of a previous experiment called BIRAFFE1 (Bio-Reactions and Faces for Emotion-based Personalization) described in <ref type="bibr" target="#b7">[8]</ref>. The aim of the study was to collect physiological data paired with behavioral data, which can then be used to develop models for the prediction of emotions.</p><p>Behavioral data were twofold: from the part in which the subjects played three games (for details see Section 4) and from the classical experiment, in which sound and visual stimuli (from the IADS <ref type="bibr" target="#b1">[2]</ref> and IAPS <ref type="bibr" target="#b8">[9]</ref> datasets, respectively) were presented and subjects were asked to assess what emotions they evoked. Specifically, each stimulus was presented for 6 seconds, followed by 6 seconds for an affective rating with a custom widget spanning a 2-dimensional (valence-arousal) space. All behavioral data were collected as a set of logs in comma-separated values (CSV) files.</p><p>Physiological signals, electrocardiogram (ECG) and electrodermal activity (EDA), were gathered using the BITalino (r)evolution kit, as it is the most promising of the cheap mobile hardware platforms (for a comparison see <ref type="bibr" target="#b6">[7]</ref>). Besides ECG and EDA, the following signals were also collected during the experiment: accelerometer and gyroscope data from the gamepad, facial images taken by a webcam (every 250 milliseconds), and a screencast of the whole game session.</p><p>The whole protocol consisted of several phases:</p><p>1. NEO-FFI paper-and-pen questionnaire <ref type="bibr" target="#b14">[15]</ref> for personality measurement (approx. 10 minutes), 2. Physiological devices setup (approx. 2 minutes), 3. Baseline signals recording (1 minute), 4. Instructions and training (approx. 5 minutes), 5. First part of stimuli presentation and rating (17.5 minutes), 6. Games session (up to 15 minutes in total), 7. Second part of stimuli presentation and rating (17.5 minutes), 8. Three paper-and-pen GEQ questionnaires <ref type="bibr" target="#b4">[5]</ref> (one for each game) and a gaming experience questionnaire (approx. 10 minutes).</p><p>The whole protocol lasted up to 75 minutes. Steps 3-7 were done on a PC. All of them were controlled by Python 3.8 scripts using the PsychoPy 3.2.4 library <ref type="bibr" target="#b12">[13]</ref>. Participants interacted with the procedure using only a gamepad.</p></div>
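<div xmlns="http://www.tei-c.org/ns/1.0"><p>As a rough illustration of the presentation-and-rating timing described above, the following sketch shows a single trial implemented with PsychoPy; the stimulus file names, the CSV log layout and the omitted rating widget are illustrative assumptions, not the actual BIRAFFE2 code.</p><p>
# Sketch of one trial (6 s stimulus, then 6 s rating window), assuming PsychoPy.
# File names and the CSV columns are placeholders for illustration only.
import csv
from psychopy import visual, sound, core

win = visual.Window(fullscr=True, color="black")
log = csv.writer(open("behavioral_log.csv", "w", newline=""))
log.writerow(["timestamp", "event", "stimulus_id"])

def run_trial(image_path, sound_path, stim_id):
    picture = visual.ImageStim(win, image=image_path)
    clip = sound.Sound(sound_path)
    picture.draw()
    win.flip()
    clip.play()
    log.writerow([core.getAbsTime(), "stimulus_onset", stim_id])
    core.wait(6.0)   # 6-second stimulus presentation
    clip.stop()
    win.flip()       # clear the screen for the rating phase
    log.writerow([core.getAbsTime(), "rating_onset", stim_id])
    core.wait(6.0)   # 6-second window for the valence-arousal rating
                     # (the gamepad-driven rating widget is omitted here)

run_trial("iaps_sample.jpg", "iads_sample.wav", "sample_01")
win.close()
</p></div>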
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">EVOKING EMOTIONS WITH AFFECTIVE GAMES</head><p>In order to support the game sessions of the experiment, three specific affective mini-games were created <ref type="bibr" target="#b15">[16]</ref>. The aim of all the games was to evoke intense emotions in a short time. The main obstacle was the inability to create an intriguing story, so narrative elements were discarded entirely. The affective design therefore had to rely on a set of games with a variety of mechanics and audiovisuals.</p><p>The simplest way to create an emotionally varied environment was to manipulate the overall difficulty of the games. While a neutral, peaceful stage can relieve stress for the player, a loud and hard level can intensify rage and increase the heartbeat. Therefore, three genres were selected: roguelike, platformer, and maze. The first level is balanced to be an easy stage, supposed to develop energetic, happy emotions. On the contrary, the second level is extremely hard to beat and filled with traps, to give the sensation of injustice and fury. This juxtaposition, with its sudden change, is important to the study. The last phase is neutral, without any emotion-boosting elements; it exists to check the player's decision-making, behavior and bodily changes resulting from the previous irritation. Additionally, a collection of game patterns was implemented, and for every stage separate elements were chosen from that collection, depending on which emotions the stage should boost. Stage one contains elements such as score tracking, weapons, enemies and looting. The finishing condition is the elimination of all antagonists - no stress-inducing time limit was implemented. The difficulty of this level was balanced by setting the damage per second of the protagonist much higher than that of the antagonists. While the player can shoot up to 5 projectiles per second, enemies can shoot only one attack per second. Moreover, the speed of the player's projectiles is 2.5 times higher for the default weapon. An additional blaster was placed on the map, making it even easier for the user to eliminate enemies. Furthermore, in case the subject is not used to playing games, health points can be increased by picking up heart-shaped objects. To add fun and a sense of achievement, a score counter is incremented when the player picks up money bags from the floor or from a killed enemy (the number of bags dropped by an antagonist is random).</p><p>In the platform game (stage 2) traps and a time limit were implemented. Both are crucial for inducing stress and rage. To finish the game, the player has to get through the whole level. When the player dies, they respawn at the last checkpoint - a yellow flag with the letter 'C'; when touched, a happy, although very distorted, sound is played. There are two ways for the protagonist to lose: falling off the stage, or stepping into a spike trap. Considering that this level is supposed to be extremely hard to get through, two additional trap types were added to the basic blocks. The first type is an invisible block - before the protagonist collides with it, it cannot be seen by the user in any way. If the player dies after triggering its visibility, it is set back to invisible. A similar mechanic is used for the second trap type - falling blocks. 
Once a collision with the player happens, the blocks start to fall down. In the last game (stage 3), memorizing the way through a maze is the only task. Neither time nor score tracking is implemented. The visuals are very simple and no distracting elements were added. The choices made by the player are saved into logs, which are discussed later on.</p><p>The size and shape of the colliders were also adjusted to the game genre and the intended difficulty. For the first scene, the collider of the protagonist is smaller than its visible model. This removes the feeling of being hit before a projectile actually reaches the player. In contrast, in the second game the colliders are deliberately too big: the player can be hit by a trap before the model visually touches it. This decision was made to enhance the irritation and the feeling of injustice. For the last level, the colliders were adjusted so as not to hit the walls too often, so that movement is pleasant and smooth. Another intentional difference between stage two and the others is the protagonist's movement. It was designed similarly to the jumping mechanics, although it does not stop at a certain speed - the player's model is constantly accelerated. This is a deliberate example of poorly made mechanics that are very hard to control. To boost the affective part of the gameplay, sounds provided by the NIMH Center for the Study of Emotion and Attention <ref type="bibr" target="#b1">[2]</ref> were added to every stage. They have been shown to change the user's emotional state, and each sound is described by two values: the intensity of the feeling (arousal) and the pleasantness of the sound (valence). Depending on these two values, appropriate sounds were chosen and included in the games. Furthermore, music themes and in-game sounds were recorded; their design was driven by the expected emotions. The first game's theme consists of electronic/rock music, the sounds of picking up items are clear, and echo has been added to each sound. To keep the second level unbalanced and irritating, the time signature of the background music was disturbed - the last eighth note was erased. This gives an unsettling feeling, as if someone were playing off tempo. Additionally, each death increases the pitch and distortion effects of the background theme. The protagonist has a high-pitched voice, which gets more infuriating with every death. The sound of winning (which is hard to achieve, given the difficulty of the game) has a very disappointing and unsatisfying tone. The last level has a pleasant theme, edited to sound like old arcade, 8-bit music.</p><p>In order to get as much information as possible about the players' emotional state from a single gameplay session, in addition to their bodily signals, a proper context-gathering mechanism is required. It is implemented as a set of event logs that are saved for each stage. Some information is saved constantly, no matter the level: the current player position, and the ID and timestamp of the affective sound played in the background. For stage one, events such as killing an enemy, dying, the number of objects picked up, and the current state of health and points are saved with the proper timestamps. Additionally, the number of projectiles fired and their accuracy are recorded - this gives more insight into the player's aggressiveness and gaming experience. There are no enemies or pickable items in the second stage, so the distortion rate of the music, the number of deaths and the data about triggered traps are saved for every iteration. 
In the last stage, the number of dead ends encountered and data about leaving the correct path are saved.</p><p>All of the games were developed using the Unity Engine. It is a powerful environment with many capabilities. One of the many features used is the previously mentioned colliders. The engine provides a variety of collider shapes, components and traits. For instance, Box Colliders were used not only as physical objects, but also as triggers in rooms for the first stage, for logging in the third level, etc. Animations in all games were handled through the Animator Controller feature. Another notable example of the capabilities of the Unity Engine is the Camera - just a simple change in view can drastically change the </p></div>
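<div xmlns="http://www.tei-c.org/ns/1.0"><p>As a minimal sketch of how the per-stage event logs described above could be combined for analysis, the following Python fragment merges several CSV logs into one time-ordered table; the file names and column names are assumptions, not the actual log schema.</p><p>
# Sketch: merge per-stage CSV event logs into a single time-ordered table.
# File names and the "timestamp"/"event" columns are assumed for illustration.
import pandas as pd

stage_files = {
    "stage1": "stage1_events.csv",   # kills, deaths, pickups, score, health
    "stage2": "stage2_events.csv",   # deaths, triggered traps, music distortion
    "stage3": "stage3_events.csv",   # dead ends, off-path events
}

frames = []
for stage, path in stage_files.items():
    frame = pd.read_csv(path)
    frame["stage"] = stage
    frames.append(frame)

events = pd.concat(frames, ignore_index=True).sort_values("timestamp")

# Example: how many events of each type were logged in each stage.
summary = events.groupby(["stage", "event"]).size().unstack(fill_value=0)
print(summary)
</p></div>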
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">FRAMEWORK FOR GAME CONFIGURATION IN EXPERIMENTS</head><p>In order to get information from the subjects about their feelings towards the games, the GEQ questionnaire was used <ref type="bibr" target="#b4">[5]</ref>. This survey consists of three parts: the Core Questionnaire, the Social Presence Module and the Post-game Module. All of them collect important information about different aspects of the study. All of them contain questions about feelings, with possible answers ranging from 0 to 4: zero means 'not at all', one 'slightly', two 'moderately', three 'fairly' and four 'extremely'. The first part of the survey has 33 questions about the emotions and sensations felt during the game, for example: 'I was good at it' and 'I felt frustrated'. The Social Presence Module contains 17 questions; however, it should only be administered when some form of social interaction took place in the game, whether with another person or with a non-playable character. The last section contains 17 questions about the overall feelings of the subject after the game has been played, for instance: 'I felt satisfied' and 'I found it a waste of time'.</p><p>To make this questionnaire a part of the study, and also to provide a unified context-logging mechanism, a software framework has been written. It is responsible for starting all mini-games and presenting the survey after each game. To install the plugin, one needs to copy the .dll files and prefabs into the Unity project. After restarting the editor, a "Feedback" menu appears in the menu bar and a configuration file in the "/Resources" directory. In order to start using the plugin, one needs to create an SQL database with a table for each survey form to be included in the game. By default the plugin saves answers as integers, so each question should have a separate column of this type.</p><p>Everything is connected through the configuration file, in which the correct database provider and connection string must be set. After that, the model classes can be generated by choosing the "Generate model classes" option from the "Feedback" menu. Pressing the "Create survey form" button opens a wizard that allows one to choose different types of questions (radio buttons, slider or dropdown). The plugin then creates a new scene with questions based on the table in the database. After that, a prompt appears asking to manually attach the generated data-persistence script to an empty object in the scene. Finally, it is possible to change the text and position of the questions in the scene. After the project is built, a game is started and the survey pops up afterwards. When all questions are answered, the framework sends the data to the local SQL database. This framework has a high potential for further studies. Firstly, it makes it possible to create a multi-platform study, so that more computers could be used. Furthermore, a mobile version could be implemented, allowing even more subjects to take part in a study. Another possible future use is the adjustment of the level difficulty of every game depending on the survey answers. This would strengthen the affective part of the study, as the games would be personalized.</p></div>
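<div xmlns="http://www.tei-c.org/ns/1.0"><p>To illustrate the kind of storage the plugin expects (one integer column per question, answers on the 0-4 scale), the following sketch creates such a table with SQLite and stores one set of answers; the table name, column names and sample data are hypothetical, not part of the plugin's actual schema.</p><p>
# Sketch: a survey table with one integer column per question, as described above.
# Table name, column names and the sample data are assumptions.
import sqlite3

conn = sqlite3.connect("experiment.db")
cur = conn.cursor()

# The GEQ Core Questionnaire has 33 items: columns q01..q33 plus metadata.
question_cols = ", ".join("q%02d INTEGER" % i for i in range(1, 34))
cur.execute(
    "CREATE TABLE IF NOT EXISTS geq_core "
    "(participant_id TEXT, game TEXT, %s)" % question_cols
)

# Store one participant's answers (0-4 scale) for one mini-game.
answers = [2] * 33                                  # placeholder ratings
placeholders = ", ".join("?" for _ in range(35))    # 2 metadata + 33 answers
cur.execute("INSERT INTO geq_core VALUES (%s)" % placeholders,
            ["P001", "stage1", *answers])
conn.commit()
conn.close()
</p></div>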
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">RELATED WORK</head><p>There are quite a few frameworks for affective research. On the one hand, one can point to tools used to build classical psychological experiments, such as PsychoPy <ref type="bibr" target="#b12">[13]</ref>, OpenSesame <ref type="bibr" target="#b9">[10]</ref>, or E-Prime<ref type="foot" target="#foot_3">2</ref>, and on the other hand, tools used for affective experiments with games, e.g. FILTWAM <ref type="bibr" target="#b0">[1]</ref>, iHEARu-PLAY <ref type="bibr" target="#b3">[4]</ref>, or emoCook <ref type="bibr" target="#b2">[3]</ref>.</p><p>The tools in the first group offer various widgets for collecting information from users, making it possible to transfer virtually any paper questionnaire to an electronic version. However, they do not allow one to control a stimulus-rich game environment. This problem has been addressed by the second group of tools, where affective interaction is carried out in games (e.g. the educational game "emoCook"). Nevertheless, these solutions are prepared for specific applications and do not provide a general solution for affective experiments.</p><p>The framework described in this paper combines the advantages of these two groups. It both allows games to be used as a research environment and is a general solution, allowing for the inclusion of any games (written in Unity) and any questionnaires (the application is not limited to the GEQ described in this article).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">EVALUATION</head><p>The motivation for introducing a few short mini-games was better control over the emotions evoked during the experiment. The assumption was that each game aims to evoke specific emotions using a small number of stimuli. These assumptions were confirmed by the results of the GEQ questionnaire.</p><p>A revised list of GEQ factors <ref type="bibr" target="#b5">[6]</ref> <ref type="foot" target="#foot_4">3</ref> was used for the analysis. A series of one-way ANOVAs was conducted to evaluate the differences between the games. Post-hoc comparisons were done using the Tukey HSD test. The analysis was performed in Python with the scipy <ref type="foot" target="#foot_5">4</ref> and statsmodels <ref type="foot" target="#foot_6">5</ref> libraries.</p><p>The strongest effects can be observed for the second level, which was designed to give the sensation of injustice and fury. It was connected with significantly higher Negativity (M = 2.85), significantly lower Positive Affect (M = 1.14) and significantly lower Competence<ref type="foot" target="#foot_7">6</ref> (M = 0.86) than the two other stages (Negativity: M = 1.11 and M = 0.70, Positive Affect: M = 2.53 and M = 2.49, Competence: M = 2.42 and M = 2.78, for Stage 1 and Stage 3, respectively). Stage 1, designed as an easy stage connected with positive emotions, and Stage 3, designed as emotionally neutral, were both evaluated as the ones with the highest Positive Affect (there were no significant differences between them). The neutrality of the third level is reflected in the lowest Negativity (M = 0.70; significantly lower than in the first level, M = 1.11).</p><p>Flow, indicating whether or not players lost track of time in the game, was significantly lower in the third level (M = 1.35) than in the other two levels (M = 1.99 and M = 2.02), indicating that for this factor what matters most is that emotions are evoked at all, regardless of whether they are positive or negative. Finally, Immersion, the subjective connection to the game, was low for all levels (M = 1.75, M = 1.23, M = 1.66 for levels 1-3, respectively), which is also consistent with the assumptions: the games were too short for the players to become fully involved.</p><p>We tested the framework on several platforms, including Windows, Linux and mobile platforms running the Android operating system. It ran correctly on all of them<ref type="foot" target="#foot_8">7</ref>, proving its portability between the most popular operating systems. We also tested it with different databases, including remote MySQL databases and an SQLite database for Android, and in both cases it worked correctly. While the experiments presented in this paper did not use the framework, they will serve as a baseline for its future evaluation.</p></div>
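<div xmlns="http://www.tei-c.org/ns/1.0"><p>The per-factor comparison described above can be reproduced along the following lines with scipy and statsmodels; the data file and column names are assumptions made for illustration, not the actual analysis scripts.</p><p>
# Sketch: one-way ANOVA across the three games for one GEQ factor,
# followed by Tukey HSD post-hoc comparisons. Column names are assumed.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

geq = pd.read_csv("geq_factor_scores.csv")   # columns: participant, game, negativity, ...

groups = [geq.loc[geq["game"] == g, "negativity"]
          for g in ("stage1", "stage2", "stage3")]
f_stat, p_value = stats.f_oneway(*groups)
print("ANOVA for Negativity: F =", f_stat, ", p =", p_value)

# Pairwise post-hoc comparisons between the three games.
tukey = pairwise_tukeyhsd(endog=geq["negativity"], groups=geq["game"])
print(tukey.summary())
</p></div>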
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">FUTURE WORK AND SUMMARY</head><p>In this paper we presented our recent work conducted as a part of the BIRAFFE2 experiment in Affective Computing. As a novel part of the experiment, we developed three specially designed mini-games, based on carefully selected game mechanics. We believe that games provide a controllable yet ecologically valid environment for studying emotions. We used these games during the experiment as an important counterpart to the classical visual and auditory stimuli to evoke emotions in the participants. Moreover, we presented a software tool, with a built-in context-logging mechanism, supporting the execution, automation and evaluation of experiments of this kind.</p><p>In the future, we would like to develop our work in several directions. First of all, based on the analysis of the results of the experiment, we will continue the development of new games with improved mechanics to fine-tune the evocation of emotions. Ultimately, we expect games to help us in developing computer-based personalized models of emotions to be used in different applications. Furthermore, based on future findings, we would like to study the aspects of emotional adaptation and personalization in games using machine learning methods. Finally, our current setup is ready to be used not only in desktop games, but also on mobile devices. We will explore this direction, as mobile games not only constitute a very important market, but also offer new opportunities for interaction.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. Stage 1: an example screen of the game</figDesc><graphic coords="2,313.05,226.56,239.00,126.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .Figure 3 .Figure 4 .Figure 5 .</head><label>2345</label><figDesc>Figure 2. Stage 2: falling block before trigger Figure 3. Stage 2: falling block after trigger</figDesc><graphic coords="3,190.68,103.09,98.95,71.68" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6. Stage 3: protagonist colliders for different stages.</figDesc><graphic coords="3,225.95,549.69,64.53,73.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 7 .</head><label>7</label><figDesc>Figure 7. Illustration of how the data about a wrong path is saved</figDesc><graphic coords="4,65.61,66.89,239.00,99.22" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 8 .Figure 9 .</head><label>89</label><figDesc>Figure 8. 'Feedback' option available in menu</figDesc><graphic coords="4,313.05,66.89,239.00,50.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 10 .</head><label>10</label><figDesc>Figure 10. Survey example</figDesc><graphic coords="4,313.05,390.24,239.01,86.06" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Affective sounds used in study</figDesc><table><row><cell>Sound</cell><cell>Pleasure</cell><cell>Arousal</cell></row><row><cell>Puppy</cell><cell>2.88</cell><cell>4.91</cell></row><row><cell>Bees</cell><cell>2.16</cell><cell>7.03</cell></row><row><cell>Vomit</cell><cell>2.08</cell><cell>6.59</cell></row><row><cell>Babies cry</cell><cell>2.04</cell><cell>6.87</cell></row><row><cell>Baby cry</cell><cell>2.75</cell><cell>6.51</cell></row><row><cell>Scream</cell><cell>2.05</cell><cell>8.16</cell></row><row><cell>Child abuse</cell><cell>1.57</cell><cell>7.27</cell></row><row><cell>Applause</cell><cell>7.32</cell><cell>5.55</cell></row><row><cell>Rollercoaster</cell><cell>6.94</cell><cell>7.54</cell></row><row><cell>Colonial music</cell><cell>6.53</cell><cell>5.84</cell></row><row><cell>Bugle</cell><cell>6.32</cell><cell>6.35</cell></row><row><cell>Rock n roll</cell><cell>7.90</cell><cell>6.85</cell></row><row><cell>Funk music</cell><cell>6.94</cell><cell>5.87</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Jagiellonian University, Poland, email: krzysztof.kutt@uj.edu.pl, szymon.bobek@uj.edu.pl, grzegorz.j.nalepa@uj.edu.pl</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_1">Eleventh International Workshop Modelling and Reasoning in Context (MRC) @ECAI 2020</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_2">Copyright c 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_3">See: https://pstnet.com/products/e-prime/.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_4">The values range from 0 (not at all) to 4 (extremely) for each factor.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_5">See: https://www.scipy.org/.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_6">See: https://www.statsmodels.org/.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_7">Competence reflects how well players judged their own performance against the game's goals.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_8">The only requirement is to use Unity build 2019.2.19f1.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">FILTWAM -A framework for online affective computing in serious games</title>
		<author>
			<persName><forename type="first">Kiavash</forename><surname>Bahreini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rob</forename><surname>Nadolski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wim</forename><surname>Westera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Fourth International Conference on Games and Virtual Worlds for Serious Applications, VS-GAMES 2012</title>
		<title level="s">Procedia Computer Science</title>
		<editor>
			<persName><forename type="first">Alessandro</forename><surname>De</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Gloria</forename></persName>
		</editor>
		<editor>
			<persName><forename type="first">Sara</forename><surname>De Freitas</surname></persName>
		</editor>
		<meeting><address><addrLine>Genoa, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2012">October 29-31, 2012. 2012</date>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="45" to="52" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">Margaret</forename><forename type="middle">M</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><forename type="middle">J</forename><surname>Lang</surname></persName>
		</author>
		<idno>B-3</idno>
		<title level="m">The International Affective Digitized Sounds (IADS-2): Affective ratings of sounds and instruction manual</title>
				<meeting><address><addrLine>Gainsville, FL</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
		<respStmt>
			<orgName>University of Florida</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Multimodal affective computing to enhance the user experience of educational software applications</title>
		<author>
			<persName><forename type="first">Jose</forename><forename type="middle">Maria</forename><surname>Garcia-Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Victor</forename><forename type="middle">M R</forename><surname>Penichet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maria</forename><forename type="middle">Dolores</forename><surname>Lozano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Juan</forename><forename type="middle">Enrique</forename><surname>Garrido</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Effie</forename><forename type="middle">Lai-Chong</forename><surname>Law</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mobile Information Systems</title>
		<imprint>
			<biblScope unit="volume">8751426</biblScope>
			<biblScope unit="page">10</biblScope>
			<date type="published" when="2018">2018. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">iHEARu-PLAY: Introducing a game for crowdsourced data collection for affective computing</title>
		<author>
			<persName><forename type="first">Simone</forename><surname>Hantke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Florian</forename><surname>Eyben</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tobias</forename><surname>Appel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Björn</forename><forename type="middle">W</forename><surname>Schuller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015</title>
				<meeting><address><addrLine>Xi&apos;an, China</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2015">September 21-24, 2015. 2015</date>
			<biblScope unit="page" from="891" to="897" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">Wijnand</forename><forename type="middle">A</forename><surname>Ijsselsteijn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yvonne</forename><forename type="middle">A W</forename><surname>De Kort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Karolien</forename><surname>Poels</surname></persName>
		</author>
		<title level="m">The Game Experience Questionnaire</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>Technische Universiteit Eindhoven</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Validation of two game experience scales: The player experience of need satisfaction (PENS) and game experience questionnaire (GEQ)</title>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">John</forename><surname>Gardner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ryan</forename><surname>Perry</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Hum. Comput. Stud</title>
		<imprint>
			<biblScope unit="volume">118</biblScope>
			<biblScope unit="page" from="38" to="46" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Towards the development of sensor platform for processing physiological data from wearable sensors</title>
		<author>
			<persName><forename type="first">Krzysztof</forename><surname>Kutt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wojciech</forename><surname>Binek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Piotr</forename><surname>Misiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Grzegorz</forename><forename type="middle">J</forename><surname>Nalepa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Szymon</forename><surname>Bobek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence and Soft Computing -17th International Conference, ICAISC Eleventh International Workshop Modelling and Reasoning in Context (MRC) @ECAI 2020 2018</title>
				<meeting><address><addrLine>Zakopane, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">June 3-7, 2018. 2018</date>
			<biblScope unit="page" from="168" to="178" />
		</imprint>
	</monogr>
	<note>Proceedings, Part II</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">BIRAFFE: Bio-reactions and faces for emotion-based personalization</title>
		<author>
			<persName><forename type="first">Krzysztof</forename><surname>Kutt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dominika</forename><surname>Drążyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paweł</forename><surname>Jemioło</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Szymon</forename><surname>Bobek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Barbara</forename><surname>Giżycka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Víctor</forename><surname>Rodríguez-Fernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Grzegorz</forename><forename type="middle">J</forename><surname>Nalepa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AfCAI 2019: Workshop on Affective Computing and Context Awareness in Ambient Intelligence</title>
		<title level="s">CEUR Workshop Proceedings. CEUR-WS</title>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2609</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual</title>
		<author>
			<persName><forename type="first">Peter</forename><forename type="middle">J</forename><surname>Lang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Margaret</forename><forename type="middle">M</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">N</forename><surname>Cuthbert</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<pubPlace>Gainsville, FL</pubPlace>
		</imprint>
		<respStmt>
			<orgName>The Center for Research in Psychophysiology, University of Florida</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">OpenSesame: An open-source, graphical experiment builder for the social sciences</title>
		<author>
			<persName><forename type="first">Sebastiaan</forename><surname>Mathôt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Schreij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jan</forename><surname>Theeuwes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavior Research Methods</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="314" to="324" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Mobile platform for affective context-aware systems</title>
		<author>
			<persName><forename type="first">Grzegorz</forename><forename type="middle">J</forename><surname>Nalepa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krzysztof</forename><surname>Kutt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Szymon</forename><surname>Bobek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Future Generation Computer Systems</title>
		<imprint>
			<biblScope unit="volume">92</biblScope>
			<biblScope unit="page" from="490" to="503" />
			<date type="published" when="2019-03">mar 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Analysis and use of the emotional context with wearable devices for games and intelligent assistants</title>
		<author>
			<persName><forename type="first">Grzegorz</forename><forename type="middle">J</forename><surname>Nalepa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krzysztof</forename><surname>Kutt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Barbara</forename><surname>Giżycka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paweł</forename><surname>Jemioło</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Szymon</forename><surname>Bobek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page">2509</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">PsychoPy2: Experiments in behavior made easy</title>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Peirce</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeremy</forename><forename type="middle">R</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sol</forename><surname>Simpson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Macaskill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Höchenberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hiroyuki</forename><surname>Sogo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Erik</forename><surname>Kastman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonas</forename><forename type="middle">Kristoffer</forename><surname>Lindeløv</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavior Research Methods</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="195" to="203" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Affective Computing</title>
		<author>
			<persName><forename type="first">Rosalind</forename><forename type="middle">W</forename><surname>Picard</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, MA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">Bogdan</forename><surname>Zawadzki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jan</forename><surname>Strelau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Piotr</forename><surname>Szczepaniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Magdalena</forename><surname>Śliwińska</surname></persName>
		</author>
		<title level="m">Polska adaptacja, Pracownia Testów Psychologicznych</title>
				<meeting><address><addrLine>Warszawa</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
	<note>Inwentarz osobowości NEO-FFI Costy i McCrae</note>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Game Design with Unity for Affective Games</title>
		<author>
			<persName><forename type="first">Laura</forename><surname>Żuchowska</surname></persName>
		</author>
		<editor>G.J. Nalepa</editor>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
		<respStmt>
			<orgName>AGH University of Science and Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Supervisor</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
