BIRAFFE: Bio-Reactions and Faces for Emotion-based Personalization

Krzysztof Kutt¹, Dominika Drążyk², Paweł Jemioło¹, Szymon Bobek¹, Barbara Giżycka¹, Victor Rodriguez-Fernandez³, and Grzegorz J. Nalepa²,¹

¹ AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow, Poland
{kkutt,pjm,sbobek,bgizycka,gjn}@agh.edu.pl
² Jagiellonian University, ul. Gołębia 24, 31-007 Kraków, Poland
dominika.drazyk@student.uj.edu.pl
³ Universidad Autónoma de Madrid (UAM), 28049 Madrid, Spain
victor.rodriguezf@uam.es

Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Keywords: affective computing, personality, mobile devices, games, benchmark

Abstract. In this paper we introduce the BIRAFFE data set, which is the result of an affective computing experiment we conducted in early 2019. The experiment is part of our work on the development of computer models for emotion classification and recognition. We strongly believe that such models should be personalized by design, as emotional responses of different persons are subject to individual differences related to their personality. In the experiment we assumed data fusion of audio and visual stimuli, both taken from the standard public databases IADS and IAPS, respectively. Moreover, we combined two paradigms. In the first one, subjects were exposed to stimuli while their bodily reactions (ECG, GSR, and facial expression) were recorded. In the second one, the subjects played basic computer games, with the same reactions constantly recorded. We decided to make the data set publicly available to the research community on the Zenodo platform. As such, the data set contributes to the development and replication of experiments in AfC.

1 Introduction

Affective Computing (AfC) [17] is an interdisciplinary field of study regarding human emotions. An important thread in AfC is emotion recognition, which requires proper understanding and modeling of this complex phenomenon [3,6]. To build computer models for recognition, a proper experimental setup has to be provided. It requires conditions where human subjects are exposed to specific emotion-evoking stimuli, and their reactions are measured in some way. In our work we assume the so-called James-Lange approach to emotion modeling, which roughly states that the measurement of bodily reactions to stimuli can serve as a proper foundation for emotion recognition. Moreover, we assume a representation of affective data that is common in many experiments in psychology and human-computer interaction, i.e. the two-dimensional Valence/Arousal space. Finally, for the sake of replication and wider experimentation, reference data sets are published and used in such experiments; examples include DEAP (see http://www.eecs.qmul.ac.uk/mmv/datasets/deap), DECAF (see http://mhug.disi.unitn.it/wp-content/DECAF) and others.

This work is in fact the continuation of a longer effort in building AfC models, previously presented at the AfCAI workshop, as well as later in [14]. In our work we assume the use of wearable devices, as well as other sensors that are possibly non-intrusive to the subjects. The data processing should also, as far as possible, be conducted with the devices that users carry with them, e.g. mobile phones. In our work we aim at improving the user experience with mobile devices (games, cognitive assistants, etc.) through the so-called affective feedback loop between the user and the affective computer system.
Moreover, we are using games not only as one of the future application areas, but more importantly as a complex yet highly controllable experimental environment. Our recent results in this regard are summarized in [15].

This paper presents the results of an experiment we conducted in early 2019, as well as the resulting data set, which we named BIRAFFE: Bio-Reactions and Faces for Emotion-based Personalization. We decided to make the data set publicly available to the AfC research community on the Zenodo platform (see https://dx.doi.org/10.5281/zenodo.3442143). In Sect. 2 we introduce our methodology. In the experiment we assumed data fusion of audio and visual stimuli, both taken from standard public databases. Moreover, we combined two paradigms. In the first one, subjects are exposed to stimuli while their bodily reactions (ECG, GSR (Galvanic Skin Response), and facial expression) are recorded. In the second one, the subjects play basic computer games, with the same reactions constantly recorded. In Sect. 3 we describe the structure of the resulting data set. The paper is concluded in Sect. 4.

2 Methodology

2.1 Outline

206 participants (31% female) aged between 19 and 33 (M = 22.02, SD = 1.96) took part in the study (these statistics were calculated for the 183 subjects for whom information about age and sex is included in the final dataset). They were students of the Artificial Intelligence Basics course at AGH University of Science and Technology, Krakow, Poland, as well as their friends or family members. Participation was not an obligatory part of the course, but bonus points were awarded for it.

The whole experiment lasted up to 90 minutes and consisted of several phases:

1. The subject is welcomed and a participant consent is signed.
2. The subject fills out the paper-and-pen Polish adaptation [19] of the NEO-FFI inventory [7], used to measure the Big Five personality traits.
3. The measuring devices, headphones and a gamepad are set up.
4. Baseline signals recording (1 minute).
5. Stimuli presentation and rating with one widget (16 minutes).
6. Affective SpaceShooter 2 game (10 minutes).
7. Stimuli presentation and rating with the second widget (16 minutes).
8. Freud me out 2 game (20 minutes).
9. The equipment is switched off and the subject's questions are answered.

2.2 Hardware

Two research stands were prepared in the examination room and the subjects sat with their backs to each other. Each stand consisted of three elements:

- A PC controlled by 64-bit Windows 7 Professional, with a 23-inch Full HD LCD screen, Beyerdynamic DT-770 Pro 32 Ohm headphones, an external Creative Live! Cam Sync HD 720p web camera and a Sony PlayStation DualShock 4 gamepad, used to display the procedure and the games. Keyboard and mouse were used only by the researcher to start the protocol.
- A BITalino (r)evolution kit platform (see https://bitalino.com/en/) used to obtain the ECG and GSR signals. The electrocardiogram was obtained by a variation on the classical 3-lead montage, with electrodes placed in the suprasternal notch (V−), under the last rib on the left side of the body (V+) and on the pelvic iliac crest (reference) [1,8]. The GSR signal was gathered by 2 leads placed in the classical palmar location, i.e. on the thenar eminences on the volar surface of the left hand [2,9]. Both signals were sampled at a 1000 Hz sampling rate.
- A laptop used to save the biosignals transmitted via the Bluetooth interface from the BITalino platform. It was decided to save the data on a second computer in order not to affect the performance of the PC running the procedure.

Both computers were synchronized with the time.windows.com server at the beginning of each examination day to ensure proper timestamps.

The whole protocol (consisting of phases 4-8, see Sect. 2.1) ran under Python 3.6 and was written with the PsychoPy 3.0.6 library [16]. Both games (phases 6 and 8) were developed in Unity (see Sect. 2.4), but their start and end were managed automatically by the Python code. Participants were instructed to navigate the whole protocol with the gamepad. The ECG and GSR signals are collected continuously during the whole experiment. Facial photos are taken every 333 milliseconds (every 20 frames of the stimuli presentation, which runs at 60 fps) while a stimulus is displayed (phases 5 and 7), and every 1 second during the games (phases 6 and 8).
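The stimulus-presentation core of the protocol can be illustrated with a minimal PsychoPy sketch. This is not the actual protocol code; the file names are hypothetical placeholders for an IAPS picture and an IADS sound, and the timing values (6 s presentation, 9 s rating, 1 s inter-trial interval) are those given in Sect. 2.3:

    from psychopy import core, sound, visual

    # Minimal single-trial sketch, assuming PsychoPy as used for the protocol;
    # the stimulus file names below are hypothetical placeholders.
    win = visual.Window(fullscr=True, color="black", units="pix")
    picture = visual.ImageStim(win, image="iaps_example.jpg")
    audio = sound.Sound("iads_example.wav")

    picture.draw()
    win.flip()            # picture appears on the screen
    audio.play()          # paired sound starts at the same moment
    core.wait(6.0)        # 6-second stimulus presentation (see Sect. 2.3)
    win.flip()            # clear the screen
    audio.stop()

    # ...the rating widget would be displayed here for up to 9 seconds...
    core.wait(1.0)        # 1-second inter-trial interval

    win.close()
    core.quit()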
2.3 Emotion-evoking stimuli

Standardized emotionally evocative images and sounds from the IAPS [12] and IADS [4] datasets were used as stimuli. All of their elements have ratings in the Valence-Arousal space. These ratings were used to divide the stimuli into three groups for the purpose of the experiment: + (positive valence and high arousal), 0 (neutral valence and medium arousal), − (negative valence and high arousal). Afterwards, sounds and pictures were paired in two ways (60 pairs for each condition). The first, consistent condition involved pairs of matching types (20 pairs for each type): a + picture was paired with a + sound (p+s+), a 0 picture was paired with a 0 sound (p0s0), and a − picture was paired with a − sound (p−s−). The second, inconsistent condition was composed of two types (30 pairs for each type): a + picture matched with a − sound (p+s−), and a − picture matched with a + sound (p−s+). The same set of 120 stimuli pairs was used for all participants. Consistent pairs were mixed with inconsistent ones within both stimuli sessions (phases 5 and 7).

Both sessions started with the instruction and four training stimuli pairs. Then the regular stimuli pairs were presented. Each presentation lasts 6 seconds (each sound in the IADS set lasts 6 seconds) and is followed by 9 seconds for the affective rating. Subjects are instructed to focus on their first impression. Trials are separated by a 1-second interval.

The only thing that distinguishes the two stimuli sessions is the widget used for the affective rating. Two widgets were prepared:

- Valence-arousal faces widget (emospace; see Fig. 1a) gives the user the possibility to rate emotions in the 2D Valence-Arousal space (see [18] for the original widget). As a hint, we also placed 8 emoticons from the AffectButton [5] (specifically, we used the EmojiButton, which is less complicated graphically [13]). Ratings are transformed into continuous values in the range [-1,1] on both axes.
- 5-faces widget (emoscale; see Fig. 1b) consists of 5 emoticons and was introduced to provide a simple and intuitive method of emotion assessment. Ratings are saved as numbers in the set {1,2,3,4,5}.

[Fig. 1: Widgets used in the study (pictures presented with a negative filter): (a) Valence-arousal faces widget, shown as in the study (in Polish): the X axis has the labels negative, neutral and positive, while the Y axis has the labels high arousal and low arousal; (b) 5-faces widget.]

Both widgets were controlled using the left joystick of the gamepad. They were randomly assigned to the stimuli sessions (phases 5 and 7) for each participant.
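The paper does not report the exact valence/arousal cut-offs used to form the +, 0 and − groups, so the thresholds in the following sketch are purely illustrative; it only shows how normative IAPS/IADS ratings (on 1-9 scales) could be binned in the way described above:

    import pandas as pd

    # Illustrative only: the cut-offs below are NOT taken from the paper.
    def group(valence, arousal):
        if arousal >= 6.0 and valence >= 6.0:
            return "+"    # positive valence, high arousal
        if arousal >= 6.0 and valence <= 4.0:
            return "-"    # negative valence, high arousal
        if 4.0 <= valence <= 6.0 and 3.0 <= arousal <= 6.0:
            return "0"    # neutral valence, medium arousal
        return None       # not used in the experiment

    # Hypothetical normative-rating tables obtained from CSEA (see Sect. 3.3);
    # columns are assumed to be: ID, valence, arousal.
    iaps = pd.read_csv("IAPS_norms.csv")
    iads = pd.read_csv("IADS_norms.csv")
    iaps["group"] = [group(v, a) for v, a in zip(iaps["valence"], iaps["arousal"])]
    iads["group"] = [group(v, a) for v, a in zip(iads["valence"], iads["arousal"])]

    # Consistent pairs combine a picture and a sound from the same group
    # (p+s+, p0s0, p-s-); inconsistent pairs cross the + and - groups.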
2.4 Games

Modified versions of two prototype affective games designed and developed by our team were used during the study (for a comprehensive overview of the previous versions see [10]):

- Affective SpaceShooter 2: the player controls a spaceship in order to bring down as many asteroids as possible. Asteroids are spawned in series of 10. In every series there are nine gray asteroids (non-affective), worth 10 points each, and one coloured (affective) asteroid, worth 50 points. The affective asteroids are connected to one of the four stimuli conditions described in Sect. 2.3: p+s+, p−s+, p+s−, p−s−. When such an asteroid is destroyed, the corresponding stimuli pair is presented (see Fig. 2a).
- Freud me out 2: the player has to fight different enemies (worth 10-26 points) on three levels in order to face the boss on the fourth level. Players can either use the gun to fight enemies one by one, or use the SuperPower, a special ability that allows them to destroy multiple opponents at once. The number of enemies was limited to 30 in the first two weeks of the experiment and reduced to 12 in the next three weeks. Players can additionally collect stars, receiving 4 points for each of them (see Fig. 2b).

[Fig. 2: Games used in the study: (a) Affective SpaceShooter 2 gameplay; (b) Freud me out 2 gameplay.]

3 The BIRAFFE Dataset

The BIRAFFE dataset is available for download at Zenodo (see https://dx.doi.org/10.5281/zenodo.3442143). It consists of a metadata file and six archives related to different elements of the experiment:

- BIRAFFE-metadata.csv contains a short summary of each participant: age, sex, key timestamps (procedure start/end), personality profile and information about the subsets available for a given person (whether there is a BioSigs, Freud, ... file available for the person),
- BIRAFFE-biosigs.zip contains the biosignals (ECG and GSR),
- BIRAFFE-procedure.zip contains a log of all the stimuli presented to a given user (timestamp of the stimuli presentation, condition, widget, stimuli ID, ...),
- BIRAFFE-freud.zip contains logs from the Freud me out 2 game,
- BIRAFFE-space.zip contains logs from the Affective SpaceShooter 2 game,
- BIRAFFE-photo.zip contains the face emotion descriptions calculated by the MS Face API,
- BIRAFFE-photo-full.zip contains all the information available in BIRAFFE-photo.zip, but also other face-related values recognized by the MS Face API, e.g. recognized age, whether the person wears glasses, what the hair colour is, ...

All files contain Unix timestamps which can be used for synchronization between the different subsets. A detailed low-level specification of all values is provided in Sect. 3.1-3.7.

The whole BIRAFFE dataset consists of data gathered from 201 out of the 206 participants (due to technical errors, no data was saved for 5 subjects). Unfortunately, for some participants some of the data were not properly collected (see Tab. 1), e.g. because an application crashed, the Bluetooth signal was lost, or the electrode contact was poor. The complete set of data is available for 141 subjects. We have also published the incomplete records, as many analyses will use only selected subsets, in which case the missing parts are not a problem. Missing values in all files are represented by NaN.

Table 1: Size of subsets collected (out of 201 participants)

    NEO-FFI  BioSigs  Procedure  Space  Freud  Photos
    153      171      186        193    191    192
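As a minimal sketch of working with the metadata file, the per-subset counts in Tab. 1 could be reproduced as follows (assuming pandas; the column names and the Y/NaN convention are those defined in Sect. 3.1, and a semicolon separator is assumed from the field listings given there):

    import pandas as pd

    meta = pd.read_csv("BIRAFFE-metadata.csv", sep=";")

    subsets = ["NEO-FFI", "BIOSIGS", "PROCEDURE", "SPACE", "FREUD", "PHOTOS"]
    counts = (meta[subsets] == "Y").sum()                  # subjects per subset
    complete = (meta[subsets] == "Y").all(axis=1).sum()    # subjects with all subsets

    print(counts)
    print("Complete records:", complete)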
3.1 BIRAFFE-metadata.csv

Each line of this file represents one subject and consists of the following values:

ID;AGE;SEX;PROCEDURE-BEGIN-TIMESTAMP;PROCEDURE-END-TIMESTAMP;BIOSIGS-BEGIN-TIMESTAMP;BIOSIGS-END-TIMESTAMP;OPENNESS;CONSCIENTIOUSNESS;EXTRAVERSION;AGREEABLENESS;NEUROTICISM;NEO-FFI;BIOSIGS;PROCEDURE;SPACE;FREUD;PHOTOS

where (not all values are described in detail, as some of them are self-explanatory):

- ID – a randomly assigned subject ID from the range {1000,...,9999}. It is used to identify all subject-related files: filenames are created according to the format SUBxxxx-yyyy, where xxxx is the ID and yyyy is the data type identifier (e.g. BioSigs, Freud),
- PROCEDURE-BEGIN-TIMESTAMP – timestamp of the procedure log file creation,
- PROCEDURE-END-TIMESTAMP – last timestamp of the Freud me out 2 log (procedure begin and end are understood here as the whole protocol, not only the part related to the stimuli pairs presentation),
- BIOSIGS-BEGIN-TIMESTAMP;BIOSIGS-END-TIMESTAMP – first and last timestamps of the BioSigs recording,
- OPENNESS;CONSCIENTIOUSNESS;EXTRAVERSION;AGREEABLENESS;NEUROTICISM – the five personality traits calculated from the raw NEO-FFI results; the values are sten scores, i.e. the possible values are in the set {1,2,3,...,10} and correspond to a standard normal distribution with M = 5.5 and SD = 2. For further analyses they can be transformed to low (1-3), medium (4-6) and high (7-10) trait levels,
- NEO-FFI;BIOSIGS;PROCEDURE;SPACE;FREUD;PHOTOS – information about the subsets available for a given person, i.e. whether there is a BioSigs file, Freud file, etc. available for that person (Y or NaN). The NEO-FFI column only indicates whether a personality profile is given in the previous columns or not.

3.2 BIRAFFE-biosigs.zip

Each SUBxxxx-BioSigs.csv file represents one subject and consists of one line per sensor reading. The values were recorded at a 1 kHz sampling rate. The fields contained in each line are:

TIMESTAMP;ECG;GSR

where:

- ECG – signal (units: mV) gathered by BITalino, after band-pass filtering (cut-off frequencies: [0.5 Hz, 20 Hz], order: 2),
- GSR – signal (units: µS) gathered by BITalino, after median filtering (window: 100).

It is important to note that the GSR signal is sensitive to changing pressure on the gamepad: the higher the pressure, the greater the amplitude of the GSR signal. For that reason, in the parts of the procedure where the subject is using the gamepad (when rating emotions and playing the games), it may not be possible to distinguish the real signal from changes caused by gamepad pressure. However, one should be able to use the signal in the first part of each trial, during the first 6 seconds after the presentation of each stimulus, since the subject is not using the gamepad in those intervals.
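Following this observation, a minimal sketch of extracting the 6-second post-stimulus GSR windows for one subject is given below. It assumes pandas, the column layouts given in this section and in Sect. 3.3, Unix timestamps expressed in seconds, and a hypothetical subject ID:

    import pandas as pd

    # Assumed file names for one example subject (SUBxxxx convention, Sect. 3.1).
    biosigs = pd.read_csv("SUB1234-BioSigs.csv", sep=";")      # TIMESTAMP;ECG;GSR
    procedure = pd.read_csv("SUB1234-Procedure.csv", sep=";")  # TIMESTAMP;COND;...

    windows = []
    for _, trial in procedure.iterrows():
        t0 = trial["TIMESTAMP"]                # stimulus onset (Unix timestamp)
        mask = (biosigs["TIMESTAMP"] >= t0) & (biosigs["TIMESTAMP"] < t0 + 6.0)
        gsr = biosigs.loc[mask, "GSR"]
        if not gsr.empty:
            # Simple peak-minus-onset amplitude within the 6-second window.
            windows.append({"COND-DETAILS": trial["COND-DETAILS"],
                            "gsr_response": gsr.max() - gsr.iloc[0]})

    responses = pd.DataFrame(windows)
    print(responses.groupby("COND-DETAILS")["gsr_response"].describe())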
3.3 BIRAFFE-procedure.zip

Each SUBxxxx-Procedure.csv file represents one subject and consists of one line per stimuli presentation. The fields contained in each line are:

TIMESTAMP;COND;COND-DETAILS;IADS-ID;IAPS-ID;WIDGET-TYPE;ANS;ANS-TIME

where:

- TIMESTAMP – Unix timestamp of the moment when the stimuli appeared on the screen,
- COND – general condition: consistent (con) vs inconsistent (inc),
- COND-DETAILS – specific condition (p0s0, p+s+, p−s+, p+s−, p−s−),
- IADS-ID;IAPS-ID – IADS/IAPS IDs of the stimuli. Both the IADS and IAPS datasets provide Valence/Arousal scores for each stimulus that can be used for further analyses (these values describe the emotions that were evoked by the stimuli). Contact the CSEA at the University of Florida to obtain your own copy of the datasets for research (see https://csea.phhp.ufl.edu/media.html),
- WIDGET-TYPE – emoscale or emospace,
- ANS – one value from the set {1,2,3,4,5} for emoscale, or two values (the first for valence, the second for arousal) in the [−1, 1] range for emospace,
- ANS-TIME – response time (0 is the moment when the widget appeared on the screen); NaN indicates that the subject did not make any choice but left the default option.
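Because ANS holds either a single emoscale value or a pair of emospace values depending on WIDGET-TYPE, it may need per-row parsing. The sketch below assumes that the two emospace numbers are stored in a single field separated by whitespace, a comma or brackets; the exact in-file formatting is not specified here and should be verified against the data:

    import pandas as pd

    def parse_ans(row):
        """Split the ANS field into separate emoscale / emospace columns."""
        raw = str(row["ANS"])
        if row["WIDGET-TYPE"] == "emoscale":
            # Single rating in {1,...,5}.
            return pd.Series({"SCALE": float(raw), "VALENCE": None, "AROUSAL": None})
        # emospace: two numbers in [-1, 1]; the separator is an assumption here.
        cleaned = raw.replace("[", " ").replace("]", " ").replace(",", " ")
        val, aro = map(float, cleaned.split())
        return pd.Series({"SCALE": None, "VALENCE": val, "AROUSAL": aro})

    procedure = pd.read_csv("SUB1234-Procedure.csv", sep=";")  # hypothetical subject
    procedure = procedure.join(procedure.apply(parse_ans, axis=1))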
3.4 BIRAFFE-freud.zip

Each SUBxxxx-Freud.json file represents one subject and consists of one collection of name/value pairs per game event. Each game event contains several attributes, such as the timestamp, the type of object, the type of event and some details. An example of the contents of a game event is shown below:

    {
        "Timestamp": Double,
        Object: Event,
        Detail1: Value1,
        Detail2: Value2
    }

Below is the full list of logged object types and related events (alphabetically; the Timestamp is always present, so for the sake of simplicity it is not shown here):

- Alert:

    {
        "Alert": "Show",
        "Time": Int,        # How long the alert was shown
        "Content": String
    }

- Enemy:

    {
        "Enemy": "Death",
        "ID": Int,
        "By": "SuperPower"/"Gun"/"LevelEnd",
        "PositionX": Float, "PositionY": Float, "PositionZ": Float,
        "RotationX": Float, "RotationY": Float, "RotationZ": Float,
        "RotationW": Float
    },
    {
        "Enemy": "Spawn",
        "ID": Int,
        "SpawnPoint": String,   # Name of the spawn point (randomly
                                # placed when the game starts)
        "Type": Int,
        "Mechanic": "RegularSpawn"/"AdditionalMediumSpawn"/"AdditionalHardSpawn"/"mrNightmareSpawn"
    },
    {
        "Enemy": "Health",
        "ID": Int,
        "DecreaseTo": Int
    }

  The "Type" and "Mechanic" combinations need further explanation. There are four regular levels and the final boss level. On the first three levels the player has to face different monsters (for the storyline, see https://afcai.re/pub:prototypes): ZomBunnies on the 1st level, ZomBears on the 2nd level and Hellephants on the 3rd level. On these levels all mechanics except the last one are used. "Type": -1 indicates a simple monster spawned with "RegularSpawn". Other types are spawned with "Additional[...]Spawn" (in Freud me out 2 there is no difference between the "AdditionalMediumSpawn" and "AdditionalHardSpawn" mechanics; this is a legacy of the first version of the game) and indicate an easy (0), medium (1) or hard (2) enemy. On the fourth level, "Additional[...]Spawn" always generates a hard enemy and the type indicates whether it is a ZomBunny (0), a ZomBear (1) or a Hellephant (2). During the final boss level, the "mrNightmareSpawn" mechanic is used and "Type" indicates both the strength and the kind of the enemy: 0 is an easy Hellephant, 1 an easy ZomBear, 2 an easy ZomBunny, 3-5 are medium enemies (in the same order as the easy ones), and 6-8 are hard enemies.

- Game:

    {
        "Game": "Over",
        "Animation": "PlayerDeath"
    },
    {
        "Game": "Introduction"/"TextLevel"/"Over"/"End"/"ScoreBoard",
        "CountDown": "Text",
        "ChangeTo": String      # Text displayed on the screen
    }

- Joystick – events are logged only during the game:

    {
        "Joystick": "Left"/"Right",     # Left used for movement,
                                        # Right used for rotation
        "Horizontal": Float,            # [-1,1]
        "Vertical": Float               # [-1,1]
    }

- Key – events are logged only when the user is allowed to press the key. Only keys useful in the game are logged:

    {
        "Key": "X"/"O"/"R2"     # X/O for language selection (PL vs EN),
                                # R2 for shooting
    },
    {
        "Key": "L2",            # used for SuperPower
        "Time": Float           # How long it was pressed
                                # (longer press = more SuperPower)
    }

- Pickup:

    {
        "Pickup": "Collected"/"Destroyed",
        "ID": Int
    },
    {
        "Pickup": "Spawn",
        "ID": Int,
        "SpawnPoint": String    # Name of the spawn point (randomly
                                # placed when the game starts)
    }

- Player:

    {
        "Player": "Range"/"Health"/"TimeBetweenTwoBullets"/"DamagePerShot"/"SuperPower",
        "IncreaseTo"/"DecreaseTo"/"StartingValue": Int/Float
    },
    {
        "Player": "Health",
        "DecreaseTo": Int,
        "By": "Enemy",
        "ID": Int               # ID of the Enemy
    },
    {
        "Player": "SuperPower",
        "Time"/"Killed"/"Alpha": Int/Float      # Alpha is the intensity
                                                # of the visual flash
    },
    {
        "Player": "Death",
        "PositionX": Float, "PositionY": Float, "PositionZ": Float,
        "RotationX": Float, "RotationY": Float, "RotationZ": Float,
        "RotationW": Float
    }

- Scene:

    {
        "Scene": "Load",
        "ID": Int
    }

- Score:

    {
        "Score": "Update"/"ToLevelUp",
        "Value": Int
    },
    {
        "Score": "LvlEnd"/"GameEnd"/"PlayerDeath",
        "Value": Int,
        "Level": Int
    }

- ScoreBoard:

    {
        "ScoreBoard": "Show"
    }

- Slider – visual slider indicating the current SuperPower/Health value:

    {
        "Slider": "SuperPower"/"Health",
        "IncreaseTo"/"DecreaseTo"/"StartingValue": Float/Int
    }

3.5 BIRAFFE-space.zip

The SUBxxxx-Space.json files are analogous to those in BIRAFFE-freud.zip. The full list of logged object types and related events (alphabetically):

- Affective – randomized assignment of colours to affective asteroids. After 300 s (5 min) the Inconsistent Reality Logic is applied and the stimuli are no longer connected with the specified colour:

    {
        "Affective": "AsteroidOrder",
        "Order1": Int,  # p-s-
        "Order2": Int,  # p+s+
        "Order3": Int,  # p+s-
        "Order4": Int   # p-s+
                        # Possible values are: 0 (blue), 1 (green),
                        # 2 (red), 3 (yellow)
    }

- Asteroid:

    {
        "Asteroid": "Spawn",
        "Type": Int,    # colour of the asteroid:
                        # -1 = normal, 0-3 = affective, as in AsteroidOrder
        "ID": Int,
        "PositionX": Float, "PositionY": Float, "PositionZ": Float,
        "RotationX": Float, "RotationY": Float, "RotationZ": Float,
        "RotationW": Float
    },
    {
        "Asteroid": "Destroy",
        "By": "Boundary"/"Player"/"Shot",
        "ID": Int,
        "PositionX": Float, "PositionY": Float, "PositionZ": Float,
        "RotationX": Float, "RotationY": Float, "RotationZ": Float,
        "RotationW": Float
    }

- Bolt – a missed shot made by the user (the shooting event itself is reported as "Key": "X"):

    {
        "Bolt": "Destroy",
        "By": "Boundary",
        "ID": Int,
        "PositionX": Float, "PositionY": Float, "PositionZ": Float,
        "RotationX": Float, "RotationY": Float, "RotationZ": Float,
        "RotationW": Float
    }

- Game:

    {
        "Game": "Over"/"End"
    }

- Joystick – events are logged only during the game:

    {
        "Joystick": "Left",     # only the left joystick was used (for movement)
        "Horizontal": Float,    # [-1,1]
        "Vertical": Float       # [-1,1]
    }

- Key – events are logged only when the user is allowed to press the key. Only gamepad keys useful in the game are logged:

    {
        "Key": "X"      # X is for shooting
    }

- Main and Menu:

    {
        "Main"/"Menu": "CountdownText",
        "ChangeTo": String      # Text displayed on the screen
    }

- Scene:

    {
        "Scene": "Load",
        "ID": Int
    }

- Score:

    {
        "Score": "Update"/"GameOver"/"GameEnd",
        "Value": Int    # Current score counter
    }
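A minimal sketch of loading one of these game logs into a flat table is given below. It assumes that each SUBxxxx-*.json file parses as a single JSON array of event objects (if the events turn out to be stored one per line, json.loads would have to be applied per line instead), and uses a hypothetical subject ID:

    import json
    import pandas as pd

    # Hypothetical subject; the file comes from BIRAFFE-space.zip (Sect. 3.5).
    with open("SUB1234-Space.json", encoding="utf-8") as f:
        events = json.load(f)        # assumed: a JSON array of event objects

    df = pd.DataFrame(events)        # one row per event, one column per attribute

    # Example: count destroyed asteroids broken down by the "By" field
    # ("Boundary", "Player" or "Shot"), using the attributes described above.
    destroyed = df[df["Asteroid"] == "Destroy"]
    print(destroyed["By"].value_counts())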
3.6 BIRAFFE-photo.zip

Each SUBxxxx-Face.csv file represents one subject and consists of one line per photo taken. Raw photos are not available. The file consists of values calculated by the MS Face API with the recognition_02 model. Photos were taken at 3 Hz (every 20 frames at 60 fps) during the stimuli presentation, i.e. during the 6 seconds of presentation, but not while the subject was responding with the widget. During the games, photos were taken at 1 Hz. When no face was recognized, or when two faces were found (the second being the experimenter's face), NaN values were used. The fields contained in each line are:

COND;GAME-TIMESTAMP;FRAME-NUMBER;IADS-ID;IAPS-ID;ANGER;CONTEMPT;DISGUST;FEAR;HAPPINESS;NEUTRAL;SADNESS;SURPRISE

where:

- COND – consistent stimuli (con), inconsistent stimuli (inc) or game (space or freud),
- GAME-TIMESTAMP – Unix timestamp, available only during the games (NaN during the stimuli presentation),
- FRAME-NUMBER – index of the photo within the context of the stimuli presentation, measured in frames since the beginning of the stimuli presentation: −1 for the pre-stimulation photo, 0 for the photo taken at the moment when the stimulus appears, 20 for the next photo (1/3 s later), and so on up to 340 (frame 360 = 6 s = the moment when the stimulus disappears),
- IADS-ID;IAPS-ID – IADS/IAPS IDs of the stimuli (see Sect. 3.3),
- ANGER;CONTEMPT;DISGUST;FEAR;HAPPINESS;NEUTRAL;SADNESS;SURPRISE – probability distribution over eight emotions calculated by the MS Face API (all values sum up to 1). It is important to note that this distribution is highly skewed towards the NEUTRAL emotion, with values close to 1 for that emotion and values close to zero for the rest.
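A minimal sketch (assuming pandas, the columns above and a hypothetical subject ID) of collapsing the per-photo emotion distributions into a single dominant emotion per presented stimuli pair, which is one simple way to work with the strongly NEUTRAL-skewed distributions:

    import pandas as pd

    EMOTIONS = ["ANGER", "CONTEMPT", "DISGUST", "FEAR",
                "HAPPINESS", "NEUTRAL", "SADNESS", "SURPRISE"]

    face = pd.read_csv("SUB1234-Face.csv", sep=";")

    # Keep only photos taken during the stimuli presentation (con/inc conditions).
    stim = face[face["COND"].isin(["con", "inc"])].dropna(subset=EMOTIONS)

    # Average the distributions over the photos of each 6-second window
    # (one stimuli pair), then take the most probable emotion.
    per_pair = stim.groupby(["IADS-ID", "IAPS-ID"])[EMOTIONS].mean()
    per_pair["dominant"] = per_pair[EMOTIONS].idxmax(axis=1)
    print(per_pair["dominant"].value_counts())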
3.7 BIRAFFE-photo-full.zip

The SUBxxxx-Face.csv files in this archive are analogous to those in BIRAFFE-photo.zip, but contain the full output of the MS Face API (see also the MS Face API documentation at https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236). Besides the values described in Sect. 3.6, they also contain the following face-related values:

    FACEATTRIBUTES-ACCESSORIES; FACEATTRIBUTES-AGE;
    FACEATTRIBUTES-BLUR-BLURLEVEL; FACEATTRIBUTES-BLUR-VALUE;
    FACEATTRIBUTES-EMOTION-ANGER; FACEATTRIBUTES-EMOTION-CONTEMPT;
    FACEATTRIBUTES-EMOTION-DISGUST; FACEATTRIBUTES-EMOTION-FEAR;
    FACEATTRIBUTES-EMOTION-HAPPINESS; FACEATTRIBUTES-EMOTION-NEUTRAL;
    FACEATTRIBUTES-EMOTION-SADNESS; FACEATTRIBUTES-EMOTION-SURPRISE;
    FACEATTRIBUTES-EXPOSURE-EXPOSURELEVEL; FACEATTRIBUTES-EXPOSURE-VALUE;
    FACEATTRIBUTES-FACIALHAIR-BEARD; FACEATTRIBUTES-FACIALHAIR-MOUSTACHE;
    FACEATTRIBUTES-FACIALHAIR-SIDEBURNS; FACEATTRIBUTES-GENDER;
    FACEATTRIBUTES-GLASSES; FACEATTRIBUTES-HAIR-BALD;
    FACEATTRIBUTES-HAIR-HAIRCOLOR-BLACK; FACEATTRIBUTES-HAIR-HAIRCOLOR-BLOND;
    FACEATTRIBUTES-HAIR-HAIRCOLOR-BROWN; FACEATTRIBUTES-HAIR-HAIRCOLOR-GRAY;
    FACEATTRIBUTES-HAIR-HAIRCOLOR-OTHER; FACEATTRIBUTES-HAIR-HAIRCOLOR-RED;
    FACEATTRIBUTES-HAIR-INVISIBLE; FACEATTRIBUTES-HEADPOSE-PITCH;
    FACEATTRIBUTES-HEADPOSE-ROLL; FACEATTRIBUTES-HEADPOSE-YAW;
    FACEATTRIBUTES-MAKEUP-EYEMAKEUP; FACEATTRIBUTES-MAKEUP-LIPMAKEUP;
    FACEATTRIBUTES-NOISE-NOISELEVEL; FACEATTRIBUTES-NOISE-VALUE;
    FACEATTRIBUTES-OCCLUSION-EYEOCCLUDED; FACEATTRIBUTES-OCCLUSION-FOREHEADOCCLUDED;
    FACEATTRIBUTES-OCCLUSION-MOUTHOCCLUDED; FACEATTRIBUTES-SMILE; FACEID;
    FACELANDMARKS-EYEBROWLEFTINNER-X; FACELANDMARKS-EYEBROWLEFTINNER-Y;
    FACELANDMARKS-EYEBROWLEFTOUTER-X; FACELANDMARKS-EYEBROWLEFTOUTER-Y;
    FACELANDMARKS-EYEBROWRIGHTINNER-X; FACELANDMARKS-EYEBROWRIGHTINNER-Y;
    FACELANDMARKS-EYEBROWRIGHTOUTER-X; FACELANDMARKS-EYEBROWRIGHTOUTER-Y;
    FACELANDMARKS-EYELEFTBOTTOM-X; FACELANDMARKS-EYELEFTBOTTOM-Y;
    FACELANDMARKS-EYELEFTINNER-X; FACELANDMARKS-EYELEFTINNER-Y;
    FACELANDMARKS-EYELEFTOUTER-X; FACELANDMARKS-EYELEFTOUTER-Y;
    FACELANDMARKS-EYELEFTTOP-X; FACELANDMARKS-EYELEFTTOP-Y;
    FACELANDMARKS-EYERIGHTBOTTOM-X; FACELANDMARKS-EYERIGHTBOTTOM-Y;
    FACELANDMARKS-EYERIGHTINNER-X; FACELANDMARKS-EYERIGHTINNER-Y;
    FACELANDMARKS-EYERIGHTOUTER-X; FACELANDMARKS-EYERIGHTOUTER-Y;
    FACELANDMARKS-EYERIGHTTOP-X; FACELANDMARKS-EYERIGHTTOP-Y;
    FACELANDMARKS-MOUTHLEFT-X; FACELANDMARKS-MOUTHLEFT-Y;
    FACELANDMARKS-MOUTHRIGHT-X; FACELANDMARKS-MOUTHRIGHT-Y;
    FACELANDMARKS-NOSELEFTALAROUTTIP-X; FACELANDMARKS-NOSELEFTALAROUTTIP-Y;
    FACELANDMARKS-NOSELEFTALARTOP-X; FACELANDMARKS-NOSELEFTALARTOP-Y;
    FACELANDMARKS-NOSERIGHTALAROUTTIP-X; FACELANDMARKS-NOSERIGHTALAROUTTIP-Y;
    FACELANDMARKS-NOSERIGHTALARTOP-X; FACELANDMARKS-NOSERIGHTALARTOP-Y;
    FACELANDMARKS-NOSEROOTLEFT-X; FACELANDMARKS-NOSEROOTLEFT-Y;
    FACELANDMARKS-NOSEROOTRIGHT-X; FACELANDMARKS-NOSEROOTRIGHT-Y;
    FACELANDMARKS-NOSETIP-X; FACELANDMARKS-NOSETIP-Y;
    FACELANDMARKS-PUPILLEFT-X; FACELANDMARKS-PUPILLEFT-Y;
    FACELANDMARKS-PUPILRIGHT-X; FACELANDMARKS-PUPILRIGHT-Y;
    FACELANDMARKS-UNDERLIPBOTTOM-X; FACELANDMARKS-UNDERLIPBOTTOM-Y;
    FACELANDMARKS-UNDERLIPTOP-X; FACELANDMARKS-UNDERLIPTOP-Y;
    FACELANDMARKS-UPPERLIPBOTTOM-X; FACELANDMARKS-UPPERLIPBOTTOM-Y;
    FACELANDMARKS-UPPERLIPTOP-X; FACELANDMARKS-UPPERLIPTOP-Y;
    FACERECTANGLE-HEIGHT; FACERECTANGLE-LEFT; FACERECTANGLE-TOP;
    FACERECTANGLE-WIDTH

4 Summary

In this paper we described the BIRAFFE data set, which is the result of an experiment in AfC we conducted in early 2019. Our work aims at the development of computer models for emotion classification and recognition. We strongly believe that such models should be personalized by design, as emotional responses of different persons are subject to individual differences related to their personality. As such, instead of developing a general model that is possibly applicable to a certain part of the population, we seek to develop a multi-modal model that adapts itself to the specific person. An extended description of the experimentation in this area can be found in [11]. We believe that the data set described in this paper is an important original contribution that supports the development and replication of experiments in AfC.

Acknowledgments

The presented research was conducted under the supervision of Krzysztof Kutt, Szymon Bobek and Grzegorz J. Nalepa; these three also drafted the protocol. Dominika Drążyk and Paweł Jemioło implemented the protocol.
The experiment was conducted by Krzysztof Kutt, Dominika Drążyk, Paweł Jemioło and Barbara Giżycka. Paweł Jemioło cleaned and prepared the dataset. Victor Rodriguez-Fernandez tested the dataset and prepared the JSON format. Finally, Krzysztof Kutt and Grzegorz J. Nalepa wrote the paper, using comments from Víctor Rodríguez.

References

1. Allen, H.D., Goldberg, S.J., Sahn, D.J., Ovitt, T.W., Goldberg, B.B.: Suprasternal notch echocardiography. Assessment of its clinical utility in pediatric cardiology. Circulation 55(4), 605-612 (1977), https://www.ahajournals.org/doi/abs/10.1161/01.CIR.55.4.605
2. Bailey, R.L.: Electrodermal activity (EDA). In: Matthes, J., Davis, C.S., Potter, R.F. (eds.) The International Encyclopedia of Communication Research Methods, pp. 1-15. John Wiley & Sons, Hoboken, NJ (2017), https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118901731.iecrm0079
3. Barrett, L.F., Lewis, M., Haviland-Jones, J.M. (eds.): Handbook of Emotions. The Guilford Press, New York, NY, 4th edn. (2016)
4. Bradley, M.M., Lang, P.J.: The International Affective Digitized Sounds (2nd edition; IADS-2): Affective ratings of sounds and instruction manual. Technical report B-3. Tech. rep., University of Florida, Gainesville, FL (2007)
5. Broekens, J., Brinkman, W.P.: AffectButton: A method for reliable and valid affective self-report. International Journal of Human-Computer Studies 71(6), 641-667 (2013), http://www.sciencedirect.com/science/article/pii/S1071581913000220
6. Calvo, R.A., D'Mello, S.K., Gratch, J., Kappas, A. (eds.): The Oxford Handbook of Affective Computing. Oxford Library of Psychology, Oxford University Press, Oxford (2015)
7. Costa, P., McCrae, R.: Revised NEO Personality Inventory (NEO-PI-R) and NEO Five Factor Inventory (NEO-FFI). Professional manual. Psychological Assessment Resources, Odessa, FL (1992)
8. van Dijk, A.E., van Lien, R., van Eijsden, M., Gemke, R.J., Vrijkotte, T.G., de Geus, E.J.: Measuring cardiac autonomic nervous system (ANS) activity in children. Journal of Visualized Experiments 74, e50073 (2013)
9. van Dooren, M., de Vries, J.J.G., Janssen, J.H.: Emotional sweating across the body: Comparing 16 different skin conductance measurement locations. Physiology & Behavior 106(2), 298-304 (2012), http://www.sciencedirect.com/science/article/pii/S0031938412000613
10. Jemioło, P., Giżycka, B., Nalepa, G.J.: Prototypes of arcade games enabling affective interaction (2019), accepted to the 18th International Conference on Artificial Intelligence and Soft Computing 2019
11. Kutt, K., Drążyk, D., Bobek, S., Giżycka, B., Jemioło, P., Nalepa, G.J.: Multimodal emotion detection with use of AI methods and contextual information. European Journal of Personality (2020), submitted, in review
12. Lang, P.J., Bradley, M.M., Cuthbert, B.N.: International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical report A-8. Tech. rep., The Center for Research in Psychophysiology, University of Florida, Gainesville, FL (2008)
13. Lis, A.: Methods of interaction with user through mobile devices in affective experiments. BSc thesis, AGH University of Science and Technology (2018), supervisor: G.J. Nalepa
14. Nalepa, G.J., Kutt, K., Bobek, S.: Mobile platform for affective context-aware systems. Future Generation Computer Systems 92, 490-503 (Mar 2019), https://doi.org/10.1016/j.future.2018.02.033
15. Nalepa, G.J., Kutt, K., Giżycka, B., Jemioło, P., Bobek, S.: Analysis and use of the emotional context with wearable devices for games and intelligent assistants. Sensors 19(11), 2509 (2019), https://doi.org/10.3390/s19112509
16. Peirce, J., Gray, J.R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., Kastman, E., Lindeløv, J.K.: PsychoPy2: Experiments in behavior made easy. Behavior Research Methods 51(1), 195-203 (2019), https://doi.org/10.3758/s13428-018-01193-y
17. Picard, R.W.: Affective Computing. MIT Press, Cambridge, MA (1997)
18. Russell, J., Weiss, A., Mendelsohn, G.: Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology 57(3), 493-502 (1989), http://dx.doi.org/10.1037/0022-3514.57.3.493
19. Zawadzki, B., Strelau, J., Szczepaniak, P., Śliwińska, M.: Inwentarz osobowości NEO-FFI Costy i McCrae. Polska adaptacja. Pracownia Testów Psychologicznych PTP, Warszawa (1998)