                    Kwiri - What, When, Where and Who:
Everything you ever wanted to know about your game but didn’t know how to ask

    Tiago Machado                       Daniel Gopstein                   Andy Nealen                     Julian Togelius
    New York University                New York University        University of Southern California      New York University
  tiago.machado@nyu.edu                dgopstein@nyu.edu             anealen@cinema.usc.edu            julian.togelius@nyu.edu



                           Abstract

We designed Kwiri, a query system, as a support tool for an AI Game Design Assistant. With our tool, we allow users to query for game events in terms of what (was the event), when (did it happen), where (did it happen), and who (was involved). With such a tool, we explore the possibilities of applying a query system to provide game-general, AI-based design assistance. Throughout this paper, we discuss the motivation, the design of Kwiri, use cases, and a preliminary qualitative study. Our first results show that Kwiri has the potential to help designers in game debugging tasks, and it has already served as infrastructure for another system that relies on querying for game events.

   AI-assisted design systems promise to help humans perform design tasks faster, better and/or more creatively. Within the field of game design there have been several prototype systems that showcase how artificial intelligence can be used in a specific role to assist a human game designer (Smith, Whitehead, and Mateas 2010; Liapis, Yannakakis, and Togelius 2013; Shaker, Shaker, and Togelius 2013; Butler et al. 2013; Smith et al. 2012). For example, AI can be used to evaluate game content that the human designs, to create new content to human specifications, and/or to suggest changes to content that is being created.

   Most existing prototypes focus on a single type of AI-assisted design for content (typically levels) in a single game. In contrast, Cicero is a tool that aims to be a general AI-assisted game design tool, that is, to provide multiple types of AI-based design assistance not just for a single game, but for any game that can be represented in the language used by the tool (Machado, Nealen, and Togelius 2017a). It includes features such as a recommendation engine for game mechanics, AI-based automatic playtesting, and editable game replays.

   On top of Cicero, we designed Kwiri, a query system motivated by the idea that a designer (or tester) of a game will often want to figure out when and where something happens, and this might not be evident when either playing a game or watching a replay. For example, imagine that a particular NPC (Non-Player Character) occasionally dies even when none of the player's bullets hit it. To find out what is going on, a designer would need to rewatch endless replays attentively. However, what if you could simply ask the game when and where an NPC died? Kwiri makes use of the fact that the games Cicero works with have formally defined game mechanics, and it provides the ability to interrogate replays for particular combinations of events.

   Kwiri's contribution lies in its generality. It is a single query system whose interface allows designers to ask questions about any genre of game they can design with the platform. It also makes use of gameplay simulations and does not depend on human users to generate the in-game data to be analyzed.
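To make the four questions concrete before turning to related work, the toy sketch below (in Python) shows the kind of filter they correspond to over a log of game events. The Event record and its field names are illustrative assumptions for this example only, not Kwiri's actual data model.

    from dataclasses import dataclass

    # Illustrative event record: one entry per game event, as a replay log might store it.
    @dataclass
    class Event:
        what: str      # event type, e.g. "killSprite"
        when: int      # game tick at which it happened
        where: tuple   # (x, y) position in the level
        who: tuple     # (acting sprite, affected sprite)

    log = [
        Event("killSprite", 42, (3, 5), ("bullet", "npc")),
        Event("killSprite", 97, (8, 2), ("npc", "npc")),  # suspicious: no bullet involved
    ]

    # "When and where did an NPC die without being hit by a bullet?"
    for e in log:
        if e.what == "killSprite" and e.who[1] == "npc" and e.who[0] != "bullet":
            print(f"tick {e.when}, position {e.where}, caused by {e.who[0]}")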
                          Background

This section discusses the primary references and inspirations related to the development of our system.

AI-assisted game design tools

Tanagra (Smith, Whitehead, and Mateas 2010) is a tool that assists humans in the task of designing levels for 2D platform games. The system works in real time. It creates many different possibilities for a level and guarantees that they are playable. Therefore, there is no need to playtest the level to verify inconsistencies.

   Similar to Tanagra, Ropossum (Shaker, Shaker, and Togelius 2013) also generates and solves levels, for the popular physics puzzle game Cut the Rope. The user is assisted in the tasks of level design and evaluation. The tool is optimized to allow real-time feedback from a given state, after user input: it generates the next possible actions of the player until it finds a solution, if one is available.

   Sentient Sketchbook (Liapis, Yannakakis, and Togelius 2013) is also a tool to assist the creation of game levels, but it offers more generality than the two previous tools in this section, since it provides assistance for strategy and roguelike games. The system shows level suggestions in real time and allows users to interact by editing their levels while it generates recommendations based on the users' previous choices.

   Also in the field of level generation, we have the work of Smith et al. (2012) and Butler et al. (2013). They present independent implementations of three diverse level design automation tools in the game Refraction. They use Procedural Content Generation (PCG) techniques and Answer Set Programming (ASP) to explore the intended design space and offer levels with playability guarantees.
   All of these works presented significant results in the realm of AI-assisted game design tools. However, they are tied to a single game or, at best, to a single game genre. They lack generality because their techniques need to be reimplemented every time someone starts a new game project. Their main focus is not querying for game events, but the way they provide AI-based assistance, even when limited to one game, is an inspiration for our system's design.

Game Visualization systems

Game visualization is a topic which is gaining more attention every day (El-Nasr, Drachen, and Canossa 2013). Major game companies like Unity and BioWare have released their own solutions with specific features for visualization analysis: Unity Analytics (Unity Technologies 2017) and Skynet (Zoeller 2010), respectively. In the academic field, many projects have been developed using different visualization techniques.

   One of the games of the Dead Space franchise uses the tool Data Cracker to collect, analyze, and summarize data about player performance in a visual way (Medler et al. 2009).

   The game Cure Runners has a visualization system used to track the player and assist designers in level balancing tasks. This work is a case study in the integration of an analytics tool into a commercial game (Wallner et al. 2014).

   G-Player (Canossa, Nguyen, and El-Nasr 2016) presents visualization and event queries on a spatio-temporal interface. The UI allows users to select the game elements whose behaviors they want to track.

   As with the AI-assisted tools, most of these works are tied to a single game. The visualization package of Unity Analytics is a more general approach, but it does not have agents to play the games and generate gameplay simulations.

   Kwiri takes influence from these systems and applies visualization as a way to enhance the answers to the designer's questions (What, When, Where, and Who).

Declarative game engines and query systems for games

Declarative game engines go beyond the common idea of having databases only as a data storage method. The work of White et al. (2008) is an example. It develops the concept of state-effect patterns, a technique that lets game designers develop parts of their games declaratively.

   Deutch et al. (2012) developed a framework, based on SQL, to perform data sourcing in games. It extends SQL commands to allow recursive rule invocations and probabilistic operations. It has a runtime monitor that watches the game execution and notifies the user of property violations; these properties formally specify expected behaviors of the game.

   A more traditional use of databases can be seen in (Srisuphab et al. 2012). Here the authors store gameplay sessions of Go matches in a database. The goal is to use the stored matches to train novice human players through a GUI (Graphical User Interface).

   Finally, Varvaressos et al. (2014) detail the process of implementing a bug tracker in six different games. The infrastructure is centered on the game's main loop: the authors implemented game-specific code that captures events from the games, and the data is stored in an XML file. The process of finding bugs is based on properties about the expected behavior of the game, expressed in a formal language. During runtime, a monitor observes the game and notifies the user when some property is violated.

GVGAI and VGDL

The General Video Game AI framework (GVGAI) was designed to serve as a testbed for general video game playing research (Perez et al. 2015). A competition based on the framework runs annually and allows competitors to submit their agents, which are then judged on how well they can play a set of unseen games. The Video Game Description Language (VGDL) (Schaul 2013; Ebner et al. 2013) is the language used to describe games in this framework; the language is compact and human-readable. Despite its simplicity, it is capable of expressing a large range of 2D games, like clones of classical games developed for the Atari 2600 and the Nintendo Entertainment System (NES). The GVGAI competition now has about 100 VGDL games available and several dozen effective AI agents, with different strengths on different games (Bontrager et al. 2016).

   The use of a simple, analytically tractable game description language, together with a collection of AI agents, gives GVGAI important advantages over game editors and engines such as Unity, Unreal, or GameMaker. The AI agents allow us to perform automatic gameplay and game testing, which is still not possible in those engines because of the lack of uniformity in how their games are specified. They are, of course, versatile and powerful, but they are not flexible enough regarding the use of general AI agents in the way we need for collecting data and implementing a general query system.

Cicero

Cicero is a general AI-assisted game design tool based on the GVGAI framework and VGDL. As stated before, existing tools are mostly limited to a single game and a single mode of design assistance. As AI-assisted game design is still an emerging paradigm (Lucas et al. 2012), it is sometimes difficult to know which features and interactions should be included in an individual tool. Because of this problem, we adopted an Interaction Design approach to developing our tool. In the first iteration, we developed the general features for creating and editing games. Accompanying that, the system also included three additional features: a statistics tool, a visualization system, and a mechanics recommender. A first evaluation showed that this was not enough for the users to perform an accurate analysis of the data collected by the agents (humans or AI algorithms) during a gameplay session. These data were purely quantitative, and the users said that they would want information about when and where events happened while an agent was playing a game. This motivated us to develop SeekWhence (Machado, Nealen, and Togelius 2017b), a retrospective analysis tool. SeekWhence allows users to replay
a stored gameplay session frame by frame, as if it were a video. Informal evaluations showed us that SeekWhence was well accepted. However, it required time and focused attention from the users, because a single frame can contain multiple events. Therefore, if many sprites are in the same space, it is hard to identify which one is doing what, even when playing a game step by step. To solve this problem, we developed Kwiri, which allows users to query what, where, and when a specific event happened and which sprites were involved in it. The query system is integrated with SeekWhence and the visualization system.

                      Design of Kwiri

As stated before, the query system is developed on top of Cicero, and it integrates SeekWhence and the visualization system. In this section, we highlight some implementation details of these two systems in order to facilitate further discussion about how the query system works.

How SeekWhence and the visualization system work

SeekWhence stores every frame of a gameplay session played by an agent. In order to store the frames, we capture the complete game state at every game tick. The game state contains all the information about the available game elements, such as their positions in the level. The indexing by game tick is what makes SeekWhence run like a video player.

   Every game element can be assigned a specific color by the designer. This activates the visualization system [Figure 1], which captures all the positions of that element in the level and applies a heatmap to show which areas were explored the most.

Figure 1: SeekWhence panel and its visualization control.
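As a rough illustration of the two mechanisms just described, the sketch below stores one snapshot of sprite positions per game tick and derives a heatmap from those snapshots. The data structures are assumptions made for the example, not Cicero's actual implementation.

    from collections import defaultdict

    # Illustrative frame store: one snapshot of all sprite positions per game tick,
    # which is what lets a replay be scrubbed like a video.
    frames = {}  # tick -> list of (sprite_name, (x, y))

    def record(tick, sprites):
        """Store the full game state for this tick (positions of every sprite)."""
        frames[tick] = list(sprites)

    def heatmap(sprite_name):
        """Count how often a given sprite occupied each cell across the session."""
        counts = defaultdict(int)
        for tick_sprites in frames.values():
            for name, pos in tick_sprites:
                if name == sprite_name:
                    counts[pos] += 1
        return counts

    # Toy session: the avatar moves right for three ticks.
    for t in range(3):
        record(t, [("avatar", (t, 0)), ("enemy", (5, 5))])

    print(dict(heatmap("avatar")))  # {(0, 0): 1, (1, 0): 1, (2, 0): 1}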
                                                                   use of the avatar’s sword. (2) the system presents the query
Kwiri Implementation

Our query system is implemented on top of everything we previously discussed. It adds a database (MySQL) that stores events and performs searches over them. An event in VGDL happens whenever two sprites collide; it is up to the designer to define what kind of event is fired at the moment of the collision. It can be a killSprite, a cloneSprite, or one of more than 20 other options. In other words, the events are the rules of the game. The use of a database makes the storage and searching process straightforward to implement. The queries that answer the questions What, Where, When, and Who are provided to the users as a GUI in which they insert the query's parameters [Figure 2].

Figure 2: (1) The user queries for "kill" events involving the use of the avatar's sword. (2) The system presents the query results. (3) After the user clicks on the middle panel, the system changes its focus to SeekWhence and shows the exact frame in which the event happened. It also highlights the position where the event happened. (4) By moving one frame backwards, the user can see the sprites' positions after the event.
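The sketch below shows, under stated assumptions, how such an event log and a What/When/Where/Who query could look. It uses SQLite so the example is self-contained (the system itself uses MySQL), and the table layout and column names are illustrative guesses, not Kwiri's actual schema.

    import sqlite3

    # Illustrative event log: one row per VGDL collision event.
    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE events (
            what   TEXT,     -- event/rule fired, e.g. 'killSprite'
            tick   INTEGER,  -- when: game tick
            x      INTEGER,  -- where: level coordinates
            y      INTEGER,
            actor  TEXT,     -- who: sprite that triggers the rule
            target TEXT      -- who: sprite the rule is applied to
        )""")
    con.executemany(
        "INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
        [("killSprite", 120, 4, 7, "sword", "enemy"),
         ("killSprite", 305, 9, 2, "enemy", "enemy")])  # the kind of anomaly a designer hunts for

    # "What, when, where, and who: which kill events did not involve the sword?"
    for row in con.execute(
            "SELECT what, tick, x, y, actor, target FROM events "
            "WHERE what = 'killSprite' AND actor != 'sword'"):
        print(row)

Once the agents have finished their runs, a table of this shape makes the four questions a single filtered lookup rather than a manual scan of the replay.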
                         Example usage

We believe that Kwiri can be used to explore solutions for common and novel game problems. In the following subsections we show some examples.

Quantitative User Study   Kwiri was also used in a quantitative user study (Machado et al. 2018). The goal of that work was to show that humans with AI assistance can be more accurate at in-game bug detection than humans without assistance. In one of the tasks, agents collected data and the users of group A had to use Kwiri as a way to filter events and figure out what was causing the failures. For the same task, group B users only had SeekWhence available. The possibility of filtering the events made the users approximately 32% better than the ones without it.
Automatic Game Tutorials   The work of Green et al. (2018) introduces a fully automatic method for generating video game tutorials. The AtDELFI system (AuTomatically DEsigning Legible, Full Instructions for games) was designed to research procedural generation of tutorials that teach players how to play video games. In the paper, the authors present models of game rules and mechanics using a graph system, as well as a tutorial generation method. The concept was demonstrated by testing it on games within the General Video Game Artificial Intelligence (GVGAI) framework. AtDELFI uses Kwiri as a way to search for the critical events that make a player win or lose a game. The graph generated by AtDELFI drives the query engine, which captures the events and returns a sequence of frames; these frames are used to generate the tutorial videos.
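To illustrate this pipeline, here is a hedged sketch of the step that turns queried event ticks into short frame sequences for a tutorial clip. The frame store, event records, and window sizes are assumptions made for the example, not AtDELFI's actual interface.

    # Illustrative sketch: take the ticks of "critical" events returned by a query
    # and cut a short window of replay frames around each one.
    frames = {t: f"frame-{t}" for t in range(200)}           # tick -> stored frame
    critical_events = [{"what": "killSprite", "when": 57},    # e.g. events that end the game
                       {"what": "win",        "when": 183}]

    def clip_around(tick, before=10, after=5):
        """Return the frames surrounding an event, to be rendered as a tutorial clip."""
        lo, hi = max(0, tick - before), min(len(frames), tick + after + 1)
        return [frames[t] for t in range(lo, hi)]

    clips = [clip_around(e["when"]) for e in critical_events]
    print(len(clips), "clips with", [len(c) for c in clips], "frames each")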
                    Preliminary User study

We employed a qualitative method in order to understand, from our users, the benefits of Kwiri, as well as to solicit suggestions for future additions and improvements.

Study Design

We created one inconsistency in the rules of each of three different VGDL games: Zelda, Aliens, and FireStorms. Their official rules are listed below, as well as their inconsistencies.

   Zelda - An action game (a clone of the cave levels of Zelda: A Link to the Past)

1. Player
   • Cannot go through walls;
   • Can kill enemies with a sword;
   • Changes its sprite when it gets the key;
   • Wins the level by getting the key and reaching the gate.

2. Enemies
   • Cannot go through walls;
   • Can occupy the same space as another enemy, the key, and the gate;
   • Cannot kill other enemies;
   • Kill the player when colliding with it.

   The inconsistency in this game is that some enemies can kill other enemies.

   Aliens - A Space Invaders clone

1. Player
   • Kills enemies and destroys barriers by shooting at them;
   • Cannot go through walls;
   • Wins the level by killing all the enemies.

2. Enemies
   • Cannot go through walls;
   • Kill the player by shooting at it or by colliding with it.

3. Barriers
   • Are destroyed when hit by enemy bombs or player bullets, or when colliding with enemies.

4. Bombs/Bullets
   • Bullets kill only enemies, and bombs kill only the player;
   • Are destroyed by walls.

   The inconsistency here is that some barriers are not destroyed by bombs.

   FireStorms - A puzzle game

1. Player
   • Cannot go through walls;
   • Cannot kill enemies;
   • Is killed when colliding with enemies or fireballs;
   • Wins the game when it reaches the closed gate.

2. Enemies
   • Cannot go through walls;
   • Can occupy the same space as another enemy, a purple portal, the closed gate, and the fireballs;
   • Cannot kill other enemies;
   • Kill the player when colliding with it.

   The inconsistency is that one of the enemies can walk through walls.
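As a concrete illustration of how a participant could surface the first of these inconsistencies through Kwiri-style filtering, the short sketch below checks a toy event log for kill events in which both sprites involved are enemies. The event tuples are invented for the example and are not the study's actual data.

    # Toy event log: (what, tick, (x, y), actor, target) -- invented records for illustration.
    events = [
        ("killSprite", 31, (2, 4), "sword", "enemy"),
        ("killSprite", 88, (6, 1), "enemy", "enemy"),  # the Zelda inconsistency: enemy kills enemy
    ]

    suspicious = [e for e in events
                  if e[0] == "killSprite" and e[3] == "enemy" and e[4] == "enemy"]
    for what, tick, pos, actor, target in suspicious:
        print(f"{actor} killed {target} at tick {tick}, position {pos}")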
User Tasks

The user tasks consisted of finding the inconsistency in each one of the three games. To perform the tasks, participants were allowed to use Kwiri and, naturally, to combine it with SeekWhence and the visualization system.

Participants

The study had nine participants, all of them male. Eight of them were enrolled in a university program (seven Ph.D. students and one undergraduate student), and one was a digital media professional. Only two of them did not study or work with games. The others had a mix of industry and academic experience, varying from two to fifteen years. The engines and frameworks they cited most were Unity, Phaser, and GameMaker. They were recruited through the department e-mailing list.

Procedure

The first step of the study was to ask the participants to fill out a form about their demographic data and game development experience. After that, we asked whether they would be comfortable with their voices being recorded. Then we explained how SeekWhence and Kwiri work; the explanation took less than five minutes. At the end, we asked the users whether they wanted to ask any questions or skip straight to using the tools for a quick warm-up.

   The second step started by introducing the users to what their tasks during the experiment would be. We informed them that we would give them three different tasks and that they should use the set of features to find a design inconsistency in each one of them. For each task, we first handed the users a sheet of paper with all the rules of the game under
evaluation. After they read the rules, we ran the game with the agent "adrienctx". We chose this agent because it is a former winner of the GVGAI competition: it plays the games well and, most of the time, can beat the levels it is playing. Then all the data were made available to the user, who could start working with them in order to solve the task.

   The third and last step of the procedure consisted of a conversation in which the users could express their opinions about the tested features and compare them with others they had used before.

   We checked in with our users after five minutes of working on a task. We decided on this number after three pilot tests executed with a preliminary version of the system. However, this time is not a measure of success; it was only used to verify whether the user was feeling tired and/or uninterested in continuing after five minutes of effort. We gave them the option to keep trying or to move on to another task.

Source of Data

We used three sources of data to collect the users' activities: direct observation, audio recordings of test sessions, and a design questionnaire.

Direct observation   As participants interacted with the three different games and were allowed to speak about their findings, problems, and suggestions related to the tools, their voices were recorded, and written notes were made of their overall patterns of use and verbal comments. Attention was paid to participants' interaction with the different systems (SeekWhence, Kwiri, and the visualization), how hard they had to work to find the inconsistency, which design features attracted their attention, and whether, at any stage during the study, they seemed to lose interest in the activity.

Audio recordings of test sessions   Every user had their voice recorded. This helped us validate our written notes, since we used the notes as tags for what to pay attention to when listening to the recordings. We also used this source to clarify some actions they performed that were not initially clear to us.

Design questionnaire   Our design questionnaire is based on a semi-structured interview and elicited factual (e.g., "When I was querying for what a sprite was doing."), perceptual (e.g., "I think that the query results annoyed me with too much information."), and comparative (e.g., "It would be a plus to have these features in the tools I have used before") responses. We started the questionnaire with general questions, such as "What are your thoughts about these features?", in order to let the user feel comfortable. Then we moved to specific ones, many of them influenced by our notes, like: "In the second task, could you explain what you were trying to do when you asked if you could type your queries?". Finally, we asked whether the users had more suggestions besides the ones they had given during the tasks.

Data analysis

Our data analysis is based on text transcriptions of the design questionnaire discussed in the previous section. We used a classical procedure, Qualitative Content Analysis, since it is considered suitable for analyzing text materials, which vary from media products to interview data (Bauer and Gaskell 2000). It normally identifies key common points across different users' interviews through a technique called codification, which helps researchers quickly organize and manage qualitative data. To facilitate the codification process, we used the trial version of the software ATLAS.ti (GmbH 2017).

                             Results

As explained in the previous section, we conducted a qualitative study to learn from our users what the significant gains of our tools are and what the points of improvement are for the next iteration of our design process. We first present a general discussion of our findings from observing the users; then we present our conclusions based on the users' point of view.

Task results

Most of the users were able to complete all three tasks successfully. Only two gave up, one during the Zelda task and another during the Aliens task.

   They started by exploring their options right after reading the game rules given to them. Some started by using SeekWhence and tried to figure out the problem just by replaying the game frame by frame, and then switched to the query system to filter based on their suspicions. Other users did the opposite, starting by querying and then switching to SeekWhence. This exploratory step was skipped by two users who had decided to do a warm-up during the explanation of how the system works.

   One fact that grabbed our attention was that five of our users identified an inconsistency that we were not aware of. In the game Aliens, they observed that a bomb was destroying a barrier in a position it was not designed to (Figure 3).

Figure 3: (Left) The users found a bug we were not aware of by querying for kill events involving a bomb and a barrier. (Center and Right) The users navigate forward and backward to confirm the inconsistency: the bomb was destroying a barrier which was not in its line of fire. Later on, during the interviews, some users stated that the green and red patterns (left figure), used to express which sprite performs the action and which one suffers it, should also be indicated on the query panel.

Praise

In general, the users agreed that Kwiri was a valuable resource for finding game design inconsistencies. Some of them pointed to personal experiences with situations in
which they needed tools like the one presented here but, instead, ended up hand-rolling their own.

   One of the users explained that he had had to write his own system in order to evaluate how an AI was behaving: "I liked these features. Some time ago, I had some issues with an AI that I was developing for playing a game. The tool I was using didn't have this feature about navigating and use visualizations. Then I had to write my own system to do exactly what we were doing here! Because here, I really can see what the agent is doing."

   Another user, similarly, does not have appropriate tools to inspect his game projects: "It is a really cool feature, pretty deal! I would like to use it on my work now, especially this frame navigating tool. But I'm having to implement everything from scratch."

   A third user revealed that his workflow consisted of playing his game over and over to figure out what was causing problems: "I developed a game once, but I did not have how to record it and play it again as I did here. So every time that I found a bug, the only way to debug it was by playing it again. If I had a tool like that, at least my debug process would be way easier" (Figure 4).

Figure 4: A user solving the inconsistency in the third task. By navigating the gameplay session with SeekWhence and using the visualizations, he could see the enemy walking through walls.

Issues

Some users said they had problems understanding the queries. For example, one of them stated that at some point too much information pops up, making the process not so attractive to follow. However, another user stated the opposite: for him, the volume of information does not seem to be a problem; what concerns him is the way the UI leads to it. "I really liked these tools, but I would like to see more information, however with less clicks." Still about the query system, one user said that he was not sure about the roles of a sprite in an event: "I would like to know who is killing who in a 'killSprite' event. The query helped me to confirm my suspicion that an enemy was killing another, but the panel in the Who area should say who is the one doing the action." Overall, as pointed out in this section, the main issues were related to the query system. Designing the queries as a plain filter tool does not seem to be enough. At least for the tasks evaluated, even though the majority of the users were able to use it correctly, it should provide details about the sprite roles in the game and be more explicit about the events, as one user stated: "I wasn't sure about what this event - transform to - means. I was doing assumptions based on what I have seen before on other tools like GameMaker. Fortunately, it is similar, but I would like to have this information before."

Suggestions

We got many suggestions from our users that we will add in future iterations of the system. One user said that he would like to see small ticks on the timeline bar. The ticks would let a user know that one or more events happen in that part of the gameplay session. To make sure we understood what he was suggesting, we used the YouTube (YouTube 2017) yellow advertisement ticks as a design metaphor; they are used to show a user when an ad will pop up. He promptly confirmed that this was exactly what he had in mind (Figure 5).

Figure 5: A quick prototype of a user suggestion. Yellow ticks on the timeline tell in which frames one or more events happen. By hovering the mouse over them, we can see a preview of the event(s).

   Another user suggested presenting the query options as a tree visualization in which he could filter based on the options available in each branch of the tree. It is an interesting suggestion, and we think it can reduce user effort and offer a better way to lead users to what they are trying to find.

   Finally, another user said that the same color pattern used to represent the agent that performs (green) and the agent that suffers (red) an action should also be available on the panel used to make the queries. He said that in general the queries help when one has to identify who is involved in an event; however, it is also important to know who causes the event and who receives its consequences. This was also stated as an issue by another user and is something straightforward to fix in the query UI panel.

                 Conclusion and Future Work

We have presented a system for querying game events in space and time. The system is a new addition to Cicero, a general mixed-initiative game design assistance tool. We evaluated our tool in the context of finding design inconsistencies/bugs. The users were able to solve their given tasks promptly and provided us with valuable suggestions; notably, they found bugs we were not even aware of, speaking
to the usefulness of the system. To our surprise, the users were significantly more attracted to the replay analysis system (SeekWhence) than to the query system (Kwiri) for solving their tasks. This contradicts a previous quantitative study, which showed users obtaining much better results on similar tasks when using a query system rather than replay analysis.

   We plan to test the system further, with a larger audience of game designers. We also want to design specific tasks for evaluating each one of the features separately and in different combinations. Besides finding inconsistencies, we will also focus on agent evaluation and game balancing.

                       Acknowledgment

Tiago Machado is supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), under the Science without Borders scholarship 202859/2015-0.

                         References

Bauer, M. W., and Gaskell, G. 2000. Qualitative Researching with Text, Image and Sound: A Practical Handbook for Social Research. Sage.

Bontrager, P.; Khalifa, A.; Mendes, A.; and Togelius, J. 2016. Matching games and algorithms for general video game playing. In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference.

Butler, E.; Smith, A. M.; Liu, Y.-E.; and Popovic, Z. 2013. A mixed-initiative tool for designing level progressions in games. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, 377–386. ACM.

Canossa, A.; Nguyen, T.-H. D.; and El-Nasr, M. S. 2016. G-Player: Exploratory visual analytics for accessible knowledge discovery.

Deutch, D.; Greenshpan, O.; Kostenko, B.; and Milo, T. 2012. Declarative platform for data sourcing games. In Proceedings of the 21st International Conference on World Wide Web, WWW '12, 779–788. New York, NY, USA: ACM.

Ebner, M.; Levine, J.; Lucas, S. M.; Schaul, T.; Thompson, T.; and Togelius, J. 2013. Towards a video game description language.

El-Nasr, M. S.; Drachen, A.; and Canossa, A. 2013. Game Analytics: Maximizing the Value of Player Data. Springer Publishing Company, Incorporated.

GmbH, S. S. D. 2017. ATLAS.ti.

Green, M. C.; Khalifa, A.; Barros, G. A. B.; Machado, T.; Nealen, A.; and Togelius, J. 2018. AtDELFI: Automatically designing legible, full instructions for games. In Proceedings of the 13th International Conference on the Foundations of Digital Games, FDG '18, 17:1–17:10. New York, NY, USA: ACM.

Liapis, A.; Yannakakis, G. N.; and Togelius, J. 2013. Sentient Sketchbook: Computer-aided game level authoring. In FDG, 213–220.

Lucas, S. M.; Mateas, M.; Preuss, M.; Spronck, P.; and Togelius, J. 2012. Artificial and Computational Intelligence in Games (Dagstuhl Seminar 12191). Dagstuhl Reports 2(5):43–70.

Machado, T.; Gopstein, D.; Nealen, A.; Nov, O.; and Togelius, J. 2018. AI-assisted game debugging with Cicero. In 2018 IEEE Congress on Evolutionary Computation (CEC), 1–8.

Machado, T.; Nealen, A.; and Togelius, J. 2017a. Cicero: Computationally intelligent collaborative environment for game and level design. In 3rd Workshop on Computational Creativity and Games (CCGW) at the 8th International Conference on Computational Creativity (ICCC17).

Machado, T.; Nealen, A.; and Togelius, J. 2017b. SeekWhence: A retrospective analysis tool for general game design. In Proceedings of the 12th International Conference on the Foundations of Digital Games, FDG '17, 4:1–4:6. New York, NY, USA: ACM.

Medler, B., et al. 2009. Generations of game analytics, achievements and high scores. Eludamos: Journal for Computer Game Culture 3(2):177–194.

Perez, D.; Samothrakis, S.; Togelius, J.; Schaul, T.; Lucas, S.; Couëtoux, A.; Lee, J.; Lim, C.-U.; and Thompson, T. 2015. The 2014 general video game playing competition.

Schaul, T. 2013. A video game description language for model-based or interactive learning. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, 1–8. IEEE.

Shaker, N.; Shaker, M.; and Togelius, J. 2013. Ropossum: An authoring tool for designing, optimizing and solving Cut the Rope levels. In AIIDE.

Smith, A. M.; Andersen, E.; Mateas, M.; and Popović, Z. 2012. A case study of expressively constrainable level design automation tools for a puzzle game. In Proceedings of the International Conference on the Foundations of Digital Games, 156–163. ACM.

Smith, G.; Whitehead, J.; and Mateas, M. 2010. Tanagra: A mixed-initiative level design tool. In Proceedings of the Fifth International Conference on the Foundations of Digital Games, 209–216. ACM.

Srisuphab, A.; Silapachote, P.; Chaivanichanan, T.; Ratanapairojkul, W.; and Porncharoensub, W. 2012. An application for the game of Go: Automatic live Go recording and searchable Go database. In TENCON 2012 - 2012 IEEE Region 10 Conference, 1–6. IEEE.

Unity Technologies. 2017. Unity game engine. https://unity3d.com. Accessed: 2017-03-01.

Varvaressos, S.; Lavoie, K.; Massé, A. B.; Gaboury, S.; and Hallé, S. 2014. Automated bug finding in video games: A case study for runtime monitoring. In Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, 143–152. IEEE.

Wallner, G.; Kriglstein, S.; Gnadlinger, F.; Heiml, M.; and Kranzer, J. 2014. Game user telemetry in practice: A case study. In Proceedings of the 11th Conference on Advances in Computer Entertainment Technology, ACE '14, 45:1–45:4. New York, NY, USA: ACM.

White, W.; Sowell, B.; Gehrke, J.; and Demers, A. 2008. Declarative processing for computer games. In Proceedings of the 2008 ACM SIGGRAPH Symposium on Video Games, 23–30. ACM.

YouTube. 2017. YouTube. https://www.youtube.com.

Zoeller, G. 2010. Development telemetry in video games projects. In Game Developers Conference.