<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Co-constructing Subjective Narratives for Understanding Interactive Simulation Sessions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anne-Gwenn Bosser</string-name>
          <email>bosser@enib.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Diéguez</string-name>
          <email>dieguez@enib.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>François Legras</string-name>
          <email>francois.legras@deev-interaction.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ariane Bitoun</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>MASA Group</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lab-STICC, CERV, ENIB</institution>
          ,
          <addr-line>Brest</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Deev Interaction</institution>
          ,
          <addr-line>Brest</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Stories are used by human beings to transmit knowledge, explain events, and generally make sense of the world. Creating a narrative can thus be considered an activity that produces meaning, allowing the narrator to formulate causal relations between selected events. Through the act of telling a story, the narrator makes explicit their own understanding of a given situation. In this paper, we describe an ongoing project based on an existing simulation-based training software. We are creating a set of tools allowing users of such software to make sense of what happened during the simulation from their point of view. Building on previous work on representing the causal flow of events formalized as actions, we describe the issues we are currently tackling to provide a tool that helps users describe their own view of what happened. In addition to supporting reflection on the training session, such a tool has the potential to support learning by allowing different points of view to be contrasted during pedagogical activities such as cooperative learning or tutored debriefing.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Simulation training is a form of experiential pedagogy considered particularly effective. It may be underpinned by
software (the family of serious games, i.e. games dedicated to learning) and associated with a debriefing session
allowing the participants to understand what happened during training [FG07]. This provides a safe and cost-effective
setting in which participants learn from the actions they performed during the simulation.</p>
      <p>Military Training often relies on a simulation whether it occurs at a sophisticated instrumented range, in a collective
training simulator system, or in a command and staff exercise using a mathematical model driven war game. Training
occurs in live, virtual, constructive, or mixed simulations of battlefield environments. In the live environment, units use
operational equipment and actual terrain to perform against an opposition force composed of military personnel (live
force-on-force) or targets (live fire). In virtual environments, units use simulators to represent equipment and weapons.
Weapon effects, terrain and enemy forces are computer generated. In constructive environments, battlefield outcomes are
determined by a computer simulation in order to provide battle effects supporting command and staff training. Training in
all of these simulation environments should provide individuals and units with feedback about how their actions contributed
to mission success or failure.</p>
      <p>Broadly speaking, a training session starts with a phase of preparation, where the realistic operational environment
is created, followed by a phase of exercise where all the participants take part in the simulation and, finally, a phase of
debriefing where the players have an interactive discussion (guided by a moderator) in order to understand what happened
during the training and why, as well as how to improve or sustain performance in similar situations in the future. Duration
and timing of the discussion [AS94] are very important: too many details lead to a lack of concentration among the
participants, and inadequate timing tends to make them forget the reasons why they took a specific course of action.</p>
      <p>The large amount of data generated during training complicates this task (a short simulation may imply the generation of
approximately 60,000 messages). To alleviate this, we propose to develop a narrative creation toolkit for assisting human-made
explanations, in terms of a story (or narrative), of a given simulation session, for all the participants involved in it.
The idea is to provide a semi-automated analysis of the course of the simulation, based on a narrative reconstruction of the
(potential) causal links that exist between the events that occurred in the simulation, together with tools to support their
structured presentation. Causal graphs in the tradition of [Pea09] will then be integrated into the war game replay interface
by a system of vignettes that provides the participants with useful information for explaining a given situation which
occurred during the simulation.</p>
    </sec>
    <sec id="sec-2">
      <title>Narrative debriefing for simulation-based training</title>
      <p>Humans have always used stories to make sense of the world and explain the unfolding of past events. A number of
technology-enhanced learning approaches have therefore naturally adopted narrative-based pedagogy [DP09]. In such an
approach, telling a story entails formulating causal relationships between selected events [vdBvOPV08, Abo10]. In the field
of education and serious games, storification [AAH09] is used to describe the creation of a causal structure by establishing
links between narrative events. One of the challenges in these areas is the realization of systems that automate or
semi-automate this activity for various user profiles. Conversely, studies in the psychology of story understanding
have also shown the importance of the perception of causal relationships between narrative events [TS85].</p>
      <p>While modeling causality occupies a central place in Artificial Intelligence (AI) [Pea09], Narrative Intelligence’s point
of view is closer to commonsense reasoning: the narrator must select the events to be told, express the causal links among
them and select a level of granularity of such connections in order to make the final story meaningful. Contrary to
classical approaches from AI, narrative intelligence attempts to provide an explanation in a form that is presumably more
understandable for a human user [Rie16].</p>
      <p>Our aim is not to provide a fully automated story construction system such as those of recent machine learning approaches:
our system must support the confrontation of different points of view during debriefing and cooperative learning activities.
As such, it should help each user construct and explain their own subjective narrative, depending on the information
they had access to (which may vary widely with the roles of the participants) and their decision rationale. The tutor
in charge will have access to all information, and their constructed narrative will be different as well.</p>
    </sec>
    <sec id="sec-3">
      <title>A Linear Logic based approach to story construction and analysis</title>
      <p>The formalisation of narratives has often been approached in Artificial Intelligence from the perspective
of Knowledge Representation and Reasoning about Action and Change (RAC), starting from the atomic modelling of a
narrative action and describing its impact on the environment. The authors of [BCC10, BCFC11] use Linear Logic [Gir87a]
for this modelling, which has led to formal approaches to story analysis and property verification. Among other advantages,
this approach allows each event to be modelled declaratively, by describing its impact on the environment in terms of
consumption and production of resources. This has led to systems where stories generated from a linear logic-based
declarative specification could be described by reconstructing causal relationships between events, and displayed as
causal diagrams [MBFC13, MFBC14].</p>
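      <p>To make this resource-based reading concrete, the following Python sketch (our illustration, not the formalism of [BCC10, BCFC11]; all event and resource names are invented) models each event by the resources it consumes and produces, and reconstructs a causal link whenever one event consumes a resource another event produced:</p>
      <preformat>
```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A narrative event described by its impact on the environment."""
    name: str
    consumes: tuple  # resources removed from the environment
    produces: tuple  # resources added to the environment

def apply(state: Counter, action: Action) -> Counter:
    """Fire an action, checking that the consumed resources are available."""
    needed = Counter(action.consumes)
    missing = needed - state  # Counter subtraction keeps only shortfalls
    if missing:
        raise ValueError(f"{action.name}: missing {set(missing)}")
    return state - needed + Counter(action.produces)

def causal_links(trace):
    """Link e1 -> e2 when e1 produced a resource that e2 later consumed."""
    links, producers = [], {}
    for action in trace:
        for r in action.consumes:
            if r in producers:
                links.append((producers[r], action.name, r))
        for r in action.produces:
            producers[r] = action.name
    return links

# Toy trace: destroying a bridge forces a convoy to be rerouted.
destroy = Action("destroy_bridge", ("bridge_intact",), ("bridge_down",))
reroute = Action("reroute_convoy", ("convoy_ready", "bridge_down"),
                 ("convoy_delayed", "bridge_down"))
state = Counter({"bridge_intact": 1, "convoy_ready": 1})
state = apply(state, destroy)
state = apply(state, reroute)
print(causal_links([destroy, reroute]))
# -> [('destroy_bridge', 'reroute_convoy', 'bridge_down')]
```
      </preformat>
      <p>Here the multiset of resources loosely plays the role of the linear logic context, with <monospace>consumes</monospace> and <monospace>produces</monospace> mirroring the two sides of a linear implication.</p>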
      <p>Building on this previous work, we propose to extend these formalisms to cope with two important aspects of
human understanding from the perspective of narrative analysis: the exploration of counterfactuals, and the granularity of
causality.</p>
      <p>Counterfactuals: from a psychological perspective, counterfactual reasoning is the mental simulation of alternative
scenarios of the type “what if ...”, where the invalidation of one or more events leads to the deduction of an
alternative reality. It plays a central role in the judgement of causality associated with a set of events [Maz04]. In AI,
counterfactual reasoning was first formalised by D. Lewis [Lew73], who provided a clear semantics based on spheres,
leading to a great deal of results in argumentation [Sak14], causality [Ort99] and hypothetical reasoning [Hal99]. The
problem has recently been revisited in [BBG18], where a novel formalisation in Answer Set Programming [BET11]
is provided. Exploring counterfactual scenarios entails analysing variants produced by the simulation in the replay
mode available in the tool.</p>
      <p>Granularity: the relationship between the number of causal relationships over a set of events and the importance of
perceiving an event in a story has been widely developed in [Maz04, TvdB85, TS85]. Based on these contributions, we
expect to develop heuristics that work on a predefined narrative structure. Those heuristics would allow, at least, the
assisted construction of a well-formed story that explains a given situation. Other heuristics based on domain-specific
knowledge, as well as on interaction patterns between the various actors of the simulation, are also being explored.
We are currently working on identifying complex events as well as higher-level actions and their possible
decomposition (see [HPX16]).</p>
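      <p>The “what if” exploration can be sketched as a replay that skips an invalidated event, so that everything causally downstream of it drops out of the alternative timeline. This is only a toy illustration of the idea (not the formalisation of [BBG18] nor the tool’s actual replay mode); the event and resource names are invented:</p>
      <preformat>
```python
from collections import Counter

def replay(initial, events, invalidated=frozenset()):
    """Replay (name, consumes, produces) events; skip invalidated ones
    and any event whose preconditions no longer hold."""
    state, fired = Counter(initial), []
    for name, consumes, produces in events:
        needed = Counter(consumes)
        if name in invalidated or (needed - state):
            continue  # event cannot (or must not) fire in this timeline
        state = state - needed + Counter(produces)
        fired.append(name)
    return fired, state

events = [
    ("destroy_bridge", ("bridge_intact",), ("bridge_down",)),
    ("reroute_convoy", ("bridge_down",), ("convoy_delayed",)),
]
actual, _ = replay({"bridge_intact": 1}, events)
whatif, _ = replay({"bridge_intact": 1}, events,
                   invalidated={"destroy_bridge"})
print(actual)  # ['destroy_bridge', 'reroute_convoy']
print(whatif)  # []
```
      </preformat>
      <p>Comparing the two timelines supports a counterfactual judgement of causality: the convoy delay disappears once the bridge destruction is invalidated.</p>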
    </sec>
    <sec id="sec-4">
      <title>A constructive simulation for military training</title>
      <p>The training software we use relies on a constructive simulation which allows us to engage brigade and division command
staff in large-scale conflict scenarios such as stabilisation operations, terrorist threats or natural disasters. It simulates a
diverse range of situations in realistic environments and lets trainees lead thousands of autonomous subordinate units (at
platoon and company levels) on the virtual field. Agents can receive operation orders and execute them without additional
input from the players, while adapting their behavior accordingly as the situation evolves.</p>
      <p>Models capturing such behaviors consist of two components: the algorithms that make agents perceive, move,
communicate and shoot, and the description of the capabilities of the underlying equipment stored in a database. The simulation
session database contains three different types of information:</p>
      <p>The data regarding the physical elements: the composition of the units is described here. Because the simulation
is constructive, most features of the equipment or units are described by their effects or their capacities. This
facilitates their description in terms of action and change.</p>
      <p>The initialization data for the scenario containing the following information: terrain, order of battle, weather, data
provided by the simulation such as events, knowledge obtained by the agents, etc.</p>
      <p>The data generated by the simulation describing the evolution of the situation: information describing the
evolution of the game containing all events, knowledge about the environment and all mission reports.</p>
      <p>All this information is presented to the participants as a set of messages exchanged among the agents during the simulation.
An extract of a simulation trace is shown below:</p>
      <preformat>[07:29:47] - Report - ENG.Counter mobility platoon: Disembarkment started
.....
[07:30:17] - Report - INF.Mortar troop: Unit detected at ...
.....
[07:30:17] - Report - INF.Rifle platoon: Unit detected at ...</preformat>
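      <p>Translating such messages into formal action descriptions starts with parsing them into structured events. The sketch below is a minimal illustration assuming only the timestamped format visible in the extract (the real message schema is far richer); the regular expression and field names are ours:</p>
      <preformat>
```python
import re
from datetime import time

# Format assumed from the extract:
#   [HH:MM:SS] - Report - BRANCH.Unit name: event text
MSG = re.compile(
    r"\[(\d{2}):(\d{2}):(\d{2})\]\s*-\s*(\w+)\s*-\s*"
    r"(\w+)\.([\w ]+):\s*(.+)")

def parse_report(line):
    m = MSG.match(line.strip())
    if not m:
        return None  # not a report-style message
    h, mi, s, kind, branch, unit, event = m.groups()
    return {"time": time(int(h), int(mi), int(s)), "kind": kind,
            "branch": branch, "unit": unit.strip(), "event": event.strip()}

msg = parse_report("[07:30:17] - Report - INF.Mortar troop: Unit detected at ...")
print(msg["branch"], "/", msg["unit"], "/", msg["event"])
# -> INF / Mortar troop / Unit detected at ...
```
      </preformat>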
      <p>Once the initialisation data, the elements of the simulation, and the report messages are translated into a formal action
description, a raw analysis in the fashion of [MBFC13] is constructed. Visualization tools must then be developed to
support human explanation. The training software we use is equipped with a replay function, which we intend to enrich
with narrative content. Traditional military Command&amp;Control tools are a suitable starting point (map layers overlaid with
specific symbology), but to make sense, certain points of view must be chosen for each node in order to understand each
situation and their relationships. We propose to build Vignettes to represent the most salient nodes of the narrative graph
(the selection of the important nodes of the graph will be done via a mixed-initiative strategy involving automated scoring
and user input). This can provide an analysis of the current maneuver in order to:
1. replace the operator in the current situation and explain the current maneuver;
2. propose an automatic synthesis of the tactical situation. This can be the calculation of the current force ratio, or simply a
realistic view of the geographic capacities of units (fire, intelligence, etc.), illustrating a bad use of the forces on the
field.</p>
      <p>Such a synthesis differentiates between zones that must be recognized, conquered, controlled, etc., and enemies that
must be stopped or eliminated. This could be achieved through the interpretation of the advancement of current missions
and the nature of planned missions.</p>
      <p>3. calculate and alter the consequences of specific events. For example, calculating the delay for logistical units or
support units after a “bridge broken” event.</p>
      <p>[Figures 13 and 14: supply management, and support capabilities per unit, color-coded from good (green) to poor (red),
using NATO Joint Military Symbology as defined in [Nat14].]</p>
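      <p>The mixed-initiative selection of salient nodes can be sketched as a simple combination of an automated score with user input. The scoring below (plain causal-graph degree) and all event names are illustrative placeholders for whatever heuristics and interface the tool will actually use:</p>
      <preformat>
```python
def select_vignettes(edges, k=2, pinned=(), excluded=()):
    """Rank causal-graph nodes by degree; user pins always appear,
    excluded nodes never do."""
    degree = {}
    for src, dst in edges:
        degree[src] = degree.get(src, 0) + 1
        degree[dst] = degree.get(dst, 0) + 1
    candidates = [n for n in degree if n not in pinned and n not in excluded]
    ranked = sorted(candidates, key=lambda n: degree[n], reverse=True)
    return list(pinned) + ranked[:max(0, k - len(pinned))]

edges = [("detect", "engage"), ("engage", "retreat"),
         ("destroy_bridge", "reroute"), ("engage", "reroute")]
print(select_vignettes(edges, k=2, pinned=("destroy_bridge",)))
# -> ['destroy_bridge', 'engage']
```
      </preformat>
      <p>In the intended tool, the automated score would come from the causal analysis and domain heuristics, while pins and exclusions capture the user side of the mixed initiative.</p>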
    </sec>
    <sec id="sec-5">
      <title>Related work</title>
      <p>In their paper, Niehaus et al. [NYR+17] point out the use of narrative intelligence for making sense of the battlefield,
and present it as a promising line of research with potential applications in several scenarios such as military, healthcare
or business. They propose a machine learning-based approach that takes into account the narrative structure as well as the
causal links among the different events of the story.</p>
      <p>The Bardic [BBCR+17] system uses narrativisations to describe the activity in complex domains in such a way that
the information becomes accessible to non-experts. This system translates a given log file into a first-order logical theory
expressed in Impulse [EBY15]4 and, from this representation, the system is able to obtain a causal graph by analysing all
action preconditions with their corresponding effects.</p>
      <p>Our proposal differs from [NYR+17] in its approach: we want the participants in the simulation to be able to author
(with assistance) what happened from their point of view; so whilst we plan to incorporate some localised supervised
learning to facilitate repetitive tasks, we are not looking for a fully automated machine learning-based storification system.
With regard to the work reported in [BBCR+17], we find several similarities with the Bardic system: both systems are
supported by a logical formalism (in our case, a resource-based encoding of linear logic [Gir87b]) and both tools
are oriented towards the extraction of causal information from a source of data. However, the way such an extraction
is obtained differs: Bardic uses a STRIPS-based approach [FN71], whereas we work from an expressive logical formalism for
representing actions in Linear Logic, and plan to further explore the use of counterfactual causality [Lew73].</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusions and future work</title>
      <p>In this paper we presented a research project relying on the use of logical tools for debriefing in simulation-based scenarios.
We base our work on previous approaches relying on Linear Logic-based formalisms, and intend to exploit counterfactual
reasoning to allow the user and a narrative assistant to co-construct a causal graph explaining a given simulation.
Finally, in order to display the causal information, we propose an interface based on vignettes.</p>
      <p>We would like to remark that the use of a logical formalism for the narrative representation allows us to isolate the
kernel of our approach and to apply it, modulo minimal changes, to a variety of simulation-based training environments.</p>
      <p>4 Impulse is a first-order temporal-epistemic framework for describing narratives that allows expressing temporal properties
by means of Allen’s relations [All83] as well as epistemic information about the characters thanks to the use of epistemic
modalities [vDHvdHK15].</p>
      <p>This paper is based on the STRATEGIC research project funded by the Direction Générale de l’Armement (DGA) through
the ASTRID Maturation program.</p>
      <sec id="sec-6-1">
        <title>References</title>
        <p>[AS94] G. Allen and R. Smith. After action review in military training simulations. In Proceedings of the Winter Simulation Conference, pages 845–849, 1994.</p>
        <p>[BBCR+17] Camille Barot, Michael Branon, Rogelio E. Cardona-Rivera, Markus Eger, Michelle Glatz, Nancy Green, James Mattice, Colin M. Potts, Justus Robertson, Makiko Shukonobe, et al. Bardic: Generating multimedia narrative reports for game logs. In 10th International Workshop on Intelligent Narrative Technologies, 2017.</p>
        <p>[BBG18] Fiona Berreby, Gauvain Bourgne, and Jean-Gabriel Ganascia. Event-based and scenario-based causality for computational ethics. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’18, pages 147–155, 2018.</p>
        <p>[BCC10] Anne-Gwenn Bosser, Marc Cavazza, and Ronan Champagnat. Linear Logic for non-linear storytelling. In ECAI 2010, volume 215 of Frontiers in Artificial Intelligence and Applications. IOS Press, 2010.</p>
        <p>[BCFC11] Anne-Gwenn Bosser, Pierre Courtieu, Julien Forest, and Marc Cavazza. Structural analysis of narratives with the Coq proof assistant. In Proceedings of Interactive Theorem Proving - Second International Conference (ITP 2011), 2011.</p>
        <p>[BET11] Gerhard Brewka, Thomas Eiter, and Mirosław Truszczyński. Answer set programming at a glance. Communications of the ACM, 54(12):92–103, 2011.</p>
        <p>[DP09] Giuliana Dettori and Ana Paiva. Narrative Learning in Technology-Enhanced Environments, pages 55–69. Springer Netherlands, Dordrecht, 2009.</p>
        <p>[EBY15] M. Eger, C. Barot, and R. M. Young. Impulse: a formal characterization of story. In CMN’15, 2015.</p>
        <p>[FG07] Ruth M. Fanning and David M. Gaba. The role of debriefing in simulation-based learning. Simulation in Healthcare, 2(2):115–125, 2007.</p>
        <p>[FN71] Richard E. Fikes and Nils J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3):189–208, 1971.</p>
        <p>[Gir87a] Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50(1):1–102, 1987.</p>
        <p>[Gir87b] Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50(1):1–102, 1987.</p>
        <p>[Hal99] J. Y. Halpern. Hypothetical knowledge and counterfactual reasoning. International Journal of Game Theory, 28(3):315–330, 1999.</p>
        <p>[HPX16] Andreas Herzig, Laurent Perrussel, and Zhanhao Xiao. On hierarchical task networks. In Logics in Artificial Intelligence - 15th European Conference, JELIA’16, pages 551–557, 2016.</p>
        <p>[Lew73] D. Lewis. Counterfactuals. Blackwell, 1973.</p>
        <p>[Maz04] Lawrence J. Mazlack. Granular causality speculations. In Fuzzy Information, Processing NAFIPS’04, IEEE Annual Meeting, volume 2, pages 690–695. IEEE, 2004.</p>
        <p>[MBFC13] Chris Martens, Anne-Gwenn Bosser, Joao F. Ferreira, and Marc Cavazza. Linear logic programming for narrative generation. In International Conference on Logic Programming and Nonmonotonic Reasoning, pages 427–432. Springer, 2013.</p>
        <p>[MFBC14] Chris Martens, Joao F. Ferreira, Anne-Gwenn Bosser, and Marc Cavazza. Generative story worlds as linear logic programs. In Seventh Intelligent Narrative Technologies Workshop, 2014.</p>
        <p>[Nat14] NATO joint military symbology (MIL-STD-2525D), 2014. Department of Defense Interface Standard.</p>
        <p>[NYR+17] James Niehaus, R. Michael Young, Scott Neal Reilly, Peter Weyhrauch, and James Tittle. Towards intelligent narrative-based interfaces for information discovery. In 10th International Workshop on Intelligent Narrative Technologies, 2017.</p>
        <p>[Ort99] C. L. Ortiz. Explanatory update theory: Applications of counterfactual reasoning to causation. Artificial Intelligence, 108(1):125–178, 1999.</p>
        <p>[Pea09] Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2009.</p>
        <p>[Rie16] M. O. Riedl. Computational narrative intelligence: a human-centered goal for artificial intelligence. In Proceedings of the CHI 2016 Workshop on Human Centered Machine Learning, 2016.</p>
        <p>[Sak14] C. Sakama. Counterfactual reasoning in argumentation frameworks. In Computational Models of Argument - Proceedings of COMMA’14, pages 385–396, 2014.</p>
        <p>[TS85] Tom Trabasso and Linda L. Sperry. Causal relatedness and importance of story events. Journal of Memory and Language, 24(5):595–611, 1985.</p>
        <p>[TvdB85] Tom Trabasso and Paul van den Broek. Causal thinking and the representation of narrative events. Journal of Memory and Language, 24(5):612–630, 1985.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [vdBvOPV08]
          <string-name>
            <given-names>Susan W.</given-names>
            <surname>van den Braak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Herre</given-names>
            <surname>van Oostendorp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Henry</given-names>
            <surname>Prakken</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Gerard A. W.</given-names>
            <surname>Vreeswijk</surname>
          </string-name>
          .
          <article-title>Representing narrative and testimonial knowledge in sense-making software for crime analysis</article-title>
          .
          <source>In Proceedings of the 2008 Conference on Legal Knowledge and Information Systems: JURIX</source>
          <year>2008</year>
          :
          <article-title>The Twenty-First Annual Conference</article-title>
          , pages
          <fpage>160</fpage>
          -
          <lpage>169</lpage>
          . IOS Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [vDHvdHK15]
          <string-name>
            <given-names>H.</given-names>
            <surname>van Ditmarsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>van der Hoek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.P.</given-names>
            <surname>Kooi</surname>
          </string-name>
          . Handbook of Epistemic Logic. College Publications,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>