<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>LegalAIIA, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>An Approach to Human-Machine Teaming in Legal Investigations Using Anchored Narrative Visualisation and Machine Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Simon Attfield</string-name>
          <email>s.attfield@mdx.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Windridge</string-name>
          <email>d.windridge@mdx.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bob Fields</string-name>
          <email>b.fields@mdx.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kai Xu</string-name>
          <email>k.xu@mdx.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Middlesex University</institution>
          ,
          <addr-line>London</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>eDiscovery</kwd>
        <kwd>TAR</kwd>
        <kwd>anchored narratives</kwd>
        <kwd>sensemaking</kwd>
        <kwd>distributed cognition</kwd>
      </kwd-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>17</volume>
      <issue>2019</issue>
      <abstract>
        <p>During legal investigations, analysts typically create external representations of an investigated domain as resource for cognitive offloading, reflection and collaboration. For investigations involving very large numbers of documents as evidence, creating such representations can be slow and costly, but essential. We believe that software tools, including interactive visualisation and machine learning, can be transformative in this arena, but that design must be predicated on an understanding of how such tools might support and enhance investigator cognition and team-based collaboration. In this paper, we propose an approach to this problem by: (a) allowing users to visually externalise their evolving mental models of an investigation domain in the form of thematically organized Anchored Narratives; and (b) using such narratives as a (more or less) tacit interface to cooperative, mixed initiative machine learning. We elaborate our approach through a discussion of representational forms significant to legal investigations and discuss the idea of linking such representations to machine learning.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Legal investigations, particularly in regulatory and litigation
contexts, tend to be characterised by the simultaneous challenge
and opportunity of very large numbers of documents as a source of</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background - External Representations for Investigatory Sensemaking</title>
      <p>
        The creation, augmentation and use of representations, whether
internal (in the head) or external (in the world), are a central part of
sensemaking. This idea is reflected in most significant theories and
models of sensemaking. For example, Klein et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] discuss the
role of mental ‘frames’ in sensemaking, and Pirolli and Card [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
emphasise the way intelligence analysts externally structure
information into representations as part of a wider sensemaking
process (referring to this step as ‘schematization’).
      </p>
      <p>
        External representations, when created, can be intimately
involved in the cognitive processes of sensemaking. The approach
of Distributed Cognition is predicated on the idea that cognitive
activities make use of external as well as internal representations,
with external representations seen not only as sources of
information, but as structures that transform the cognitive task itself
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Having an effective representation can lead to different and
better strategies for carrying out a task, better performance, and
lower mental effort. The form and properties of external
representations can lead to changes in cognitive processes as these
become integrated into and participate within these processes.
Distributed Cognition aims to dissolve the traditional division of
inside/outside the individual when analysing cognition in order to
explore the complex relationships between people, artefacts and
technology when accounting for how thinking gets done.
      </p>
      <p>
        In an attempt to render the concepts of distributed cognition
more useful and applicable to the design of human-computer
interaction, Wright et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] identified a collection of ‘abstract
information resources’ that can form a part of the process of
carrying out activities. Such abstract structures can be represented
in a variety of forms, embodied in physical media (possibly as a
result of the design of interactive technologies) or located in the
minds of members of a distributed cognitive system. More recently,
Attfield et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] applied this idea to sensemaking, identifying a
taxonomy of abstract information resources that can be represented
internally or externally during sensemaking and which are
transformed during the process of sensemaking. These resources
include representations of the domain (specific or general), intents
(from high-level values to low-level goals), and representations of
action (possible, planned or performed). Actors involved in the
sensemaking activity may make use of any or all of these, and the
nature of their representation determines how they may do so.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Narrative</title>
      <p>
        External representations can take many forms depending on the
entities and relationships being represented. Faisal, Attfield and
Blandford [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed six basic types: spatial, sequential
(including narrative), networks, hierarchical, argumentation
structures and faceted. Here we discuss two types which are
important for constructing domain representations during
investigatory sensemaking: narrative and argument. Later we
extend this with a discussion of thematic organisation.
      </p>
      <p>
        For example, Attfield and Blandford [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] reported a study of the
cognitive work of lawyers involved in some large corporate
investigations. As part of their work, the lawyers represented their
analyses in the form of sequences of connected events or
chronologies, created around different themes of an investigation.
These narrative representations, which were ultimately very large,
played a central role in the way that the lawyers thought about and
collaborated around the investigations and they were central in the
generation of insights. The lawyers reported that this was a natural
way for them to think about an investigation.
      </p>
      <p>
        Research shows that narrative representations play a
particularly important role in the way that people reason about
evidence. For example, Pennington and Hastie [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] conducted a
series of studies into the way that jurors mentally comprehend
evidence in legal cases. They found that, irrespective of how
evidence was presented, jurors structured it in terms of narratives
that made sense to them. Not only that, they also added information
to make the stories more coherent. This finding is typical of studies
into evidential reasoning and provided a basis for what Pennington
and Hastie called their Story Model. According to the Story Model
people find it easiest to make sense of legal evidence through
narratives that they construct in order to explain the evidence.
Importantly, the resulting narrative is constructed not just from the
evidence, but by reasoning from evidence to explanation.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Argumentation</title>
      <p>
        Investigatory sensemaking involves drawing conclusions from
evidence using generalised beliefs about the way the world works
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. For example, an investigator may infer from reading an email
in which person a thanks person b for a gift, that a gift was
exchanged, with this inference depending on both the text in the
email and the more general belief that people don’t usually express
gratitude in this way when in reality no gift has been exchanged.
This is an example of an abductive inference (reasoning to the best
possible explanation) which is characteristic of investigatory
sensemaking. Many thousands of such inferences may be made
during an investigation, and given their generally defeasible nature,
it can be important that they are amenable to review. For example,
Attfield and Blandford [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] reported on the way that lawyers
maintained links from chronology entries to supporting
documentary evidence and traversed them frequently.
      </p>
      <p>
        Based on a study of how Dutch judges reasoned about cases,
Wagenaar [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] observed their prominent use of narrative
connections and argumentation links and developed from this the
notion of Anchored Narratives. An anchored narrative is a hybrid
representational form combining narrative with argumentational
links to supporting evidence. Bex [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] has used this approach to
develop a formal theory that combines stories with evidential
arguments in a hybrid framework for structured argumentation.
      </p>
      <p>Figure 1 shows an example of an Anchored Narrative in which
events are represented as a connected narrative (from top to bottom
in figure 1) attached to supporting evidence (where available).</p>
      <p>Significantly, events are anchored, not only in evidence, but
within the context of the unfolding story. The plausibility of each
event is then judged not solely in virtue of its supporting evidence,
but also by the plausibility afforded by its position in the
surrounding narrative and how this relates to generalised beliefs
about how the world works. Figure 1 also shows the representation
of multiple competing narratives with a point of divergence based
on evidence from interview 1 and interview 2. Explicitly
representing such competing conclusions can be helpful in a
context of defeasible reasoning where multiple interpretations or
claims may be explicitly considered.</p>
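      <p>As an illustration of this representational form (a sketch under our own assumptions, not the formal theory of [10]), an anchored narrative can be modelled as events that carry links to supporting evidence and may branch into competing continuations at a point of divergence:</p>
      <preformat>
```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A source item anchoring an event (e.g. an email or an interview)."""
    doc_id: str
    excerpt: str

@dataclass
class Event:
    """One event in the narrative, anchored in zero or more evidence items."""
    description: str
    evidence: list = field(default_factory=list)
    # One successor gives a linear story; several successors represent
    # a point of divergence between competing narratives.
    successors: list = field(default_factory=list)

def narratives(event, prefix=()):
    """Enumerate every complete narrative (root-to-leaf event path)."""
    path = prefix + (event.description,)
    if not event.successors:
        return [path]
    paths = []
    for nxt in event.successors:
        paths.extend(narratives(nxt, path))
    return paths

# A divergence anchored in two conflicting interviews
meeting = Event("A and B meet", [Evidence("doc-7", "calendar entry")])
gift = Event("A gives B a gift", [Evidence("int-1", "interview 1")])
no_gift = Event("no gift is exchanged", [Evidence("int-2", "interview 2")])
meeting.successors = [gift, no_gift]
```
      </preformat>
      <p>Here <monospace>narratives(meeting)</monospace> yields two competing root-to-leaf stories, each of whose events remains traceable to its anchoring evidence.</p>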
    </sec>
    <sec id="sec-6">
      <title>3. Interactive Visualisation</title>
      <p>
        Data visualisation can support insight into abstract
data by leveraging the power of the human perceptual
system to convert cognitive problems into perceptual problems
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. It can reveal insights that are otherwise difficult to discover
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Interest has developed in extending data visualisation beyond
the display of large datasets to support other aspects of
sensemaking (including what Pirolli and Card [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] referred to as
schematization) and also to enhance human sensemaking by
coupling representations to computational components such as
machine learning; this is an approach emphasised by Visual
Analytics. Figure 2 shows Kohlhammer et al.’s [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] model of the
Visual Analytics process. The main difference between this model
and a data visualisation pipeline is the addition of the ‘model’
component (representing the product of automated data analysis
such as machine learning) and its interactions with other
components.
      </p>
      <p>
        Visual Analytics tools can facilitate the process of constructing
narratives from data and capturing the data and analysis that lead to
them. Figure 3 shows a tool we have developed called SenseMap
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. SenseMap provides the user with a freeform interactive space
(right) which can be used for constructing anchored narratives from
data. The user interacts with data and represents interesting
discoveries as boxes in the main panel with a simple click.
Discoveries can be moved freely to form thematic groups or
evolving narratives. SenseMap also captures the provenance of each
discovery such that clicking on a discovery will restore the original
data source, i.e. discoveries are anchored in source data.
      </p>
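      <p>The provenance behaviour described above (an illustrative sketch with invented names, not SenseMap's actual implementation, which is described in [14]) amounts to each discovery recording a pointer back to its source:</p>
      <preformat>
```python
class Workspace:
    """Minimal sketch: discoveries carry a pointer back to their source."""

    def __init__(self):
        self.discoveries = []

    def capture(self, text, source_ref):
        # Creating a discovery records where it came from (its provenance)
        self.discoveries.append({"text": text, "source": source_ref})

    def restore(self, index):
        """Clicking a discovery re-opens the original data source."""
        return self.discoveries[index]["source"]

ws = Workspace()
ws.capture("payment approved by B", "docs/email-341.txt")
```
      </preformat>
      <p>Calling <monospace>ws.restore(0)</monospace> then returns the anchoring source, so the discovery is never detached from its evidence.</p>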
      <p>
        In addition to organizing discoveries into evolving narratives,
we see value in organising narratives into identifiable episodes and
themes. Investigations can be complex. Investigation teams have
been shown to divide analyses along the lines of episodes and
themes as these become apparent. This has the value of reducing
cognitive complexity and supporting the division of labour [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
Different episodes and themes will also have different theories of
relevance, and we anticipate that such structuring can be exploited
by machine learning for the (further) identification of relevant
information in large evidential collections. Hence, we propose
structuring events at the interface into discrete episodes and by
hierarchical theme. Figure 4 shows a conceptual model of this idea
in which connected events form episodes, which in turn become
components in anchored narratives. Similarly, discoveries can be
grouped as hierarchically organised themes.
      </p>
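      <p>The conceptual model of Figure 4 can be sketched as a small hierarchy (with hypothetical names chosen for illustration) in which themes nest recursively and collect the discoveries of their sub-themes, so that relevance can be assessed at any level of the tree:</p>
      <preformat>
```python
from dataclasses import dataclass, field

@dataclass
class Theme:
    """A theme contains discoveries and hierarchically nested sub-themes."""
    name: str
    discoveries: list = field(default_factory=list)
    subthemes: list = field(default_factory=list)

    def all_discoveries(self):
        # A theme's scope is its own discoveries plus those of its subtree
        found = list(self.discoveries)
        for sub in self.subthemes:
            found.extend(sub.all_discoveries())
        return found

# An episode is, analogously, an ordered run of events, and an anchored
# narrative an ordered run of episodes; here we show only the theme side.
gifts = Theme("gifts and hospitality",
              discoveries=["email thanking for a gift"],
              subthemes=[Theme("travel", discoveries=["flight booking"])])
```
      </preformat>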
      <p>
        Besides interfacing with users, there are many examples in
which Visual Analytics can provide the interface between domain
experts and machine learning algorithms [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Some of these allow
users to provide feedback on machine learning outcomes (such
as classifications or predictions), thereby improving the underlying
machine learning model; these are often known as
active or interactive learning. Other methods focus on exposing the
inner workings of a machine learning model, i.e. how the model
makes its classifications or predictions. This is known as explainable AI
(XAI) and is critical to issues of model transparency such
as model bias and user trust. These issues are closely related to the
discussions in the next section.
      </p>
    </sec>
    <sec id="sec-7">
      <title>4. Coupling with Machine Learning</title>
      <p>The nature of the problem as defined implicates a unique nexus
between machine learning, human computer interfacing (HCI) and
machine representation. While domain summarisation is a
well-established aspect of machine learning-based textual and image
analytics, it is necessarily a passive, feedforward process unless
explicit human-in-the-loop considerations are incorporated. Our
problem, when cast in machine learning terms, can be specified as
the building of a recommender system for returning evidence in
relation to significant, or user-salient, aspects of the chronological
data stream at arbitrary levels of hierarchical
aggregation/representation. The problem of relevance has both
'vertical' (abstractive) and 'horizontal' (chronological)
aspects, given that narrative sequences and events (evidence) exist
in a subsumptive relationship.</p>
      <p>Thus, we seek a system in which user and machine exist within
a convergent hermeneutic feedback cycle, for which potentially
supportive evidence is returned to the user on the basis of the
current narrative representation at some appropriate level of
hierarchical aggregation. In response, the user feeds back
information on the utility of this evidence as part of the constructed
narrative sequence (at its appropriate level of representation) in
order either to further develop an existing representational frame
or else to initiate a novel one.</p>
      <p>The hierarchical aspect of the problem significantly multiplies
the complexity of the machine learning methodology required to
approach it. In particular, sequence-based recommender systems
typically rely on query proximity within some appropriate metric
(or quasi-metric) space. However, we here require that the proximal
region to the user's query (anchor) within 'narrative space' takes
into account arbitrary levels of aggregation (or narrative
coarse-graining) in a way that both encompasses (potentially evolving)
user preference and does not burden the user with excessive
feedback requirements.</p>
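      <p>One way to make 'proximity in narrative space' concrete at multiple levels of aggregation (a sketch under our own assumptions, with a stand-in embedding rather than a committed design) is to represent each event as a vector, derive a coarse-grained episode query by pooling its event vectors, and reuse the same similarity machinery for both the horizontal and the vertical axis:</p>
      <preformat>
```python
import hashlib
import numpy as np

def embed(text, dim=16):
    # Stand-in embedding: a deterministic pseudo-random unit vector per
    # text; a real system would use a learned text encoder.
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def episode_vector(event_texts):
    # Coarse-grained ('vertical') query: mean-pool the event vectors
    pooled = np.mean([embed(t) for t in event_texts], axis=0)
    return pooled / np.linalg.norm(pooled)

def rank_evidence(query_vec, corpus):
    # Order candidate documents by cosine similarity to the query anchor
    scored = sorted(corpus,
                    key=lambda item: float(np.dot(query_vec, embed(item[1]))),
                    reverse=True)
    return [doc_id for doc_id, text in scored]

corpus = [("d1", "thank you for the generous gift"),
          ("d2", "quarterly budget spreadsheet"),
          ("d3", "dinner invitation from the supplier")]
query = episode_vector(["gift received", "hospitality accepted"])
ranking = rank_evidence(query, corpus)
```
      </preformat>
      <p>Because an episode query is just a pooled event query, the same retrieval step serves any level of narrative coarse-graining.</p>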
      <p>To this end, we propose to use active learning within the context
of the querying of the sequential aggregation so as to achieve the
optimal reduction in the bandwidth of user feedback required to
obtain a convergent recommender platform for narrative
construction. Active learning is a process by which machine
learning hypotheses are fed back to the user (here via appropriate
visualisation techniques) in a manner such that preference feedback
to the machine learner is optimally exploited to improve learning
performance. This typically provides a logarithmic improvement in
user feedback requirements, in terms of labelling effort and user
load, relative to classical machine learning approaches.
Maximally rapid mutual convergence on hypotheses of interest to
the user is thus ensured, such that human and machine mutually
adapt to take advantage of their respective capabilities in the most
synergistic fashion.</p>
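      <p>The feedback cycle described above can be sketched as pool-based active learning with uncertainty sampling, a standard strategy; the features and the labelling 'oracle' below are synthetic stand-ins for the investigator and the evidence collection:</p>
      <preformat>
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic document features; the 'oracle' plays the investigator,
# labelling a document relevant when its first feature is positive.
X_pool = rng.normal(size=(200, 5))
oracle = (X_pool[:, 0] > 0).astype(int)

# Initial 'seed' set of annotations, chosen to contain both classes
order = np.argsort(X_pool[:, 0])
labelled = [int(order[0]), int(order[1]), int(order[-1]), int(order[-2])]
unlabelled = [i for i in range(200) if i not in labelled]

model = LogisticRegression()
for _ in range(15):                      # 15 rounds of user feedback
    model.fit(X_pool[labelled], oracle[labelled])
    # Query the pool item the model is least certain about
    probs = model.predict_proba(X_pool[unlabelled])[:, 1]
    pick = unlabelled[int(np.argmin(np.abs(probs - 0.5)))]
    labelled.append(pick)                # the user labels one item
    unlabelled.remove(pick)

accuracy = model.score(X_pool, oracle)
```
      </preformat>
      <p>Each round spends the user's single label on the most ambiguous item, which is how active learning reduces the feedback bandwidth needed for the recommender to converge.</p>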
      <p>The proposed system would thus exploit feedback from the user
in its learning-loop in order to develop a better tailored model of
narrative and chronological salience via the use of active learning
to pro-actively present representation alternatives to the user across
the interface. Crucial to bootstrapping this process is an initial 'seed'
set of domain-annotated data, constituting an initial extraction of
salient descriptors from the narrative stream.</p>
    </sec>
    <sec id="sec-8">
      <title>5. Discussion/Conclusion</title>
      <p>We believe that there is a prospect of achieving high quality,
synergistic relationships between human and machine cognition in
which one supports the other to enable rapid convergence on
significant and important narratives during investigatory
sensemaking. An approach that we propose involves the use of
interactive visualisation to allow users to construct structured
external representations of the investigated domain, coupled to
machine learning models that might exploit this structure to model
and predict investigators’ evolving interests around different parts
of the investigation. This is essentially a mixed initiative approach
to sensemaking in which computational and human agents establish
common ground around investigatory goals through common
access to a visualisation interface. In future work we seek to
develop a prototype of this approach to provide proof-of-concept
validation and to develop the techniques involved through iterative
empirical trials.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phillips</surname>
            ,
            <given-names>J. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rall</surname>
            ,
            <given-names>E. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Peluso</surname>
            ,
            <given-names>D. A.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>A data-frame theory of sensemaking</article-title>
          .
          <source>In Expertise out of context: Proceedings of the sixth international conference on naturalistic decision making</source>
          (pp.
          <fpage>113</fpage>
          -
          <lpage>155</lpage>
          ). New York, NY, USA: Lawrence Erlbaum.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Pirolli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Card</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2005</year>
          , May).
          <article-title>The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis</article-title>
          .
          <source>In Proceedings of international conference on intelligence analysis (Vol. 5</source>
          , pp.
          <fpage>2</fpage>
          -
          <lpage>4</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Hutchins</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>1995</year>
          ).
          <article-title>Cognition in the Wild</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Wright</surname>
            ,
            <given-names>P. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fields</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Harrison</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model</article-title>
          .
          <source>Human-Computer Interaction</source>
          ,
          <volume>15</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Attfield</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fields</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Baber</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>A resources model for distributed sensemaking</article-title>
          .
          <source>Cognition, Technology &amp; Work</source>
          ,
          <volume>20</volume>
          (
          <issue>4</issue>
          ),
          <fpage>651</fpage>
          -
          <lpage>664</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Faisal</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Attfield</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Blandford</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>A classification of sensemaking representations</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Attfield</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Blandford</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Making sense of digital footprints in team-based legal investigations: The acquisition of focus</article-title>
          .
          <source>Human-Computer Interaction</source>
          ,
          <volume>26</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>38</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Pennington</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hastie</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1991</year>
          ).
          <article-title>A cognitive theory of juror decision making: The story model</article-title>
          . Cardozo L. Rev.,
          <volume>13</volume>
          ,
          <fpage>519</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Wagenaar</surname>
            ,
            <given-names>W. A.</given-names>
          </string-name>
          (
          <year>1995</year>
          ).
          <article-title>Anchored narratives: A theory of judicial reasoning and its consequences</article-title>
          .
          <source>Psychology, law and criminal justice: International developments in research and practice</source>
          ,
          <fpage>267</fpage>
          -
          <lpage>285</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Bex</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>An Integrated Theory of Causal Stories and Evidential Arguments</article-title>
          .
          <source>In Proceedings of the 15th international conference on artificial intelligence and law</source>
          (pp.
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Few</surname>
          </string-name>
          (
          <year>2013</year>
          )
          <article-title>Data Visualization for Human Perception, In: The Encyclopedia of Human-computer Interaction</article-title>
          . Interaction Design Foundation.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Card</surname>
            ,
            <given-names>S. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mackinlay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Shneiderman</surname>
            ,
            <given-names>B</given-names>
          </string-name>
          . (Eds.). (
          <year>1999</year>
          ).
          <article-title>Readings in Information Visualization: Using Vision to Think</article-title>
          . Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Kohlhammer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keim</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pohl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Santucci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Andrienko</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Solving problems with visual analytics</article-title>
          .
          <source>Procedia Computer Science</source>
          ,
          <volume>7</volume>
          ,
          <fpage>117</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>P. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bardill</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salman</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herd</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>B. W.</given-names>
          </string-name>
          (
          <year>2016</year>
          , October).
          <article-title>SenseMap: Supporting browser-based online sensemaking through analytic provenance</article-title>
          .
          <source>In 2016 IEEE Conference on Visual Analytics Science and Technology (VAST)</source>
          (pp.
          <fpage>91</fpage>
          -
          <lpage>100</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Endert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ribarsky</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turkay</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>B. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nabney</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blanco</surname>
            ,
            <given-names>I. D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2017</year>
          , December).
          <article-title>The state of the art in integrating machine learning into visual analytics</article-title>
          .
          <source>In Computer Graphics Forum</source>
          (Vol.
          <volume>36</volume>
          , No.
          <issue>8</issue>
          , pp.
          <fpage>458</fpage>
          -
          <lpage>486</lpage>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>