Algorithmic Transparency of Conversational Agents

Sam Hepenstal, Neesha Kodagoda, Leishi Zhang, Pragya Paudyal, and B.L. William Wong
Middlesex University, London
SH1966@live.mdx.ac.uk, N.Kodagoda@mdx.ac.uk, L.X.Zhang@mdx.ac.uk, P.Paudyal@mdx.ac.uk, W.Wong@mdx.ac.uk

ABSTRACT
A lack of algorithmic transparency is a major barrier to the adoption of artificial intelligence technologies within contexts which require high risk and high consequence decision making. In this paper we present a framework for providing transparency of algorithmic processes. We include important considerations, not identified in research to date, for the high risk and high consequence context of defence intelligence analysis. To demonstrate the core concepts of our framework we explore an example application (a conversational agent for knowledge exploration) which demonstrates shared human-machine reasoning in a critical decision making scenario. We include new findings from interviews with a small number of analysts and recommendations for future research.

CCS CONCEPTS
• Information systems → Information systems applications; Decision support systems; • Human-centered computing → Human computer interaction (HCI).

KEYWORDS
Explainable AI; Graph Analysis; Conversational Agents

ACM Reference Format:
Sam Hepenstal, Neesha Kodagoda, Leishi Zhang, Pragya Paudyal, and B.L. William Wong. 2019. Algorithmic Transparency of Conversational Agents. In Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, USA, March 20, 2019, 11 pages.

1 INTRODUCTION
With advances in artificial intelligence (AI) technologies, reasoning-like cognitive processes are no longer restricted to the human mind. In some cases this has led to shared human-machine reasoning, where both parties are able to explore information by interpreting, inferring, and learning, before reaching a common understanding. One example of shared reasoning can be found in conversational agent applications which reason from semantic knowledge graphs. These applications are the focus of this paper; however, the principles identified within the framework presented also apply to other types of application which exhibit shared human-machine reasoning for critical decision making.

1.1 Focus Study: Conversational agents to explore semantic knowledge graphs
Conversational agents, namely applications which allow users to communicate with machines through natural language, are becoming commonplace in many business and home environments. Technologies such as Google Home, Siri and Amazon Alexa present us with an easy way to access music and films, or to plan our day. Many services, for example banking, have incorporated chatbots into existing processes to manage interactions with customers, including to direct them to the right information or department. This saves companies money and can save customers time waiting in a queue. Typical applications for conversational agents tackle concise user tasks for mundane processes which can be translated to a finite set of user intentions. Here the risks of an incorrect or misleading response are low and the resulting consequences limited, particularly given the ease with which a user can validate results against an expected and desired conclusion to their interaction. Take the example of a user wishing to listen to a playlist of a specific music genre. They can task the conversational agent with finding and playing such a playlist. It does not necessarily matter to the user exactly how the agent has reasoned what music should be played in the playlist, what the track order should be, or many other aspects. The user's intention is straightforward and the consequences of an unwanted track are limited, i.e. the user will make an assessment of the track as soon as they hear it, at which point they may decide to skip the track or ask for a different playlist. The result of the interaction therefore provides some information to the user which they can easily interpret and validate, and to which they can make an appropriate, timely response. If the user were repeatedly presented with the wrong genre of music, however, their need to understand the underlying algorithmic process and constraints would become more important.

We believe there is desire and benefit in using conversational agents for natural and shared human-machine reasoning in applications for which the interpretation of responses is high risk and high consequence, such as critical decision making environments. However, there are significant differences between the requirements for this and for typical conversational agents, which must be considered in design.

Consider the example where a user wishes to perform analysis of a certain entity and explore its associations with another entity through a conversational agent. The agent can provide responses and a simple explanation. This interaction can be an example of shared reasoning, where the user is directing the conversation based upon their own thoughts and the agent is interpreting the user's intentions and objects of interest, before making inferences to extract data to include in its response. There are dangers present where actions which are informed by shared reasoning are high risk and high consequence.
For example, through shared reasoning the user may incorrectly confirm their hypothesis and direct a subsequent action, such as arresting an innocent person or launching an unnecessary offensive operation. Some specific risks include: the way the agent interprets subtleties in the intention of the user request, the introduction of bias, the way the agent explains a complex series of connections, the way it translates the uncertainty involved in those connections and exposes missing data, and the propagation of uncertainty along the conclusion pathway. Additionally, the algorithm selected by the agent to explore data influences which pathway is described; this needs explaining to a user. Errors in mitigating any of these risks could lead to a mistaken, or deliberately manipulated, action with adverse effects. Unlike typical conversational agent applications, the user does not necessarily have an expectation or an easy way to validate results. They also need to access, understand and interpret the evidence underpinning any response. A simple chat with summarised text responses cannot fully address the risks noted above and therefore lacks transparency and a mechanism that supports situation awareness, rigour, visible reasoning, and sense making. Wong et al. [41] present the 'Fluidity and Rigour Model', which helps explain the design requirements for applications which aim to mitigate these risks and aid the reasoning of intelligence analysts.

To date, research into conversational agents has looked to improve the agent itself, by making it human-like or its responses more contextual. This paper instead considers the vulnerabilities of shared human-machine reasoning and the requirements for visibility of interactions, identifying key considerations. A framework is presented, with input from experienced military intelligence analysts, of foundational research areas for developing shared human-machine reasoning applications, such as conversational agents, for evidence based critical decision making environments.

In situations where a user aims to retrieve information or data, particularly if they do not already know how to access it or what they are looking for, a conversation can be the preferred way to reach their desired outcome. The conversational agent provides a gateway to the information they seek, extracting the user's intentions through a two-way dialogue, then translating these intentions into query language and describing the results back. For many users this is a far more intuitive approach to retrieving information [3] than a complex query, particularly if the query language is unfamiliar to them. We propose that a more intuitive interaction with data could also benefit the areas of sense making and intelligence analysis.

Intelligence analysts require the ability to explore large volumes of multidimensional data in a way which feels natural given their skills and training. Current approaches to allow exploration of multidimensional data, however, are complex and inflexible, requiring chart or graph interactions which can feel unnatural and inconsistent to non-technical users. This inhibits the analyst's ability to derive underlying narratives and test their hypotheses. Additionally, common data visualisations such as chart dashboards do not clearly translate to some analysis methodologies which require the interpretation of conflicting hypotheses alongside uncertainty. As described by Wong et al. [41], "analysts need a kind of user interface that allows them to easily explore different ways to organise and sequence existing data into plausible stories or explanations that can eventually evolve into narratives that bind the data together into a formal explanation." Such exploration can be defined as 'storyboarding', where an analyst will attempt to draw together a plausible narrative, involving missing and uncertain data, in which the analyst's hypothesised connections also need representation. When conducting this type of analysis an audited, flexible conversation with an agent could be beneficial.

There are a variety of approaches to developing conversational agents; however, the neural models which power most of the commercially available smart assistants lack the sense of context and grounding which their human interlocutors possess [16]. Instead, knowledge augmented models which make use of 'semantic knowledge graphs' may be the answer to providing more contextual, and meaningful, interactions. Semantic knowledge graphs are developing as an important approach to manage and store information and observations for use in intelligence analysis. An example of such an observation is a connection between a person and an organisation, i.e. "Person A works for Organisation C". By using knowledge graphs we are able to describe any type of information, with many properties and classes which algorithms can call upon when performing queries. This provides an analyst with the ability to ask powerful queries, such as semantic search, if they have the necessary understanding of the query syntax [11]. Semantic knowledge graphs allow for some automated reasoning to be performed, and thus applications which use them can demonstrate shared human-machine reasoning.
Studies to date have focused upon the development of methods and technologies for conversational agents to deliver believable and contextual conversations. While potentially extremely helpful, this paper proposes that the use of conversational agents to interpret intelligence observations through semantic knowledge graphs can introduce risks due to a loss of situational awareness (SA). SA plays a vital role in dynamic decision making environments [43] such as intelligence analysis. For military or police commanders to make the best possible decisions in complex and uncertain environments they need to maximise their SA by making optimum use of available knowledge. By introducing a conversational agent to parse queries, traverse the graph with an appropriate algorithm or set of algorithms, and describe results, all as decided by the agent, a layer of abstraction is introduced which masks true SA. The process which interprets a user's query before returning a response can be described in this way as a 'black box', as identified as a key issue in research in the area of machine learning and neural networks [4]. While it is theoretically possible to explain the algorithm which is chosen, and each of the steps according to semantic reasoning, this process is not visible to a user through a conversational interface. To allow for evidence based sense making, conversational interfaces must, therefore, be designed to provide visibility of large and complex reasoning paths and the surrounding contexts. It must also be possible for analysts who are not expert statisticians or data scientists to understand interactions, perhaps making use of accompanying visual aids.

1.2 Research Contribution
We propose that there are some critical vulnerabilities in the field of intelligence analysis and other evidence based decision making environments. These are magnified by the use of applications which share reasoning ability between human and machine, such as conversational agents.

Research is required to reach an understanding of how machine reasoning can be introduced alongside human reasoning in a way which mitigates vulnerabilities, whilst still exploiting the significant benefits of more natural and powerful interactions between humans and data. This paper delivers a framework for providing algorithmic transparency, with associated research areas which are the foundation for exploring how applications can be designed to deliver shared human-machine reasoning. We examine the example of conversational agents used in conjunction with semantic knowledge graphs. The research is specifically tailored to an evidence based decision making scenario (intelligence analysis), informed by semi-structured interviews with analysts who have experience working in intelligence environments. The framework helps segment key considerations and vulnerabilities for agent design and identifies challenges and areas for further research.

2 RELATED WORK
The framework proposed in this paper links various research topics which are each significant in their own right. We take a broad look at previous work on one example of an application technology which provides shared human-machine reasoning, that of conversational agents for querying semantic knowledge graphs. There are important aspects of our framework which have not received attention in research to date, specifically an understanding of how to make machine reasoning more visible to a user when it is intertwined with human reasoning. This is crucial when agents are used in decision making environments and is a central question for our research.

2.1 Development of Conversational Agents
The desire for humans to be able to speak with machines through human-like language has been around for some time, with relevant research published as early as the 1950s [33]. Important advances in technology over the past few decades, in particular the development of the internet as a source of knowledge, have led to rapid increases in conversational agent capabilities [6], with accompanying research publications. The focus of research to date has been on improving an agent's conversational abilities, including its understanding of a user's meaning and the flow of the conversation.

Early chat interfaces, notably ELIZA [37] and ALICE (Artificial Linguistic Internet Computer Entity), were built with the aim of deceiving humans into believing they were interacting with another human by providing human-like responses. While work towards these early bots focused on the ability of a machine to imitate a human, for the purposes of this paper we are interested in conversational agents which can be used to aid intelligence analysts to perform reasoning, and we will therefore apply the definition of 'spoken dialogue systems' given by McTear [18] to describe the type of conversational agents relevant to our research. These are defined as computer systems that use spoken language to interact with users to accomplish a task. The potential uses for task based conversational agents are extremely broad.
Examples include 'Anna', who was introduced by IKEA in the mid 2000s and developed with personification, and CHARLIE. In Anna's case, the task was to direct IKEA customers towards products they may be interested in buying. CHARLIE is a chat bot to assist students, for example by allowing them to find information about tests [19]. Students can ask CHARLIE for a complete test, for a personalised test (choosing the number of questions), and ask 'free questions' which are not part of any particular test. These two examples of task based conversational agents have commonality in that they are low risk; other than annoying the user, an incorrect response does not lead to catastrophe. Additionally, errors are quickly identifiable with little uncertainty. An IKEA customer has a clearly defined goal for their task when they communicate with Anna, and they will know when their task has been completed. Likewise, a student will recognise if CHARLIE is asking questions which are not on their syllabus. The consequences of incorrect or misleading responses from either Anna or CHARLIE will therefore be limited.

One area where conversational agents have been applied to higher risk and more uncertain environments is health care. It is dangerous in decision making environments if a conversational agent is able to bias a decision, or influence the decision maker. Robertson et al. [26] evaluate a chat application, built in their case as an aid for diagnosing prostate cancer, and found that using the app helped to "take fear out of decision making". Without complete visibility of how an application has guided a user to a decision, including the background processes beneath the thinking, conversational agents demonstrate serious risk of manipulating a decision maker.
Another example where this is potentially a problem is in chat interfaces which provide news stories, such as the NBC Politics Bot which was launched prior to the 2016 US Presidential election. How can we be sure the bot is not biased, particularly if it has been trained using selected data and machine learning approaches [17], or that the bot is not choosing an adverse path or filter to access and describe information to a user? To date, to the knowledge of the authors of this paper, there has not been research to understand how and when we should shed light on the thinking of a conversational agent alongside agent responses. Laranjo et al. [14] find that the use of conversational agents in health care includes a mixture of finite-state (where there are predetermined steps), frame-based (where questions are based upon a template), and agent-based (where communication depends on the reasoning of human and agent) systems. We are interested in agent-based conversational agents as these demonstrate shared human-machine reasoning. Agent-based applications include Siri, Allo, Alexa and Cortana, referenced by Mathur and Singh [16]. Significant concerns have been identified with these types of agent, for example by Miner et al. [20], that "when asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely." To be used in high risk and high consequence decision making environments where responses cannot be easily verified, conversational agents must provide visibility of their thinking and justifications through the underlying data or evidence. Laranjo et al. [14] recognise the risk that comes with applying conversational agents in high risk scenarios, including "privacy breaches, technical problems, problematic responses, or patient harm."

The issues of capturing context, managing inconsistency in responses, providing trust and confidence, and removing bias informed by training data can be mitigated if we provide the agent with a foundation of knowledge from which it can extract meaning and content deterministically. This is the case if we allow the conversational agent to interact with a semantic knowledge graph, and there are good reasons for doing so.
Using a knowledge graph to explore entities extracted from a user's input text allows for a more contextual understanding of what the user requires, as well as the opportunity to provide added value to their request. The architecture of conversational agents described by Willemsen [40] includes aspects such as a domain model (ontology), a text understanding layer (natural language processing), a knowledge graph layer which is built from the previous two layers, and a user context layer. The user context layer is focused on conversational abilities such as staying in context, keeping track of the conversation flow, and relating the conversation to entry points in the knowledge graph. Willemsen [40] demonstrates a simple search for a specific type of directed relationship for a single entity. A user is unlikely to require significant additional explanation in this scenario; however, if we consider a decision making environment, such as intelligence analysis, it becomes more complicated. Even with a concise and clearly articulated search, such as "who does person x work for?", there are additional factors which an analyst would want to understand beyond a simple text response. In intelligence analysis the provenance of the information is important, as is the reliability or confidence given to it. There are cases where missing links are inferred in knowledge graphs [5], machine learning has produced edges (observations or connections) within the knowledge graph [21], or SPIN rules have inferred links [1, 32], so the explanation of these to an analyst also needs consideration. Additionally, more complex queries will require some graph traversal, for which the choice of traversal algorithm is crucial in determining what information is described to an analyst in the agent's text response. A choice of Dijkstra's single shortest path, for example, would identify different information to an alternative heuristic method for finding multiple paths between two nodes. Sorokin and Gurevych [27] describe a method for extracting not only entities and relations from a user's query, but also the structure of their query and their underlying intention. The structure has implications for the results which will be returned, particularly when directional relationships exist. While it is important that the machine can understand the user's query and intention, it is also critical that the user can verify this.
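To make the effect of traversal choice concrete, the following minimal sketch (our illustration rather than any system described in this paper, using the open source networkx library and hypothetical entity names) shows how a single shortest path and a multi-path traversal of the same small graph give an agent different material to describe:

```python
# Illustration of how traversal choice shapes an agent's answer; the
# graph fragment and entity names are hypothetical.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Organisation X", "Person B"), ("Person B", "Person A"),
    ("Organisation X", "Person D"), ("Person D", "Person C"),
    ("Person C", "Person A"),
])

# A single (unweighted, Dijkstra-style) shortest path yields one narrative.
print(nx.shortest_path(g, "Organisation X", "Person A"))
# ['Organisation X', 'Person B', 'Person A']

# Enumerating several simple paths exposes alternatives that the
# shortest-path response silently discards.
for path in nx.all_simple_paths(g, "Organisation X", "Person A", cutoff=4):
    print(path)
```

An agent constrained to the first traversal would never mention Person C or Person D, which is exactly the kind of constraint the user needs to be able to see.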
choice of Dijkstra’s single shortest path, for example, would iden- Semantic knowledge graphs can be complex to query, particu- tify different information to an alternative heuristic method for larly with more advanced graph traversal and query methods. To finding multiple paths between two nodes. Sorokin and Gurevych query an RDF graph, for example, we can use the SPARQL query [27] describe a method for not only extracting entities and relations language [23]. Additionally we can use SPARQL Inferencing Nota- from a users query, but also the structure of their query and their tion (SPIN), for example to work out the value of a property based underlying intention. The structure has implications for the results on other properties in a graph. Figure 1 shows an example SPARQL which will be returned, particularly when directional relationships query [23]. This syntax, even for a relatively simple query, can ap- exist. While it is important that the machine can understand the pear complex to novice users. A conversational interface provides a user’s query and intention, it is also critical that the user can verify route to explore large knowledge graphs through natural language this. without the need to write any query syntax. Conversational interfaces require the ability to identify a users The power of semantic knowledge graphs has led them to be- intention and intention definition is therefore an important consid- come crucial to supporting many AI applications, including ques- eration, as these will trigger the relevant action in response to a tion and answer systems [29, 35]. Willemsen presents the use of query. Intentions may be domain specific, for example an intelli- knowledge graphs as the foundation to a conversational agent [39] gence analyst may wish to perform particular tasks which do not Algorithmic Transparency of Conversational Agents IUI Workshops’19, March 20, 2019, Los Angeles, USA translate to other environments. To identify possible tasks we may “Psychological research into how people go about generating hy- look to use existing work, such as the task taxonomy for graph visu- potheses shows that people are actually rather poor at thinking of alisation presented by Lee et al. [15], to understand generic queries all the possibilities. If a person does not even generate the correct a user may wish to make. There has been work to provide advice hypothesis for consideration, obviously he or she will not get the and solutions to the visualisation of large scale knowledge graphs correct answer.”[10] [28], and to provide situational awareness of graphs for intelligence As Wong and Varga [42] explain, performing situational logic analysis [9], which could be a starting point for visualisating a analysis to identify and test hypotheses is not straightforward. An conversational agents thought process. However, to date the use analyst starts with a fairly ill-defined query, likely based upon of conversational agents for intelligence analysis and the various their own experience, then follows an iterative process querying, vulnerabilities which are introduced, in addition to potential miti- assessing, learning, drawing conclusions, making judgments and gation’s through user interface design and visualisation, have not generating explanations to direct further searches. They will likely received attention. 
Conversational interfaces require the ability to identify a user's intention, and intention definition is therefore an important consideration, as these will trigger the relevant action in response to a query. Intentions may be domain specific; for example, an intelligence analyst may wish to perform particular tasks which do not translate to other environments. To identify possible tasks we may look to use existing work, such as the task taxonomy for graph visualisation presented by Lee et al. [15], to understand generic queries a user may wish to make. There has been work to provide advice and solutions for the visualisation of large scale knowledge graphs [28], and to provide situational awareness of graphs for intelligence analysis [9], which could be a starting point for visualising a conversational agent's thought process. However, to date the use of conversational agents for intelligence analysis, and the various vulnerabilities which are introduced, in addition to potential mitigations through user interface design and visualisation, have not received attention. We believe that decision making environments have additional requirements for visibility beyond traditional applications for conversational agents, which have not been considered in existing research. We can understand these requirements better with a look at research in the area of intelligence analysis and sense making.

2.3 Intelligence Analysis Methods and Requirements
Intelligence analysts are crucial to military decision making because they provide situational awareness (SA) to commanders. A key method applied by military analysts is situational logic, which underpins much of an analyst's standard process to achieve an understanding of a situation. Heuer [10] provides a description of the situational logic approach: "starting with the known facts of the current situation and an understanding of the unique forces at work at that particular time and place, the analyst seeks to identify the logical antecedents or consequences of the situation. A scenario is developed that hangs together as a plausible narrative. The analyst may work backwards to explain the origins or causes of the current situation or forward to estimate the future outcome." A traditional approach to performing situational logic analysis is 'Analysis of Competing Hypotheses' (ACH), developed by Heuer almost 50 years ago. ACH is a matrix approach which provides rigour when comparing evidence against different hypotheses, and whilst it may not always be applied in its entirety by military analysts, aspects of ACH are commonly used.

Looking to ACH allows us to understand critical aspects which feature in an analyst's thinking: the evidence which underpins hypotheses and the related strengths and weaknesses, the propagation of weaknesses in a fused evidence picture, the ability to compare the strength of multiple alternative hypotheses, the relative impact of removing pieces of evidence upon hypotheses, and the relative influence of different hypotheses and evidence upon possible narratives.

While the principles of ACH are sound, in practice it is flawed. ACH is typically a matrix table approach, and the table display itself is limited in the amount of information which can be clearly articulated, so the text is summarised and lacking in surrounding context. Additionally, it introduces an arbitrary structure to hypotheses and evidence. This can produce adverse cognitive effects, where the way, and order, in which hypotheses are listed can affect how much they are considered. Additionally, if analysts rely on their experience alone when assessing possible hypotheses they are prone to bias.

"Psychological research into how people go about generating hypotheses shows that people are actually rather poor at thinking of all the possibilities. If a person does not even generate the correct hypothesis for consideration, obviously he or she will not get the correct answer." [10]

As Wong and Varga [42] explain, performing situational logic analysis to identify and test hypotheses is not straightforward. An analyst starts with a fairly ill-defined query, likely based upon their own experience, then follows an iterative process of querying, assessing, learning, drawing conclusions, making judgments and generating explanations to direct further searches. They will likely amend existing hypotheses or come up with new ideas throughout this process. Wong et al. [41] present the 'Fluidity and Rigour' model; this model demonstrates the wide variety of shifting reasoning strategies applied by analysts. These range from 'leap of faith' observations and storytelling, with unknown and uncertain data, to rigorous and systematic evaluations of hypotheses, such as applied in ACH. Conversational agents allow for fluidity, where they can support wide variability in thinking. They can also support rigour, where results are valid and underpinned by evidence, if the underlying thinking and machine reasoning of the agent can be demonstrated to an analyst. Conversational agents can therefore be used to aid reasoning; however, whilst in traditional visual analytics the focus is on making these processes visible, using a conversational agent alone to perform these tasks can mask the underlying methods and data. This information needs to be visible to satisfy the requirement for rigour.
In time sensitive scenarios, for example if an analyst is tasked to understand a situation prior to an imminent military action with little lead time, situational awareness is particularly important. Thomas and Cook [30] identify situational awareness as the perception of the elements in the environment within a volume of space and time; comprehension of their meaning; the projection of their status into the near future; and the prediction of how various actions will affect the fulfillment of one's goals. A thorough situational logic analysis can achieve perception of elements (known facts), and can hang them together as a plausible narrative which involves comprehension of their meaning and projection of possible future developments. However, we believe that a traditional methodology such as ACH is flawed and would typically take too long to complete satisfactorily. Instead, we propose that conversational agents and semantic knowledge graphs lend themselves well to situational logic analysis, where 'known' facts can be captured as observations, along with the confidence, timestamp and provenance of those observations. The graph can then be appended with hypothesised associations as additional observations. In this way storyboards for a scenario can be captured within the graph. We can utilise inferencing capabilities and graph algorithms to piece together information (which may be outside our own personal awareness and experience) and refute hypotheses. This is an example of shared human-machine reasoning, where a human is able to deliver more intuitive reasoning, with a focus on abduction and induction, including 'leap of faith' ideas, while the machine can augment human reasoning with deduction and induction by formal argument, scientific rigour and evidence.
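A minimal sketch of what such an observation record might look like (our illustration; the field names and values are assumptions, not a schema from this paper) is as follows:

```python
# Each 'known fact' becomes an observation carrying the metadata analysts
# need for rigour; hypothesised associations sit alongside reported ones.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    subject: str
    relation: str
    obj: str
    confidence: float           # e.g. 0.0-1.0, assessed reliability
    timestamp: datetime         # when the observation was recorded
    provenance: str             # source reference, for the audit trail
    hypothesised: bool = False  # analyst conjecture rather than a report

fact = Observation("Person A", "works_for", "Organisation C", 0.9,
                   datetime(2019, 1, 14), "Report 12/A")
conjecture = Observation("Person X", "travelling_to", "Location C", 0.3,
                         datetime(2019, 1, 20), "Analyst hypothesis",
                         hypothesised=True)
```

Because the conjecture carries the same metadata as the reported fact, a storyboard built from both remains auditable and the machine can treat the two differently when reasoning.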
While traditional interfaces to provide this are complex and lack fluidity, a natural language approach to interactions could be the solution. An analyst can easily interact in natural language with a conversational agent, in a timely fashion, to explore the graph before concluding with a plausible narrative, achieving SA much more quickly. Crucial to providing true SA, however, is that an analyst can easily and visibly understand how and why a conversational agent has provided the responses it has. In a sense, this requires visualisation of the agent's conclusion pathway. To date, as far as we are aware, there has not been research conducted to understand the vulnerabilities of introducing conversational agents in the field of intelligence analysis, nor design steps which could help mitigate risks.

3 FRAMEWORK
We have produced a research framework to identify the design requirements for applications which involve shared human-machine reasoning for use in decision making fields, such as intelligence analysis. The framework is underpinned by an exploration of existing literature and unstructured interviews with a small selection of experienced military intelligence analysts. We provide an example aid to help demonstrate the 'visibility' aspect of the framework.

Approaches to date which develop machine reasoning through conversational agents, including [27, 29, 40], focus upon data extraction coupled with language processing and contextualisation, to provide better understanding of a user's query and more informed responses from an agent. These are important areas of research for providing the underpinning technologies which enable shared reasoning with conversational agents. However, design aspects for agents used for critical decision making have not received significant attention in research. Figure 2 presents a framework for designing applications which involve shared human-machine reasoning, such as conversational agents for knowledge exploration.

Figure 2: Algorithmic Transparency Framework. What the user needs from black box algorithms: (i) explanations of how results from algorithms are arrived at; (ii) explanations that are interpretable by the user in a manner that makes sense to them (e.g. the internals of the algorithm, including important features, an indication of accuracy or confidence, and an understanding of the data used and uncertainties, all presented in a manner which enables the user to assess if the results are sensible); (iii) visibility of the functional relationships mapped against the goals and constraints of the system; and (iv) context in which to interpret the explanations. NB: by showing goals and constraints, we include some key elements of context, e.g. goals include some notion of the priorities and therefore some understanding of the problem, hence the context.

The framework diagram presents the relationship between machine reasoning, shown as a 'black box', and human reasoning. There has been much work and discussion describing machine learning methods as a 'black box' [4], and the associated vulnerabilities of this [22]. In a similar way, a conversational agent's exploration of a complex and large knowledge graph can be a 'black box' if there is too much information to explain clearly. An interface needs to provide 'explainability' and 'visibility' (the ability to inspect and verify) in order to share cognition between machine and human, within the context of a given environment, task and user. This framework can be used to inform the design requirements for such interfaces and to identify critical areas for future research.

The human user requires explainability of the cognition taking place within a 'black box' (XAI). XAI has received a large amount of attention in recent years, with a focus upon understanding machine learning classifications. The meaning of explainability is key to how it is designed into interfaces. Current XAI research, as reviewed by Gilpin et al. [8], gives the definition that XAI is a combination of interpretability and completeness, where interpretability is linked to explaining the internals of a system and completeness is to describe it as accurately as possible. To date this angle of explainability has looked to express the process within the mathematical model, for example how to represent important features which are influencing a deep neural network. There are numerous tools which have been used to explain a classification, for example Lime [25]. For a discrete classification we can use Lime to visibly represent the feature results which the machine learning algorithm has picked as particularly relevant to a given classification.
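For instance, a hedged sketch of this kind of Lime usage (built around a toy two-intent text classifier of our own, not a component of any system discussed here) might look like:

```python
# Surface the words that drove a discrete classification, in the spirit
# of the Lime usage described above. The training data is deliberately tiny.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(["is person a linked to organisation x", "play some jazz music",
         "is there a connection between person a and person b",
         "put on my running playlist"],
        ["find_connections", "play_music", "find_connections", "play_music"])

explainer = LimeTextExplainer(class_names=list(clf.classes_))
explanation = explainer.explain_instance(
    "is person a linked to weapon x", clf.predict_proba, num_features=4)
print(explanation.as_list())  # [(word, weight), ...] for the classification
```

This is explainability in the sense defined above: it exposes the internals of the classifier, but says nothing about what the agent subsequently does with the classification.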
For applications which allow fluidity and rigour in shared human-machine reasoning, it is not enough to merely provide explainability of the internal workings of a system through result metrics. There needs also to be visibility of what reasoning the machine is doing and why, how its reasoning fits within the fluidity and rigour model [41], and the ability to examine conclusion pathways and the effects of alternative reasoning strategies, within the context of the goals and constraints of the system. Visibility requires an appreciation of the uncertainties and gaps in available data and must allow a user to understand the influence and justifications of machine reasoning within their own reasoning and analysis. The concept of visibility has not been addressed in previous research in this area.

This paper presents a simple scenario to demonstrate how the visibility of machine reasoning can be designed within an application alongside human reasoning. The example considers a conversational agent query system for a semantic knowledge graph.

3.1 Example: Conversational agents for graph exploration
3.1.1 Analyst Interviews.
In order to understand what requirements exist for visibility of machine reasoning in conversational agent responses, and to map functional relationships against goals and constraints, we first need to understand what visibility means in the context of intelligence analysis. Much work has been done to understand the general thought processes applied by analysts; however, to date research has not considered the interaction between an analyst and a conversational agent and the impact of shared reasoning. We begin this discussion by conducting interviews with a small number of experienced intelligence analysts, to understand for what tasks conversational agents could help, any vulnerabilities which exist, and, for each task, what visibility means to an analyst.

The analysts interviewed for this study identified areas where a natural language interaction with data could be extremely beneficial. For example, when performing situational logic analysis, analysts apply a process of hypothesis creation, testing, and comparison, related to real world entities. Analysts formulate hypotheses, often linked to future strategy, impact, events, and activities, i.e. 'that Person X and Person Y are travelling to Location C for Event A, which will have impact Z'. A key requirement is to understand the connections between these entities and the surrounding context, in particular related to the key points of connection. Considerations need to take into account the provenance and certainty of observations and the impact of data changes, including unexpected observations, upon the analyst's hypothesis; for example, if by accounting for uncertain data an alternative hypothesis presents itself as most likely, or if additional data is included after an update in the situation which changes the overall picture.
A semantic knowledge graph approach can provide much needed persistence of data, with rigour in the capture of contextual information. However, a graph increases in complexity and scale as it evolves over time, and it becomes increasingly difficult for an analyst to assess their hypotheses against it. The analysts interviewed in this study identified that current analytics tools to explore graph data are often over-complicated, with significant learning required to understand how to perform functional interactions for filtering and configuration. Resulting visualisations are then overloaded with too much information. Additionally, there is insufficient explanation of the meaning and constraints of functionality, where analysts are interested in "function rather than mechanics". This leads to a barrier to analysts using tools because they find them off-putting and unnatural. A conversational approach could provide access to powerful functionality, but with less complication and learning required, and greater understanding of methods through two-way dialogue. This can help an analyst to explain difficult concepts, such as their level of risk aversion when considering appropriate evidence across a conclusion pathway.

Analysts identified a number of other benefits to using conversational agents, beyond being more natural to interact with. When exploring data they can allow for timely, coherent, and regular searches, where an ongoing conversation is maintained and the agent's memory can be accessed and utilised. This capability would be useful for analysts who want to ask questions such as, "have there been any more visits to Location X?". A key feature of conversational agents, identified by the analysts, is the ability to clearly articulate an audit trail to explain how information was found in the process of an investigation. This trail helps provide ethical accountability. Each interaction with the agent provides a time-stamped message, including associated information found and the state of the graph data at that point in time. Analysts can be influenced by bias, and by capturing their line of questioning an agent could aid an analyst to consider alternative possibilities. A conversational agent can allow for a deeper explanation of findings, including feedback and suggestions for selected alternative inquiries based upon a knowledge of the underlying and surrounding data. To provide a raw picture of all this data to an analyst would be too voluminous and complex for them to digest manually. In this way an agent can aid an analyst to identify alternative hypotheses which are not restricted to their own experiences or assumptions.

This reasoning aspect of conversational agents goes beyond a simple query tool, to incorporate elements of sense making and inferencing, and there are vulnerabilities to doing so. Analysts identified several risks when using conversational agents. These represent important problems which require mitigation to confidently apply conversational agents in the area of intelligence analysis. If an agent is able to guide an analyst by refuting and suggesting alternative hypotheses then there is potential for the analyst to be misled. An agent could guide an analyst towards inaccurate conclusions in a way which is difficult for the analyst to refute, given the complexity of the underlying graph and the fact that it is not visible to the analyst. If an analyst is interested in key connections in a path they are vulnerable to the agent's choice of path, where there is an adverse impact if non-relevant connections are identified as key. Within the conversation text itself it is difficult to describe the provenance and certainty of information as well as the key information, particularly for many connecting observations. This leads to textual responses which are either hard to interpret, overloading the analyst with too much information, or summarised so that important information can be missing.
To help mitigate some of these issues, the analysts interviewed described the requirements for visibility in conversational agents which they have, in association with their goals for a system. Analysts felt an understanding of the underlying processes and algorithms applied by conversational agents should focus upon the functional meaning, in light of the intelligence analysis task, rather than any mathematical method. Specifically, analysts were interested in 'how' and 'where' a conversational agent was exploring the graph and 'why' it deemed information to be interesting, including the specifics of the sub-graph extracted, such as the provenance, history and confidence of observations. Analysts emphasised the need for a balance between identifying the 'key' underlying data observations or entities, while not overwhelming the user, and also providing a contextual understanding of what 'key' means. Analysts were particularly interested in allowing for human reasoning of more intangible observations, alongside the deductive rigour of the machine. This would include an understanding of weaknesses and missing data, and the ability to apply intuition. Additionally, visibility of past conversations, the current state of the conversation, and the state and evolution of the graph at each stage, is important for auditing purposes and ethical accountability.

3.1.2 Example Case: Scenario.
A conversational agent's interpretation of a user's intention will inform which thought process, or algorithm, is applied to deliver a response. For example, a user's query to find relationships between two different entities may invoke a 'find connections' intention. This matching of query to intention is typically performed using machine learning techniques for classification. Accurate conversational responses therefore begin with an assessment of possible intentions and an accurate machine learning intention classifier, and later involve the accuracy of entity and relationship extraction, the building of knowledge graph query syntax, subjectivity in which algorithms meet which intentions, variability and constraints of heuristic methods, and the reliability and completeness of the knowledge graph itself. There are, therefore, uncertainties which need to be addressed within a user interface. For example, what is the impact upon an analyst's decision if an agent interprets a subtly different intention, with different goals and constraints, and employs a different algorithm? How many intentions should an agent allow? How distinct do they need to be to mitigate uncertainties? Fundamentally, we need to understand how 'visibility' of agent thinking can be provided to an analyst.
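The subjectivity in which algorithm serves which intention can be made concrete with a small dispatch sketch (our illustration using networkx; the intent names and handlers are assumptions, not part of any system described here): two near-synonymous readings of a query invoke traversals with quite different goals and constraints, and the analyst only ever sees the output of whichever one the classifier selects.

```python
# Two plausible readings of a 'connections' query, bound to different
# traversal algorithms with different goals and constraints.
import networkx as nx

INTENT_HANDLERS = {
    # One concise narrative: a single shortest path.
    "find_connection": lambda g, a, b: [nx.shortest_path(g, a, b)],
    # A broader sweep: every simple path up to four hops.
    "explore_connections": lambda g, a, b:
        list(nx.all_simple_paths(g, a, b, cutoff=4)),
}

def respond(intent: str, g: nx.Graph, a: str, b: str) -> list:
    # Which phrasing maps to which intent is a classifier decision the
    # analyst never sees; the choice silently changes the evidence shown.
    return INTENT_HANDLERS[intent](g, a, b)
```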
In intelligence analysis it is crucially important that analysts can fully interpret the information and evidence which is guiding their decisions. Without visibility of their 'conclusion pathway', i.e. the pieces of information which are informing their acceptance or rejection of hypotheses, they are vulnerable to mistakes, personal and experiential bias, and deception. The use of conversational agents presents challenges to the visibility of thought processes, by handing over some of this processing to an agent. The nature of chat bot interfaces, where a user types a message and receives a text reply, can encourage a narrow focus for investigation. A user will typically receive responses based upon their questions, with little awareness of data observations which lie on the periphery of their line of questioning (reduced SA). The bigger picture is hidden and opportunities for deception are increased.

Potential vulnerabilities are best explained with an example, as described in Table 1. All of these queries relate to a straightforward 'connections' intent which finds a path between pairs of entities. We are provided with information akin to 'explainability' in the Algorithmic Transparency Framework (Figure 2), where the internals of the system (the graph connections which are found between our entities of interest) are described in natural language.

Scenario: An information request is received by an analyst to understand the supply of equipment and ingredients to produce a weapon ('X') to 'Organisation X'. It is suspected that an individual with access to the necessary equipment (a scientist) is supplying the goods.

Table 1: Example Conversational Agent Transcript

Stage | Agent   | Text Entry
1     | Human   | Is there a connection between Organisation X and any scientists?
2     | Machine | Yes, Person B is a member of Organisation X. Person B knows Person A. Person A is a scientist.
3     | Human   | Is Person A linked to weapon X?
4     | Machine | Yes. Equipment A is used to produce weapon X, Financial Transaction 1 purchased Equipment A, Person A participated in Financial Transaction 1.

The conversational agent has identified that Person A is a scientist. It has also identified that Person A is connected to Organisation X, therefore the agent can perform deductive reasoning to find that Organisation X is linked to a scientist. Furthermore, Person A appears a good candidate to suspect of supplying Organisation X with weapon X. We can see that Person A is connected to both organisation and weapon, and we have an explanation for how. However, any uncertainty or alternative narratives within the data are not presented to us, and the response is narrowly framed by our line of questioning. There is a lot more information needed by an analyst in order to understand these connections. This is because the explanation does not take into account the requirements for 'visibility', including how the conversational agent maps its reasoning to the analyst's goal, or an understanding of the constraints present in the reasoning approach. In this case, the analyst wishes to test their hypothesis and to allow for reasoning about alternative possibilities. The system, however, is constrained to apply a single shortest path algorithm which traverses the graph and returns data to the analyst. Observations which lie on longer paths are ignored and an analyst's true situational awareness is reduced. Even worse, data can be introduced to mislead an algorithm, and thus manipulate the results presented to a user. This vulnerability ties closely with confirmation bias, where, by understanding how the algorithm will look across data points, it may be possible to introduce data to reinforce bias.
This is a simplistic example, but it helps demonstrate some of the pitfalls of chat bots and data filtering algorithms which reduce situational awareness. This is particularly the case in more realistic scenarios, or if advanced graph traversal algorithms such as probabilistic methods, heuristic approaches to explore multiple paths, or pattern matching methods are applied. As a situation becomes more complex, for example with observations which arise from different sources and demonstrate varying levels of confidence and reliability, designing for visibility of machine reasoning becomes critical to providing a clear picture which empowers an analyst to perform human reasoning. Much research has looked to develop explainability of machine learning algorithms which present the user with mathematical representations, for example, of how features in the model relate to classification results. Little has been done, however, to understand how visibility should be provided for these models within the context of their use, nor for how knowledge graph traversal algorithms are applied and explained to a user in tandem with conversational agents. These are key areas requiring further work.

3.1.3 Example Case: Visibility.
The example visual aid shown in Figure 3 revisits the scenario described earlier in Table 1 and accompanies the textual responses. Figure 3 displays the path found by the agent in addition to other nodes which are close by (within a single edge of the path). There is a key for the colour of nodes provided in the interface, which is linked to their semantic class. The path found is akin to the agent's conclusion pathway, as it traces the series of observations which connect the two entities of interest. The extra relationships which are not on the path provide additional context and better mapping between the algorithm functions and the analyst's goals for the system. By providing this addition to supplement the text in the visual form of a sub-graph network, analysts can better understand the conclusion pathway taken by the agent, and this gives them greater visibility of the context in which deductions are made by the agent. For example, the agent has deduced that Organisation X is linked to a scientist, Person A, through Person B. However, this deduction ignores other possibilities for a relationship between Person A and Person B which do not involve Organisation X, for example membership of the same gym. Additionally, the agent's reasoning does not consider other entities which have similar access to equipment as scientists, for example university students. An analyst, when faced with this graph, could perform more intuitive abductive reasoning to question the role of the university. The system's constraints are more obvious, where an analyst can identify paths which have not been explored and nodes which could be relevant but have been missed. Without visibility of the machine thought processes, the human cannot interpret, critique, nor build upon machine reasoning.

Figure 3: Example Visual Aid for Conversational Agent

The additional visual aid is helpful for an analyst to sense check the agent's thinking, providing some ability to verify findings by comparing alternative hypotheses which may arise from the inclusion of close nodes. Take the first query, for example, where the agent finds a connection between Organisation X and a scientist (Person A). The agent can explain that the most critical node in the path is Person B, and an analyst therefore must be confident in the association between Person A and Person B to be confident in the path as a whole. By considering the surrounding context we see that both people are members of the same weights gym. There is a plausible association which is not related to Organisation X. Likewise, we can take a look at the most critical node linking Person A to Weapon X, which is Equipment A. Person A has purchased Equipment A, which is used to produce Weapon X; however, it is also used to produce Wood Glue, and Person A has participated in a woodwork training course. Again, they have a plausible reason for making this purchase which is not related to Weapon X, and the conversational agent has ignored this.
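A minimal sketch of the Figure 3 construction as we read it (the conclusion pathway plus one-hop context; the graph content is our hypothetical reconstruction of the scenario, not the paper's actual data) follows:

```python
# Build the 'conclusion pathway plus close nodes' sub-graph of Figure 3.
import networkx as nx

def path_with_context(g: nx.Graph, path: list) -> nx.Graph:
    # Keep every node on the path and every node within a single edge of it.
    keep = set(path)
    for node in path:
        keep.update(g.neighbors(node))
    return g.subgraph(keep)

g = nx.Graph()
g.add_edges_from([
    ("Organisation X", "Person B"), ("Person B", "Person A"),
    ("Person B", "Weights Gym"), ("Person A", "Weights Gym"),
    ("Person A", "Equipment A"), ("Equipment A", "University X"),
])
path = nx.shortest_path(g, "Organisation X", "Person A")
context = path_with_context(g, path)
print(sorted(context.nodes))  # the gym and the equipment appear, although
                              # the text-only reply never mentioned them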
The graph in Figure 3 provides the additional context that Equipment A is also owned by University X. If an analyst explores this connection they will see the display shown in Figure 4, which shows just the sub-graph for the agent's response to "is there a path between University X and Organisation X?".

Figure 4: Example Visual Aid with Further Exploration

We find that there is indeed, and again Person B is the most critical node; Person D and Person C (a student at the university) are also important. The addition of the visual aid to the conversational responses helps to overcome some of the issues identified by analysts, specifically by providing visibility of the conclusion pathway, additional context and algorithm constraints, and by highlighting key points of connection within the graph. The conversation text itself can also pull out key vulnerabilities, for example where a node is particularly important to a path based upon its betweenness centrality score [7]. Betweenness centrality finds key bridging nodes between sub-graphs. We have therefore deemed that these nodes will have the largest impact upon the conclusion path if they are removed, and that the analyst needs to be confident in their accuracy and associated connections.
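A hedged sketch of that betweenness check (the standard networkx implementation of Freeman's measure [7]; the graph is our illustrative reconstruction rather than the paper's actual data):

```python
# Flag the bridging node whose removal would most disrupt the conclusion path.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("University X", "Person C"), ("Person C", "Person D"),
    ("Person D", "Person B"), ("Person B", "Organisation X"),
    ("University X", "Equipment A"), ("Equipment A", "Person A"),
    ("Person A", "Person B"),
])

centrality = nx.betweenness_centrality(g)
critical = max(centrality, key=centrality.get)
print(critical, round(centrality[critical], 2))
# The agent's reply could carry a caveat: confidence in the path as a whole
# rests disproportionately on this node and its associated connections.
```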
4 FUTURE WORK

The example visual aid is a helpful start; however, it is flawed in many ways. An important issue facing analysts is information overload. In the simple example provided it is easy for an analyst to understand the graph visualisation, but in a more realistic scenario the complexity and scale of the graph would present a significant challenge. To tackle this problem, more advanced traversal algorithms are required, in addition to utilising the reasoning power of semantic knowledge graphs. Approaches such as concept lattice analysis [38] could also be explored to allow for greater complexity in the concepts and associated sub-concepts expressed by a conversational agent.

A more realistic scenario would require a smarter extraction of the surrounding graph context beyond that shown here, including uncertainties in the data and a better definition of what we mean by 'close' to the path. Rather than simply displaying additional connections along a conclusion pathway, we need a method to display the important alternative connections which, if considered, could affect our hypotheses. This requires an understanding of how analysts make inferences across graphs. Wong and Varga [42] describe the concept of 'brown worms' to supplement argumentation, which could be helpful to apply here. If an agent interprets a user's query for connections as a hypothesis claim, i.e. that the two entities are connected in a particular way, it can then extract the user's grounds for the claim and trace through graph observations, collecting evidence against those grounds. Using the brown worms concept, the conversational agent could describe important paths to the user and demonstrate how removing pieces of information which have lower reliability and confidence affects the evidence, grounds, and ultimately the claim.
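One way this might look in graph terms is sketched below, assuming each observation (edge) carries a reliability score; the attribute name, the thresholds, and the toy data are our assumptions for illustration, not details taken from the system described. Filtering out weakly reliable observations and re-testing connectivity shows whether the claimed connection survives.

```python
import networkx as nx

# Hypothetical observations, each carrying an assumed reliability score.
K = nx.Graph()
K.add_edge("Organisation X", "Person B", reliability=0.9)
K.add_edge("Person B", "Person A", reliability=0.4)  # weakly corroborated
K.add_edge("Person A", "Equipment A", reliability=0.8)

def claim_survives(graph, source, target, threshold):
    """Does the claim 'source is connected to target' still hold once
    observations below the reliability threshold are discounted?"""
    trusted = nx.Graph((u, v) for u, v, d in graph.edges(data=True)
                       if d["reliability"] >= threshold)
    return (source in trusted and target in trusted
            and nx.has_path(trusted, source, target))

for threshold in (0.3, 0.5):
    print(threshold, claim_survives(K, "Organisation X", "Person A", threshold))
# At 0.5 the weak Person B - Person A observation is removed and the
# claimed connection between Organisation X and Person A collapses.
```

An agent could narrate exactly this contrast to the user, making explicit which single observation the claim hinges upon.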
The definition of the possible intentions which can be understood by the conversational agent, and of the subsequent methods they invoke, is a key area for future development, as is an analysis of graph tasks and of methods to visualise large-scale data within and alongside conversational responses. A greater understanding is needed of what 'visibility' means in the context of intelligence analysis tasks, goals and constraints; more detailed studies should therefore explore this concept with analysts.

The framework proposed in this paper has wider implications for the design of shared human-machine reasoning applications beyond the conversational agent example discussed. Future work should therefore also examine how this framework can be applied to the design of other applications which provide shared human-machine reasoning, for example applications which include reasoning through machine learning.

5 CONCLUSION

There is a place for conversational agents in the field of intelligence analysis and, if designed carefully, they could deliver significant advantages to analysts compared with current practices and analytics tools. There are, however, risks to using them in a decision making environment where visibility of the reasoning, evidence, goals and constraints which underpin analysis is crucial, in addition to the explainability of a result. We provide a design framework which highlights important research areas to explore when developing applications for shared human-machine reasoning in fields which require evidence-based decision making. Future work should look to apply the 'Algorithmic Transparency Framework' to the design of applications in real-world scenarios and to tackle the challenges identified in this paper.

6 ACKNOWLEDGEMENTS

This research was assisted by experienced military intelligence analysts who work for the Defence Science and Technology Laboratory (Dstl).

REFERENCES
[1] [n. d.]. SPIN (SPARQL Inferencing Notation).
[2] Agnese Augello, Mario Scriminaci, Salvatore Gaglio, and Giovanni Pilato. 2011. A Modular Framework for Versatile Conversational Agent Building. 577–582.
[3] Francois Bouchet and Jean-Paul Sansonnet. 2009. Subjectivity and Cognitive Biases Modeling for a Realistic and Efficient Assisting Conversational Agent. 209–216.
[4] Davide Castelvecchi. 2016. Can we open the black box of AI? Nature 538, 7623 (2016), 20.
[5] Wenhu Chen, Wenhan Xiong, Xifeng Yan, and William Wang. 2018. Variational Knowledge Graph Reasoning.
[6] Robert Epstein. 2009. Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (1st ed.). Springer Netherlands, Dordrecht.
[7] Linton C. Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry (1977), 35–41.
[8] Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning.
[9] Geoff Gross, Rakesh Nagi, and Kedar Sambhoos. 2014. A fuzzy graph matching approach in intelligence analysis and maintenance of continuous situational awareness. Information Fusion 18, 1 (2014), 43–61.
[10] Richards J. Heuer. 1999. Psychology of Intelligence Analysis.
[11] Thomas Hoppe, Bernhard Humm, Ulrich Schade, Timm Heuss, Matthias Hemmje, Tobias Vogel, and Benjamin Gernhardt. 2016. Corporate Semantic Web – Applications, Technology, Methodology. Informatik-Spektrum 39, 1 (2016), 57–63.
[12] T. Jankun-Kelly, Tim Dwyer, Danny Holten, Christophe Hurter, Martin Nollenburg, Chris Weaver, and Kai Xu. 2014. Scalability considerations for multivariate graph visualization. Springer International Publishing.
[13] Lorenz Klopfenstein, Saverio Delpriori, Silvia Malatini, and Alessandro Bogliolo. 2017. The Rise of Bots: A Survey of Conversational Interfaces, Patterns, and Paradigms. 555–565.
[14] Liliana Laranjo, Adam G. Dunn, Huong Ly Tong, Ahmet Baki Kocaballi, Jessica Chen, Rabia Bashir, Didi Surian, Blanca Gallego, Farah Magrabi, Annie Y. S. Lau, and Enrico Coiera. 2018. Conversational agents in healthcare: a systematic review. Journal of the American Medical Informatics Association 25, 9 (2018), 1248–1258.
[15] Bongshin Lee, Catherine Plaisant, Cynthia Parr, Jean-Daniel Fekete, and Nathalie Henry. 2006. Task taxonomy for graph visualization. In BELIV '06. ACM, 1–5.
[16] Vinayak Mathur and Arpit Singh. 2018. The Rapidly Changing Landscape of Conversational Agents.
[17] Kayla Matthews. 2018. We Need to Talk About Biased AI Algorithms.
[18] Michael McTear. 2002. Spoken dialogue technology: enabling the conversational user interface. ACM Computing Surveys 34, 1 (2002), 90–169.
[19] Fernando A. Mikic, Juan C. Burguillo, Martin Llamas, Daniel A. Rodriguez, and Eduardo Rodriguez. 2009. CHARLIE: An AIML-based chatterbot which works as an interface among INES and humans. 6 pages.
[20] Adam S. Miner, Arnold Milstein, Stephen Schueller, Roshini Hegde, Christina Mangurian, and Eleni Linos. 2016. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Internal Medicine 176, 5 (2016), 619–625.
[21] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A Review of Relational Machine Learning for Knowledge Graphs. Proc. IEEE 104, 1 (2016), 11–33.
[22] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical Black-Box Attacks Against Machine Learning. ACM, New York, NY, USA, 506–519.
[23] Eric Prud'hommeaux and Andy Seaborne. 2008. SPARQL Query Language for RDF.
[24] Marco Tulio Correia Ribeiro. 2016. Lime.
[25] Marco Tulio Correia Ribeiro. 2016. Lime: Explaining the predictions of any machine learning classifier.
[26] Scott Robertson, Rob Solomon, Mark Riedl, Theresa Wicklin Gillespie, Toni Chociemski, Viraj Master, and Arun Mohan. 2015. The visual design and implementation of an embodied conversational agent in a shared decision-making context (eCoach). Springer, 427–437.
[27] Daniil Sorokin and Iryna Gurevych. 2018. Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering.
[28] Seema Sundara, Medha Atre, Vladimir Kolovski, Souripriya Das, Zhe Wu, Eugene Inseok Chong, and Jagannathan Srinivasan. 2010. Visualizing large-scale RDF data using Subsets, Summaries, and Sampling in Oracle. 1048–1059.
[29] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. 2016. The Value of Semantic Parse Labeling for Knowledge Base Question Answering.
[30] J. J. Thomas and K. A. Cook. 2006. A visual analytics agenda. IEEE Computer Graphics and Applications 26, 1 (2006), 10–13.
[31] TopQuadrant. [n. d.]. TopBraid Application.
[32] TopQuadrant. [n. d.]. TopQuadrant SPIN Inferencing.
[33] A. M. Turing. 1950. Computing Machinery and Intelligence. Mind 59, 236 (Oct. 1950), 433–460. http://www.jstor.org/stable/2251299
[34] Jane Wakefield. 2016. Would you want to talk to a machine?
[35] Ruijie Wang, Yuchen Yan, Jialu Wang, Yuting Jia, Ye Zhang, Weinan Zhang, and Xinbing Wang. 2018. AceKG: A Large-scale Knowledge Graph for Academic Data Mining.
[36] Chen Wei, Zhichen Yu, and Simon Fong. 2018. How to Build a Chatbot: Chatbot Framework and Its Capabilities. ACM, New York, NY, USA, 369–373.
[37] Joseph Weizenbaum. 1983. ELIZA - a computer program for the study of natural language communication between man and machine. Commun. ACM 26, 1 (1983), 23–28.
[38] Rudolf Wille. 2006. Formal Concept Analysis as Applied Lattice Theory. Springer, Berlin, Heidelberg, 42–67.
[39] Christophe Willemsen. 2018. 3 reasons why Knowledge Graphs are foundational to Chatbots.
[40] Christophe Willemsen and GraphAware. 2018. Knowledge Graphs and Chatbots with Neo4j.
[41] B. L. William Wong, Patrick Seidler, Neesha Kodagoda, and Chris Rooney. 2018. Supporting variability in criminal intelligence analysis: From expert intuition to critical and rigorous analysis. Societal Implications of Community-Oriented Policing and Technology (2018), 1–11.
[42] B. L. W. Wong and Margaret Varga. 2012. Black Holes, Keyholes And Brown Worms: Challenges In Sense Making. 287–291.
[43] B. L. William Wong and Ann Blandford. 2004. Describing Situation Awareness at an Emergency Medical Dispatch Centre. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 48. SAGE Publications, Los Angeles, CA, 285–289.