Conversational Control Interface to Facilitate Situational Understanding in a City Surveillance Setting

Daniel Harborne¹, Dave Braines², Alun Preece¹, Rafal Rzepka³,⁴
¹ Crime and Security Research Institute, Cardiff University, Cardiff, UK
² IBM Emerging Technology, Hursley, UK
³ Graduate School of Information Science and Technology, Hokkaido University, Japan
⁴ RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
harborned@cardiff.ac.uk

Abstract

In this paper we explore the use of a conversational interface to query a decision support system providing information relating to a city surveillance setting. Specifically, we focus on how the use of a Controlled Natural Language (CNL) can provide a method for processing natural language queries whilst also tracking the context of the conversation in relation to past utterances. Ultimately, we propose that our conversational approach leads to a versatile tool for providing decision support with a low enough learning curve that untrained users can operate it either within a central command location or when operating in the field (at the tactical edge). The key contribution of this paper is an illustration of applied concepts of CNLs as well as furthering the art of conversational context tracking whilst using such a technique.

Keywords: Natural Language Processing (NLP), Conversational Systems, Situational Understanding

1 Introduction

With the continued improvements made to machine-based analytics tools and techniques (such as the rise of Deep Learning), there has been an increase in the extent to which data can be processed autonomously by machines to provide actionable intelligence. We can now harness broader datasets, existing in many modalities and collected from many sources. Furthermore, the capability for a system to perform this collection and analysis in real time is increasingly common.

For tactical decision makers, such as emergency service incident commanders, this means that at the point of formulating a decision, the quantity of information feeds and the variety of the information within those feeds has vastly grown. Having access to the right information at the right time is a key aspect of making the right decision, and being overloaded by information can inhibit forming a decision entirely. Due to this change in the information landscape, novel approaches to capitalizing on the vast information available need to be explored. To fulfill this need, we have seen increased innovation and adoption of novel interaction methods, such as conversational interfaces [Mctear et al., 2016], to access and manipulate information.

In this paper we explore an approach to a conversational interface that takes advantage of a Controlled Natural Language (CNL), specifically ITA Controlled English [Mott, 2010]. We first outline the key characteristics of this technology and then discuss the benefits it provides. Finally, we include an approach for tracking the context of user queries, furthering the capabilities of the framework. To demonstrate these factors, we use a hypothetical scenario of city-wide surveillance, where data feeds such as traffic cameras, tweets concerning the local area and reports from agents on the ground could be used to build an awareness of the state of the city. This could grant insights with regard to congestion, crimes in progress or emergencies that require a response. In this work, we focus on traffic camera data feeds and on the information a surveillance system could plausibly generate when processing such data.

2 Situational Understanding and Decision Support Systems

Situational awareness (SA) [Endsley, 1995] is the ability to build an accurate model of the state of a system, with situational understanding (SU) [Smart et al., 2009] being the ability to reason about its current and future states. Decision support systems attempt to augment a human user's ability to perform one or both of these tasks. These systems can offer simple aggregation of data and information sources into a more comprehensible channel and/or can bring together services that can process such data and make inferences from the available information, providing insights, predictions and recommendations to the decision maker.
2.1 Conversational Interfaces

It is not uncommon for a decision maker's primary skill set to be outside the realm of computer science or data analysis. Instead, they take advantage of domain knowledge and related intuition to make decisions within a given scenario, making use of information provided to them on request or preemptively by human or machine analysts. By offering a conversational interface to a decision support system, decision makers can request information, perform reasoning and action their decisions using natural language rather than through a traditional software interface. Firstly, this can minimize the learning curve for using the system and can speed up the decision-making process. In addition, when combined with speech-to-text and text-to-speech technology, it can remove the need for conventional input devices. This move away from mice, keyboards and even screens to voice input/output mechanisms not only can increase efficiency by allowing a wide range of actions to be available without using menus, but also can often free the user from the requirement for a desk-based system or mobile computational device (such as a laptop or smart device). Instead, the decision maker can form requests and receive information with minimal change to their operational behavior, including while operating at the tactical edge.

In earlier work, the concept of conversational interactions to support casual system users without specific ontology or knowledge engineering capabilities has been explored. In [Pizzocaro et al., 2013], a conversational interaction to support the tasking of sensing assets within a coalition context, in a constrained environment, was constructed. The work brought together earlier task-oriented matching of assets and capabilities to requirements, placing the power of the system, and all the complexities within it, behind a simple conversational interface. In [Preece et al., 2014] the work was extended further into the intelligence analysis domain, and articulated using a simple intelligence gathering and tracking scenario with various human and sensor assets providing information relevant to the task. We also formally defined the underlying conversational model using the CE language, enabling formal definition of different speech acts and the pre-defined ways in which conversational interactions can logically flow. In this work the human users were able to provide their "local knowledge" as new facts via the conversational interaction, as well as ask questions using the same interface. As a result of reasoning and other task-oriented activities, the machine agents within the system were able to raise alerts and start conversations with the human users via the same conversational interface as well. Finally, in [Braines et al., 2016] we extended the conversational interaction to enable development and extension of the underlying models (also known as ontologies) that underpin the system.
Through these capabilities we have been able to show support for question-answering interactions as well as the addition of local knowledge and the extension of the underlying models, all through natural language conversational interfaces using the Controlled English language.

3 City Surveillance

In this work, we use a scenario based on city surveillance to explore how a dialogue system can facilitate conversational control of many services and how a decision maker can use natural language to make queries and perform reasoning across the range of available information. We imagine hypothetical tasks the agent may need to perform that relate to the monitoring of traffic volumes and assisting the location and tracking of specific vehicles to assist law enforcement.

3.1 Resources Available

In our system we focus on information provided by traffic cameras: specifically, traffic camera locations, video and imagery available via Transport for London's Jam Cams Application Programming Interface (API)¹. In our scenario, we imagine the type of services that could be available to process this data, some of which we have been using in related work [Harborne et al., 2018] and others of which are proposed as hypothetical services that realistically could exist. Using these services would generate information relating to detecting cars in video and imagery as well as refining the car detections to a specific make and color. For the purpose of this paper, we use pre-generated information rather than that generated from live services, as the integration of such services is outside the scope of our work.

¹ https://data.london.gov.uk/dataset/tfl-live-traffic-cameras

4 Controlled Natural Language and Controlled English

ITA Controlled English (CE) is an example of a controlled natural language (CNL), which aims to reduce the complexity of natural language (NL) to allow for easier human-machine interaction. The benefit of a CNL is that, by reducing the grammar to a confined subset, the information retains a machine-readable structure whilst also being naturally readable by humans. This contrasts with unstructured data, such as natural conversation, which is typically difficult for machines to process, and with highly structured data, such as XML, which is less human readable. In previous work, we have shown that Controlled English can help a user control smart devices within their home [Braines et al., 2017]. In that work we outline many of the principles of CE; in this paper we recap the fundamentals and relate them to the functionality required for this specific piece of work. We recommend reading the previous work for a thorough explanation of CE.

4.1 Concepts, instances, rules

Controlled English allows the maintaining of a knowledge base via concepts and instances, and allows for automatic inferences using rules. All three of these can be created before a support system goes live or can be created as part of the operation of the system. Concepts outline the classes of entities, instances are representations of specific known entities and rules allow the system to perform reasoning with these items. In Figure 1 and Figure 2 we show the definition of the traffic camera concept along with some parent concepts it inherits from in order to further facilitate inference and reasoning. In Figure 3, we show an instance of a traffic camera. Both the concept and instance definitions can be automatically generated from the result of querying the traffic camera API.
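To make the auto-generation claim above concrete, the following is a minimal sketch of how a Figure 3-style CE instance sentence could be rendered from one camera record returned by the traffic camera API. The record field names (`id`, `commonName`, `lat`, `lon`) are illustrative assumptions, not necessarily the API's exact schema.

```python
# Sketch: turning one camera record (as a dict parsed from the API's JSON)
# into a CE "there is a traffic camera" sentence. Field names are assumed.

def camera_to_ce(record):
    """Render a camera record as a CE instance sentence."""
    lines = [
        "there is a traffic camera named '{}' that".format(record["commonName"]),
        "  has '{}' as longitude and".format(record["lon"]),
        "  has '{}' as latitude and".format(record["lat"]),
        "  has '{}' as id and".format(record["id"]),
        "  has '{}' as common name.".format(record["commonName"]),
    ]
    return "\n".join(lines)

sample = {"id": "JamCams_00001.02151",
          "commonName": "Romford Rd / Tennyson Rd",
          "lat": "51.5421", "lon": "0.00524"}
ce_sentence = camera_to_ce(sample)
```

In a live system this generation step would run whenever the API is polled, so newly deployed cameras become queryable instances without manual modelling.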
conceptualise a ˜ traffic camera ˜ C that
  is a spatial thing and
  is a image source and
  is a video source and
  is a geolocation source and
  has the value LO as ˜ longitude ˜ and
  has the value LA as ˜ latitude ˜ and
  has the value U as ˜ url ˜ and
  has the value I as ˜ id ˜ and
  has the value C as ˜ common name ˜.

Figure 1: Transport for London traffic camera concept definition.

conceptualise a ˜ displayable thing ˜ DT that
  has the value LAT as ˜ latitude ˜ and
  has the value LON as ˜ longitude ˜ and
  ˜ can show ˜ the location LOC and
  ˜ can show ˜ the region REG and
  ˜ is located in ˜ the region REGL.

conceptualise a ˜ image source ˜ ISo that
  is a displayable thing and
  has the value UR as ˜ image url ˜.

conceptualise a ˜ video source ˜ VSo that
  is a displayable thing and
  has the value VUR as ˜ video url ˜.

conceptualise a ˜ geolocation source ˜ GSo that
  is a displayable thing.

Figure 2: Definition of parent concepts used within the knowledge base.

there is a traffic camera named 'tfl Camera 02151' that
  has '0.00524' as longitude and
  has '51.5421' as latitude and
  has '/Place/JamCams 00001.02151' as url and
  has 'JamCams 00001.02151' as id and
  has 'Romford Rd / Tennyson Rd' as common name and
  has '00001.02151.jpg' as image url and
  has '00001.02151.mp4' as video url and
  can show the location 'Tennyson Road' and
  can show the location 'Romford Road'.

Figure 3: Transport for London traffic camera instance definition.

4.2 CE Hudson and Custom Answerers

When a user submits a query to the interface, it is sent to Hudson, an API that interprets natural language into recognized CE components². This interpretation is returned as JSON output which the interface can use to provide an appropriate response to the user. In CE terminology, an application that reacts to Hudson API output is called a "custom answerer". This approach has both benefits and costs when compared with other machine learning approaches to NLP. A key characteristic is that the interpretation comes from the CE knowledge base, thus interpretations cannot be learned based on sentence structure or patterns (as with a deep learning approach). This can make the space of interpretable input sentences smaller. However, CE does allow for synonyms to be assigned to concepts, and the approach of CE concept matching is usually powerful and robust enough to recognize user requests in a closed domain (as shown in previous work [Preece et al., 2017b]). To counter this limitation, a hybrid approach that uses deep learning for interpreting from natural language to CE concepts could be used, but exploring this is outside the scope of this work.

In our use case, where we explore a control system within a closed domain, CE's power outweighs this drawback. Unlike a user interacting with a general-purpose chatbot, a tactical decision maker will often have a higher requirement for consistent and reliable answers and information. Thus, a robust interface with a slightly higher learning curve is more important than covering all possible utterances to achieve a certain goal. This increased learning curve is likely to be quickly overcome during the decision maker's initial interactions with the system, and the knowledge base can be designed in such a way that interaction is still intuitive (also shown in [Preece et al., 2017b]). In addition to this consistency of response, this approach also allows users to update the knowledge base via the conversational interface and for these updates to be immediately utilized in input processing and output generation. This is discussed further in Section 6.

² In this work we used a publicly available open source implementation of Controlled English, named ce-store, which implements a number of generic APIs for simple usage. One set of APIs, known as Hudson, enables natural language text processing in the context of a CNL model, returning an "interpretation" of the specified natural language as matches to concepts, properties, instances and more within the CNL model(s) loaded within the ce-store. ce-store is available online at http://github.com/ce-store/ce-store
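The shape of a custom answerer can be sketched as follows. This is a toy illustration only: the interpretation dict is hand-written here, and the real Hudson API's JSON structure will differ, but the pattern of dispatching on the detected question type against the knowledge base is the same.

```python
# Sketch of a custom answerer reacting to a Hudson-style interpretation.
# The interpretation shape and the in-memory knowledge base are assumptions
# made for illustration, not ce-store's actual data structures.

def answer(interpretation, kb):
    """Dispatch on the detected question type and filter the knowledge base."""
    qtype = interpretation.get("question type")
    matches = [i for i in kb
               if all(v in i["properties"].values()
                      for v in interpretation.get("values", []))]
    if qtype == "exists":
        return "Yes" if matches else "No"
    if qtype == "count":
        return str(len(matches))
    if qtype == "list":
        return ", ".join(i["name"] for i in matches)
    return "Sorry, I could not interpret that."

kb = [{"name": "car 1", "properties": {"color": "black", "model": "BMW"}},
      {"name": "car 2", "properties": {"color": "red", "model": "Ford"}}]
count_reply = answer({"question type": "count", "values": ["black"]}, kb)
```

The design point is that the answerer itself holds no linguistic knowledge; everything it matches against comes from the CE model, which is why user updates to the knowledge base take effect immediately.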
5 Rules Inferencing

Rules in CE are used to provide inherent inferencing that can take place upon the information within the knowledge base. For example, the rule shown in Figure 5 can take advantage of the properties of the region and location concepts (Figure 4) to allow the system to infer which locations are located within defined regions of the city, based on the geo-position properties of the location and the boundary of the region. In addition, the rule shown in Figure 6 allows the system to infer that if a displayable thing (such as a video source) can show a road and that road is in a region, then that camera also shows that region.

This inferencing takes place as the knowledge base is updated, so new information provided to the system can lead to further inferences being made. Like concept definitions and instances, rules can be added by users during standard usage of the interface. This is discussed further in Section 6.

conceptualise a ˜ region ˜ REG that
  has the value XONE as ˜ x1 ˜ and
  has the value XTWO as ˜ x2 ˜ and
  has the value YONE as ˜ y1 ˜ and
  has the value YTWO as ˜ y2 ˜ and
  is a geolocation source.

conceptualise a ˜ location ˜ LOCA that
  is a geolocation source and
  ˜ is located in ˜ the region REG.

conceptualise a ˜ road ˜ ROAD that
  is a location and
  has the value NAME as ˜ road name ˜.

Figure 4: Controlled English concept definitions for regions, locations and roads. These concepts, via inheritance, create specialisations of the geolocation source concept defined earlier in Figure 2.

[DisplayableInRegion]
if
  (the location L has the value X as longitude) and
  (the location L has the value Y as latitude) and
  (the region R1 has the value X1 as x1) and
  (the region R1 has the value X2 as x2) and
  (the region R1 has the value Y1 as y1) and
  (the region R1 has the value Y2 as y2) and
  (the value X >= X1) and
  (the value X <= X2) and
  (the value Y >= Y1) and
  (the value Y <= Y2)
then
  (the location L is located in the region R1).

Figure 5: Example of a CE rule which infers the city regions that locations are found in, based on the location's geolocation data and region boundaries.

[ShowRegion]
if
  (the displayable thing C can show the location R) and
  (the location R is located in the region REG)
then
  (the displayable thing C can show the region REG).

Figure 6: CE rule that allows the system to infer that if a displayable thing instance can show a location and that location is within a region, the instance can also show the region.
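The logic of these two rules can be written out directly. The sketch below mirrors Figure 5 (a location is in a region when its coordinates fall inside the region's bounding box) and Figure 6 (a camera that can show such a location can show the region); the region bounds used are illustrative values, not the paper's actual test region.

```python
# Plain-Python restatement of the [DisplayableInRegion] and [ShowRegion]
# inferences. Region bounds are illustrative.

def located_in(location, region):
    # Mirrors [DisplayableInRegion]: x1 <= longitude <= x2 and y1 <= latitude <= y2.
    return (region["x1"] <= location["lon"] <= region["x2"]
            and region["y1"] <= location["lat"] <= region["y2"])

def can_show_region(camera_locations, region):
    # Mirrors [ShowRegion]: showing any location inside the region suffices.
    return any(located_in(loc, region) for loc in camera_locations)

test_region = {"x1": 0.0, "x2": 0.01, "y1": 51.5, "y2": 51.6}  # assumed bounds
tennyson_road = {"lon": 0.00524, "lat": 51.5421}
shows = can_show_region([tennyson_road], test_region)
```

In CE the same inference is declarative: it reruns automatically whenever a new camera, road or region instance is added, rather than being invoked by application code.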
6 Tellability

As outlined in previous work [Preece et al., 2017a], a system's tellability describes how easy it is for a user to inject new or updated information into a system during operation. This is a strength of a CE solution, as not only can a user inject new instances or update those instances through the conversational interface, but they can also define entirely new concepts. This does require some level of familiarity with the system, but it requires no coding and the interface can take advantage of the new information immediately. This is in contrast with deep learning techniques, where the creation of a new class, feature or query type will often require retraining of the model backing the interface; this can require the knowledge of a trained engineer and time before the new information is accounted for within the interface.

To illustrate this, Figure 7 and Figure 8 show the interface being used to request a view of a region of the city. As outlined in Section 5, this region (named 'test region') is defined with a geospatial boundary, and rules are used to infer that any instance of a displayable thing that can show a location within that boundary (such as a traffic camera that can show a road) can show that region. In Figure 8, we see a hypothetical scenario where the user knows that a camera that is marked in the API as viewing a certain road (not in the test region area) can also indirectly view another road (one that is in the boundary of the test region). The user, via the interface, can tell the system that the camera can show the second road; the knowledge base is updated instantly and future queries will take this into account, including when answering the query correctly requires rule inferences.

Figure 7: Example of the "show me" type of request with the interface returning "displayable things", in this case a "video source" that shows the entity of interest. The map is also focused on the location of interest as it is a "geolocation source". Cameras displaying "...in use keeping London moving" have been made temporarily unavailable via the API by TfL.

Figure 8: Example of updating the knowledge base via the conversational interface. Once updated with the knowledge that a particular camera can indirectly show a road located within test region, the "show test region" request now shows the new camera feed due to inference.

7 Actions and Query Types

To provide decision support, a system must allow a decision maker to query the data and information available within the system. Sometimes, the user simply needs to see a selection of the information or data sources for manual inspection (discussed in Section 7.1). In addition, the decision maker may want to ask a question about the state of the world and receive a computed or inferred answer. In this work, we explore three forms of query response: confirmation of the existence of entities matching desired criteria, a count of entities matching desired criteria, and a listing of all entities matching desired criteria (these response types are detailed in Section 7.2). Identifying the required response type to appropriately answer a user's query is achieved by detecting instances of question phrases, examples of which are shown in Figure 9.

To process user queries that contain filter criteria (such as car color), the CE knowledge base contains definitions of property categories (Figure 10), which represent attributes that instances can be filtered by (these mirror a subset of the properties found on concepts defined in the knowledge base). Instances of these properties are then created reflecting possible values that can be filtered by (Figure 11). The purpose of these property and value definitions is to allow the Hudson API to identify them within an utterance from the user. These properties can be combined with the detection of other criteria such as concepts (e.g. "car"), instances (e.g. "Romford Road") or interest in a specific relationship ([car] "is driving on" [road]). The detection of these filter criteria allows the custom answerer to form a query to be sent to the knowledge base, the response to which can then be formatted appropriately and displayed to the user. It is worth noting that the creation of these property and value definitions can be automated based on the concept and instance definitions or from another source, and these definitions can also be injected by the user during operation.

A further benefit of defining properties and their possible values as concepts is that it makes context tracking possible. This is discussed further in Section 8.
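The detection step described above can be sketched as follows. The real Hudson API matches utterances against whatever phrases and property values are currently in the CE knowledge base; this toy version hard-codes small phrase and value lists purely to show the matching pattern.

```python
# Sketch: picking a question phrase and filter criteria out of an utterance.
# The phrase/value lists are assumed, standing in for the CE knowledge base.

QUESTION_PHRASES = {"are there any": "exists", "are there": "exists",
                    "how many": "count", "list": "list"}
PROPERTY_VALUES = {"black": "color", "white": "color", "bmw": "model",
                   "ford": "model", "north": "direction"}

def interpret(utterance):
    text = utterance.lower()
    tokens = text.replace("?", " ").split()
    # Longer phrases are listed first so "are there any" wins over "are there".
    qtype = next((qt for phrase, qt in QUESTION_PHRASES.items()
                  if phrase in text), None)
    criteria = {cat: word for word, cat in PROPERTY_VALUES.items()
                if word in tokens}
    return {"question type": qtype, "criteria": criteria}

parsed = interpret("How many black BMW cars are heading North?")
```

Because the value lists are themselves CE instances (Figure 11), telling the system about a new color or model extends what this step can recognize without any code change.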
there is an action named 'show'.

there is a question type named 'exists'.
there is a question type named 'count'.
there is a question type named 'list'.

there is a question phrase named 'are there any' that
  refers to the question type 'exists'.

there is a question phrase named 'are there' that
  refers to the question type 'exists'.

there is a question phrase named 'how many' that
  refers to the question type 'count'.

Figure 9: Controlled English instance definitions for phrases indicating the aim of a user's query (defined as the question type).

conceptualise a ˜ property category ˜ PROPC.

conceptualise a ˜ color ˜ COL that
  is a property category.

conceptualise a ˜ model ˜ MODEL that
  is a property category.

conceptualise a ˜ direction ˜ DIR that
  is a property category.

Figure 10: CE concept definition for the concept "property category" and child concepts that facilitate instance filtering.

there is a color named 'black'.
there is a color named 'white'.
there is a color named 'blue'.
there is a color named 'red'.
there is a color named 'green'.

there is a model named 'Toyota'.
there is a model named 'BMW'.
there is a model named 'Ford'.
there is a model named 'Range Rover'.
there is a model named 'Renault'.
there is a model named 'Mazda'.

there is a direction named 'North'.
there is a direction named 'South'.
there is a direction named 'East'.
there is a direction named 'West'.

Figure 11: CE instance definitions for the values the property categories can take.

7.1 Actions: "Show me..."

One important benefit of decision support systems is that they can offer one interface for accessing a range of data sources and pieces of information. By offering an efficient and easy-to-use method for filtering and viewing desired content, a system can ensure a decision maker is not overloaded by being presented with all available data sources simultaneously, instead requesting to see specific resources when they wish to make use of them. As seen in Figure 2, in our system a concept exists ("displayable thing") that indicates that instances of that concept can, in some way, be presented to the user. This parent concept is inherited by further concepts such as "video source" which indicate to the interface how the data can be displayed. If the user has screens available, a "video source" can be shown; if the interface includes a map, a "geolocation source" can be zoomed in on and become its focus. These concepts also allow the user to request to view a list of sources of particular modalities or that feature particular aspects of interest: for example, requesting video sources that can show a specific location.

7.2 Query Types: "...exists?", "Count...", "List..."

Another important feature that can be offered by a decision support system is for a user to be able to ask questions of the information available. This information may have been present in the initialization of the knowledge base (such as the location of traffic cameras) or may have been generated by services processing data sources over time (such as cars within traffic camera video). To do this, filter criteria are identified as outlined at the beginning of this section, and a query is formed that will filter to instances that meet the criteria. The result of this query is then returned in one of three possible formats based on the nature of the question asked by the user. This response format is based on the detection of question phrases by the Hudson API and the custom answerer's reaction to those detected phrases. Examples of these query types are shown in Figures 12, 13 and 14.

Figure 12: Example of using the interface to check whether any instances matching specified criteria exist.

Figure 13: An example of using the interface to retrieve a count of instances meeting desired criteria.

Figure 14: An example of using the interface to retrieve a list of all instances meeting certain criteria.
8 Tracking Context

Tracking context within conversational interfaces can be considered a challenging but important task. The ability for a decision maker to ask a query of the system and then subsequently refine that query creates a much more efficient workflow, in contrast to forcing the user to re-ask the same question while adding the desired additional parameters. In our approach to adding this functionality to a CE-based interface, the first step is to define query expansion phrases. These are phrases used in user utterances that indicate the intent to use the previous query as a base for building the next query (definitions of these phrases are shown in Figure 15). The second step is to maintain a store within the custom answerer of the property values, the concepts, the instances and the relationships involved in the most recent query. With this information, a query can be formed using the following approach:

• If a query expansion phrase is found, use all parameters from the last query. If a value in the current utterance is from a property category that had a value(s) stipulated in the previous query, then use the "and" operator for the values.

• If no query expansion phrase is found and the current utterance contains only property values (no concepts, instances or relationships), use the previous query's concepts, instances and relationships.

• Finally, if neither of the above is true, clear the query store and form a new query.

The query store can also be used with actions. In Figure 17, we see the conversation end with "show me" without any stipulation of what to show. Here the custom answerer is able to infer that it should show the displayable things from the results of the last query.

there is a query expansion phrase named 'of those'.
there is a query expansion phrase named 'of these'.

Figure 15: CE definitions for the concept and instances of query expansion phrases.

Figure 16: An example of using the interface to make an initial query, then filtering the result with additional parameters rather than making a new query.

Figure 17: An example of using the interface to make an initial query, then filtering the result with additional parameters, and finally using "show me" to display the results from the previous query without having to re-specify any of its parameters.
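The three query-store rules in Section 8 can be sketched directly. The dict shape used for the store and for the parsed utterance is an assumption made for illustration; only the decision logic follows the rules as stated.

```python
# Sketch of the query-store update rules for context tracking. `store` holds
# the previous query's parameters; the utterance dict shape is assumed.

EXPANSION_PHRASES = ("of those", "of these")

def update_store(store, utterance):
    """utterance: dict with 'text', 'values', 'concepts', 'instances'."""
    expanding = any(p in utterance["text"].lower() for p in EXPANSION_PHRASES)
    if expanding:
        # Rule 1: keep everything from the last query and AND-in the new values.
        store["values"] = store["values"] + utterance["values"]
    elif utterance["values"] and not (utterance["concepts"]
                                      or utterance["instances"]):
        # Rule 2: only property values given, so inherit concepts and instances.
        store["values"] = utterance["values"]
    else:
        # Rule 3: otherwise clear the store and start a fresh query.
        store = {"values": utterance["values"],
                 "concepts": utterance["concepts"],
                 "instances": utterance["instances"]}
    return store

store = {"values": ["black"], "concepts": ["car"], "instances": []}
store = update_store(store, {"text": "how many of those are BMWs",
                             "values": ["BMW"], "concepts": [], "instances": []})
```

A bare "show me" action would then read the displayable things from whatever the store currently holds, as in the Figure 17 conversation.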
9 Conclusion

In this work we have outlined the characteristics and methodology of a conversational interface backed by the controlled natural language ITA Controlled English. We have shown the benefits such an interface can provide to a decision maker and discussed the implications of using this approach over other techniques. We have also proposed a method for context tracking using a controlled language to allow for intuitive and efficient querying of the system's knowledge base. Finally, we have identified the possibility of future work that explores integrating deep learning techniques performing natural language processing from user utterances to Controlled English. This may lead to an increase in the versatility and robustness of interpretation, whilst maintaining the consistency of response, tellability and inferencing provided by CE that is important for tactical decision makers.

Acknowledgments

This research was sponsored by the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the UK Ministry of Defence or the UK Government. The U.S. and UK Governments are authorised to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. We would like to thank the Cardiff University Global Opportunities Centre for partially funding the collaboration between the Crime and Security Research Institute (Cardiff University) and the Graduate School of Information Science and Technology (Hokkaido University).

References

[Braines et al., 2016] Dave Braines, Amardeep Bhattal, Alun D. Preece, and Geeth De Mel. Agile development of ontologies through conversation. Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VII, Dec 2016.

[Braines et al., 2017] Dave Braines, Nick O'Leary, Anna Thomas, Dan Harborne, Alun Preece, and Will Webberley. Conversational homes: a uniform natural language approach for collaboration among humans and devices. International Journal on Advances in Intelligent Systems, 10:223–237, 2017.

[Endsley, 1995] Mica R. Endsley. Toward a theory of situation awareness in dynamic systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1):32–64, 1995.

[Harborne et al., 2018] Dan Harborne, Ramya Raghavendra, Chris Willis, Supriyo Chakraborty, Pranita Dewan, Mudhakar Srivatsa, Richard Tomsett, and Alun Preece. Reasoning and learning services for coalition situational understanding. Volume 10635, 2018.

[Mctear et al., 2016] Michael Mctear, Zoraida Callejas, and David Griol. Conversational interfaces: Past and present. The Conversational Interface, pages 51–72, 2016.

[Mott, 2010] D. Mott. Summary of ITA Controlled English. NIS-ITA science library, 2010.

[Pizzocaro et al., 2013] Diego Pizzocaro, Christos Parizas, Alun Preece, Dave Braines, David Mott, and Jonathan Z. Bakdash. CE-SAM: A conversational interface for ISR mission support. Next-Generation Analyst, 2013.

[Preece et al., 2014] Alun Preece, Dave Braines, Diego Pizzocaro, and Christos Parizas. Human-machine conversations to support multi-agency missions. ACM SIGMOBILE Mobile Computing and Communications Review, 18(1):75–84, Dec 2014.

[Preece et al., 2017a] Alun Preece, Federico Cerutti, Dave Braines, Supriyo Chakraborty, and Mani Srivastava. Cognitive computing for coalition situational understanding. In First International Workshop on Distributed Analytics InfraStructure and Algorithms for Multi-Organization Federations, 2017.

[Preece et al., 2017b] Alun Preece, William Webberley, Dave Braines, Erin G. Zaroukian, and Jonathan Z. Bakdash. Sherlock: Experimental evaluation of a conversational agent for mobile information tasks. IEEE Transactions on Human-Machine Systems, 47(6):1017–1028, 2017.

[Smart et al., 2009] Paul R. Smart, Trung Dong Huynh, David Mott, Katia Sycara, Dave Braines, Michael Strub, Winston Sieck, and Nigel R. Shadbolt. Towards an understanding of shared understanding in military coalition contexts. 2009.