Situational Awareness from Social Media

Brian Ulicny, Jakub Moskal
VIStology, Inc.
Framingham, MA
{bulicny,jmoskal}@vistology.com

Mieczyslaw M. Kokar
Northeastern University
Boston, MA
m.kokar@neu.edu

Abstract—This paper describes VIStology's HADRian system for semantically integrating disparate information sources into a common operational picture (COP) for humanitarian assistance/disaster relief (HADR) operations. Here the system is applied to the task of determining where unexploded or additional bombs were being reported via Twitter in the hours immediately after the Boston Marathon bombing in April, 2013. We provide an evaluation of the results and discuss future directions.

Keywords—social media, situational awareness, Boston Marathon bombing.

I. INTRODUCTION

The Homeland Security Act (2002) defines situational awareness as "information gathered from a variety of sources that, when communicated to emergency managers and decision makers, can form the basis for incident management decision-making" [1]. Incident commanders for humanitarian assistance/disaster relief (HA/DR) operations are better able to understand a situation and make appropriate decisions if they can view all of the relevant information in an integrated common operational picture (COP) in a way that allows them to make sense of the situation without being overwhelmed with information. However, HA/DR commanders should not be expected to know where all the relevant information is stored or how it is encoded. It would be better if a system would identify how to meet a commander's high-level information needs on the basis of previously annotated information stores that could be brought to bear in an emergency.

In such dynamic situations, it would also be desirable if the system allowed an administrator to quickly annotate new information stores in order to make them answerable to the commander's needs and, secondly, to provide enough annotation that the system knew how to query, transform, load and analyze the data relevant to the commander's high-level needs.

In a large-scale emergency, such as the aftermath of the Boston Marathon bombings on April 15, 2013 [2], masses of people communicated information rapidly via social media and reacted to those messages, shaping the situation. Some were reporting what they were observing on the scene; others were not on the scene and merely commented on or relayed information they received from elsewhere. While often dismissed as trivial, FEMA officials have testified that "Social media is imperative to emergency management because the public uses these communication tools regularly…. With one click of the mouse, or one swipe on their smartphone's screen, a message is capable of being spread to thousands of people and have a tangible impact" [3].

In order to understand the situation and respond effectively, a commander must therefore have access to what people are saying on social media, and it must be presented in such a way that the commander can respond to it effectively. However, neither the commander nor his or her staff has time to read all of those messages and identify what is relevant in order to assess the situation. Semantic machine processing of the messages must provide the necessary insight into the relevance of particular messages and summarize their significance to the commander's information needs in a way that enables decisions and actions.

VIStology's HADRian project, our internal name for an AFRL SBIR Phase II project titled "Fusion, Management, and Visualization Tools for Predictive Battlespace Awareness and Decision Making", is focused on quickly integrating disparate data sources into a COP by semantically annotating datastores with an ontology against which commander queries can be issued to determine relevant repositories, formulate the proper queries to issue to those repositories, extract results, reason with the query results, filter them and display them. This project extends previous data virtualization work at VIStology, sponsored by the Office of Naval Research, for representing and reasoning about maritime track repositories annotated with an ontology; the current project, sponsored by AFRL, includes entities of a variety of types for use in HA/DR situations. In this paper, we examine the application of this technology to deriving situational awareness from social media.
II. HADRIAN BACKGROUND AND CONCEPT OF OPERATIONS

In the first phase of this project, we developed techniques for dealing with a range of object types, a variety of data representation formats, and different types of interfaces (RESTful web services and GPS track servers, among others). A guiding principle in this project is that HA/DR commanders cannot dictate where relevant information is uploaded by users. Our goal is to make information usable wherever content creators upload it, as long as it is online. Thus, we need to develop techniques for accessing it in various ways. It turns out that RESTful web services are very common for retrieving information produced by 'ad hoc sensor networks', and so we have focused on these.

A proof-of-concept demo we developed reflects the retrieval and integration into a single COP of information from disparate repositories relevant to a scenario in which a plane crashes into a chemical factory. This scenario was drilled at Calamityville, an HA/DR training facility associated with the National Center for Medical Readiness at Wright State University, on May 11, 2011. We used artifacts produced during this drill that exist in various repositories on the Web to illustrate our capabilities. We annotated the repositories that included them but did not modify the artifacts prior to incorporating them.

The Concept of Operations for our system is as follows:

1. A COP Administrator who manages the system annotates repositories using an ontology, i.e., a formal representation of the conceptual domain.
2. The COP Administrator formulates a High Level Query to describe information needs for the current operation.
3. The System infers repositories that may contain relevant information by reasoning over the metadata that each repository has been annotated with.
   a. Information remains in place until it is needed. It is not initially all extracted, transformed and loaded (ETL).
   b. Users upload data wherever they usually upload it, not to a central repository.
4. The System issues appropriate low level queries to repositories.
5. The System filters out some irrelevant data.
6. The System aggregates and displays data in a COP.
7. Users, including the EOC (Emergency Operations Center) or Incident Commander and other operations center staff, interact with the data in the COP.
8. The COP operator pushes elements of the displayed information to users in the field via their smartphones as needed.

In order to produce this demo, we developed:

1. Domain ontologies for representing repositories and queries, incorporating other ontologies as needed, such as UCore-SL [4] and Distributed Interactive Simulation (DIS) Protocol Data Unit (PDU) simulation data (for tracks) [11], to represent the conceptual and technical domain.

2. BaseVISor inference engine rules for reasoning about relevant repositories and rewriting query URLs in order to retrieve information elements from RESTful web interfaces and PDU sources that are relevant to this scenario. BaseVISor is VIStology's OWL 2 RL forward-chaining inference engine.

3. A novel technique for producing OWL representations of individual data items from the JSON output by RESTful web services. This allows us to generate OWL for reasoning without developing any custom software, on the basis of metadata and annotations alone (a sketch of this idea appears after this list).

4. Technology for integrating a variety of information types into the COP. We developed tools for integrating text, video, photos, and map overlays into a common COP based on Google Earth. In Phase I, we integrated Google SketchUp 3D facility models into the demo, as well as GPS tracks encoded as Distributed Interactive Simulation Protocol Data Unit binary data, and social media video, photos, and tweets.
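The sketch below illustrates the general idea behind item 3: converting the JSON returned by a RESTful service into OWL individuals using nothing but a declarative template supplied as repository metadata. It is a minimal illustration in Python with rdflib, not VIStology's implementation; the namespace, class and property names are assumptions made for the example.

```python
# Minimal sketch (not the HADRian implementation): map JSON records from a RESTful
# service to OWL individuals using a declarative key-to-property template.
import json
from rdflib import Graph, Namespace, Literal, RDF, URIRef

HADR = Namespace("http://example.org/hadr#")   # hypothetical ontology namespace

# Template supplied as repository metadata: the OWL class the items instantiate
# and how JSON keys map onto ontology properties.
TEMPLATE = {
    "class": HADR.StatusUpdate,
    "properties": {"text": HADR.hasText, "created_at": HADR.createdAt, "user": HADR.postedBy},
}

def json_items_to_owl(json_text: str, template: dict) -> Graph:
    """Convert a JSON array of records into RDF/OWL triples using the template alone."""
    g = Graph()
    for i, item in enumerate(json.loads(json_text)):
        subject = URIRef(f"http://example.org/data/item{i}")
        g.add((subject, RDF.type, template["class"]))
        for key, prop in template["properties"].items():
            if key in item:
                g.add((subject, prop, Literal(item[key])))
    return g

if __name__ == "__main__":
    sample = '[{"text": "Reports of another device", "created_at": "2013-04-15T16:10:00Z", "user": "123"}]'
    print(json_items_to_owl(sample, TEMPLATE).serialize(format="turtle"))
```

The point of a template of this kind is that adding a new RESTful source requires only new metadata, not new code.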
III. JIFX 13-4 FIELD EXPERIMENT

VIStology, Inc. recently conducted a field trial of its HADRian semantic information integration technology for Humanitarian Assistance/Disaster Relief operations at an invitation-only event sponsored by the Naval Postgraduate School, held August 5-8, 2013, at McMillan Airfield, Camp Roberts, near Paso Robles, CA.

In the scenario that we pursued there, a commander needs to determine, on the basis of social media messages (here, only Twitter posts), where additional or unexploded bombs are being reported to be located (truly or falsely) in the aftermath of the Boston Marathon bombing, in order to evaluate where to dispatch resources. In the immediate aftermath of the Marathon bombings, several locations were reported to have additional, unexploded bombs, all mistakenly, as it turned out. Of course, it was not obvious at the time that the reports were false, and it was incumbent on public officials to maintain order and control at those sites in case they did in fact contain a threat to public safety.

Our objective is to evaluate the feasibility of deriving situational awareness from a representative corpus of social media messages gathered immediately after the Boston Marathon bombing. The corpus consists of approximately 0.5 million messages that span the three hours following the bombing. In this experiment, information from social media users (here, Twitter users) was analyzed for answers to the high level query "Where are people reporting that additional or unexploded bombs have been found?"¹ Answers to this question were identified and presented in the COP in an appropriate way. The information included represented the following:

- Where are additional/unexploded bombs being reported to exist?
- When were those messages propagated?
- How often have these messages been propagated (i.e., the amount of attention being directed to each location)?

We were not yet able to represent answers to the following (a future goal):

- How reliable and credible are the reports of a bomb at that location?

¹ This scenario was suggested to us by Desi Matel-Anderson, FEMA Innovation Advisor and Think Tank Strategic Vision Coordinator, at RELIEF 13-3.
IV. SYSTEM DESCRIPTION

The HADRian system can be thought of as having four functionalities that are relevant to this scenario:

A. Query Formulation and Repository Annotation
B. Relevance Reasoning and Repository Querying
C. Results Reasoning
D. Interactive Display

A. Query Formulation and Repository Annotation

High level information needs are represented in our system ontology as instances of an OWL class called High Level Query (HLQ). In our system, an HLQ is not a query string in any particular query language, such as SQL or SPARQL. Rather, it is a description of one or more such queries, represented in OWL. That is, it should be possible to derive the OWL description of a query string by parsing and analyzing the query. We have made some attempts at translating SPARQL queries and even natural language queries into their OWL descriptions automatically. However, at present, we rely on manually encoding HLQs in OWL directly.

A High Level Query is assigned various 'scopes' in the ontology: a Region Scope, a Time Scope, a Topic Scope, a Thing Scope and a Source Scope. Some of these scopes are related via annotation properties to classes or individuals in the ontology (in the case of Thing and Topic Scopes). An HLQ is related via an object property to individuals in the case of Time and Region Scopes. An HLQ essentially corresponds to an instance of a query of the form:

Find all instances of class T produced by instances of class S that are about instances of class U that existed in region R during temporal period P.

Here, class T corresponds to the Thing Scope of the HLQ. A Thing Scope relates a query to the kind of thing that constitutes an answer to the query. For example, in English, "who" queries seek a Person or subclass of Person as an answer (e.g., Q: "Who can sign my timecard?" A: "Bill", "a manager"). A Topic Scope specifies what the specified 'things' from the Thing Scope are about: e.g., magazines about Sports. In the query template above, R corresponds to the Region Scope, which is an individual region in the ontology. P corresponds to the Time Scope, which is an individual temporal range in the ontology. The Source Scope S indicates that all of the things that satisfy the query must have been produced by an individual of class S or a subclass of S. The classes that are represented may be expressed with arbitrarily complex OWL class expressions.

Repositories are also a class in our ontology. Every repository likewise has a Thing, Topic, Region, Time and Source Scope. Thus, for example, a repository of tweets about traffic accidents in Paso Robles, CA, during 2012 from the Paso Robles (CA) Police Department would have the following scopes:

Thing Scope: StatusUpdate
Topic Scope: TrafficAccident
Region Scope: Paso Robles, CA
Time Scope: 2012
Source Scope: Paso Robles Police Department

HLQs and Repository Annotations are represented in an OWL ontology that incorporates the UCore-SL ontology [4] and aspects of the Dublin Core [5] and GeoNames [6] ontologies. Any ontology editor can be used to annotate repositories and formulate queries. We currently use Protégé 4.x for this purpose, but any other OWL editor would do.
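As a concrete illustration of the Paso Robles example above, the following sketch expresses that repository annotation, with its five scopes, as RDF triples. This is a minimal sketch using rdflib; the property names (thingScope, topicScope, and so on) and the namespace are illustrative assumptions, not the vocabulary of the paper's actual ontology.

```python
# Illustrative repository annotation for the Paso Robles example; names are assumed.
from rdflib import Graph, Namespace, RDF

HADR = Namespace("http://example.org/hadr#")   # hypothetical ontology namespace

g = Graph()
repo = HADR.PasoRoblesPDTweets
g.add((repo, RDF.type, HADR.Repository))
g.add((repo, HADR.thingScope,  HADR.StatusUpdate))          # class of items the repository holds
g.add((repo, HADR.topicScope,  HADR.TrafficAccident))       # what the items are about
g.add((repo, HADR.regionScope, HADR.PasoRoblesCA))          # individual region
g.add((repo, HADR.timeScope,   HADR.Year2012))              # individual temporal range
g.add((repo, HADR.sourceScope, HADR.PasoRoblesPoliceDept))  # producer of the items

print(g.serialize(format="turtle"))
```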
B. Relevance Reasoning and Repository Querying

Relevance Reasoning, in our system, is the process of identifying which repositories are relevant to a High Level Query based on their OWL annotations [8]. In HADRian, we do not examine the contents of a repository in identifying it as relevant. The system only considers the metadata that has been assigned to it.

A Repository is inferred to be relevant to an HLQ if (but not only if) its scopes overlap with the Thing, Topic, Region and Time scopes of the HLQ. If a scope is specified in terms of a class, then a subclass or superclass overlaps with it. Regional and temporal overlaps are defined in the obvious way. A Topic Scope defined in terms of an individual coincides with any coreferential term.

A Repository, in our system, is a collection of items that could be represented in the COP. As collections of items, repositories may be defined extensionally, as a pre-specified collection of things, or intensionally, as the items that satisfy certain criteria, expressed as a query to a larger repository. For example, a collection of photos in some individual user's Flickr online photo album (flickr.com) represents a collection defined extensionally: the collection was defined by the user's selection of photos for that album. A Flickr query for photos taken in Yosemite Park on a particular date, however, is a repository that is determined intensionally. The set of photos that meet this criterion is not necessarily known in advance.

Each Repository must have a URL associated with it that enables the system to retrieve (extensional) or query (intensional) the data. Many of the repositories we deal with have RESTful interfaces. A query-defined repository for a RESTful interface may have parameters that are specified at run time based on the High Level Query. For example, a query for businesses listed in Yelp (yelp.com) may have a parameter for a zipcode that is filled at runtime by the zipcode corresponding to the area(s) in the Region Scope of the HLQ.

For the Boston Marathon scenario, the HLQ has obvious Region (Boston, MA) and Time (April 15, 2013) scopes, but the Thing and Topic Scopes are not as obvious. The Thing Scope of the HLQ is defined as the class GeoFeaturesMentionedInStatusUpdates. This class is defined as a subclass of the intersection of the class GeographicFeature (a UCore-SL class defined as "A PhysicalEntity whose (relatively) stable location in some GeospatialRegion can be described by location-specific data.") and the class of things that are the subject of the mentionedIn object property with respect to some StatusUpdate. The class StatusUpdate is equivalent to the sioc:Post class, defined as "An article or message that can be posted to a Forum".²

² Semantically-Interlinked Online Communities (sioc-project.org)

The repository of tweets in this scenario thus has the Thing Scope StatusUpdate, but the HLQ has a Thing Scope of GeoFeaturesMentionedInStatusUpdates, which is neither a super- nor a subclass of StatusUpdate. Therefore, the repository is not directly within the Thing Scope of the HLQ. A relevance reasoning rule, specified in the BaseVISor rule language, states that if an HLQ has a Thing Scope that is a subclass of things mentionedIn some class C, and a repository has a Thing Scope that is a subclass of C, then the repository is relevant to the HLQ.
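The following is an illustrative stand-in, in plain Python rather than the BaseVISor rule language, for the relevance test just described: a class-valued Thing Scope overlaps the HLQ's Thing Scope if one is a sub- or superclass of the other, or if the HLQ's Thing Scope is defined as things mentionedIn some class C of which the repository's Thing Scope is a subclass. The toy class hierarchy is an assumption made for the example.

```python
# Stand-in for the relevance rule (not BaseVISor syntax); hierarchy is illustrative.
SUBCLASS_OF = {  # child -> set of (transitive) superclasses
    "GeoFeaturesMentionedInStatusUpdates": {"GeographicFeature"},
    "StatusUpdate": {"Post"},
}
# Thing Scopes defined as "things mentionedIn some C" record the class C here.
MENTIONED_IN = {"GeoFeaturesMentionedInStatusUpdates": "StatusUpdate"}

def is_subclass(sub: str, sup: str) -> bool:
    return sub == sup or sup in SUBCLASS_OF.get(sub, set())

def class_scopes_overlap(a: str, b: str) -> bool:
    # A subclass or a superclass overlaps with a class-valued scope.
    return is_subclass(a, b) or is_subclass(b, a)

def thing_scope_relevant(hlq_thing: str, repo_thing: str) -> bool:
    """True if the scopes overlap directly, or if the HLQ Thing Scope is a subclass of
    things mentionedIn some class C and the repository's Thing Scope is a subclass of C."""
    if class_scopes_overlap(hlq_thing, repo_thing):
        return True
    c = MENTIONED_IN.get(hlq_thing)
    return c is not None and is_subclass(repo_thing, c)

print(thing_scope_relevant("GeoFeaturesMentionedInStatusUpdates", "StatusUpdate"))  # True
```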
BaseVISor is VIStology's customizable, forward-chaining OWL 2 RL inference engine. BaseVISor (vistology.com/basevisor) provides inference rules for the OWL 2 RL language profile, but it can be extended with custom rules. These rules may be augmented with user-supplied procedural attachments that perform custom functions, in addition to default functionality for mathematical functions, string operations and the like [7].

In this case, the repository of tweets is pre-existing. Therefore, it is extensionally defined and does not require any run-time instantiation of lower level query parameters. We simply extract the contents of the repository and convert them to OWL in order to do results reasoning.

The Topic Scopes of the HLQ and of the Repository both consist of the individual BostonMarathon2013 and the class UnexplodedBombs. Not every tweet in the repository is about UnexplodedBombs, although they are all presumed to be about the 2013 Boston Marathon. The class UnexplodedBombs is associated with a regular expression in the ontology that allows us to filter the retrieved contents to only those tweets that are about both subjects.
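The regular expression actually stored in the ontology for UnexplodedBombs is not given in the paper; the short sketch below shows how such a topic filter might look, with a purely illustrative pattern.

```python
# Sketch of the topic-filtering step; the pattern is an illustrative guess, not the
# regular expression actually associated with UnexplodedBombs in the ontology.
import re

UNEXPLODED_BOMB = re.compile(
    r"\b(unexploded|undetonated|more|additional|another)\b.{0,40}?\b(bombs?|devices?|explosives?)\b",
    re.IGNORECASE,
)

def filter_bomb_tweets(tweet_texts):
    """Keep only tweets whose text matches the unexploded/additional-bomb pattern."""
    return [t for t in tweet_texts if UNEXPLODED_BOMB.search(t)]

print(filter_bomb_tweets([
    "Police report another unexploded device near the library #bostonmarathon",
    "Thoughts with everyone at the finish line #bostonmarathon",
]))  # only the first tweet survives the filter
```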
C. Results Reasoning

After the relevant tweets are converted to OWL, using a template that is part of the metadata annotation of the repository, BaseVISor is again used to reason about the results in order to extract the required elements. Here, a set of custom BaseVISor rules is used to identify locations mentioned in tweets about both unexploded bombs and the 2013 Boston Marathon. These rules produce a set of phrases that refer to locations.

These location phrases are then mapped to known locations using a heuristic algorithm that chooses among the results of querying the Google Places and Google Maps Geocoding APIs, using the location phrase and a geographic region corresponding to Boston as the parameters of the search. This process associates locatable phrases with known locations and removes some phrases that are syntactically plausible but for which no identifiable location can be associated. For example, one of the extracted location phrases is 'BPD Commissioner Ed Davis', based on its context. This phrase corresponds to no known place according to the Google APIs, so it is dropped from the output. Location phrases that do result in known places are collated. Several extracted phrases may coincide with the same known place, according to one or more of the Google APIs. A count of the number of tweets that are associated with each known place is kept. Various metadata elements associated with the known place are inserted into the KML document that is displayed as the result of the query.
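A simplified stand-in for this mapping step is sketched below: each extracted phrase is sent to the Google Geocoding API with a viewport bias toward Boston, phrases with no result are dropped, and phrases resolving to the same place are collated with a tweet count. The real heuristic also consults the Google Places API and chooses among candidate results; the bounding box and the way results are collapsed here are illustrative assumptions.

```python
# Simplified location-mapping sketch (assumes a Google API key is available).
import requests
from collections import Counter

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
BOSTON_BOUNDS = "42.22,-71.20|42.40,-70.98"   # rough SW|NE corners, illustrative only

def resolve_phrase(phrase: str, api_key: str):
    """Return (formatted address, (lat, lng)) for a phrase, or None if no place is found."""
    resp = requests.get(GEOCODE_URL,
                        params={"address": phrase, "bounds": BOSTON_BOUNDS, "key": api_key})
    results = resp.json().get("results", [])
    if not results:
        return None                       # e.g. 'BPD Commissioner Ed Davis' is dropped
    top = results[0]
    loc = top["geometry"]["location"]
    return top["formatted_address"], (loc["lat"], loc["lng"])

def collate(phrases, api_key: str) -> Counter:
    """Count tweet mentions per resolved place, collapsing phrases that map to one place."""
    counts = Counter()
    for phrase in phrases:                # one entry per tweet mention
        place = resolve_phrase(phrase, api_key)
        if place is not None:
            counts[place] += 1
    return counts
```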
D. Interactive Display

Finally, the KML is displayed in the COP as an answer to the High Level Query. Each placemark is labeled with one of the location phrases that produced it. A number in parentheses next to the placemark's title indicates the number of tweets that mentioned one of the location phrases mapping to this location. We emphasize this fact by rendering polygons underneath the placemarks that also correspond to the volume of tweets for the location: the higher and darker the polygon, the more frequently the location was mentioned. Clicking on the placemark reveals the phrases that produced the placemark, the type of place (according to Google), and the API source (Figure 1).

Figure 1. Expanded placemark shows the location phrases that resulted in the placemark, the number of tweets (1158), the type of place (library, museum) and the API source.

Each placemark can be removed from the COP by unchecking a widget in the list of placemarks on the left hand side of the COP (Figure 2). This set of placemarks can be viewed alongside other layers in Google Earth, such as base layers presenting a photographic map of the various structures in the region as well as street names and other geographic features and attributes.

Figure 2. COP indicating that three tweets about unexploded bombs mention the Mandarin Hotel, four mention Copley Square, one Back Bay Station, and so on.
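A minimal sketch of serializing the collated results to KML for display in Google Earth follows. The element names come from the KML 2.2 schema; the per-place polygon styling used to encode tweet volume is omitted, and the coordinates shown are only illustrative.

```python
# Minimal KML serialization sketch; polygon styling for tweet volume is omitted.
from xml.sax.saxutils import escape

def places_to_kml(places):
    """places: iterable of (name, lat, lon, tweet_count, details) tuples."""
    placemarks = []
    for name, lat, lon, count, details in places:
        placemarks.append(f"""
  <Placemark>
    <name>{escape(name)} ({count})</name>
    <description>{escape(details)}</description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>""")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

# Illustrative coordinates only, not taken from the paper.
print(places_to_kml([("JFK Library", 42.316, -71.034, 1158, "library, museum; source: Places API")]))
```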
V. EVALUATION

In this exercise, we annotated a repository containing 509,795 Twitter messages containing the hashtag #bostonmarathon between 4:06 PM and 7:04 PM on April 15, 2013, retrieved using Twitter APIs. The bombs are said to have exploded at 2:49 PM that day. The corpus was collected by Andrew Bauer and his colleagues at Syracuse University's School of Information Studies' NEXIS lab and made available on the Web as a CSV file.³ The file contains the tweet ID number, text, creation time, associated latitude/longitude (if there is one) and user ID. The latitude and longitude in the file represent the location from which the user sent the tweet, not necessarily the location about which the user is reporting. Only 8,300 of the tweets had geocoded origins, or about 1.6% of the corpus. Generally, less than 1% of Twitter users have enabled geotagging of their locations using the location services on their smartphones or other devices [9][10]. In disaster relief datasets that we have examined, geotagged tweets approach 2% of the corpus. We were not concerned with the source location of tweets but with the locations that were mentioned in the tweets, so we ignored these fields even when they were non-null. The repository was annotated in our ontology as described above.

³ https://www.dropbox.com/s/h8wezi2y6pzqfh4/041513_1606-1704_tweets.zip

We evaluated our processing by evaluating: the recall and precision of identifying tweets that mentioned unexploded bombs and the like; the recall and precision of identifying phrases specifying a location in the tweets; and the precision of associating a location phrase with a known place, using the Google APIs mentioned previously.

Precision in automatically identifying instances of a category is the ratio of true positive identifications to all positive identifications. Recall is the ratio of true positive identifications to positive instances in the corpus as a whole. Finally, the F1-measure characterizes the accuracy of a categorization task as a whole by combining the recall and precision into a single metric, weighing each equally:

F1 = (2 × precision × recall) / (precision + recall)

To begin with, we did not evaluate the precision and recall of categorizing the corpus with respect to the topic of the Boston Marathon. We assume that all of the tweets in the corpus were about the 2013 Boston Marathon because they were sent in close temporal proximity to the bombings. It is possible that some of the tweets in the corpus contain the hashtag #bostonmarathon but are in some sense not about the 2013 Boston Marathon. We also have no way to evaluate the recall of this corpus. That is, we have no way to evaluate how many tweets were sent that were about the 2013 Boston Marathon but that did not contain this hashtag and so were not collected in this corpus.

Of the tweets in this corpus, we identified 7,748 that were about additional or unexploded bombs, with a precision of 94.5%, based on a random sample of 200 tweets identified as such. That is, only about 1.5% of the original corpus was identified as referring to additional bombs using our pattern matching. Based on a random sample of 236 tweets from the original corpus, our recall (identification of tweets that discussed additional bombs) was determined to be 50%. That is, there were many more ways to refer to additional bombs than our rules considered. Thus, our F1 measure for accurately identifying tweets about additional bombs was 65%. Nevertheless, because of the volume of tweets, this did not affect the results appreciably.
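The reported F1 of 65% follows directly from the measured precision and recall, as the short check below shows.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1-measure defined above)."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.945, 0.50), 2))   # 0.65 -> the 65% reported for bomb-tweet identification
```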
Having thus reduced the corpus by 98.5%, to only those tweets that discussed unexploded bombs in addition to referring to the 2013 Boston Marathon, we now evaluate the precision and recall of identifying location phrases. Location phrases were identified purely by means of generic pattern matching. We did not use any list of known places, nor did we include any scenario-specific patterns. The precision with which we identified location phrases was 95%. That is, in 95% of the cases, when we identified a phrase as a location phrase, it actually did refer to a location in that context. Mistakes included temporal references and references to online sites. Our recall was only 51.3% if we counted uses of #BostonMarathon that were locative. (We mishandled hashtags with camel case.) Alternatively, since all of the tweets contained some variant of the hashtag #bostonmarathon, this is a somewhat uninformative location phrase. If we ignore this hashtag, then our recall was 79.2%. That is, of all the locations mentioned in tweets about additional bombs at the Boston Marathon, we identified 79.2% of the locations that were mentioned. Using the more lenient standard, our F1 measure for identifying location phrases in the text was 86.3%.

Our precision in associating tweets with known places via the Google APIs was 97.2%. Our precision in assigning unique location phrases to known places via the Google APIs was 50%. That is, there were many location phrases, repeated several times, that we assigned correctly to a known place, but half of the unique phrase names that we extracted were not assigned correctly. Ten extracted location phrases corresponded to no known locations identified via the Google APIs. These included location phrases such as "#jfklibrary" and "BPD Commissioner Ed Davis". The former is a phrase we would like to geolocate, but lowercase hashtags which concatenate several words are challenging. The latter is the sort of phrase that we expect would be rejected as non-geolocatable. See Table 1.

Table 1. Top 20 Identified Places with Number of Tweets

Known Place | #Tweets
JFK Library | 1158
Boston | 629
Boston Marathon | 325
St Ignatius Catholic Church | 47
PD | 29
Boylston | 8
CNN | 5
Copley Sq | 4
Huntington Ave | 4
Iraq | 3
Mandarin Hotel | 3
Dorchester | 3
Marathon | 3
US Intelligence | 3
Copley Place | 2
Boston PD | 2
BBC | 2
Cambridge | 2
John | 2
St James Street #Boston | 2

More qualitatively, the Twitter processing described here resulted in 38 ranked places on the COP that were associated with additional or unexploded bombs. We compared these places with the places that were mentioned in the live blogs that were set up by CNN⁴, the New York Times⁵ and the Boston Globe⁶ immediately following the bombings. These blog sites mentioned the following locations (each only once):

Location [Source]: (# of Tweets Identified with That Location)
Boylston Street [Globe, CNN]: 8
Commonwealth Ave near Centre Street, Newton [Globe]: 0
Commonwealth Ave (Boston) [Globe]: 0
Copley Square [NYT]: 4
Harvard MBTA station [Globe]: 0
JFK Library [CNN, Globe, NYT]: 1158
Mass. General Hospital [Globe, NYT]: 0
(glass footbridge over) Huntington Ave near Copley place [Globe]: 4
Tufts New England Medical Center [NYT]: 0
Washington Square, Brookline [NYT]: 0

For three of these sites (Mass. General Hospital, Tufts New England Medical Center and Washington Square, Brookline), reports of unexploded bombs or suspicious packages occurred after the end of the tweet collection period, at 7:06 PM. Otherwise, the recall of our system was good, missing only the report of unexploded bombs at the Harvard MBTA station. A few tweets mentioning such a threat were in our corpus, but the system failed to pick them up, either due to capitalization issues or unexpected use of hashtags.

Additionally, on average, tweets reflecting these locations were produced 11 minutes prior to their being reported on the sites mentioned. Thus, the tweet processing was more timely and more comprehensive than simply relying on a handful of news sites alone for situational awareness.

⁴ http://news.blogs.cnn.com/2013/04/15/explosions-near-finish-of-boston-marathon/comment-page-18/
⁵ http://thelede.blogs.nytimes.com/2013/04/15/live-updates-explosion-at-boston-marathon/
⁶ http://live.boston.com/Event/Live_blog_Explosion_in_Copley_Square?Page=16

VI. CONCLUSION

In this paper, we described a system for integrating disparate information sources into a COP for Humanitarian Assistance/Disaster Relief operations by means of semantic annotations and queries, using a common ontology. We described the operation of the system and evaluated the results of an experiment in annotating and querying social media data streams in order to produce situational awareness. We applied our technology to a repository of tweets collected in the immediate aftermath of the Boston Marathon bombings in April, 2013, and demonstrated that a ranked set of places reported as the site of an additional unexploded bomb or bombs could be incorporated into the COP, showing the prominence of each site by tweet volume. We evaluated the results formally and compared them with the situational awareness that could be gleaned from mainstream media blogs being updated at the same time. On average, the automatic processing would have had access to locations from tweets eleven minutes before these sites were mentioned on the mainstream media blogs. Additionally, sites that were prominent on Twitter (e.g., St Ignatius Church at Boston College or the Mandarin Oriental Hotel in Boston) were not mentioned on the news blog sites at all. We believe that these results show that this approach is a promising one for deriving situational awareness from social media going forward.

ACKNOWLEDGMENT

This work was performed under AFRL contract FA8650-13-C-6381, "Fusion, Management, and Visualization Tools for Predictive Battlespace Awareness and Decision Making". Thanks also to the JIFX 13-4 participants for helpful feedback.

REFERENCES

[1] United States Code, 2010 Edition, Title 6 - Domestic Security, Chapter 1 - Homeland Security Organization, Subchapter V - National Emergency Management, Sec. 321d - National Operations Center, 6 U.S.C. §321d(a).
[2] FEMA. Lessons Learned - Boston Marathon Bombings: The Positive Effects of Planning and Preparation on Response. August 2, 2013.
[3] Shayne Adamski. Written testimony of FEMA for a House Homeland Security Subcommittee on Emergency Preparedness, Response, and Communications hearing titled "Emergency MGMT 2.0: How #SocialMedia & New Tech are Transforming Preparedness, Response, & Recovery …", July 9, 2013.
[4] Barry Smith, Lowell Vizenor and James Schoening, "Universal Core Semantic Layer", Ontology for the Intelligence Community, Proceedings of the Third OIC Conference, George Mason University, Fairfax, VA, October 2009, CEUR Workshop Proceedings, vol. 555.
[5] Dublin Core Metadata Initiative, http://dublincore.org
[6] GeoNames Ontology, http://www.geonames.org/ontology/
[7] C. Matheus, B. Dionne, D. Parent, K. Baclawski and M. Kokar. BaseVISor: A Forward-Chaining Inference Engine Optimized for RDF/OWL Triples. In Digital Proceedings of the 5th International Semantic Web Conference, ISWC 2006, Athens, GA, Nov. 2006.
[8] M. Kokar, B. Ulicny, and J. Moskal. Ontological structures for higher levels of distributed fusion. In Distributed Data Fusion for Network-Centric Operations, D. Hall, C.-Y. Chong, J. Llinas, and M. Liggins II, Eds. CRC Press, 2012, pp. 329-347.
[9] Z. Cheng, J. Caverlee, and K. Lee. You are where you tweet: A content-based approach to geo-locating Twitter users. In CIKM '10, 2010.
[10] Kalev Leetaru et al. Mapping the global Twitter heartbeat: The geography of Twitter. First Monday, Apr. 2013. ISSN 13960466. Date accessed: 04 Sep. 2013. doi:10.5210/fm.v18i5.4366.
[11] D. McGregor, D. Brutzman, A. Armold, and C. L. Blais, "DIS-XML: Moving DIS to Open Data Exchange Standards," Paper 06S-SIW-132, Simulation Interoperability Standards Organization, 2006 Spring Simulation Interoperability Workshop, Huntsville, AL, April 2006.