Rapid Argumentation Capture from Analysis Reports: The Case Study of Aum Shinrikyo

Mihai Boicu, Gheorghe Tecuci, Dorin Marcu
Learning Agents Center, Volgenau School of Engineering, George Mason University, Fairfax, VA 22030

Abstract— The availability of subject matter experts has always been a challenge for the development of knowledge-based cognitive assistants incorporating their expertise. This paper presents an approach to rapidly develop cognitive assistants for evidence-based reasoning by capturing and operationalizing the expertise that was already documented in analysis reports. It illustrates the approach with the development of a cognitive assistant for assessing whether a terrorist organization is pursuing weapons of mass destruction, based on a report on the strategies followed by Aum Shinrikyo to develop and use biological and chemical weapons.

Keywords— knowledge engineering, learning agent shell for evidence-based reasoning, problem reduction and solution synthesis, agent teaching and learning, intelligence analysis, cognitive assistant, argumentation, weapons of mass destruction.

I. INTRODUCTION

We research advanced knowledge engineering methods for the rapid development of agents that incorporate the knowledge of human experts to assist their users in complex problem solving and to teach students. The development of such systems by knowledge engineers and subject matter experts is very complex due to the difficulty of capturing and representing the experts' problem solving knowledge.

Our approach to this challenge was to develop multistrategy learning methods enabling a subject matter expert who is not a knowledge engineer to train a learning agent through problem solving examples and explanations, in a way that is similar to how the expert would train a student. This has led to the development of a new type of tool for agent development which we have called a learning agent shell [1]. The learning agent shell is a refinement of the concept of expert system shell [2].

As an expert system shell, the learning agent shell includes a general inference engine for a knowledge base to be developed by capturing knowledge from a subject matter expert. The inference engine of the learning agent shell, however, is based on a general divide-and-conquer approach to problem solving, called problem reduction and solution synthesis, which is very natural for a non-technical subject matter expert, facilitates agent teaching and learning, and is computationally efficient. Moreover, in order to facilitate knowledge reuse, the knowledge base of the learning agent shell is structured into an ontology of concepts and a set of problem solving rules expressed with these concepts. The ontology is the more general part of the knowledge base and is usually relevant to many applications in the same domain, such as the military or medicine. Indeed, many military applications will require reasoning with concepts such as military unit or military equipment. Thus, when developing a knowledge-based agent for a new military application, one may expect to be able to reuse a significant part of the ontology of a previously developed agent. The reasoning rules, however, are much more application-specific, such as the rules for critiquing a course of action with respect to the principles of war versus the rules for determining the strategic center of gravity of a force. Therefore, the rules are reused to a much lesser extent. To facilitate their acquisition, the learning agent shell includes a multistrategy learning engine, enabling the learning of the rules directly from the subject matter expert, as mentioned above.

We have developed increasingly more capable and easier to use learning agent shells and have applied them to build knowledge-based agents for various applications, including military engineering planning, course of action critiquing, and center of gravity determination [3].

Investigating the development of cognitive assistants for intelligence analysis, such as Disciple LTA [4] and TIACRITIS [5], has led us to the development of a new type of agent development tool, called a learning agent shell for evidence-based reasoning [6]. This new tool extends a learning agent shell with generic modules for representation, search, and reasoning with evidence. It also includes a hierarchy of knowledge bases, the top of which is a domain-independent knowledge base for evidence-based reasoning containing an ontology of evidence and general rules, such as the rules for assessing the believability of different items of evidence [7]. This knowledge base is very significant because it is applicable to evidence-based reasoning tasks across various domains, such as intelligence analysis, law, forensics, medicine, physics, history, and others. An example of a learning agent shell for evidence-based reasoning is Disciple-EBR [6].

The development of a knowledge-based agent for an evidence-based reasoning task, such as intelligence analysis, is simplified because the shell already has general knowledge for evidence-based reasoning. Thus one only needs to develop the domain-specific part of the knowledge base. However, we still face the difficult problem of having access to subject matter experts who can dedicate their time to teach the agent. This paper presents a solution to this problem. There are many reports written by subject matter experts which already contain significant problem solving expertise. Thus, rather than eliciting the expertise directly from these experts, a junior professional may capture it from their reports.

We will illustrate this approach by considering a recent report from the Center for a New American Security, "Aum Shinrikyo: Insights Into How Terrorists Develop Biological and Chemical Weapons" [8]. This report provides a comprehensive analysis of this terrorist group, its radicalization, and the strategies followed in the development and use of biological and chemical weapons. As stated by its authors: "… this is the most accessible and informative opportunity to study terrorist efforts to develop biological and chemical weapons" [8, p.33]. "This detailed case study of Aum Shinrikyo (Aum) suggests several lessons for understanding attempts by other terrorist groups to acquire chemical or biological weapons" [8, p.4]. "Our aim is to have this study enrich policymakers' and intelligence agencies' understanding when they assess the risks that terrorists may develop and use weapons of mass destruction" [8, p.6].

Indeed, this report presents in detail two examples of how a terrorist group has pursued weapons of mass destruction, one where it was successful (sarin-based chemical weapons), and one where it was not successful (B-anthracis-based biological weapons). We will show how we can use these examples to train Disciple-EBR, evolving it into a cognitive assistant that will help intelligence analysts in assessing whether other terrorist groups may be pursuing weapons of mass destruction. Notice that this process operationalizes the knowledge from the report to facilitate its application in new situations.

We first present a brief summary of the Aum report. Then we explain the process of evidence-based hypothesis analysis using problem reduction and solution synthesis. Finally, we present the actual development of the cognitive assistant.

This research was partially supported by the National Geospatial-Intelligence Agency and by George Mason University. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the National Geospatial-Intelligence Agency or the U.S. Government.

II. AUM SHINRIKYO: INSIGHTS INTO HOW TERRORISTS DEVELOP BIOLOGICAL AND CHEMICAL WEAPONS [8]

The first section of the report describes the creation of the Aum cult by Chizuo Matsumoto in 1984 as a yoga school. Soon after that, Aum started to develop a religious doctrine and to create monastic communities. From the beginning the cult was apocalyptic, believing in an imminent catastrophe that could be prevented only by positive spiritual action. In 1988, the cult started to apply physical force and punishments toward its members to purify the body, and started to commit illegalities.

The second section of the report analyzes the biological weapons program. The cult first tried to obtain botulinum toxin, but it failed to obtain a deadly strain. Nevertheless, the cult released the toxin in 20 to 40 attacks in which, luckily, nobody died. Possible causes of the failure were identified as an ineffective initial strain of C. botulinum, unsuitable culture conditions, unsterile conditions, wrong post-fermentation recovery, and improper storage conditions. Similarly, the anthrax program and its failure are analyzed.

The third section of the report analyzes the chemical weapons program. While other chemical agents were tested during the program, the main part of the program was based on sarin. Although the program had some problems with mass production, it was generally successful, and produced large quantities of sarin at various levels of purity. Aum performed several attacks with sarin, including: (1) an ineffective attack on a competing religious leader in 1993; (2) an attack, in June 1994, with a vaporization of sarin, intended to kill several judges – the vapors were shifted toward a neighborhood, killing 8 persons and injuring 200; (3) several attacks in the Tokyo subway on 20 March 1995, killing 13 and injuring thousands.

The fourth section of the report summarizes the main lessons learned: (1) chemical weapons capabilities seem more accessible than biological capabilities for mass killing; (2) effective dissemination is challenging; (3) recurrent accidents in the programs did not deter their pursuit; (4) during the transition to violence some leaders joined while others were isolated or killed; (5) law enforcement pressure was highly disruptive even though it was not an effective deterrent; (6) the programs and attacks were conducted by the leadership group only, to maintain secrecy; (7) the hierarchical structure of the cult facilitated the initiation and resourcing of the programs but distorted their development and assessment; (8) contemporaneous assessments of the intentions and capabilities of a terrorist organization are difficult, uncertain and even misleading; (9) despite many mistakes and failures, successes were obtained as a result of persistence in the programs.

III. HYPOTHESIS ANALYSIS WITH DISCIPLE-EBR

A class of hypothesis analysis problems is represented in Disciple-EBR as the 7-tuple (O, P, S, Rr, Sr, I, E), as shown in Figure 1, where: O – ontology of domain concepts and relationships; P – class of hypothesis analysis problems; S – solutions of problems; Rr – problem reduction rules that reduce problems to sub-problems and/or solutions; Sr – solution synthesis rules that synthesize the solution of a problem from the solutions of its sub-problems; I – instances of the concepts from O, with properties and relationships; E – evidence for assessing hypothesis analysis problems.

Figure 1. Disciple representation of a class of hypothesis analysis problems.

The ontology O is a hierarchical representation of both general and domain-specific concepts and relationships. The general (domain-independent) concepts are primarily those for evidence-based reasoning, such as different types of evidence. The two primary roles of the ontology are to support the representation of the other knowledge elements (e.g., the reasoning rules), and to serve as the generalization hierarchy for learning. The hypothesis analysis problems P and the corresponding solutions S are natural language patterns with variables. They include first-order logic applicability conditions that restrict the possible values of the variables.

A problem reduction rule Rr expresses how and under what conditions a generic hypothesis analysis problem Pg can be reduced to simpler generic problems. These conditions are represented as first-order logical expressions. Similarly, a solution synthesis rule Sr expresses how and under what conditions generic probabilistic solutions can be combined into another probabilistic solution [9]. As mentioned, Disciple-EBR already contains domain-independent problem reduction and solution synthesis rules for evidence-based reasoning.

Disciple-EBR employs a general divide-and-conquer approach to solve a hypothesis analysis problem.
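In outline, this divide-and-conquer scheme amounts to a recursive evaluation of a reduction tree. The following is a hypothetical mini-implementation, not Disciple-EBR's actual code; the class and function names are illustrative only:

```python
# Minimal sketch of problem reduction and solution synthesis: a problem
# is either elementary (its solution comes from knowledge and evidence)
# or is reduced to sub-problems whose solutions are synthesized bottom-up.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Problem:
    statement: str
    solution: Optional[str] = None      # set only for elementary problems
    sub_problems: List["Problem"] = field(default_factory=list)
    synthesize: Optional[Callable[[List[str]], str]] = None

def solve(problem: Problem) -> str:
    """Reduce top-down; synthesize sub-problem solutions bottom-up."""
    if problem.solution is not None:    # elementary: solved from evidence
        return problem.solution
    return problem.synthesize([solve(p) for p in problem.sub_problems])

# Ordered symbolic likeliness scale (a subset, for illustration).
SCALE = ["unlikely", "likely", "very likely", "almost certain", "certain"]

def weakest(solutions: List[str]) -> str:
    """A min-style synthesis function over the symbolic scale."""
    return min(solutions, key=SCALE.index)

p1 = Problem(
    "Assess hypothesis H1",
    sub_problems=[
        Problem("Assess sub-hypothesis H2", solution="likely"),
        Problem("Assess sub-hypothesis H3", solution="almost certain"),
    ],
    synthesize=weakest,
)
print(solve(p1))                        # -> likely
```

Each reduction node carries its own synthesis function, so different reduction strategies can combine their sub-solutions differently.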
For example, as illustrated in the right-hand side of Figure 1, a complex problem P1 is reduced to n simpler problems P11, …, P1n, through the application of the reduction rule Rri. If we can then find the solutions S11, …, S1n of these sub-problems, then these solutions can be combined into the solution S1 of the problem P1, through the application of the synthesis rule Srj. The Question/Answer pairs associated with these reduction and synthesis operations express, in natural language, the applicability conditions of the corresponding reduction and synthesis rules in this particular situation. Their role will be discussed in more detail in the next section.

Specific examples of reasoning trees are shown in Figures 5, 6, and 11, which will be discussed in the next section. In general, a top-level hypothesis analysis problem is successively reduced (guided by questions and answers) to simpler and simpler problems, down to the level of elementary problems that are solved based on knowledge and evidence. Then the obtained solutions are successively combined, bottom-up, to obtain the solution of the top-level problem.

Figure 2 presents the reduction and synthesis operations in more detail. To assess hypothesis H1 one asks the question Q which happens to have two answers, A and B. For example, a question like "Which is an indicator for H1?" may have many answers, while other questions have only one answer. Let us assume that answer A leads to the reduction of H1 to the simpler hypotheses H2 and H3, and answer B leads to the reduction of H1 to H4 and H5. Let us further assume that we have assessed the likeliness of each of these four sub-hypotheses, as indicated at the bottom part of Figure 2. The likeliness of H2 needs to be combined with the likeliness of H3, to obtain a partial assessment (corresponding to the answer A) of the likeliness of H1. One similarly obtains another partial assessment (corresponding to the answer B) of the likeliness of H1. Then the likeliness of H1 corresponding to the answer A needs to be combined with the likeliness of H1 corresponding to the answer B, to obtain the likeliness of H1 corresponding to all the answers of question Q (e.g., corresponding to all the indicators).

We call the two bottom-level syntheses in Figure 2 reduction-level syntheses because they correspond to reductions of H1 to simpler hypotheses. We call the top-level synthesis a problem-level synthesis because it corresponds to all the known strategies for solving the problem.

Figure 2. Hypothesis assessment through reduction and synthesis. [Each answer of question Q contributes a partial likeliness of H1 through a reduction-level synthesis; a problem-level synthesis then combines these partial assessments; synthesis functions include min, max, average, and weighted sum.]

The likeliness may be expressed using symbolic probability values that are similar to those used in the U.S. National Intelligence Council's standard estimative language: {no possibility, a remote possibility, very unlikely, unlikely, an even chance, likely, very likely, almost certain, certain}. However, other symbolic probabilities may also be used, as discussed by Kent [10] and Weiss [11]. In these cases one may use simple synthesis functions, such as min, max, average, or weighted sum, as shown in Figure 2 and Figure 3 [12].

As indicated above, Disciple-EBR includes general reduction and synthesis rules for evidence-based reasoning which allow it to automatically generate fragments of the reduction and synthesis tree, like the one from Figure 3. In this case the problem is to assess hypothesis H1 based on favoring evidence, guided by the question "Which is a favoring item of evidence?" If E1 is such an item, then Disciple reduces the top-level assessment to two simpler assessments: "Assess the relevance of E1 to H1" and "Assess the believability of E1". If E2 is another relevant item of evidence, then Disciple reduces the top-level assessment to two other simpler assessments. Obviously, there may be any number of favoring items of evidence.

Now let us assume that Disciple has obtained the solutions of the leaf problems, as shown at the bottom of Figure 3 (e.g., "If we assume that E1 is believable, then H1 is very likely to be true." "The believability of E1 is likely."). Notice that what is really of interest in a solution is the actual likeliness value. Therefore, an expression like "The believability of E1 is likely" can be abstracted to "likely." Consequently, the reasoning tree in Figure 3 shows only these abstracted solutions, although, internally, the complete solution expressions are maintained.

Having obtained the solutions of the leaf hypotheses in Figure 3, Disciple automatically combines them to obtain the likeliness of the top-level hypothesis. First it assesses the inferential force of each item of favoring evidence (i.e., E1 and E2) on H1 by taking the min between its relevance and its believability, because only evidence that is both relevant and believable will convince us that a hypothesis is true. Next, Disciple assesses the inferential force of the favoring evidence as the max of the inferential forces corresponding to the individual items of evidence, because it is enough to have one relevant and believable item of evidence to convince us that the hypothesis H1 is true. Disciple will similarly consider disfavoring items of evidence, and will use an on-balance judgment to determine the inferential force of all available evidence on H1.

Figure 3. Automated hypothesis assessment through reduction and synthesis. [E1, with relevance "very likely" and believability "likely", has inferential force "likely" (min); E2, with relevance "certain" and believability "almost certain", has inferential force "almost certain" (min); their max makes H1 "almost certain".]

To facilitate the browsing and understanding of large reasoning trees, Disciple also displays them in abstracted (simplified) form, as illustrated in the bottom-right side of Figure 5. The top-level abstract problem "start with chaos and destruction" is the abstraction of the problem "Assess whether Aum Shinrikyo preaches that the apocalypse will start with chaos and destruction" from the bottom of Figure 5. The abstract sub-problem "favoring evidence" is the abstraction of "Assess whether Aum Shinrikyo preaches that the apocalypse will start with chaos and destruction, based on favoring evidence." This is a specific instance of the problem from the top of Figure 3 which is solved as discussed above. The user assessed the relevance and the believability of the two items of evidence EVD-013 and EVD-014, and Disciple automatically determined and combined their inferential force on the higher-level hypotheses.
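The min/max combination of favoring evidence described above can be sketched as follows. This is an illustrative reimplementation of the stated logic, not Disciple-EBR's actual code, and the function names are assumptions:

```python
# Sketch of the evidence-combination logic: the inferential force of one
# favoring item is the min of its relevance and believability, and the
# inferential force of all the favoring evidence is the max over items.
SCALE = ["no possibility", "a remote possibility", "very unlikely",
         "unlikely", "an even chance", "likely", "very likely",
         "almost certain", "certain"]

def smin(a: str, b: str) -> str:
    """Inferential force of one item: min of relevance and believability."""
    return min(a, b, key=SCALE.index)

def smax(values):
    """Inferential force of all favoring items: max over the items."""
    return max(values, key=SCALE.index)

def assess_on_favoring_evidence(items):
    """items: (relevance, believability) pairs on the symbolic scale."""
    return smax([smin(rel, bel) for rel, bel in items])

# The Figure 3 example: E1 (very likely, likely) and
# E2 (certain, almost certain) make H1 almost certain.
print(assess_on_favoring_evidence(
    [("very likely", "likely"), ("certain", "almost certain")]))
# -> almost certain
```

Because the scale is ordinal, min and max reduce to comparisons of positions on the list of symbolic values.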
IV. AGENT DEVELOPMENT METHODOLOGY

Figure 4 presents the main stages of evolving the Disciple-EBR agent shell into a specific cognitive assistant for hypothesis analysis. The first stage is system specification, during which a knowledge engineer and a subject matter expert define the types of problems to be solved by the system. Then they rapidly develop a prototype, first by developing a model of how to solve a problem, and then by applying the model to solve typical problems. During the next phase they use the developed sample reasoning trees to develop a specification of the system's ontology and use that specification to design and develop an ontology of concepts and relationships which is as complete as possible. Finally, they use the system to learn and refine reasoning rules, which may also require the extension of the ontology.

Figure 4. Main agent development stages. [The stages are system specification, rapid prototyping, ontology development, and rule learning and ontology refinement.]

In the next section we will illustrate the development of a cognitive assistant that will help assess whether a terrorist organization is pursuing weapons of mass destruction. The main difference from the above methodology is that we capture the expertise not from a subject matter expert, but from the Aum report [8].

V. CAPTURING THE EXPERTISE FROM THE AUM REPORT

The Aum report presents in detail two examples of how a terrorist group has pursued weapons of mass destruction. We will briefly illustrate the process of teaching Disciple-EBR based on these examples, enabling it to assist analysts in assessing whether other terrorist groups may be pursuing weapons of mass destruction. For this, we need to frame each of these examples as a problem solving experience, imagining, for instance, that we are attempting to solve the following hypothesis analysis problem:

Assess whether Aum Shinrikyo is pursuing sarin-based weapons.

We express the problem in natural language and select the phrases that may be different for other problems. The selected phrases will appear in blue, guiding the system to learn a general problem pattern:

Assess whether ?O1 is pursuing ?O2.

Then we show Disciple how to solve the hypothesis analysis problem based on the knowledge and evidence provided in the Aum report. The modeling module of Disciple-EBR guides us in developing a reasoning tree like the one from the right-hand side of Figure 1. The top part of this tree is shown in Figure 5.

Figure 5. Detailed and abstract fragments of the hypothesis analysis tree.

The main goal of this stage is to develop a formal, yet intuitive, argumentation structure [12-15], representing the assessment logic as inquiry-driven problem reduction and solution synthesis. Notice that, guided by a question-answer pair, we reduce the top-level hypothesis assessment problem to four sub-problems. We then reduce the first sub-problem to three simpler problems which we declare as elementary hypotheses, to be assessed based on evidence. Once we associate items of evidence from the Aum report with such an elementary hypothesis, Disciple automatically develops a reduction tree. For example, we have associated two items of favoring evidence with the second leaf problem and Disciple has generated the reasoning tree whose abstraction is shown in the bottom-right of Figure 5. After we have assessed the relevance and the believability of each item, Disciple has automatically computed the inferential force and the likeliness of the upper-level hypotheses, concluding: "It is certain that Aum Shinrikyo preaches that the apocalypse will start with chaos and destruction."

The other hypothesis analysis problems are reduced in a similar way, either to elementary hypotheses assessed based on evidence, or directly to solutions. For example, based on the information from the Aum report, the problem "Assess whether Aum Shinrikyo is developing capabilities to secretly acquire sarin-based weapons" is reduced to the problems of assessing whether Aum Shinrikyo has or is attempting to acquire expertise, significant funds, production material, and covered mass production facilities, respectively. Further, the problem "Assess whether Aum Shinrikyo has or is attempting to acquire expertise in order to secretly make sarin-based weapons" is reduced to the problems of assessing whether it has or is attempting to acquire lab production expertise, mass production expertise, and weapons assessment expertise, respectively. Then the problem "Assess whether Aum Shinrikyo has or is attempting to acquire lab production expertise in order to secretly make sarin-based weapons" is solved as indicated in Figure 6.

Figure 6. Sample problem reduction and solution synthesis tree.

As one can see, the strategy employed by Aum Shinrikyo was to identify members trained in chemistry who can access relevant literature and develop tacit production knowledge from explicit literature knowledge. This strategy was successful. A member of Aum Shinrikyo was Masami Tsuchiya, who had a master's degree in chemistry. Moreover, there is open-source literature from which a generally-skilled chemist can acquire explicit knowledge on the development of sarin-based weapons. From it, the chemist can relatively easily develop tacit knowledge to produce sarin-based weapons in the lab. The Aum report thus provides the knowledge and evidence to solve the initial problem, explaining the success of Aum Shinrikyo in pursuing sarin-based weapons.

At this stage Disciple only uses a form of non-disruptive learning from the user, automatically acquiring reduction and synthesis patterns corresponding to the specific reduction and synthesis steps from the developed reasoning tree. These patterns are not automatically applied in problem solving, because they would have too many instantiations, but they are suggested to the user, who can use them when solving a similar problem which, in this case, is "Assess whether Aum Shinrikyo is pursuing B-anthracis-based weapons". The overall approach used by Aum Shinrikyo was the same but, in this case, the group was not successful because of several key differences. For example, Endo, the person in charge of the biological weapons, was not an appropriate expert: "Endo's training, interrupted by his joining Aum, was as a virologist not as a bacteriologist, while in Aum's weapons program he worked with bacteria" [8, p.33]. While there is open-source literature from which a generally-skilled microbiologist can acquire explicit knowledge on the development of B-anthracis-based weapons, "producing biological materials is a modern craft or an art analogous to playing a sport or speaking a language. Though some aspects can be mastered just from reading a book, others relevant to a weapons program cannot be acquired this way with rapidity or assurance" [8, p.33].
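Reusing a learned pattern for such a similar problem can be sketched as follows. The `instantiate` helper is hypothetical; only the `?O1`/`?O2` variable syntax follows the text:

```python
# Sketch of instantiating a learned problem pattern with variables.
import re

def instantiate(pattern: str, bindings: dict) -> str:
    """Replace each ?variable in the pattern with its bound value."""
    return re.sub(r"\?\w+", lambda m: bindings[m.group(0)], pattern)

pattern = "Assess whether ?O1 is pursuing ?O2."
problem = instantiate(pattern, {"?O1": "Aum Shinrikyo",
                                "?O2": "B-anthracis-based weapons"})
print(problem)
# -> Assess whether Aum Shinrikyo is pursuing B-anthracis-based weapons.
```

The same pattern can later be instantiated for any other group and weapon, which is what makes the captured expertise reusable.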
The rapid prototyping stage (see Figure 4) results in a system that can be subjected to an initial validation with the end-users.

The next stage is that of ontology development. The guiding question is: What are the domain concepts, relationships and instances that would enable the agent to automatically generate the reasoning trees developed during rapid prototyping? The questions and answers that guide the reasoning process not only make very clear the logic of the subject matter expert, but they also drive the ontology development process, as will be briefly illustrated in the following.

From each reasoning step of the developed reasoning trees, the knowledge engineer identifies the instances, concepts and relationships mentioned in it, particularly those in the question/answer pair which provides the justification of that step. Consider, for example, the reduction from the bottom-left of Figure 6, guided by the following question/answer pair:

Q: Is there any member of Aum Shinrikyo who is trained in chemistry?
A: Yes, Masami Tsuchiya, because he has a master's degree in chemistry.

This suggests that the knowledge base of the agent should include the objects and the relationships shown in Figure 7. Such semantic network fragments represent a specification of the needed ontology. In particular, this fragment suggests the need for a hierarchy of agents (covering Aum Shinrikyo and Masami Tsuchiya), and for a hierarchy of expertise domains for weapons of mass destruction (including chemistry). The first hierarchy might include concepts such as organization, terrorist group, person, and terrorist, while the second might include expertise domain, virology, bacteriology, microbiology, and nuclear physics. The semantic network fragment from Figure 7 also suggests defining two features, has as member (with organization as domain and person as range), and has master degree in (with person as domain and expertise area as range).

Figure 7. Ontology specification. [Aum Shinrikyo – has as member – Masami Tsuchiya – has master degree in – chemistry.]

Based on such specifications, and using the ontology development tools of Disciple-EBR, the knowledge engineer develops an ontology that is as complete as possible by importing concepts and relationships from previously developed ontologies (including those on the semantic web), and from the Aum report.

The next stage in agent development is that of rule learning and ontology refinement. First one helps the agent to learn applicability conditions for the patterns learned during the rapid prototyping stage, thus transforming them into reasoning rules that will be automatically applied for hypothesis analysis. From each problem reduction step of a reasoning tree developed during rapid prototyping the agent will learn a general problem reduction rule (or will refine it, if the rule was learned from a previous step), as presented elsewhere (e.g., [3, 9, 16]), and illustrated in Figure 8.

Figure 8. Rule learning from a specific reduction. [A specific reduction step and its explanation (a semantic network fragment over the ontology) are minimally generalized into an IF-THEN rule whose applicability condition has plausible upper and lower bounds.]

The left part of Figure 8 shows a specific problem reduction step and a semantic network fragment which represents the meaning of the question/answer pair expressed in terms of the agent's ontology. This network fragment corresponds to the one defined by the knowledge engineer for this particular step during the rapid prototyping phase, as illustrated in Figure 7. Recall that the question/answer pair is the justification of the reduction step. Therefore we refer to the corresponding semantic network fragment as the explanation of the reduction step.

The right-hand side of Figure 8 shows the learned IF-THEN rule with a plausible version space applicability condition. The rule pattern is obtained by replacing each instance and constant in the reduction step with a variable. The lower bound of the applicability condition is obtained through a minimal generalization of the semantic network fragment, using the entire agent ontology as a generalization hierarchy. The upper bound is obtained through a maximal generalization. One, however, only interacts with the agent to identify the explanation of the reduction step, based on suggestions made by the agent. Then the agent automatically generates the rule. For instance, based on the reduction from the left-hand side of Figure 6, and its explanation from Figure 7, Disciple learned the rule from Figure 9.
Finally, one teaches the agent to solve other problems. In this case, however, the agent automatically generates parts of the reasoning tree by applying the learned rules, and one critiques its reasoning, implicitly guiding the agent in refining the rules. For example, based on the explanation of why an instance of the rule in Figure 8 is wrong, the agent learns an except-when plausible version space condition, which is added to the rule as shown in Figure 10. Such conditions should not be satisfied in order for the rule to apply.

Figure 10. Rule refined based on a negative example and its explanation.

Correct reductions lead to the generalization of the rule: by generalizing the lower bound of the main condition, by specializing the upper bound of one or several except-when conditions, or by adding a positive exception when none of the above operations is possible. Incorrect reductions and their explanations lead to the specialization of the rule: by specializing the upper bound of the main condition, by generalizing the lower bound of an except-when condition, by learning the plausible version space for a new except-when condition, or by adding a negative exception. The goal is to improve the applicability condition of the rule so that it generates only correct reductions.

At the same time as learning new rules and refining previously learned rules, the agent may also extend the ontology. For example, to explain to the agent why a generated reduction is wrong, one may use a new concept or feature. As a result, the agent will add the new concept or feature to its ontology of concepts and features. This, however, requires an adaptation of the previously learned rules, since the generalization hierarchies used to learn them have changed. To cope with this issue, the agent keeps minimal generalizations of the examples and the explanations from which each rule was learned, and uses this information to automatically regenerate the rules in the context of the new ontology. Notice that this is, in fact, a form of learning with an evolving representation language.

The trained agent may now assist an analyst in assessing whether other terrorist groups may be pursuing weapons of mass destruction. For instance, there may be some evidence that a new terrorist group, the Roqoppi brigade, may be pursuing botulinum-based biological weapons. The analyst may instantiate the pattern "Assess whether ?O1 is pursuing ?O2" with the name of the terrorist group and the weapon, and the agent will generate the hypothesis analysis tree partially shown in Figure 11, helping the analyst assess this hypothesis based on the knowledge learned from the Aum report.
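The refinement operations described above can be sketched as follows. This is an illustrative simplification, with hypothetical class and function names and a single concept per variable, rather than the actual Disciple rule-refinement algorithm:

```python
# Sketch of plausible-version-space rule refinement: positive examples
# generalize the lower bound of the main condition; negative examples
# add except-when conditions that block similar reductions.

PARENTS = {
    "terrorist organization": "organization",
    "religious cult": "organization",
    "organization": "agent",
    "person": "agent",
}

def ancestors(concept):
    """The concept plus all its generalizations, most specific first."""
    chain = [concept]
    while concept in PARENTS:
        concept = PARENTS[concept]
        chain.append(concept)
    return chain

def covers(general, specific):
    return general in ancestors(specific)

def minimally_generalize(bound, concept):
    """Climb the hierarchy just far enough to cover a new positive example."""
    for candidate in ancestors(bound):
        if covers(candidate, concept):
            return candidate
    return bound

class Rule:
    def __init__(self, lower, upper):
        self.lower = lower        # plausible lower bound, per variable
        self.upper = upper        # plausible upper bound, per variable
        self.except_when = []     # conditions that must NOT hold

    def refine_positive(self, example):
        # A correct reduction generalizes the lower bound of the main condition.
        for var, concept in example.items():
            self.lower[var] = minimally_generalize(self.lower[var], concept)

    def refine_negative(self, example):
        # An incorrect reduction (with its explanation) yields an
        # except-when condition blocking similar reductions.
        self.except_when.append(dict(example))

    def applies(self, case):
        main_ok = all(covers(self.upper[v], case[v]) for v in self.upper)
        blocked = any(all(covers(cond[v], case[v]) for v in cond)
                      for cond in self.except_when)
        return main_ok and not blocked

rule = Rule(lower={"?O1": "terrorist organization"}, upper={"?O1": "agent"})
rule.refine_positive({"?O1": "religious cult"})  # lower bound -> organization
rule.refine_negative({"?O1": "person"})          # blocks reductions about persons
print(rule.lower)                                # {'?O1': 'organization'}
print(rule.applies({"?O1": "terrorist organization"}))  # True
print(rule.applies({"?O1": "person"}))                  # False
```

The pattern instantiation step works the same way in reverse: binding "?O1" and "?O2" to concrete instances selects the rules whose conditions cover them.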
Figure 9. Learned rule.

Figure 11. Part of an automatically generated hypothesis analysis tree.

VI. FINAL REMARKS

We have briefly presented an approach to the rapid development of cognitive assistants for evidence-based reasoning by capturing and operationalizing the subject matter expertise from existing reports. This offers a cost-effective solution to disseminate and use valuable problem solving expertise which has already been described in lessons-learned documents, after-action reports, or diagnostic reports.

REFERENCES

[1] Tecuci G. (1998). Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, San Diego: Academic Press, ISBN 0126851255.
[2] Clancey W.J. (1984). NEOMYCIN: Reconfiguring a rule-based system with application to teaching. In: Clancey W.J., Shortliffe E.H. (eds.) Readings in Medical Artificial Intelligence, pp. 361-381. Reading, MA: Addison-Wesley.
[3] Boicu M., Tecuci G., Stanescu B., Marcu D. and Cascaval C.E. (2001). Automatic Knowledge Acquisition from Subject Matter Experts, in Proceedings of the Thirteenth International Conference on Tools with Artificial Intelligence (ICTAI), pp. 69-78, 7-9 November 2001, Dallas, Texas. IEEE Computer Society, Los Alamitos, California.
[4] Boicu M., Tecuci G., Ayers C., Marcu D., Boicu C., Barbulescu M., Stanescu B., Wagner W., Le V., Apostolova D., Ciubotariu A. (2005). A Learning and Reasoning System for Intelligence Analysis, in Proceedings of the Twentieth National Conference on Artificial Intelligence, AAAI-05, Pittsburgh, Pennsylvania, USA, July 9-13.
[5] Tecuci G., Marcu D., Boicu M., Schum D.A., Russell K. (2011). Computational Theory and Cognitive Assistant for Intelligence Analysis, in Proceedings of the Sixth International Conference on Semantic Technologies for Intelligence, Defense, and Security – STIDS, pp. 68-75, Fairfax, VA, 16-18 November.
[6] Boicu M., Marcu D., Tecuci G., Schum D. (2011). Cognitive Assistants for Evidence-Based Reasoning Tasks, AAAI Fall Symposium on Advances in Cognitive Systems, Arlington, VA, 4-6 November.
[7] Boicu M., Tecuci G., Schum D. (2008). Intelligence Analysis Ontology for Cognitive Assistants, in Proceedings of the Conference "Ontology for the Intelligence Community: Towards Effective Exploitation and Integration of Intelligence Resources," Fairfax, VA, 3-4 December.
[8] Danzig R., Sageman M., Leighton T., Hough L., Yuki H., Kotani R. and Hosford Z.M. (2011). Aum Shinrikyo: Insights Into How Terrorists Develop Biological and Chemical Weapons, Center for a New American Security, Washington, DC, July.
[9] Tecuci G., Boicu M. (2010). Agent Learning for Mixed-Initiative Knowledge Acquisition, Final Report for AFOSR Grant # FA9550-07-1-0268, Learning Agents Center, Fairfax, VA 22030, February 28.
[10] Kent S. (1994). Words of Estimative Probability, in Steury D.P., ed., Sherman Kent and the Board of National Estimates: Collected Essays, Center for the Study of Intelligence, CIA, Washington, DC.
[11] Weiss C. (2008). Communicating Uncertainty in Intelligence and Other Professions, International Journal of Intelligence and CounterIntelligence, 21(1), 57-85.
[12] Schum D.A. (2001). The Evidential Foundations of Probabilistic Reasoning, Northwestern University Press.
[13] Tecuci G., Schum D.A., Boicu M., Marcu D. (2011). Introduction to Intelligence Analysis: A Hands-on Approach with TIACRITIS, 220 pages, George Mason University.
[14] Wigmore J.H. (1937). The Science of Judicial Proof. Boston, MA: Little, Brown & Co.
[15] Toulmin S.E. (1963). The Uses of Argument. Cambridge University Press.
[16] Tecuci G., Boicu M., Boicu C., Marcu D., Stanescu B., Barbulescu M. (2005). The Disciple-RKF Learning and Reasoning Agent, Computational Intelligence, Vol. 21, No. 4, pp. 462-479.