A Web-based Tool for Expert Elicitation in Distributed Teams

Carlo Spaccasassi                          Lea Deleris
IBM Dublin Research Lab                    IBM Dublin Research Lab
Damastown, Dublin, D15, Ireland            Damastown, Dublin, D15, Ireland
spaccasa@ie.ibm.com                        lea.deleris@ie.ibm.com

Abstract

We present in this paper a web-based tool developed to enable expert elicitation of the probabilities associated with a Bayesian network. The motivation behind this tool is to enable assessment of probabilities from a distributed team of experts when face-to-face elicitation is not an option, for instance because of time and budget constraints. In addition to the ability to customize surveys, the tool provides support for both quantitative and qualitative elicitation, and offers administrative features such as elicitation survey management and probability aggregation.

1 Introduction

There is a thriving research community that studies techniques for learning the structure and parameters of a belief network from data [1]. However, when there is no relevant data available, nor any literature to guide the construction of the model, the network must be elicited from the individuals whose beliefs are being captured; such a person is often referred to as the domain expert, or simply expert. Both the structure and the parameters of a belief network need to be elicited. Often it is easier to construct the structure of a belief network than to elicit the parameters, i.e. the conditional probabilities [2, 3, 4, 5]. We focus in this paper on parameter elicitation, assuming that the structure of the network has already been ascertained.

Best practice in parameter elicitation is based on face-to-face interviews of the expert by a trained analyst (or knowledge engineer) [6, 7, 8]. However, situations arise where such an approach is not feasible, mostly because of time and budget constraints. This is especially salient in projects with a distributed team of experts, which, as Bayesian modeling gains popularity, are more likely to arise than in the past [9].

Consider the following real-world example. We undertook a project focused on understanding variability in the performance of a specific human resource process and elected to use a Bayesian network as our modeling framework. The domain experts were regular employees acting as experts; they were scattered across the world and spanned different domains of expertise. We did not have the possibility of undertaking face-to-face sessions and opted for replacing them with phone interviews. The structural definition of the model, identifying the variables and their inter-dependence, did not yield many difficulties nor complaints from the experts. By contrast, the quantitative phase proved time consuming and generated significant frustration on both sides (analysts and experts). In particular, our efforts were hampered by (i) the time difference, leading to early-morning or late-night sessions for either the expert or the analyst, and (ii) the time pressure on the experts because of the analyst waiting on the phone for them to provide an answer. The main challenge, however, was to have experts understand the format of the conditional probability tables (CPTs). Overall, we concluded that phone elicitation was not an adequate support for remote parameter elicitation and that eliciting probabilities directly in the CPT created unnecessary cognitive burden.

The risk elicitation tool that we present here aims at addressing those concerns. We opted for a web-based tool, whose asynchronous nature enables more comfortable time management of the elicitation process on the experts' side (albeit less control for the analyst). An advantage of the web-based setup is the ability for the analyst to centrally manage the elicitation surveys. While we recognize that web-based approaches are second-best to face-to-face elicitation, we feel that such a tool would enable wider adoption of Bayesian models in settings where face-to-face elicitation is unlikely.

The remainder of the paper is organized as follows. In Section 2, we review the literature related to probability elicitation in Bayesian networks. Section 3 provides an in-depth description of the Risk Elicitation tool. Finally, Section 4 discusses related research endeavors.

2 Expert Elicitation in Belief Networks

The process of eliciting probabilities from experts is known to be affected by numerous cognitive biases, such as overconfidence and anchoring effects [10]. When eliciting probabilities in the context of a belief network, additional practical challenges must be considered [11].

One particular problem lies with the number of parameters that have to be elicited from the experts, which leads to long and tiring elicitation sessions and sometimes inconsistent and approximate answers. To alleviate such problems, the analyst often resorts to making assumptions about the conditional relationships that reduce the number of parameters to be elicited, by parameterizing the network structures using NOISY-OR and NOISY-MAX models (see for instance [2, 12]). This is in fact an option that we will provide in the next version of our tool.

As we mentioned in the introduction, another challenge associated with elicitation in Bayesian networks is the difficulty for the expert to understand the structure of a conditional probability table. While considering scenarios is fairly intuitive, understanding which entry corresponds to which scenario can be unnecessarily confusing. Efforts have thus been made to improve the probability entry interface in probability elicitation tools [13, 14]. Our tool integrates findings from this stream of research, by asking a simple text question corresponding to each cell of the CPT and by grouping together all assessments corresponding to the same scenario (although our support does not enable us to show them all at once, but simply sequentially). Indeed, previous research has shown that presenting all conditioning cases for a node together during elicitation reduces the effect of biases [5].

Finally, the need to provide precise numerical answers is considered an additional cognitive obstacle for experts. One solution to this problem is to present the elicitation scale with verbal and numerical anchors [15, 5, 16]. We included such findings in the design of our tool, enabling the analyst to ask questions in a qualitative manner. Another solution is to elicit qualitative knowledge from experts, for instance by asking them to provide a partial order of the probabilities and leveraging limited data whenever available [17].

3 Description of the Tool

3.1 Overview

The Risk Elicitation tool is a web-based application that offers both (i) an interactive web interface through which parameter elicitation surveys can be answered and automatically collected, and (ii) support for survey management. The tool can be freely accessed from the Internet; any web browser with Adobe's Flash Player 10 [18] installed will be able to run it. Given its web availability, the Risk Elicitation tool is virtually always available. Moreover, interviewees can complete a survey with little external help, and can pause and resume the survey at a later time, thus further relaxing the need to coordinate interviewers and interviewees.

We distinguish two classes of users of the Risk Elicitation tool: analysts and domain experts. In the following sections we describe the main use cases of the tool: setting up elicitation surveys (Analyst), answering a survey (Expert), and collecting and aggregating results (Analyst). We also provide at the end of this section a description of the architectural setup, along with a short discussion of the technical challenges that we met.

3.2 Setting up Elicitation Surveys

As mentioned earlier, we assume that the starting point of the process is a Bayesian network whose structure is fully defined, including a clear description of nodes and associated states. The first step for the analyst is therefore to load his Bayesian network file into the Risk Elicitation tool. The tool will automatically generate a sample survey, which the analyst can further customize. The second step for the analyst is to create a user account for each expert. Experts, having various domains of expertise, may not be qualified to provide information for all the nodes in the Bayesian network. To address that situation, the analyst can define roles and associate a subset of the nodes with each role. Each expert can then be associated with one or several roles and will only be asked questions on the Bayesian nodes pertaining to his/her role(s)1.

1 In the remainder of this paper we will refer to both expert and analyst as he.

Figure 1: Expert Elicitation page for a quantitative question.

The main features of the tool that enable survey setup are:

BBN Import The Risk Elicitation tool enables analysts to create a personalized survey of the BBN they want to elicit. The BBN can be submitted from the Risk Elicitation tool to a server that automatically generates a survey template.
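To make this step concrete, the core of such template generation — one text question per CPT cell, grouped by parent-state scenario — can be sketched as follows. This is a hypothetical Python illustration, not the tool's actual implementation (which runs on WebSphere and Flex); node names, state names, and the question wording are invented for the example.

```python
from itertools import product

def generate_questions(nodes):
    """Sketch of survey-template generation from a BBN structure.

    nodes: dict mapping node name -> {"states": [...], "parents": [...]}.
    Produces one text question per CPT cell: for each node, for each
    combination of parent states (the scenario), for each node state.
    """
    questions = []
    for name, node in nodes.items():
        parents = node["parents"]
        parent_states = [nodes[p]["states"] for p in parents]
        # product() over no iterables yields one empty scenario,
        # which covers root nodes (no conditioning context).
        for scenario in product(*parent_states):
            context = ", ".join(f"{p} is '{s}'"
                                for p, s in zip(parents, scenario))
            for state in node["states"]:
                text = f"How likely is it that {name} is '{state}'"
                text += f", given that {context}?" if context else "?"
                questions.append({"node": name, "state": state,
                                  "scenario": scenario, "text": text})
    return questions

# Hypothetical two-node network: Training -> Performance.
bbn = {
    "Training": {"states": ["adequate", "inadequate"], "parents": []},
    "Performance": {"states": ["high", "low"], "parents": ["Training"]},
}
for q in generate_questions(bbn):
    print(q["text"])
```

Grouping the generated questions by `(node, scenario)` yields exactly the sequential per-scenario presentation described in Section 2.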
The template can then be customized by the analyst, who can perform the following modifications:

• Provide descriptive details on the Bayesian network, its nodes and its states; add analyst notes to specific questions,
• Choose how to elicit each node, whether quantitatively or qualitatively,
• Define, for the qualitative questions, the possible answers and their relative numerical ranges (which we call calibrations),
• Customize the question texts,
• Choose whether to ask experts about their confidence level,
• Assign an order to the elicitation process (to control the order in which nodes are elicited).

At the moment we only support the GeNIe file format, but our tool can easily be extended to other formats. We have developed our own format for Bayesian networks, to which the GeNIe file format is translated during template generation.

User Management In the Administration section, analysts can register experts with the Risk Elicitation tool and assign them roles. The analyst is presented with a classic user management console, where he can add, delete and update both user accounts and the roles they play in an elicitation survey. Whenever a user account is created, the tool generates an automatic email, which the analyst can further customize and send to the expert, presenting him with his credentials to access the tool and the survey he has been assigned.

3.3 Expert Elicitation

After an expert has been notified of his account credentials, he can access the Risk Elicitation tool. Upon logging in, he can select one of the surveys and roles he has been assigned. At that point, he is offered the option of reviewing a short tutorial of the tool. Moving on to survey answering, he is presented with questions for each relevant node of the Bayesian network. These can ask for either quantitative or qualitative answers. When the survey is complete, the expert can submit it on the Risk Elicitation tool and exit. Expert elicitation is supported by the following features:

Quantitative and Qualitative Elicitation Probabilities can be elicited through either quantitative or qualitative questions. Quantitative questions ask experts to state exact probabilities, using a pie chart for discrete nodes. As shown in Figure 1, each slice of the pie represents a state of the Bayesian node with its associated probability. Users can drag the pie chart edges to provide their estimates for the node currently being evaluated, given the scenario defined in the Context pane (parent nodes and states) at the top left corner of the question page. We also provide direct feedback about the implied odds ratios on the right side of the pie chart, as some situations may be more suited to thinking about relative chances. The map in the top right corner shows the local network topology for the node being elicited. The full Bayesian network is also available in the Road Map tab on the left-hand side.

Qualitative questions do not elicit exact probabilities but ranges of probabilities. As shown in Figure 2, experts are offered a set of labeled ranges, called calibrations, and can select the calibration that best describes the probability of a node being in a state, given the conditions expressed in the Context pane. Calibrations are initially defined by analysts at BBN import time, both in terms of labels and numerical ranges. However, experts have the ability to modify the numerical values of the ranges from the tool itself if they feel they are not appropriate for the specific question.

Figure 2: An example of qualitative question.

Summary Tables There are as many questions for each node as parent state configurations. After all questions for a node have been answered, the expert is shown a summary table that provides a report of all the answers he has given (see Figure 3). This is in fact the conditional probability table built from the answers provided. However, at this point the expert has been actively involved in building it from the ground up and should not be as confused by its structure as if we had presented it upfront. The summary table enables the expert to compare answers across scenarios. If the expert wants to change any of his input, he can navigate back to the associated question by clicking on the related cell in the summary table, as shown in Figure 3. When the expert is satisfied with his answers, he can save and proceed to either answer questions about another node, or submit the survey if all nodes have been answered.

Confidence For each question/node, the expert can provide an indication of his confidence in his answer (provided the analyst has enabled this feature). At this point, confidence indication is qualitative (Low/Medium/High) but could be further defined in terms of a notional sample space, for instance. Confidence information can be used during aggregation, to modify the weight an answer has, or to provide a threshold to filter out answers (e.g. consider only high-confidence answers).

Comment For each question, the expert has the opportunity to provide a comment through a dedicated collapsible text area placed below the question itself. One use of the comment section is to provide details about the understanding of a node description or state, or to specify an implicit assumption that the expert made when providing answers.

3.4 Gathering Information

After setting up surveys and notifying experts, the analyst can use the Status section of the tool to check on the progress of the elicitation process. He is provided with a summary of how many surveys have been completed. From the same section, experts can be reminded to complete their survey by an automatically generated email. Once enough surveys have been completed, the analyst has the option to aggregate expert answers and export a file of the Bayesian network populated with the aggregated values.

Figure 3: Summary page for the questions elicited in Figure 1.

The main mechanisms to enable gathering and aggregation of answers are:

Surveys Monitoring Analysts can monitor the advancement of survey completion from a dedicated section, called the Status Tab. The Status Tab reports which experts have completed their surveys and when, which surveys have not been submitted yet, and which experts have been reminded to finish the survey. To remind an expert to complete his survey, an automatic mailing system is provided to generate and send email reminders to the interested parties. Generated emails kindly remind experts of which surveys they have been assigned, the role they play in them, their account details in case they forgot, and a link to the tool. Analysts can also customize the generated email before sending it from the tool itself, as shown in Figure 4.

Probability Aggregation After all surveys have been completed, an analyst may need to aggregate the answers provided by the experts. We currently support two methods of aggregation: linear opinion pool and logarithmic opinion pool [19]. The analyst can control the aggregation method by assigning a weight to each expert, to give some experts more importance. The tool goes through all completed surveys, collects the probabilities elicited by experts and aggregates them using the method and weights specified by the analyst. Given that qualitative questions do not provide an exact number but a range, we take the midpoint of each range as the representative of the range (while acknowledging that this is a rather simple approach, which we will refine in later versions of the tool). Aggregated values are used to populate the original Bayesian network file imported into the tool. The analyst can then export the aggregated BBN to his computer.

3.5 Implementation Details

The tool employs a classic two-tier architecture, with a web application developed on top of IBM's Websphere Application Server 6.1 [20] and a Flash client built with Adobe's Flex Builder 3 [18]. We have employed a Model-Driven Architecture approach [21] to develop the tool, following the standard Model-View-Control pattern, where the view is the Flash client, most of the controls are in the web server, and the model is the survey itself, exchanged and modified by both server and client. Communication is handled by web services using JAX-RPC [22].

Survey data has been modeled using the Eclipse Modeling Framework (EMF) [23]. We first designed an abstract, graphical representation of the data that the survey needed to capture in EMF. The resulting representation, or model, is similar to a UML class diagram. Code to manipulate and also persist the model is automatically generated from the model and taken care of by EMF.

EMF does not natively support ActionScript, Adobe's programming language: EMF's standard tools cannot generate model manipulation code automatically for it. To address this problem, we bridged EMF to a Web Service definition file (WSDL). We first exported EMF's models to an XML Schema, which we imported into the WSDL file. Adobe's Flex can then generate code from the WSDL file both to communicate with the server and to access the model. Communication points between server and client are also generated from the WSDL file. Extensions and modifications to either the model or the communication points, on the server and client side, were handled automatically by either EMF or Flex, avoiding error-prone manual tasks and saving development time.

Finally, we import and export BBN files written in the SMILE/GeNIe format [24]. EMF automatically manages GeNIe file loading and saving, using the XML Schema definition, which is publicly available. The GeNIe files are then converted to an internal EMF model designed to ease BBN manipulation.

Figure 4: Customizable email templates from the administration console.

4 Related Research

In this section, we briefly discuss some of the research questions that have arisen from the development of the risk elicitation tool. In particular, we have focused on the effect on the elicitation process of the order in which the nodes are presented. Because of the web-based nature of our tool, we have more freedom in determining the order than traditional face-to-face approaches.

4.1 Experiencing Different Orders

We considered the following question: Does the parameter elicitation ordering in belief networks even matter to a domain expert? To answer this question, we explored the relationship between node ordering and user-friendliness of the elicitation process in an experimental setting. Specifically, three different node orderings for the same belief network were considered: two 'top-down' and one 'bottom-up' ordering, with parameter elicitation performed using the risk elicitation tool described in this paper. Around seventy Stanford University graduate students were asked to elicit a belief network with six nodes on the subject of getting a job immediately after their studies; they were split into approximately three equal groups, one group for each order. The top-down orders presented questions to elicit parameters of parent nodes before children nodes, while the bottom-up order visited children before parents.

In this particular experiment, there was no drop-out: all subjects completed the elicitation process, perhaps due to the small size of the network and the incentive of extra class credit (which was only granted for complete assessments). Along with the web-based elicitation survey, the students also responded to a short survey requesting feedback about the elicitation process and the corresponding tool. The results did indicate that the order in which the nodes are presented affects not only how comfortable experts claim to be with the process, but also the time required to elicit the parameters. In particular, there was a significant difference between the orders with regard to user-friendliness, based on the survey responses. For the two top-down orders, hardly any of the subjects felt that the order was confusing, compared to 23% for the bottom-up order. Moreover, the average time to complete the elicitation was lower for the two top-down orders as compared to the bottom-up order. The two top-down orders differed in survey completion time: an average of 400 seconds with a standard deviation of 170 seconds for the first one, against an average of 500 seconds with a standard deviation of 400 seconds for the second one.

4.2 Ordering Mathematically

In a separate study, we explored the problem of determining, for a particular belief network whose structure is known, the optimal order in which the parameters of the network should be elicited. Our objective in determining the order is to maximize information acquisition. While the order of the elicitation process is irrelevant if all nodes are elicited and if experts are able to provide their true beliefs, we believe that new trends in belief network modeling make these assumptions questionable. When only a subset of the nodes may be elicited, or when answers can be noisy, it is necessary to devise an ordering strategy that seeks to salvage as much information as possible.

We therefore developed an analytical method for determining the optimal order for eliciting these probabilities, where optimality is defined as the shortest distance to the true distribution (on which we have a prior). We considered the case where experts may drop out of the elicitation process and modeled the problem through a myopic approach. For the case of uniform Dirichlet priors, we show that the 'bottom-up' elicitation heuristic can be optimal. For other priors, we showed that the optimal order often depends on which variables are of primary interest to the analyst (whether all the nodes in the network or a subset, as is often the case in risk analytic applications).

The orderings resulting from the methods proposed in that model are driven solely by analytical concerns, and do not consider the user-friendliness of the elicitation process. In practice, as we discussed in the previous section, different orderings can impact the perceived difficulty of the process, thereby making the elicitation of complete and accurate beliefs more difficult. These results further motivate the need to investigate the consequences of forcing a possibly unnatural ordering upon experts, and to assess whether the 'information gain' from an analytical perspective is worth the 'cost' in practice, i.e. in terms of the amount of confusion, fatigue and increased imprecision. More generally, empirical research to investigate how experts actually react to different orders is an important topic, similar to the empirical work on understanding how experts actually feel about different probability elicitation tools [15, 14]. The tool presented in this paper could be a useful support for such endeavors.

4.3 Comparison with Existing Web-based Tools

Finally, we compare our tool to two existing web-based tools for risk elicitation pointed out by reviewers: BayesiaLab [25] and the Elicitation Tool from ACERA, the Australian Center of Excellence for Risk Analysis, described in [26].

BayesiaLab, a commercial product developed by Bayesia, provides an integrated environment for working with Bayesian networks. It supports many features, such as BBN modeling, BBN learning from data, and elicitation. With respect to elicitation, analysts can create a profile for each expert, select the portions of a variable's CPT to be elicited, and send this information to a web server over the Internet. The web server generates surveys to elicit probabilities quantitatively, using a slider bar to capture expert input. Experts can also provide a level of confidence in an answer, expressed as a percentage, along with additional comments. In comparison with our tool, many features are similar: both tools provide expert profile management, on-line surveys, and survey import and export. Our tool, however, allows for both quantitative and qualitative elicitation of probabilities. The elicitation formats are different as well: our tool uses pie charts to capture quantitative probabilistic information for discrete random variables, and a slider bar (expressed as a percentage difference from a baseline) to capture the impact of a factor on a (continuous-valued) metric under a specified scenario. Additionally, our tool allows analysts to fully customize surveys and to aggregate by one of several algorithms. We also provide experts with additional contextual information, including a local and global map of the Bayesian network, a tutorial, and the description of each node and state in the Bayesian network, along with analysts' comments.

The Elicitation Tool from ACERA is quite different from both our tool and BayesiaLab, in that it is an on-line questionnaire to directly elicit estimates of risks. Questions are open-ended. An example is: "Will DAGGRE win?". When answering a question, users need to provide four numerical estimates in an HTML form: the lowest estimate, the highest estimate, the best estimate and a confidence level. A graphical representation of the estimates is displayed and the user can submit the answers. After submission, the tool displays a selection of answers from users who have already completed the survey. The user is given the chance to review his own answers in light of this new input and submit again. In contrast, our tool allows review of only the expert's own answers, as shown in the Summary Pages, and is tailored for Bayesian networks, where the goal is to elicit (conditional) probabilistic information. To help experts frame the context of a question, we provide additional information, such as the Bayesian network's local map and the network description. ACERA's tool does not seem to be tied to Bayesian networks, so less contextual information is required in that case.

5 Conclusion

In this paper, we describe a web-based expert elicitation tool for Bayesian network models that is especially relevant for the management of distributed teams of experts. We focus especially on facilitating the understanding of a conditional probability table by asking for each entry separately and in a textual format. The tool enables the management of the survey administration cycle, from the customization of the survey and the creation of roles (associated with a subset of the network) to the monitoring of expert progress and the aggregation of results.

While we have implemented several best practices from the elicitation literature, we have also identified various directions for further development. One simple extension will consist in allowing for NOISY-OR and NOISY-MAX parameterization. Going further, we would like to more strongly encourage qualitative elicitation, asking for orders of magnitude for instance, or, if limited data is available, following the relative order procedure suggested by [17]. In fact, for cases where partial data is available, one could also consider providing feedback to the expert directly during the elicitation session [27]. Finally, we have started providing support for utility/value nodes, but so far in a coarse manner. Initially, experts are asked to identify a parent state configuration for which they are comfortable providing an exact estimate of utility. We call this configuration the base case. For non-base-case configurations, experts only need to specify by how much, in percentage, the utility of the node differs from the base case.

Acknowledgements

This work is partially supported by funding from IDA Ireland (Industrial Development Agency).

References

[1] D. Heckerman. A tutorial on learning with Bayesian networks. In M. Jordan, editor, Learning in Graphical Models. Kluwer, Netherlands, 1998.

[2] M. Henrion. Some practical issues in constructing belief networks. In M. Henrion, R. Shachter, L. Kanal, and J. Lemmer, editors, Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence, pages 161–173. Elsevier Science, New York, NY, 1989.

[3] M. Druzdzel and L. van der Gaag. Elicitation of probabilities for belief networks: Combining qualitative and quantitative information. In P. Besnard and S. Hanks, editors, Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 141–148. Morgan Kaufmann, San Francisco, CA, 1995.

[4] M. Druzdzel and L. van der Gaag. Building probabilistic networks: Where do the numbers come from? IEEE Transactions on Knowledge and Data Engineering, 12(4):481–486, 2000.

[5] L. van der Gaag, S. Renooij, C. Witteman, B. Aleman, and B. Taal. How to elicit many probabilities. In K. Laskey and H. Prade, editors, Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 647–665. Morgan Kaufmann, San Francisco, CA, 1999.

[6] C. Spetzler and C. von Holstein. Probability encoding in decision analysis. Management Science, 22(3):340–358, 1975.

[7] M. Merkhofer. Quantifying judgmental uncertainty: Methodology, experiences and insights. IEEE Transactions on Systems, Man and Cybernetics, 17(5):741–752, 1987.

[8] R. L. Keeney and D. von Winterfeldt. Eliciting probabilities from experts in complex technical problems. IEEE Transactions on Engineering Management, 38(3):191–201, August 1991.

[9] Sandra Hoffmann, Paul Fishbeck, Alan Krupnick, and Michael McWilliams. Elicitation from large, heterogeneous expert panels: Using multiple uncertainty measures to characterize information quality for decision analysis. Decision Analysis, 4(2):91–109, 2007.

[10] D. Kahneman, P. Slovic, and A. Tversky. Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982.

[11] S. Renooij. Probability elicitation for belief networks: Issues to consider. The Knowledge Engineering Review, 16(3):255–269, 2001.

[12] Adam Zagorecki and Marek Druzdzel. Knowledge engineering for Bayesian networks: How common are noisy-max distributions in practice? In Proceedings of the 17th European Conference on Artificial Intelligence (ECAI 2006), Riva del Garda, Italy, pages 482–486. IOS Press, Amsterdam, The Netherlands, 2006.

[13] L. R. Hope, A. E. Nicholson, and K. B. Korb. Knowledge engineering tools for probability elicitation. Technical report, 2002.

[14] H. Wang and M. Druzdzel. User interface tools for navigation in conditional probability tables and elicitation of probabilities in Bayesian networks. In C. Boutilier and M. Goldszmidt, editors, Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 617–625. Morgan Kaufmann, San Francisco, CA, 2000.

[15] S. Renooij and C. Witteman. Talking probabilities: Communicating probabilistic information with words and numbers. International Journal of Approximate Reasoning, 22:169–194, 1999.

[16] F. Fooladvandi, C. Brax, P. Gustavsson, and M. Fredin. Signature-based activity detection based on Bayesian networks acquired from expert knowledge. In 12th International Conference on Information Fusion (FUSION '09), pages 436–443, July 2009.

[17] E. M. Helsper, L. C. van der Gaag, A. J. Feelders, W. L. A. Loeffen, P. L. Geenen, and A. R. W. Elbers. Bringing order into Bayesian-network construction. In Proceedings of the 3rd International Conference on Knowledge Capture (K-CAP '05), pages 121–128, New York, NY, USA, 2005. ACM.

[18] Jeff Tapper, Michael Labriola, Matthew Boles, and James Talbot. Adobe Flex 3: Training from the Source. Adobe Press, first edition, 2008.

[19] Robert T. Clemen and Robert L. Winkler. Combining probability distributions from experts in risk analysis. Risk Analysis, 19:187–203, 1999.

[20] E. N. Herness, R. H. High, and J. R. McGee. WebSphere Application Server: A foundation for on demand computing. IBM Systems Journal, 43(2):213–237, April 2004.

[21] Richard Soley and the OMG Staff Strategy Group. Model-driven architecture. http://www.omg.org/~soley/mda.html, 2000.

[22] Roberto Chinnici. Java APIs for XML based RPC (JSR 101). http://jcp.org/aboutJava/communityprocess/first/jsr101/, October 28, 2003.

[23] Frank Budinsky, Stephen A. Brodsky, and Ed Merks. Eclipse Modeling Framework. Pearson Education, 2003.

[24] Marek J. Druzdzel. SMILE: Structural Modeling, Inference, and Learning Engine, and GeNIe: A development environment for graphical decision-theoretic models. In Proceedings of the Sixteenth National Conference on Artificial Intelligence (AAAI '99/IAAI '99), pages 902–903, Menlo Park, CA, USA, 1999. American Association for Artificial Intelligence.

[25] Bayesia. BayesiaLab. http://www.bayesia.com/en/products/bayesialab.php.

[26] Andrew Speirs-Bridge, Fiona Fidler, Marissa McBride, Louisa Flander, Geoff Cumming, and Mark Burgman. Reducing overconfidence in the interval judgments of experts. Risk Analysis, 30(3):512–523, 2010.

[27] A. H. Lau and T. Y. Leong. PROBES: A framework for probability elicitation from experts. In Proceedings of the AMIA Symposium. American Medical Informatics Association, 1999.