Early requirements engineering for e-customs decision support: Assessing overlap in mental models

Brigitte Burgemeestre, Jianwei Liu, Joris Hulstijn, Yao-Hua Tan
Faculty of Economics and Business Administration, Vrije Universiteit, Amsterdam
{cburgemeestre, jliu, jhulstijn, ytan}@feweb.vu.nl

Abstract. Developing decision support systems is a complex process. It involves stakeholders with diverging interpretations of the task and domain. In this paper, we propose to use ontology mapping to make a detailed analysis of the overlaps and differences between the mental models of stakeholders. The technique is applied to an extensive case study about EU customs regulations. Companies that can demonstrate they are 'in control' of safety and security in the supply chain may become an 'Authorized Economic Operator' (AEO) and avoid inspections by customs. We focus on a decision support tool, the AEO Digiscan, developed to assist companies with an AEO self-assessment. We compared the mental models of customs officials with the mental models of the developers of the tool. The results highlight important differences in the interpretation of the new regulations, which will lead to adaptations of the tool.

Keywords: e-government, shared mental models, decision support systems

1 Introduction

The creation, implementation and enforcement of legislation are complex processes that involve a large number of people, parties and disciplines [7]. In this paper we discuss a decision support system to assist in such a complex regulatory environment. The European Union has drafted new customs legislation intended to make supply chains more secure. Trustworthy companies are certified by customs authorities to become an 'Authorized Economic Operator' (AEO)1,2 and benefit from reduced customs inspections [1]. The AEO legislation has to be implemented by national customs, enforced by regional customs authorities, and understood and applied by businesses. As a result, we observe the introduction of several IT systems that try to support these tasks. To align the tasks of the stakeholders in the certification process, such IT systems have to take complex stakeholder characteristics into account.

1 http://www.douane.nl/zakelijk/aeo/en
2 http://ec.europa.eu/taxation_customs/customs/policy_issues/customs_security

The phase of early requirements engineering aims to analyze stakeholder interests and how they might be addressed or compromised by system requirements [17] [5]. A well-known approach to early requirements engineering is the i* framework [17], which proposes an actor-oriented approach based on the goals and intentions of an actor. An important issue that is not addressed by early requirements methods like i* is the existence of overlap or differences in the interpretations of the various stakeholders. Much work in requirements engineering implicitly assumes that mental models of the task and domain are shared among stakeholders. In practice, however, this assumption is not always warranted. Especially in public-private collaborations, where the parties involved have different interests and backgrounds, differences in interpretation among the various stakeholders can exist. Overlap in task-specific knowledge structures, or having a 'shared mental model', is argued to have a positive influence on performance and effectiveness in collaborative situations [7] [4] [11].
We argue therefore that early requirements engineering should involve identification of the differences and similarities that exist among the mental models of the stakeholders. With the differences clarified, the stakeholders become aware of each other's mental model constructs, which they in turn can use to align their approaches. Unlike some of the empirical work on shared mental models, however, we are not satisfied with mere lists of differences. Instead we propose to use conceptual models in the form of ontologies, as well as ontology mapping techniques, to detect divergent or synonymous concepts in two or more ontologies in a systematic and precise way.

2 Towards a conceptual model

As a starting point for an analysis of the mental models of stakeholders in a regulatory environment, we propose Normative Multiagent Systems (NMAS). Each stakeholder is viewed as an autonomous agent that can act, perceive its environment, communicate with others, and has the skills to achieve its goals and tendencies [16]. Although agents are autonomous, their behavior is restricted by norms. The regulator, which enforces the norms, is also seen as one of the agents and not as a separate entity [2]. This makes sense because both the regulator and the businesses have to interpret the legislation to apply it in practice.

Figure 1 shows a situation in which two agents 'A' and 'B' must collaborate. To do so, they must interpret the norms and implement them in practice. For each agent we draw two 'thinking balloons': the agent's own interpretation of the norms, and the agent's beliefs about the other agent's interpretation of the norms.

Fig. 1. Agents' beliefs about the norms, and about each other's beliefs about the norms

We suggest that for successful collaboration both agents must either have a shared interpretation of the norms, or their mental models must be transparent to the other, so that agents can take actions to overcome differences. To analyze the expected effectiveness of the collaboration we can therefore compare the thinking balloons in two ways (see Figure 1): arrow 1 compares the agents' mental models of the norms, and arrows 2A and 2B compare each agent's mental model with the beliefs the other agent has about that mental model.

To compare the mental models and the beliefs about the mental models we use a technique from software engineering: ontology mapping [12] [8]. We view the two agents in our example as agents that need a (partial) mapping of their ontologies to communicate and collaborate effectively. Unlike most research in ontology matching, for mental model research we cannot assume that a commonly shared body of knowledge, structure or syntax is available. If we built the mental models from scratch, we might end up with ontologies that are even more divergent than the original mental models. We therefore combine several ontology matching techniques to tackle the problem. First, we use generic knowledge model templates from knowledge engineering methods such as CommonKADS [12] to construct the agent-specific mental models we want to compare. In line with the CommonKADS method, the agent models we construct therefore consist of three knowledge categories: domain knowledge, task knowledge and inference knowledge [12]. The templates provide us with a generic structure that is domain independent and can function as a core ontology against which we can map the individual agent ontologies.
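To make this structure concrete, the following minimal sketch (in Python) shows how an agent's mental model could be captured against such a template and compared with another agent's model. The class, field names and example concepts are our own illustrative assumptions; they are not part of the CommonKADS specification or of the tool studied later.

from dataclasses import dataclass, field


@dataclass
class MentalModel:
    """One agent's interpretation of the norms, structured by the three
    CommonKADS knowledge categories (hypothetical representation)."""
    agent: str
    domain_concepts: set = field(default_factory=set)   # e.g. 'risk', 'control measure'
    task_steps: list = field(default_factory=list)      # e.g. 'identify risks', 'score risks'
    inferences: set = field(default_factory=set)        # e.g. 'match', 'evaluate'


def compare_domains(a: MentalModel, b: MentalModel) -> dict:
    """Compare two mental models on their domain concepts (arrow 1 in Fig. 1)."""
    return {
        "shared": a.domain_concepts & b.domain_concepts,
        "only_" + a.agent: a.domain_concepts - b.domain_concepts,
        "only_" + b.agent: b.domain_concepts - a.domain_concepts,
    }


# Illustrative usage with invented concepts:
customs = MentalModel("customs", domain_concepts={"risk", "control measure", "implementation"})
consultant = MentalModel("consultant", domain_concepts={"risk", "control measure", "score"})
print(compare_domains(customs, consultant))

The point of the sketch is only that the shared template fixes the categories in which the two models are compared; the actual models are of course much richer.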
Since our research is concerned with implementing norms in practice, we do have access to instances of the mental model concepts. We can therefore use instance-based methods [13] to discover mappings between ontology concepts (a small sketch of such an instance-based comparison is given below, after the case study has been introduced). Furthermore, the norms themselves are a source of domain knowledge that can be used to make the meaning of nodes explicit [3] and easier to compare. We combine these different techniques and knowledge sources to make a comparison of the ontologies possible. To promote the merging of the ontologies into semantically interoperable ontologies, a final step is to identify the key differences. With the differences made explicit, the agents become aware of each other's mental models, which can in turn help them to discuss and overcome the differences more effectively.

Combining these issues, we come to a three-step approach to analyze and compare the mental models of agents. Step 1 is to develop generic domain, task and inference models based on knowledge templates from CommonKADS [12]. These generic models are used as a starting point for constructing the agent-specific mental models. Step 2 is to use the generic models to externalize, analyze and compare the individual agents' mental model constructs. Step 3 is to build a conceptual model that presents the encountered differences and similarities of the mental models of the agents. This model makes the differences in mental models transparent, which makes it easier to overcome the heterogeneity and to adjust the models accordingly. The following section describes the application of this approach to a case study.

3 Case study: AEO self-assessment of a petrochemical company

We use the approach described in the previous section to analyze and compare the mental models of the stakeholders involved in the AEO self-assessment of a petrochemical company (PCC). The self-assessment is part of the application procedure for companies to qualify for AEO. To qualify for AEO status, a company must assess itself on a number of criteria, which are described in the Community Customs Code and in [8] [9]. The company reports its findings to customs, who then determine the quality of the self-assessment and whether an AEO certificate can be granted.

In its self-assessment, PCC used a decision support tool, the 'AEO Digiscan', developed by the tax advisory unit of Deloitte. The AEO Digiscan is an online tool that works as a classic expert system and is also based on [8] [9]. Deloitte experts contributed to the development of the AEO Digiscan by specifying the guidelines and turning them into clear questions.

In the application procedure for AEO, a traditionally public task (AEO assessment) is partly delegated to a private party (a company). The private party therefore needs insight into the mental model of the public party (the customs authority) to perform the task according to its standards. Customs, on the other hand, are interested in the mental model of the company, because the legislation is new and customs need to learn from the best practices of early AEO applicants. Since PCC used the AEO Digiscan, we can view this as an adoption of Deloitte's mental model to perform the self-assessment. In this paper we compare Deloitte's interpretation of the self-assessment task, embedded in the AEO Digiscan, with the interpretation of experts of the Dutch Tax and Customs Administration (Dutch TCA).
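As an illustration of the instance-based mapping step mentioned in Section 2 (cf. [13]), the sketch below proposes a match between two concepts when they classify largely the same instances. The concept names, example instances and similarity threshold are invented for illustration; they are not taken from the actual Deloitte or Dutch TCA models.

def jaccard(instances_a: set, instances_b: set) -> float:
    """Similarity of two concepts, measured on the instances they share."""
    union = instances_a | instances_b
    return len(instances_a & instances_b) / len(union) if union else 0.0


def propose_mappings(ontology_a: dict, ontology_b: dict, threshold: float = 0.6) -> list:
    """Return candidate concept pairs whose instance sets overlap sufficiently."""
    candidates = []
    for concept_a, inst_a in ontology_a.items():
        for concept_b, inst_b in ontology_b.items():
            score = jaccard(inst_a, inst_b)
            if score >= threshold:
                candidates.append((concept_a, concept_b, round(score, 2)))
    return sorted(candidates, key=lambda c: -c[2])


# Invented example: both parties label overlapping sets of concrete threats.
tca_view = {"security risk": {"theft", "smuggling", "unauthorised access"}}
deloitte_view = {"safety and security threat": {"theft", "smuggling", "fire"}}
print(propose_mappings(tca_view, deloitte_view, threshold=0.4))

Which degree of overlap counts as 'sufficient' is itself a modelling choice; in practice the proposed candidates would still be reviewed by the experts.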
3.1 Approach

For data collection we used the following methods: document analysis and semi-structured interviews [6] [18]. We studied internal and public documents from both Dutch TCA and Deloitte on AEO certification and self-assessment. To elicit detailed expert knowledge in the interviews, we showed the Dutch TCA experts the AEO application of PCC, which had been prepared with the Deloitte AEO Digiscan, and asked them how they would have assessed this company (had there been no AEO self-assessment) and whether they could point out issues of interest. We asked the Deloitte experts to explain the reasoning performed by the tool, using examples from PCC's AEO application in the AEO Digiscan.

To analyze and structure the interview results, we used an adapted version of the knowledge model templates for the assessment task from the CommonKADS methodology [12]. As the self-assessment task is concerned with identifying risks and with implementing and evaluating control measures to mitigate those risks, we consider the IT risk management model of NIST [15] an appropriate starting point for a domain model. Furthermore, we used [8] [9] as general background knowledge of the AEO self-assessment domain.

3.2 Findings

We found that the interpretations of Deloitte and Dutch TCA of the task and domain model for AEO self-assessment overlap. The overlap was especially visible in the domain models, which both include general risk analysis concepts and concepts based on topics of the AEO guidelines [8]. However, important aspects of the self-assessment are interpreted differently. In general, we found that the approach offered by the AEO Digiscan is more structured and requires less expertise on AEO legislation than the Dutch TCA approach published on their website. However, both the task and inference models showed that the scope of the AEO Digiscan is limited: it focuses on risk assessment (identifying risks and measures), while Dutch TCA's risk management approach covers both risk assessment and the implementation of measures. We also observed a difference in scoring: Dutch TCA scores the implementation of control measures, whereas Deloitte uses risk-based scoring. These differences correspond with the views that Dutch TCA and Deloitte have of AEO certification. Dutch TCA sees the AEO self-assessment as a means to judge the quality of a company's internal control system and to create awareness of potential risks. Deloitte, in contrast, aims to efficiently provide companies with an indication of their position with respect to achieving AEO status. This difference became explicit when comparing the inference models of both parties.

These findings are important aspects that should have been addressed during the early requirements phase. They greatly influence the kind of tool that is developed and the role the tool will fulfill within the task of 'self-assessment', and they lead to different system requirements.

4 Conclusion

Charting the differences between the mental models of stakeholders is an important element of developing a complex decision support system, because it helps to identify differences in expected functionality and in the way the system is expected to be used. Differences in task and domain models will lead to different system requirements; consider for example the difference in scoring, illustrated in the sketch below.
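The following sketch contrasts the two scoring views in simplified form. It is a hypothetical illustration of the difference found in the case, not the scoring logic of the AEO Digiscan or of Dutch TCA; the data structure and the two functions are our own assumptions.

def risk_based_score(findings: list) -> float:
    """Risk-based view (assumed): the share of identified risks for which a
    control measure has been defined, as an indication of AEO readiness."""
    if not findings:
        return 1.0
    covered = sum(1 for f in findings if f["measure_defined"])
    return covered / len(findings)


def implementation_score(findings: list) -> float:
    """Implementation view (assumed): only measures that are demonstrably
    implemented count towards being 'in control'."""
    if not findings:
        return 1.0
    implemented = sum(1 for f in findings if f["measure_defined"] and f["implemented"])
    return implemented / len(findings)


# Invented self-assessment data for a single criterion.
findings = [
    {"risk": "unauthorised access to premises", "measure_defined": True, "implemented": True},
    {"risk": "tampering with cargo", "measure_defined": True, "implemented": False},
]
print(risk_based_score(findings))      # 1.0 -> looks ready for AEO status
print(implementation_score(findings))  # 0.5 -> not yet demonstrably 'in control'

The same self-assessment thus yields different conclusions depending on which view is embedded in the tool, which is exactly the kind of requirement-level choice the mental model comparison makes visible.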
Where most approaches would only identify the difference in scoring, the mental models help to unravel the underlying issues that contributed to this difference, such as the differences in scope and in the perception of the task. Such mental model mapping should therefore be part of the early requirements engineering phase [5]. Note that expectations may be too complex to implement. It is easier to design and implement an expert system about compliance (rule-based) than about risk assessment in context (principle-based). Once such expectation gaps have been identified, it is important that the stakeholder who commissions the system makes clear choices about its intended functionality and communicates these to the other stakeholders.

An interesting side effect of our research is that the stakeholders themselves have now realized what their respective positions are. The differences are not insurmountable. In fact, some Deloitte experts have expressed a willingness to adapt their tool, and especially the risk-based scoring model, to address the concerns of Dutch TCA about the implementation of control measures.

Acknowledgments. This research is part of the EU project ITAIDE. We are grateful for the open and insightful discussions with representatives of Dutch TCA, Deloitte and PCC.

References

1. Baida, Z., Rukanova, B., Liu, J., Tan, Y.: Preserving Control in Trade Procedure Redesign - The Beer Living Lab. Electronic Markets 18(1), pp. 53-64 (2008)
2. Boella, G., van der Torre, L.: Norm negotiation in multiagent systems. International Journal of Cooperative Information Systems 16(2), pp. 97-122 (2007)
3. Bouquet, P., Serafini, L., Zanobini, S.: Semantic Coordination: A New Approach and an Application. In: Proc. ISWC 2003, LNCS, vol. 2870, pp. 130-145. Springer (2003)
4. Cannon-Bowers, J.A., Salas, E.: Reflections on Shared Cognition. Journal of Organizational Behavior 22(2), pp. 195-202 (2001)
5. Castro, J., Kolp, M., Mylopoulos, J.: Towards requirements-driven information systems engineering: The Tropos project. Information Systems 27, pp. 365-389 (2002)
6. Eisenhardt, K.M.: Building theories from case study research. The Academy of Management Review 14(4), pp. 532-550 (1989)
7. van Engers, T.M., Kordelaar, P.J.M., den Hartog, J., Glassée, E.: POWER: Programme for Ontology based Working Environment for modeling and use of Regulations and legislation. In: Proceedings of the 11th Workshop on Database and Expert Systems Applications (IEEE), Greenwich, London, pp. 327-334 (2000)
8. European Commission: AEO Guidelines. TAXUD/2006/1450 (2007)
9. European Commission: The AEO Compact Model. TAXUD/2006/1452 (2006)
10. Kalfoglou, Y., Schorlemmer, M.: Formal support for representing and automating semantic interoperability. In: ESWS 2004, pp. 45-60 (2004)
11. Mohammed, S., Dumville, B.C.: Team Mental Models in a Team Knowledge Framework: Expanding Theory and Measurement Across Disciplinary Boundaries. Journal of Organizational Behavior 22, pp. 89-106 (2001)
12. Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., Wielinga, B.: Knowledge Engineering and Management. MIT Press, Cambridge (2000)
13. Schopman, B.A.C., Wang, S., Schlobach, S.: Deriving Concept Mappings through Instance Mappings. In: Domingue, J., Anutariya, C. (eds.) ASWC 2008, LNCS, vol. 5367, pp. 122-136. Springer (2008)
14. Sowa, J.F.: Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole, Pacific Grove, CA (2000)
15. Stoneburner, G., Goguen, A., Feringa, A.: Risk Management Guide for Information Technology Systems. NIST Special Publication 800-30 (2002)
16. Wooldridge, M.: An Introduction to Multiagent Systems. John Wiley & Sons, Chichester (2002)
17. Yu, E.S.K.: Towards Modelling and Reasoning Support for Early-Phase Requirements Engineering. In: Proceedings of the Third IEEE International Symposium on Requirements Engineering, pp. 226-235 (1997)
18. Yin, R.K.: Case Study Research: Design and Methods. Sage Publications, London (2003)