Collaborative Modeling: Towards a Meta-model for Analysis and Evaluation*

D. (Denis) Ssebuggwawo 1, S.J.B.A. (Stijn) Hoppenbrouwers 1, and H.A. (Erik) Proper 1,2

1 Institute of Computing and Information Sciences, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands, EU. D.Ssebuggwawo@science.ru.nl, stijnh@cs.ru.nl
2 Public Research Centre – Henri Tudor, Luxembourg, EU. erik.proper@tudor.lu

* This paper first appeared as a Working Paper on Information Systems in Sprouts. http://sprouts.aisnet.org/10-36/

Abstract. In this paper we discuss a meta-model for the analysis and evaluation of collaborative modeling sessions. In the first part of the meta-model, we use an analysis framework which reveals a triad of rules, interactions and models. This framework, which is central in driving the modeling process, helps us look inside the modeling process with the aim of understanding it better. The second part of the meta-model is based on an evaluation framework using a multi-criteria decision analysis (MCDA) method. Central to this framework is how modelers' quality priorities and preferences can, through a group decision-making and negotiation process, be traced back to the interactions and rules in the analysis framework.

Key words: Collaborative Modeling, Modeling Process Quality, Modeling Process Analysis, Modeling Process Evaluation, Group Support Tools

1 Introduction

A number of studies have, over the years, looked at collaborative modeling [1,2,3]. There have also been attempts to understand the modeling process [4,5]. Such modeling is driven by participants' communication. Human communication [6], in collaborative modeling, involves argumentation, negotiation and decision making. Often, participants need to agree, through negotiation and decision making, on what constitutes, for example, "quality" for the different modeling artifacts and how such quality should be assessed. However, how to assess the quality of the collaborative modeling process, especially with respect to the modeling artifacts, remains a largely unexplored area.

The current paper develops a meta-model which can be used for both the analysis and evaluation of a collaborative modeling process and the relation between events in the process and the resulting artifacts. The meta-model links the modeling artifacts and the evaluation framework to the rules, interactions and models (RIM) framework [7] through the interactions, which are governed by rules. The interactions, rules and models are a result of the communicative process, mainly through modelers' negotiation.

Negotiation plays a key role in collaborative modeling. It is through negotiation that modelers reach agreement and possibly consensus. In this paper we limit our discussion to negotiation dialogues from argumentation theory. Negotiation dialogue has been widely studied, see for example [8,9,10]. Its practical applications include multi-agent systems (MAS) [11,12,13,14], with wide applications in electronic commerce [15,16,17]. Negotiation dialogues start from a position of conflict, and the goal is to establish some consensus or compromise for all the parties involved. Usually, participants have conflicting objectives, interests, preferences and priorities. Through the process of negotiation, they reach a compromise position that everyone is comfortable with. This is what happens in a multi-actor (collaborative and interactive) modeling process.
Modelers have conflicting views, priorities and preferences, and they engage in an argumentation process that involves propositions, (dis)agreements, acceptances and rejections, supports and withdrawals, etc., to reach a compromise. It should be noted that, although there are a number of factors that one may be interested in when analyzing and evaluating the modeling process, and which may in fact influence its quality, e.g., power struggles, leadership and the unspoken message or body language (see, for example, [18]), our interest at the moment is in what we call the "drivers" of the modeling process. Rules and/or goals, interactions, and models are hypothesized to be drivers of the modeling process. In this paper we concentrate on only these.

2 Modeling Process Analysis: The RIM Framework

Stakeholders in a collaborative modeling process interact and communicate their ideas and opinions to other members through the communication process. Three key items concerning this communication are the rules, the interactions and the models. The rules, interactions and models (RIM) framework is based on these items and helps us look into the collaborative modeling process. This framework is depicted in Fig. 1; details of the RIM framework are found in [7]. The RIM framework is a three-tier framework that examines the communicative acts (interactions) in a modeling session, the rules/goals set, and the models produced as a result of the interaction and collaboration. The different collaborative modeling players work under a set of rules and goals. The rules/goals, interactions and models are all time-stamped to help us track and identify the interplay between any pair. The interplay of rules, interactions and models is explained in Table 1.

Fig. 1. A framework for analyzing interactions, rules and models.

Table 1. RIM framework features

Path | Interplay
IM-MI | The interactions lead to the generation of models, and generated (intermediate) models drive further interaction.
RM-MR | Some rules/goals of modeling apply to (intermediate) models, and these models may lead to the setting of new rules/goals.
RI-IR | Rules guide and restrict interactions, and some interactions may change the rules of play.

2.1 Interaction Analysis: The Structure

In order to analyze the interactive conversations and determine the structure of the speech-acts that result from them, we need to apply a discourse analysis or conversation analysis technique. There are a number of methods which can be used, notably the speech-act theory of Searle [19]. Searle's aim in his "Theory of Speech Acts" [19] was to show that "speaking a language is performing acts (...) in accordance with certain rules for the use of the linguistic elements", and to formulate these rules. He argues that the minimal unit of an utterance is not a word or sentence but a "speech act". Two types of speech acts were identified in his theory: the propositional act, which is the act of uttering words, and the illocutionary act, which is a complete speech act. An illocutionary act has two components: propositional content, which describes what an utterance is about, and illocutionary force, which describes the way the utterance is uttered. In addition, each illocutionary act has an illocutionary point which characterizes that particular type of speech act. Searle classifies utterances according to the illocutionary point and proposes five classes of speech acts, shown in Table 2.
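As a toy illustration (ours; the coding of the sample utterance is only indicative), each utterance can be tagged with one of the five illocutionary types listed in Table 2:

from enum import Enum

class IllocutionaryType(Enum):
    ASSERTIVE = "states how things are, e.g. a report or statement"
    DIRECTIVE = "tries to get the hearer to act, e.g. a request"
    COMMISSIVE = "commits the speaker to an action, e.g. a promise"
    EXPRESSIVE = "expresses the speaker's attitude, e.g. an apology"
    DECLARATIVE = "changes the state of affairs by being uttered"

# A single utterance from the session excerpts of Section 5, coded in isolation;
# as argued next, such isolated coding is precisely what a conversation-level
# analysis has to go beyond.
utterance = ("M2", "Use blocks to indicate activities.", IllocutionaryType.DIRECTIVE)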
However, as argued in [20], speech-acts are individual statements in the whole conversation and cannot be analyzed outside the whole conversation in which they occur. The language-action perspective (LAP) [21] is, therefore, a candidate for analysing the whole conversation in which the speech-acts are just components. We base our analysis of the communicative process on LAP to identify the conversational interactions that occur in a collaborative modeling process.

Table 2. Illocutionary speech act types

Speech-Act Type | Explanation
Assertives | represent facts of the world of utterance or common experiences, e.g., reports or statements
Directives | represent the speaker's attempt to get the hearer to perform the action indicated in the propositional content, e.g., requests
Commissives | represent the speaker's intention to perform the action indicated in the propositional content, e.g., promises
Expressives | say something about the speaker's feelings or psychological attitudes regarding the state of affairs represented by the propositional content, e.g., apologies
Declaratives | change the world through the utterance of a speech act

Fig. 2 shows the structure of the interactions. We use the Object Role Modeling (ORM) method [22] to represent the analysis and evaluation concepts in this paper. Table 3 shows the elements of the interaction component.

Fig. 2. Elements of an interaction (ORM diagram relating Interaction, InteractionNr, Topic, TopicNr, Time, Actor, SpeechAct, Category, ModelProposition, Rule, GroupNegotiation and GroupDecisionMaking).

Table 3. Explanation for elements of an interaction

Element | Explanation
InteractionNr | Unique number that refers to an interaction.
Time | Time at which an interaction is (de-)activated.
Topic | Subject under discussion in an interaction, with a topic number.
Actor | A participant in an interaction.
Speech-act | An illocutionary act from the interaction; it has a category.
ModelProposition | Model formation proposition (implicitly/explicitly agreed to).
Rule | Guideline(s) or convention(s) that direct the interactions.

2.2 Rule Analysis: The Structure

Rules govern the interactions and the production of the models. They guide collaborative modelers during the modeling process and can be set for (before) or in (during) the modeling process. They link the product of the conversations, the model, to the conversations themselves, and they are intended to guarantee both process quality and model quality. Rules are either explicitly or implicitly stated. The elements of a rule are given in Fig. 3, while Table 4 explains these elements.

Fig. 3. Elements of a rule (ORM diagram relating Rule, Content, Time, Interaction, ModelProposition and Goal, with explicit and implicit rules distinguished).

Table 4. Explanation for elements of a rule

Element | Explanation
Content | Conversational content in which a rule is (de-)activated.
Time | Time at which a rule is (de-)activated.
Interaction | Conversations from which propositions are generated.
ModelProposition | Model formation proposition (implicitly/explicitly agreed to).
Goal | A rule that sets the state to strive for.
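To make the structures of Figs. 2 and 3 and Tables 3 and 4 easier to relate to a coding or tool-support effort, the following sketch (ours; the attribute names mirror the ORM fact types, while the Python types and defaults are our own assumptions) records interactions and rules as plain data structures.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SpeechAct:
    actor: str            # participant who utters the act (Table 3: Actor)
    category: str         # illocutionary category, cf. Table 2
    content: str          # the utterance itself

@dataclass
class Interaction:
    interaction_nr: int                    # unique number of the interaction
    name: str                              # e.g. 'NEGOTIATION', 'DELIBERATION'
    topic_nr: str                          # e.g. '2a'
    topic: str                             # e.g. 'SET CONTENT', 'SET GRAMMAR GOAL'
    begins_at: str                         # time stamps, h:m:s
    ends_at: str
    speech_acts: List[SpeechAct] = field(default_factory=list)
    responds_to: Optional[int] = None      # interaction it responds to, if any

@dataclass
class Rule:
    name: str                              # e.g. 'GRAMMAR RULE', 'CREATION GOAL'
    explicit: bool                         # explicitly stated or implicit
    activation_content: str = ""           # conversational content that activates it
    deactivation_content: str = ""         # content that de-activates it
    activated_at: str = ""                 # (de-)activation times
    deactivated_at: str = ""
    activated_in: Optional[Interaction] = None
    deactivated_in: Optional[Interaction] = None

Populating such structures with the coded meta-data of Section 5 makes the interplay paths of Table 1 directly traceable between any time-stamped pair.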
2.3 Model Analysis: The Structure

Models (intermediate or final) are lists of propositions up to time t, i.e., conversational statements commonly agreed upon and shared by all the modelers. These model propositions are subject to selection criteria in order to determine which ones make it into the group (shared) model. In collaborative modeling a model proposition is either explicitly agreed with or implicitly not disagreed with. The structure of a model proposition component is shown in Fig. 4, while its elements are explained in Table 5.

Fig. 4. Elements of a model proposition (ORM diagram relating ModelProposition, Interaction, Time, Rule and SelectionCriteria).

Table 5. Explanation for elements of a model proposition

Element | Explanation
Rule | Guidelines that direct the selection of a model proposition.
Time | Time at which a model proposition is (de-)activated.
SelectionCriteria | A set of evaluation criteria used to select a model proposition.
Interaction | Interaction from which a model proposition is generated.

3 Modeling Process Evaluation: An MCDA Framework

In collaborative modeling a number of artifacts are used in, and produced during, the modeling process. These include the modeling language, the methods or approaches used to solve the problem, the intermediate and end products produced, and the medium or support tool that may be used to aid the collaboration; see, for example, [23]. The priorities of the individual decision makers need to be aggregated, so as to reach agreement and consensus on what should be the group's position as far as modeling process quality is concerned. Reaching agreement requires group decision making and negotiation. Group decision making and negotiation are special types of interactions during the modeling process. This is what provides a link between the analysis (RIM) framework and the evaluation (MCDA) approach. In Section 4 it will be shown how this link is exploited to obtain a unified framework for analysis and evaluation. In the evaluation, we use a multi-criteria decision analysis (MCDA) method to evaluate the modeling artifacts. We specifically use the single synthesizing (weighting) criterion preference approach, with the Analytic Hierarchy Process (AHP) [24].

Fig. 5. Elements of a modeling artifact (ORM diagram relating ModelingArtifact, QualityCriteria, QualityScore (individual and group), PriorityValue, Quality, MCDA (of type weighting, outranking or interactive, e.g., AHP, MAUT/MAVT, ELECTRE, PROMETHEE, MOMP), Interaction, Rule, GroupNegotiation and GroupDecisionMaking).

Table 6. Explanation for elements of a modeling artifact

Element | Explanation
Quality | Degree of excellence or deficiency-free state.
QualityCriteria | A modeling artifact feature used to measure quality.
QualityScore | A value given to a criterion as a measure of its quality. It may be an individual or a group score.
PriorityValue | Aggregated quality scores used to determine priority values.
Interaction | Group negotiation/decision-making to agree on quality scores.
Rule | A set of guidelines that direct the interactions.
MCDA | A multi-criteria decision analysis approach used for the evaluation.
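As a concrete reading of Fig. 5 and Table 6, the following sketch (ours; it assumes the geometric mean as the group-aggregation rule and a priority-weighted sum as the synthesis step, both common AHP conventions but not prescribed as such in this paper) shows how individual quality scores, group scores and priorities can hang together.

from dataclasses import dataclass, field
from math import prod
from typing import Dict, List

@dataclass
class QualityCriterion:
    name: str                                        # e.g. 'Efficiency'
    individual_scores: List[float] = field(default_factory=list)

    def group_score(self) -> float:
        # Geometric mean of the individual judgements (assumed aggregation rule;
        # requires at least one score).
        return prod(self.individual_scores) ** (1 / len(self.individual_scores))

@dataclass
class ModelingArtifact:
    name: str                                        # language, procedure, model or support tool
    criteria: Dict[str, QualityCriterion] = field(default_factory=dict)
    priorities: Dict[str, float] = field(default_factory=dict)   # from the MCDA (e.g. AHP) step

    def overall_quality(self) -> float:
        # One plausible synthesis: priority-weighted sum of the group scores.
        return sum(self.priorities[c] * crit.group_score()
                   for c, crit in self.criteria.items())

The negotiation and decision-making interactions are what fill the individual scores and reconcile the priorities, which is exactly the link between the two frameworks exploited in Section 4.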
The structure of the evaluated modeling artifact component, within the MCDA evaluation framework, is shown in Fig. 5, and the different concepts are explained in Table 6. One important observation about the modeling artifact and the evaluation framework is the link provided by the evaluated modeling artifact to the RIM framework through the interactions, which are governed by rules. This is an important observation since it helps us to unify the two frameworks.

4 The Analysis and Evaluation Meta-model

In this section we combine the components to form a unified model for the integrated analysis and evaluation (of process and results) of collaborative modeling. The aim of having a unified framework is twofold: 1) to trace the flaws in the modeling process, found using the evaluation framework, back to the analysis framework, and 2) to automate the analysis and evaluation by having a support tool which can be used to both analyze and evaluate the modeling process. Although the analysis and evaluation frameworks can stand on their own, having tool support that helps modelers analyze and evaluate the process and trace flaws in the entire modeling process is more attractive than the individual frameworks. The components of the integrated frameworks are linked together in the meta-model shown in Fig. 6. The novelty of the meta-model is that it combines the analysis and evaluation frameworks, i.e., the RIM framework and the MCDA framework. This is easily visible in the meta-model, where the triad of rules (R), interactions (I) and models (M) in Fig. 1 is depicted through the rule, interaction and model proposition entities.

Fig. 6. An integrated meta-model for collaborative modeling analysis and evaluation (ORM diagram combining the interaction, rule and model proposition entities of Figs. 2, 3 and 4 with the modeling artifact, quality and MCDA entities of Fig. 5).

5 Meta-Model in Use: Illustrative Examples

To demonstrate the theoretical importance and practical significance of the model we provide below some illustrative examples. The examples are drawn from recorded communication/conversations that took place during a modeling session.

5.1 Application of the Meta-Model: The Analysis

Example 1. Interaction analysis in Fig. 2 is based on the following excerpt. Table 7 shows the extracted elements of an interaction.

Time  Actor  Speech Act
02:00 M1  So, where does Ordering start?
02:03 M2  First we have to decide who takes part in it. So we can set that on top of the diagram?
02:10 M1  There are numbers, so that's easy, so probably the purchasing officer is involved?
02:18 M2  Eh ... I guess so.
02:21 M1  So he needs ordering ... one second ... "draws 2".

Table 7. Extracted elements of an interaction from the coded meta-data
Int. # | Int. Name | Top. # | Top. Name | Speech Act Type/Category | Rsp. to | Time | Actor
1 | INFORMATION SEEKING | 1 | SET CONTENT | QUESTION [Where does Ordering start?] | - | 02:00 | M1
2 | DECISION MAKING | 2a | SET CONTENT | PROPOSITION [First we have to decide who takes part in Ordering] | - | 02:03 | M2
  |  | 2b | SET GRAMMAR GOAL | QUESTION [Can we set who takes part in Ordering on top of the diagram?] | - |  |
3 | INQUIRY | 3a | SET GRAMMAR GOAL | PROPOSITION-QUESTION [There are numbers, so that's easy, so probably the purchasing officer is involved?] | 2b | 02:10 | M1
  |  | 3b | SET CONTENT | PROPOSITION [Purchasing Officer is involved in Ordering] | 2a |  |
4 | NEGOTIATION | 4 | SET CONTENT | AGREEMENT WITH [Eh ... I guess so] | 3b | 02:18 | M2
5 | DELIBERATION | 5 | SET CONTENT | DRAWING [So he needs ordering ... one second ... "draws 2", i.e., number 2 (purchasing officer) on top of the first swim lane] | - | 02:21 | M1
KEY: Int.: Interaction; Top.: Topic; Rsp.: Response.

Example 2. Rule analysis for Fig. 3 is based on the following excerpt of modeling session conversations. Extracted elements of a rule from the coded meta-data are given in Table 8.

Time  Actor  Speech Act
01:25 M1  Let's create 5 swim lane diagrams.
01:30 M2  Yes, isn't that what I just proposed?
08:43 M1  Sequences are started with the START symbol ...
08:45 M2  Yes ...
08:48 M2  Use blocks to indicate activities.
15:18 M1  So no decision diamonds in UML activity diagrams?
15:19 M2  No; well; maybe.

Table 8. Extracted elements of a rule from the coded meta-data

Rule | Int. Name [A] | Content [A] | Time [A] | Int. Name [D] | Content [D] | Time [D] | M.P.
VALIDATION GOAL | DELIBERATION | All participants should agree on the model. [Proposed and activated in the Assignment.] | All t | DELIBERATION | De-activated when all or the majority have agreed on the model, i.e. reached consensus. | End t | -
CREATION GOAL | PERSUASION PROPOSITION | Let's create 5 swim lane diagrams - [14] | 01:25 | PERSUASION ARGUMENT FOR 14 | Yes, isn't that what I just proposed? - [15] | 01:30 | A.C. [14]
GRAMMAR RULE | INFORMATION SEEKING | Sequences are started with the START symbol ... - [148] | 08:43 | INFORMATION SEEKING AGREEMENT WITH 148 / CLARIFICATION | Yes ... - [149] | 08:45 | A.C. [148]
GRAMMAR GOAL | NEGOTIATION PROPOSITION | Use blocks to indicate activities - [151] | 08:48 | - | - | - | A.C. [151]
GRAMMAR GOAL | INQUIRY QUESTION | So no decision diamonds in UML activity diagrams? - [248] | 15:18 | INQUIRY ANSWER 248 | No; well; maybe - [249] | 15:19 | -
KEY: Int.: Interaction; A.C.: Activation Content; M.P.: Model Proposition; [A]/[D]: Activated/De-activated.

Some explanation is in order for some of the concepts shown in Tables 7 and 8. The categories for coding the modeling conversations, i.e., the interaction names in both tables, correspond to the dialogue types of Walton and Krabbe [25], whereas the topic names and rule categories in Table 8 are explained in [7]. The validation goal is an example of an explicitly stated rule: it is activated at the start of the modeling session and remains so until it is de-activated at the end of the modeling session. The others are all implicitly stated and are (de-)activated during the interactions, as shown by the (de-)activation content. It should be noted that we use the terms "activation" and "de-activation" in the sense that modeler M1 starts the argument and modeler M2 concludes it by reaching a final agreement. For each we identify, respectively, the interaction, content and time in (by, at) which the argument was started and concluded.

Example 3. Model proposition analysis in Fig. 4 is based on the following excerpt.
Extracted elements of a model proposition from the coded meta-data are given in Table 9.

Time  Actor  Speech Act
14:41 M1  If there is no place, he can't order or there is no availability.
14:45 M2  Yeah, true...
14:50 M2  You cannot do decision diamonds in UML activity diagrams.
14:57 M2  You can only have splits and joins of some sort, not the decisions as such.
16:46 M1  We can also say that if the form isn't filled in well then it is rejected but...
16:55 M2  Yeah ...
17:07 M1  No-route and terminal point from "accept" in swim lane 7, with "no order" ...
17:14 M2  OK..., Yes

Table 9. Extracted elements of a model proposition from the coded meta-data

Model Proposition | Time [Act.] | Time [De-act.] | Rule Name | Int. Name | Selection Criterion
If there is no place, he cannot order or there is no availability. / Yeah, true... | 14:41 | 14:45 | CREATION | NEGOTIATION | Explicitly agreed with.
You cannot do decision diamonds in UML activity diagrams. / You can only have splits and joins of some sort, not the decisions as such. | 14:50 | 14:57 | GRAMMAR | PERSUASION | Not explicitly disagreed with.
We can also say that if the form isn't filled in well then it is rejected but... / Yeah ... | 16:46 | 16:55 | CREATION | NEGOTIATION | Explicitly agreed with.
No-route and terminal point from "accept" in swim lane 7, with "no order" ... / OK..., Yes | 17:07 | 17:14 | GRAMMAR | NEGOTIATION | Explicitly agreed with.
KEY: Act.: Activated; De-act.: De-activated; Int.: Interaction.

5.2 Application of the Meta-Model: The Evaluation

Example 4. Evaluation analysis in Fig. 5 is based on an evaluation instrument, part of which is shown in Fig. 7. This instrument is used, first by individual modelers and then by a team of modelers, to evaluate the modeling artifacts (the modeling language, the modeling procedure, the modeling products, i.e. the models, and the support tool). The instrument shows, for example, how a modeling procedure is evaluated using its selected quality criteria. These are assigned scores using the fundamental scale [24]. The quality criteria (quality dimensions of the modeling artifacts) are defined in [23], and the process of assigning these quality criteria scores is explained therein. Upon reaching consensus through negotiation and decision-making processes, modelers use these scores in the computation of the priorities and the overall quality of the modeling artifacts, as shown in Table 10.

Fig. 7. Evaluating a modeling artifact in collaborative modeling (screenshot of the AHP instrument: pairwise comparison, on the 1-9 scale, of the Modeling Procedure criteria Efficiency, Effectiveness, Satisfaction, and Commitment & Shared Understanding; the recorded judgements are Efficiency vs. Effectiveness 2.0, vs. Satisfaction 6.0, vs. Commitment 3.0; Effectiveness vs. Satisfaction 5.0, vs. Commitment 6.0; Satisfaction vs. Commitment 1.0; inconsistency 0.07).

Table 10. Elements of a modeling artifact

Modeling Artifact | Quality Criterion | Score | Priority Value | Overall Quality | MCDA Name (Type) | Int. Name | Rule Name
Modeling Procedure | Efficiency | 6 | 0.464 | 0.359 | AHP (Weighting) | NEGOTIATION / DECISION MAKING | VALIDATION GOALS / CREATION GOALS
 | Effectiveness | 5 | 0.368 |  |  |  |
 | Satisfaction | 1 | 0.077 |  |  |  |
 | Commitment & Shared Understanding | 1 | 0.092 |  |  |  |
KEY: Int.: Interaction.

5.3 Discussion

The examples given illustrate how the analysis and evaluation frameworks can be used to, respectively, analyze and evaluate the modeling sessions.
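To make the numbers in Fig. 7 and Table 10 concrete, the following sketch (ours, in Python; the pairwise judgements are those visible in Fig. 7, and the principal-eigenvector and consistency computations follow standard AHP [24] rather than the specific tool used in the session) reproduces priorities and an inconsistency value close to those reported above.

import numpy as np

def ahp_priorities(matrix: np.ndarray, iterations: int = 100):
    """Return the priority vector (principal eigenvector, normalized to sum to 1)
    and the consistency ratio of a reciprocal pairwise-comparison matrix."""
    n = matrix.shape[0]
    w = np.ones(n) / n
    for _ in range(iterations):          # power iteration towards the principal eigenvector
        w = matrix @ w
        w = w / w.sum()
    lambda_max = float((matrix @ w / w).mean())
    ci = (lambda_max - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index for n = 3..5
    return w, ci / ri                    # priorities, consistency ratio

# Pairwise comparisons of the Modeling Procedure criteria, in the order
# Efficiency, Effectiveness, Satisfaction, Commitment & Shared Understanding,
# reconstructed from the upper triangle shown in Fig. 7.
A = np.array([[1.0, 2.0, 6.0, 3.0],
              [1/2, 1.0, 5.0, 6.0],
              [1/6, 1/5, 1.0, 1.0],
              [1/3, 1/6, 1.0, 1.0]])

priorities, cr = ahp_priorities(A)
print(priorities.round(3))   # roughly (0.46, 0.37, 0.08, 0.09), cf. Table 10
print(round(cr, 2))          # roughly 0.07, cf. the "Incon" value in Fig. 7

The group quality scores in Table 10 would then be weighted by these priorities to obtain the overall quality figure; the exact synthesis performed by the tool is not reproduced here.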
The interactions provide a driving force for the modeling process through the argumentations, negotiations, etc., while the rules and/or goals are part and parcel of the structuring of the modeling process, especially when there is no facilitator. It has been observed in [7] that modelers structure the modeling process into pro-active rule and goal setting procedures and ad-hoc, reactive rule and goal setting procedures. With this kind of structuring, it is possible to see how the rules are set for, and set in, the modeling session. Analysing the data from such a well-structured process helps us to pinpoint the types and categories of these rules and goals and the interaction types, and it enables us to see how the modeling session unfolds and progresses and how models are created from (implicitly or explicitly) agreed-upon statements. Identifying the drivers of the collaborative process in terms of rules, interactions and models is likely to enable the development of guidelines that can be used in the development of an automated support tool for the analysis.

Figure 7 and Table 10 show, respectively, how the evaluation of the modeling process and the associated artifacts can be done and how the modelers' priorities can be aggregated. There are a number of modeling artifacts that are used in and developed during a collaborative modeling session. These include the modeling language, the modeling procedure, the models, and the support tool or medium. Analyzing what takes place during the modeling process, and what drives it, will not be complete unless we assess and evaluate the quality of all these modeling artifacts. Evaluation is quite important since it gives assurance about the quality of these artifacts, and through the meta-model we can trace the flaws in the modeling process back to the analysis. One key observation is that the modeling artifacts' quality dimensions can be assigned quality scores during a negotiation and decision-making (interactive) process using a multi-criteria decision analysis technique, e.g., AHP [24], where the modelers' different priorities and preferences are reconciled and aggregated, and the overall quality is finally obtained by synthesizing the priorities. Rules and/or goals play a role since they direct and guide the modeling process.

6 Conclusion and Future Research

The contribution of the paper is twofold. First, it shows how the collaborative modeling process can be analyzed through the RIM framework and how it can be evaluated through the MCDA evaluation framework. Second, it develops a meta-model which unifies the analysis framework and the evaluation framework. To test the soundness of the meta-model, we provided illustrative examples from real modeling sessions. Though simple in description, these examples bring out well the concepts discussed for the meta-model. One key observation is that the types or names of the identified interactions are similar to those identified by Walton and Krabbe [25,26] in "Argumentation Theory", with the exception of the "eristic" dialogue.

Future Research Direction. For future research, we intend to apply the meta-model to further modeling sessions, especially empirical tests with experts in industry, to further test the theoretical significance and practical relevance and importance of the meta-model.
More specifically, we intend to further study and analyze the modeling process using a number of factors other than those concentrated on in this paper, e.g., dialogue games and the argumentation process through negotiation from a number of perspectives, e.g., multi-agent systems (see, for example, [27,28]). We further intend to test our a priori hypothesis about the interdependencies of the modeling artifacts and how the quality of one affects the quality of the other. We hypothesize that the modeling language and the support tool are independent, whereas the modeling products (models) and the modeling procedure are dependent variables in a multi-actor, multi-criteria modeling session. Our intention is to empirically study this interdependency. Establishing this relationship is key in helping develop guidelines for a support tool that automates the analysis and evaluation of the modeling process.

References

1. Gjersvik, R., Krogstie, J., Følstad, A.: Participatory Development of Enterprise Process Models. In: Krogstie, J., Halpin, T., Siau, K. (eds.), Information Modeling Methods and Methodologies, pp. 195–215. IGI Global (2005)
2. Stirna, J., Persson, A.: Ten Years Plus with EKD: Reflections from Using an Enterprise Modeling Method in Practice. In: Proper, H.A., Halpin, T.A., Krogstie, J. (eds.), Proceedings of EMMSAD 2007, held in conjunction with CAiSE'07, pp. 97–106. Tapir Academic Press, Trondheim, Norway (2007)
3. Barjis, J., Kolfschoten, G.L., Verbraeck, A.: Collaborative Enterprise Modeling. In: Proper, E., Harmsen, F., Dietz, J.L.G. (eds.) PRET 2009. LNBIP 28, pp. 50–62. Springer, Heidelberg (2009)
4. Veldhuijzen van Zanten, G., Hoppenbrouwers, S.J.B.A., Proper, H.A.: System Development as a Rational Communicative Process. J. of Systemics, Cybernetics and Informatics 2(4), 47–51 (2004)
5. Rittgen, P.: Negotiating Models. In: Krogstie, J., Opdahl, A.L., Sindre, G. (eds.) CAiSE 2007. LNCS 4495, pp. 561–573. Springer, Heidelberg (2007)
6. Clark, H.H., Brennan, S.E.: Grounding in Communication. In: Resnick, L.B., Levine, J.M., Teasley, S.D. (eds.) Perspectives on Socially Shared Cognition, pp. 127–149. American Psychological Association, Washington (1991)
7. Ssebuggwawo, D., Hoppenbrouwers, S.J.B.A., Proper, H.A.: Interactions, Goals and Rules in a Collaborative Modeling Session. In: Persson, A., Stirna, J. (eds.) PoEM 2009. LNBIP 39, pp. 54–68. Springer, Berlin Heidelberg (2009)
8. Pruitt, D.G.: Negotiation Behaviour. Academic Press, New York (1981)
9. Raiffa, H.: The Art and Science of Negotiation. Harvard University Press, Cambridge, MA (1982)
10. Rosenschein, J., Zlotkin, G.: Designing Conventions for Automated Negotiations among Computers. MIT Press (1994)
11. Amgoud, L., Prade, H.: Generation of Different Types of Arguments in Negotiation. In: Delgrande, J.P., Schaub, T. (eds.), Proceedings of the 10th International Workshop on Non-Monotonic Reasoning, Whistler, Canada, June 2-5, 2004, pp. 10–15 (2004)
12. McBurney, P., van Eijk, R.M., Parsons, S., Amgoud, L.: A Dialogue-game Protocol for Agent Purchase Negotiations. J. of Autonomous Agents and Multi-Agent Systems 7(3), 235–273 (2003)
13. Parsons, S., Jennings, N.R.: Negotiation Through Argumentation: A Preliminary Report. In: Proceedings of the 2nd International Conference on Multiagent Systems, Kyoto, Japan, pp. 267–274 (1996)
14. Parsons, S., Sierra, C., Jennings, N.: Agents that Reason and Negotiate by Arguing. J. Logic Comput. 8(3), 261–292 (1998)
15. Lâasri, B., Lâasri, H., Lander, S., Lesser, V.: A Generic Model for Intelligent Negotiating Agents. Int. J. of Cooperative Information Systems 1, 291–317 (1992)
16. Sandholm, T., Lesser, V.: Issues in Automated Negotiation and E-Commerce: Extending the Contract Net Protocol. In: Proceedings of the 1st International Conference on Multi-agent Systems, pp. 328–335. AAAI Press, Menlo Park, CA, USA (1995)
17. Sierra, C., Dignum, F.: Agent Mediated Electronic Commerce: Scientific and Technological Road-map. In: Dignum, F., Sierra, C. (eds.), Agent Mediated Electronic Commerce. LNAI 1991, pp. 1–18. Springer-Verlag, Berlin (2001)
18. Sackler, M.L.: The Unspoken Message. Modern Psychoanalysis 23, 53–62 (1998)
19. Searle, J.R.: Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, London (1969)
20. Winograd, T., Flores, F.: Understanding Computers and Cognition: A New Foundation for Design. Ablex, Norwood (1986)
21. Goldkuhl, G.: Conversational Analysis as a Theoretical Foundation for Language Action Approaches? In: Weigand, H., Goldkuhl, G., de Moor, A. (eds.) Proceedings of the 8th International Working Conference on the Language Action Perspective on Communication Modelling (LAP 2003), pp. 51–69. Tilburg, The Netherlands (2003)
22. Halpin, T.: Information Modeling and Relational Databases: From Conceptual Analysis to Logical Design. Morgan Kaufmann Publishers (2001)
23. Ssebuggwawo, D., Hoppenbrouwers, S.J.B.A., Proper, H.A.: Evaluating Modeling Sessions Using the Analytic Hierarchy Process. In: Persson, A., Stirna, J. (eds.) PoEM 2009. LNBIP 39, pp. 69–83. Springer, Berlin Heidelberg (2009)
24. Saaty, T.L.: The Analytic Hierarchy Process. McGraw-Hill, New York (1980)
25. Walton, D., Krabbe, E.C.W.: Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press, Albany, N.Y. (1995)
26. Reed, C., Norman, T.J.: Argumentation Machines: New Frontiers in Argument and Computation. Kluwer Academic Publishers, Dordrecht, The Netherlands (2004)
27. Hindriks, K., Jonker, C.M., Tykhonov, D.: Negotiation Dynamics: Analysis, Concession Tactics and Outcomes. In: Proceedings of the 2007 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, pp. 427–433 (2007)
28. Markus, M.G., Gerhard, W.: A Generic Framework for Argumentation-Based Negotiation. In: Cooperative Multi Agents. LNCS 4676, pp. 209–233 (2007)