Lessons Learned on the Use of i* by Non-Technical Users

                           Juan Pablo Carvallo1, Xavier Franch2
            1
             Computer Science Department, Cuenca University, Cuenca, Ecuador
                   2
                    Universitat Politècnica de Catalunya, Barcelona, Spain
          {pablo.carvallo@ucuenca.edu.ec, franch@essi.upc.edu}



       Abstract. Enterprise Architecting activities, particularly the mapping from
       business strategies to information systems architectures, are time-consuming
       and usually conducted by relatively large teams: groups of non-technical
       stakeholders playing a mostly informative role, led by a few experienced
       technical consultants who perform most of the documenting and modelling
       effort. Lately, several works have been reported that propose and use i* to
       support these modelling activities. In spite of this increasing adoption and the
       experience gathered in different domains, there is little work on the practical
       use of i* by non-technical stakeholders and on the extent to which the notation
       enables them to become more proactive in system modelling activities. We
       present here 10 lessons learned in a study conducted in an Ecuadorian
       university to gain empirical evidence on the use of i* by non-technical
       stakeholders.


1      Introduction
Modern enterprises largely rely on information systems specifically designed to manage
the continuously increasing complexity of their interactions with their context. Enterprise
Architecture (EA) [1] is an increasingly adopted concept encompassing several levels of
architectural design, which requires a deep understanding of the enterprise context and
strategies. Early phases of the enterprise architecting process are usually oriented to
modelling the enterprise context, which helps in understanding the purpose of enterprises
in their environment, e.g. what is required from them, thus assisting enterprise
decision-makers in designing and refining their business strategies, and enterprise
architects in understanding what will be required from the resulting socio-technical
system. However, far from being easy, the construction of such models is usually a
cumbersome task, mainly due to communication gaps between technical personnel (e.g.
internal or external consultants) with limited knowledge of the enterprise structure,
operations and strategy, and their administrative counterparts, who impose pressure and
time constraints on the process. Because of this, the role of non-technical stakeholders,
instead of being proactive, is mostly constrained to a merely informative one.
   In order to deal with these problems, in the last few years we have intensively used
the i* notation to bridge the gap between technical consultants and non-technical
stakeholders [2], and proposed the DHARMA method [3] for discovering business
architectures starting from the construction of Context Models (CM) expressed in i*.
CMs are interactively built with the participation of non-technical stakeholders, who
often sketch drafts of the models without the intervention of technical consultants. Our
experience in these works provided important evidence pointing to i* as a valuable
framework for non-technical stakeholders, allowing them to more proactively help
technical consultants achieve their requirements and architecting objectives. To validate
this observation, we planned and conducted an empirical validation case, in which we
intended to measure the ability of non-technical stakeholders to learn and use the i*
notation in an industrial setting. We report here the lessons learned in this project.


2      Related Work
There is little work on the usability of i*, particularly in cases involving non-technical
stakeholders. We may mention [4], which focuses on the i* visual syntax and its semantic
transparency. However, that work aims at improving the i* symbols along several
dimensions, including semiotic clarity, perceptual discrimination, complexity
management and visual expressiveness, among others, rather than at directly observing
real usage. Other works [5, 6] focus on guidelines to simplify, conduct and improve the
process of drawing i* diagrams.
   Our intention in this case study is different: to evaluate the capacity of non-technical
stakeholders to quickly grasp the main concepts of the framework, to assimilate them,
and to produce models of sufficient quality when the DHARMA method is adopted.


3      Empirical Evaluation with Non-Technical Users
The industrial case study was conducted in a medium-sized private Ecuadorian
university (10,000 students). We were hired to act as consultants in the reengineering of
its systems architecture, in which we used DHARMA as a preliminary step for defining
an IT strategic plan and identifying a project portfolio. This portfolio encompassed IT
projects of different natures (software acquisition, technology platform modernization
and IT process definition, among others). After the necessary initial preparation, the
first activity conducted, as proposed in the DHARMA method, was the construction of
the i*-based CM. The CM construction was performed with the contribution of 13
Organizational Areas (OA): Faculties, Graduate School, Professional/Continuing
Education, Research Deanship, Financial Direction, Treasury, Human Resources,
Students Welfare, Legal Department, Communications Department, Libraries and
External Relations Department.


4      Lessons Learned
Lesson 1: Provide balanced training on the graphical framework.
Discussion: Non-technical stakeholders had no previous training on the graphical
framework or the notations to be used. Therefore, they were not aware of their utility
and objectives. On the one hand, without basic training they would not have been able
to participate actively in the process. On the other hand, we did not want to train non-
technical stakeholders to become experts; this could be a time-consuming and costly
process, conflicting with their particular interests and objectives, with no clear return
on investment, thus increasing the risk of the process.
Our approach: We surveyed participants and identified 8 hours, on average, as the
maximum time they could spend on training. We designed a 4-session seminar, each
session 2 hours long. In the first two sessions we introduced the project and its
objectives and taught participants about strategic planning in technology and hybrid
systems architecture [3]. In Session 3, participants were introduced to the basic i*
concepts for modelling SD diagrams and to the DHARMA method: actors, dependencies,
dependency directions and dependency types (goal, soft-goal, task and resource) were
discussed and several examples were provided; open discussion among participants was
encouraged at all times. In the last session, examples of the application of the DHARMA
method for the construction of i* SD models were provided.
Lesson 2: Provide a roadmap to perform the work.
Discussion: Even after training, modelling activities and their objectives can be fuzzy
to non-technical participants. In addition, because of their lack of experience, they will
have no notion of the time spans or deliverables to be produced as a result of each activity.
Our approach: We provided participants with a detailed schedule containing the
work breakdown structure (WBS), including main modelling activities, their task
decomposition, time assigned for their fulfilment and deliverables to be produced.
Task decomposition was fine-grained, so models were completed in several steps. For
instance, for the first activity we identified the tasks listed in Table 1. Each task was
assigned 2 to 8 hours, while the full activity was scheduled to be completed within a week.
          Table 1. Excerpt of the WBS proposed for the first modelling activity of one of the OAs.
Task  | Description                          | Duration | Starts    | Ends      | Work Product
2.2   | Construct first CM of the OA         | 1w       | 04-Jun-13 | 08-Jun-13 | 1st version of CMs from OA view
2.2.1 | Environmental actors identification  | 4h       | 04-Jun-13 | 05-Jun-13 | List of environmental actors
2.2.2 | Goals identification                 | 8h       | 05-Jun-13 | 07-Jun-13 | CMs including only goals
2.2.3 | Resources identification             | 2h       | 06-Jun-13 | 07-Jun-13 | Enriched CMs including resources related to goals
2.2.4 | Soft-goals and tasks identification  | 4h       | 06-Jun-13 | 07-Jun-13 | Enriched CMs including soft-goals and tasks
2.2.5 | Review and consolidation             | 4h       | 07-Jun-13 | 08-Jun-13 | 1st version of CMs from OA view

Lesson 3: Provide guidelines to improve quality.
Discussion: Since participants were non-technical, they tended to do their best and to
justify the poor quality of their results by that fact. Techniques should be provided to
participants to improve the quality of their work products.
Our approach: For most well-known graphical frameworks there exist several
documented best practices, as well as tacit knowledge emerging from consultants’ own
experience, which can be transferred to non-technical participants. Starting from the
first day of the seminar, we introduced participants to recommended best practices and
encouraged them to use these practices at all times. We issued recommendations
concerning: methodology (e.g., consider only one actor in the environment of the OA at
a time; do not consider dependencies among environmental actors, which are not
relevant for modelling the organization; use the patterns from [6]); conduct (e.g.,
drawing models by hand was allowed; using tables instead of graphical models was
allowed); and notation (e.g., the guidelines listed in [7] were also provided).
Lesson 4: Help users to manage size.
Discussion: A common setback shared by most graphical modelling notations is the
difficulty of managing drawings when models scale up. Advice should be provided to
non-technical participants to avoid over-scaling their models and to facilitate their
handling.
Our approach: Fragmenting models into smaller packs or transforming them into
tabular representations are good strategies to manage size. Guidelines such as those
provided in Lesson 3 proved particularly useful for this purpose. Although some of them
may prevent users from seeing the whole picture, the fact is that whole models may have
a similar effect because of the size and number of elements to be considered at a time.
These guidelines also help participants to focus their attention on one problem at a
time, improving the quality of the work products.
Lesson 5: Avoid the use of specialized tools.
Discussion: Mastering the use of specialized modelling software can be a time-
consuming and unrewarding activity, particularly in cases where users may never use
the tools again. In addition, it can unnecessarily increase the project time span and costs.
Our approach: We led non-technical participants to create their own models without
providing training on, or biasing them towards, the use of any particular tool. Of course,
this approach produced several hand-drawn CMs, all with different notations, but the
intentional elements were easy to recognize and the time spent by participants on
drawing activities was significantly reduced. This led participants to focus on more
relevant issues, such as the identification of dependencies and their intentionality.
Lesson 6: Do not over-constrain user’s imagination.
Discussion: Overtraining non-technical users or providing excessive guidelines may
constrain their thinking, leading them to skip aspects that can be very relevant to the
process.
Our approach: We valued the free thinking of non-technical participants, so we tried to
provide just conceptual training and basic guidelines. In some cases this led to
significant contributions, both to the process and to the methods used to support it. For
instance, when we asked participants to identify environmental actors, we meant actors
in the external context of the organization. However, when creating the CMs from the
perspective of their own OA, non-technical participants also identified other OAs as
actors in their context, together with dependencies among them, which represent
internal process activities. As a result, the systems’ functionality will support the needs
of both external and internal actors.
Lesson 7: Do not expect excellence in the use of framework elements.
Discussion: As mentioned in Lesson 1, non-technical participants lack the experience
and training required to master graphical notations. Therefore, it is quite normal for
their CMs to contain several flaws.
Our approach: As mentioned earlier, we led non-technical participants to draw freely.
The resulting models have to be reviewed by consultants anyway, and eventually
transferred to specialized tools. Even if the number of mistakes is high, experience shows
that the required corrections are very simple to make: they usually involve changing the
direction or type of some dependencies, or the verbal tense in some descriptions. For the
tabular form, we used Excel sheets with dropdown lists, which largely simplified this
process.
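The paper does not prescribe any particular tooling for preparing such correction sheets;
the following Python snippet is a purely illustrative sketch (not part of the original study)
of how dropdown lists for the dependency type and direction could be added to an Excel
sheet with the openpyxl library. The column layout and the value lists are assumptions
made for the example.

# Illustrative sketch: an Excel correction sheet with dropdown lists for the
# dependency type and direction, built with openpyxl. Columns and value
# lists are assumptions for the example, not the sheets used in the study.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws.title = "CM dependencies"
ws.append(["Depender", "Dependee", "Description", "Type", "Direction"])

# Dropdown with the four i* dependency types.
type_dv = DataValidation(type="list",
                         formula1='"goal,soft-goal,task,resource"',
                         allow_blank=False)
# Dropdown for the dependency direction (assumed encoding).
dir_dv = DataValidation(type="list",
                        formula1='"OA -> actor,actor -> OA"',
                        allow_blank=False)
ws.add_data_validation(type_dv)
ws.add_data_validation(dir_dv)
type_dv.add("D2:D500")   # apply to the Type column
dir_dv.add("E2:E500")    # apply to the Direction column

wb.save("cm_corrections.xlsx")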
Lesson 8: Review continuously and provide feedback.
Discussion: Regardless of basic training, roadmaps and guidelines, non-technical
participants will require continuous, though decreasing, feedback on their intermediate
and final work products.
Our approach: During the process we scheduled review meetings at various milestones.
In particular, we scheduled two-hour review meetings with each OA after the tasks of
the first modelling activity (see Table 1) were concluded. An additional week was given
to participants to refine their models, taking the provided feedback into account.
Lesson 9: Plan for validation activities.
Discussion: Analyzing i* graphical models is not an easy task. On the one hand, the
models are not enforced by any prescriptive method; they are largely built based on the
modellers’ point of view and perception. On the other hand, as they scale up, the number
of graphical elements can become very large, making the process very hard to manage.
In cases like the one described in this paper the problem is even worse: 13 models were
built by the different OAs simultaneously, and they included common elements that had
to be identified and mapped.
Our approach: To facilitate the process, we started by identifying the elements of the
notation and their attributes to be validated. In addition to actors, we considered
dependencies the central element of the validation process, and their type, direction and
description the attributes to be validated. To simplify the analysis, we conducted the
following activities for each of the resulting OA CMs:
 The consultants transcribed the CMs from the original hand-made drawings
  delivered by non-technical stakeholders into their tabular representation (a sketch
  of such a tabular record is given below).
 Identified environmental actors were categorized as actors in the context of the
  organization (ECA) or as actors representing OAs that appear as part of the CM of
  another OA (ICA), and were assigned an identification code.
 After careful analysis, additional columns were used to state the correct type,
  direction and description of each of the dependencies included by non-technical
  stakeholders in their CMs.
 Finally, consultants added rows to include important dependencies missed by non-
  technical stakeholders in their CMs.
After the review process was completed, workshops were conducted with each of the
OAs in order to validate the consultants’ interpretation.
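As a purely illustrative sketch (not part of the original study), the following Python
record shows one possible structure for a row of the tabular representation described
above; the field names and the ECA/ICA encoding are our own assumptions.

# Illustrative sketch: one possible record structure for a dependency in the
# tabular representation of a CM used during validation. Field names and the
# encoding of categories and directions are assumptions for the example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DependencyRow:
    actor_code: str            # identification code of the environmental actor
    actor_category: str        # "ECA" (context of the organization) or "ICA" (other OA)
    dep_type: str              # "goal", "soft-goal", "task" or "resource"
    direction: str             # e.g. "OA -> actor" or "actor -> OA"
    description: str           # description written by the non-technical stakeholder
    # Consultants' corrections, filled in after careful analysis (None = no change).
    corrected_type: Optional[str] = None
    corrected_direction: Optional[str] = None
    corrected_description: Optional[str] = None
    added_by_consultant: bool = False   # True for dependencies missed by stakeholders

row = DependencyRow("ECA-03", "ECA", "resource", "actor -> OA",
                    "Receive enrolment certificates", corrected_type="task")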
Lesson 10: Be aware of consolidation activities.
Discussion: Many empirical studies will require several i* models to be constructed and
later consolidated into a final model encompassing all of the elements included in the
individual ones. Far from being easy, this activity can become one of the most difficult
tasks of the empirical validation process.
Our approach: As mentioned earlier, in our case 13 CMs were constructed. In order to
ease consolidation, we first converted them to their tabular form and merged them into a
single Excel sheet. Rows were sorted by actor, dependency type, direction and
dependency description, using Excel’s built-in sorting capabilities. This eased the
identification and eventual elimination of duplicated dependencies identified by more
than one OA (including dependencies which were the same but had dissimilar
descriptions, types or directions). The final CM after completing this process included
69 ECA grouped in 6 categories (identified according to the CRM pattern we presented
in [8]), 53 ICA grouped in 5 categories, and 1,124 dependencies: 569 connecting the
organization with some ECA and 555 connecting it with some ICA.
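As an illustrative sketch rather than the procedure actually followed (the study used
Excel’s built-in sorting), the following Python snippet shows how sorting the merged
rows by actor, type, direction and description places candidate duplicates next to each
other and lets exact repetitions be dropped automatically; dependencies worded
differently by two OAs still require a manual decision. The row structure is an
assumption made for the example.

# Illustrative sketch: consolidating per-OA tabular CMs. Sorting by actor,
# type, direction and description groups candidate duplicates; exact
# repetitions are removed, differently worded ones are left for manual review.
from collections import namedtuple
from itertools import groupby

# Minimal row structure assumed for the example.
Dep = namedtuple("Dep", "actor dep_type direction description source_oa")

def sort_key(d):
    return (d.actor, d.dep_type, d.direction, d.description.strip().lower())

def consolidate(rows):
    """Return one row per (actor, type, direction, description) key."""
    rows = sorted(rows, key=sort_key)
    return [next(group) for _, group in groupby(rows, key=sort_key)]

cms = [
    Dep("ECA-12", "resource", "actor -> OA", "Enrolment certificate", "Faculties"),
    Dep("ECA-12", "resource", "actor -> OA", "Enrolment certificate", "Treasury"),
]
print(len(consolidate(cms)))   # 1: the dependency reported by two OAs is kept once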


5      Concluding Remarks
Conducting empirical studies involving graphical notations is not an easy task. Many
problems emerge from the nature of the notations themselves and also from their use by
participants, particularly in cases where the participants have no training in the
required notations. These aspects make this kind of study especially challenging.
   In this paper we have identified some problems emerging from this type of study and
presented the lessons learned in an empirical case study conducted in an industrial
setting in which we acted as consultants. Other lessons, e.g. those related to the
practical semantics of i* constructs [9] or to the granularity of goals and tasks, have not
been reported for lack of space. The case study was ultimately successful and, although
this does not necessarily imply a cause-effect relationship, there was agreement that the
actions described in this paper were fundamental to its positive conclusion.


6      References
[1] The Open Group. The Open Group Architecture Framework (TOGAF) version 9. The
    Open Group, 2009.
[2] Carvallo, J.P. Supporting Organizational Induction and Goals Alignment for COTS Com-
    ponents Selection by Means of i*. ICCBSS 2006.
[3] Carvallo, J.P., Franch, X. On the Use of i* for Architecting Hybrid Systems: A Method and
    an Evaluation Report. PoEM 2009.
[4] Moody, D.L., Heymans, P., Matulevicius, R. Improving the Effectiveness of Visual Repre-
    sentations in Requirements Engineering: An Evaluation of i* Visual Syntax. RE 2009.
[5] Estrada, H., et al. An Empirical Evaluation of the i* Framework in a Model-Based Soft-
    ware Generation Environment. CAiSE 2006.
[6] Hadar, I., et al. Comparing the Comprehensibility of Requirements Models expressed in
    Use Case and Tropos: Results from a Family of Experiments. IST 55(10), 2013.
[7] Carvallo, J.P., Franch, X. On the use of Requirements for Driving Call-for-tender Processes
    for procuring Coarse-grained OTS Component. RE 2009.
[8] Carvallo, J.P., Franch, X. Building Strategic Enterprise Context Models with i*: A Pattern-
    Based Approach. TEAR 2012.
[9] Guizzardi, R. et al. Ontological Distinctions between Means-end and Contribution Links in
    the i* Framework. ER 2013.