<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Detecting Judgment Inconsistencies to Encourage Model Iteration in Interactive i* Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jennifer Horkoff</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eric Yu</string-name>
          <email>yu@ischool.utoronto.ca</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Information, University of Toronto</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <fpage>20</fpage>
      <lpage>25</lpage>
      <abstract>
        <p>Model analysis procedures which prompt stakeholder interaction and continuous model improvement are especially useful in Early RE elicitation. Previous work has introduced qualitative, interactive forward and backward analysis procedures for i* models. Studies with experienced modelers in complex domains have shown that this type of analysis prompts beneficial iterative revisions on the models. However, studies of novice modelers applying this type of analysis do not show a difference between semi-automatic analysis and ad-hoc analysis (not following any systematic procedure). In this work, we encode knowledge of the modeling syntax (modeling expertise) in the analysis procedure by performing consistency checks using the interactive judgments provided by users. We believe such checks will encourage beneficial model iteration as part of interactive analysis for both experienced and novice i* modelers.</p>
      </abstract>
      <kwd-group>
        <kwd>Goal- and Agent-Oriented Models</kwd>
        <kwd>Early Requirements Engineering</kwd>
        <kwd>Model Analysis</kwd>
        <kwd>Interactive Analysis</kwd>
        <kwd>Judgment Consistency</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Modeling and analysis can be challenging in Early Requirements Engineering
(RE), where high-level system requirements are discovered. In this stage,
hard-to-measure non-functional requirements are critical, and understanding the interactions
between systems and stakeholders is key to system success. Because of the
high-level, social nature of Early RE models, it is important to provide procedures which
prompt stakeholder involvement (interaction) and model improvement (iteration). To
this end, our previous work has introduced interactive, qualitative analysis procedures
over agent-goal models (specifically, i* models) which aim to promote model
iteration and convergent understanding [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5">1-5</xref>
        ]. These procedures are interactive in
that, where partial or conflicting analysis labels appear in the model, users are asked
to provide human judgment as a resolution before the procedure proceeds further.
      </p>
      <p>
        Experiences with skilled i* modelers in complex case studies have provided
evidence that interactive analysis prompts further elicitation and beneficial model
iteration [
        <xref ref-type="bibr" rid="ref1 ref3">1,3</xref>
        ]. However, case studies comparing ad-hoc to semi-automated
interactive analysis using novice participants showed that model iteration was not
necessarily a consequence of systematic interactive analysis, but of careful
examination of the model prompted by analysis in general [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. We concluded that the
positive iterative effects of interactive analysis found in previous case studies were
dependent upon modeling expertise (the ability to notice when analysis results were
inconsistent with the model), domain expertise (the ability to notice when results
differed from the modeler's understanding of the world), and interest in the domain
being modeled (caring enough about the modeling process to improve the model).
      </p>
      <p>One consequence of these results would be to recommend that interactive analysis
be performed by, or in the presence of, someone with significant knowledge of i*.
However, this is often not a reasonable expectation, as many i* modelers may be new
to the notation and modeling technique, and will want to be guided by evaluation
procedures in analyzing the model. As a result, we aim to embed some modeling
expertise into the analysis procedure and corresponding tool support by detecting
inconsistencies using the results of interactive judgments.</p>
      <p>Case study experiences show that making judgments over the model can lead the
modeler to revise the model when the decision made using domain knowledge differs
from what is suggested by the model. For instance, in the simple example model for
Implement Password System in Fig. 1, if the application Asks for Secret Question but does
not Restrict Structure of Password, model analysis would suggest that Usability would be
at least partially satisfied. If instead, the modeler thinks that Usability should be
partially denied, this means the model is inaccurate or insufficient in some way.
Perhaps, for example, Usability also requires hints about permitted password structure.</p>
      <p>However, in our student study we found several occasions where novice modelers
made judgments that were inconsistent with the structure of the model, and did not
use these opportunities to make changes or additions to the model. To place this
situation in the context of our previous example, if the Application Asks for Secret
Question but does not Restrict Structure of Password, the student may have decided that Usability was
still partially denied, continuing the evaluation without modifying the model to be
consistent with their judgment.
Similarly, our studies and experiences showed that it is easy to forget previous
judgments over an intention element and to make new judgments which are
inconsistent with previous judgments. For example, a user may decide that if Security
is partially denied and Usability is partially satisfied, Attract Users is partially denied. In
another round of analysis, if they are presented with an identical situation, they may
now decide that Attract Users has a conflict.</p>
      <p>We use these observations to guide us in embedding modeling expertise into
interactive i* analysis by detecting inconsistencies using judgments. We distinguish
and check for two types of inconsistencies: inconsistencies with the structure of the
model and inconsistencies with judgment history. In this work, we take the initial
steps of describing these checks formally and through examples. Future work will
test the practical effectiveness of these checks in encouraging beneficial i* model
iteration.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Background</title>
      <p>
        We assume the reader is familiar with the i* Framework. The evaluation procedures
and their extensions described in this work use the syntax defined in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. More
information can also be found on the i* Wiki Guidelines [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        In order to more precisely define the consistency checks introduced in this work,
we summarize the formalization of the i* framework presented in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The definitions
use the → notation to represent relationships between elements, so if (i1, i2) ∈ R, we
write this as R: i1 → i2.
      </p>
      <p>Definition: i* model. An i* model is a tuple &lt;I, R, A&gt;, where I is a set of
intentions, R is a set of relations between intentions, and A is a set of actors. Each
intention maps to one type in {Softgoal, Goal, Task, Resource}. Each relation maps to
one type in {Rme, Rdec, Rdep, Rc}, representing means-ends, decomposition, dependency,
and contribution links, respectively.</p>
      <p>
        Analysis labels are used in i* to represent the degree of satisfaction or denial of an
intention. We use the formal definition of analysis predicates from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], adapted from
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]:
      </p>
      <p>Definition: analysis predicates. We express agent-goal model analysis labels
using a set of predicates, V, over i ∈ I. Each v(i) ∈ V maps to one of {S(i), PS(i),
C(i), U(i), PD(i), D(i)}, where S(i)/PS(i) represents full/partial satisfaction, C(i)
represents conflict, U(i) represents unknown, and D(i)/PD(i) represents full/partial
denial.</p>
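      <p>For illustration, these labels and the total order over them defined below can be sketched in Python; the list encoding and function names are assumptions of ours:</p>

```python
# Qualitative analysis labels listed in ascending desirability,
# matching the total order S > PS > U > C > PD > D of Eq. (1).
LABEL_ORDER = ["D", "PD", "C", "U", "PS", "S"]

def label_rank(v):
    """Return the rank of a label; a higher rank means more desirable."""
    return LABEL_ORDER.index(v)

def more_desirable(v1, v2):
    """True if label v1 is strictly more desirable ('higher') than v2."""
    return label_rank(v1) > label_rank(v2)
```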
      <p>
        In addition, we have defined a conceptually useful total order where v1 > v2 implies
that v1 is more desirable (or "higher") than v2. This order is as follows:
S(i) > PS(i) > U(i) > C(i) > PD(i) > D(i)    (1)
The framework for interactive goal model analysis summarized in [
The framework for interactive goal model analysis summarized in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] currently
provides two types of analysis procedures: forward (from alternative solutions to
goals) [
        <xref ref-type="bibr" rid="ref1 ref3 ref4">1,3,4</xref>
        ] and backward (from goals to solutions) [
        <xref ref-type="bibr" rid="ref2 ref5">2,5</xref>
        ]. Generally, the procedures
start from initial labels expressing the analysis questions, e.g. what if the Application
Restricts Structure of Password and Asks for Secret Question? (forward) or is it possible
for Attract Users to be at least partially satisfied? (backward). Propagation is
automatic, following rules defined in our previous work, and can be described via the
forward and backward propagation axioms described in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Generally, for an
intention i ∈ I, where i is the destination of one to many relationships, r ∈ R : i1 × … × in
→ i, these predicates take on the form:
      </p>
      <sec id="sec-2-1">
        <title>Forward Propagation:</title>
        <p>(Some combination of v(i1) ∧ … ∧ v(in), v ∈ V) → v(i)</p>
      </sec>
      <sec id="sec-2-2">
        <title>Backward Propagation:</title>
        <p>v(i) → (Some combination of v(i1) ∧ … ∧ v(in), v ∈ V)</p>
        <p>
The interactive nature of the procedures comes when human judgment is needed to
resolve incoming partial or conflicting labels (forward) or to provide feasible
combinations of incoming labels to produce a target label (backward). New
judgments are added to the model formalization by replacing the axioms defined
above for an intention with new axioms of the same form, describing the judgment.
For example, given S(Restrict Structure of Password) and S(Ask for Secret Question) (both
alternatives are satisfied), if we decide that Usability has a conflict, C(Usability), we would
remove all axioms having Usability as a target or source and add:</p>
        <p>Forward: S(Restrict Structure of Password) ∧ S(Ask for Secret Question) → C(Usability)
Backward: C(Usability) → S(Restrict Structure of Password) ∧ S(Ask for Secret Question)</p>
        <p>For simplicity, in this work we refer to the left side of the forward propagation
axioms as a combination of labels, CL, and the right side as the individual label, IL.
Forward judgments then consist of CL → IL and backward judgments consist of IL →
CL.</p>
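        <p>A minimal sketch of how such judgments (CL → IL) might be stored and looked up during propagation, under our own naming assumptions:</p>

```python
# Illustrative store of forward judgments: each maps a combination of
# (source intention, label) pairs (the CL) to the judged individual label (IL).
class JudgmentStore:
    def __init__(self):
        # target intention -> {frozenset of CL items: IL label}
        self.forward = {}

    def record(self, target, cl, il):
        """Record the judgment CL -> IL for `target`; cl is a dict name -> label."""
        self.forward.setdefault(target, {})[frozenset(cl.items())] = il

    def resolve(self, target, cl):
        """Return a previously judged IL for this exact CL, if one exists."""
        return self.forward.get(target, {}).get(frozenset(cl.items()))
```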
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 Detecting Inconsistencies in Interactive Judgments</title>
      <p>In this section we define two types of inconsistencies using human judgments.</p>
      <sec id="sec-3-1">
        <title>3.1 Inconsistencies with Model</title>
        <p>When considering inconsistencies between a judgment and the model, we compare
the contents of the combination of labels (CL) to the individual label (IL), looking for
inconsistencies. For example, if the combination of labels has no positive labels (S,
PS) but the IL is positive, we classify this as inconsistent (Case 3). We enumerate the
following cases, which we define as inconsistent, summarizing each case after the
"//" symbols:</p>
        <p>For a judgment CL → IL or IL → CL over i ∈ I:
//there are no unknown labels in the CL, but the IL is unknown
Case 1: for all vj(ij) in CL, vj ≠ U, and IL = U(i)
//there are no negative labels in the CL, but the IL is negative
Case 2: for all vj(ij) in CL, vj ≠ PD or D, and IL = PD(i) or D(i)
//there are no positive labels in the CL, but the IL is positive
Case 3: for all vj(ij) in CL, vj ≠ PS or S, and IL = PS(i) or S(i)
//the CL is all positive or all negative, but the IL is a conflict</p>
        <p>Case 4: for all vj(ij) in CL, (vj = PS or S) or (vj = PD or D), and IL = C(i)</p>
        <p>In the forward case, the combination of labels can be said to represent evidence
from the model, while the individual label is the user judgment. In the backward
case, the individual label is the required evidence in the model, while a permissible
combination of labels is the user judgment applied to the model structure.</p>
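        <p>Cases 1-4 above can be sketched as a small check; the function name and the string encoding of labels are illustrative assumptions:</p>

```python
POS = {"PS", "S"}   # positive labels
NEG = {"PD", "D"}   # negative labels

def model_inconsistency(cl_labels, il):
    """Check a judgment against the model structure (Cases 1-4 of Sec. 3.1).

    cl_labels: list of labels appearing in the combination (CL);
    il: the individual label. Returns the violated case number, or None.
    """
    if not cl_labels:
        return None
    # Case 1: no unknown labels in the CL, but the IL is unknown.
    if all(v != "U" for v in cl_labels) and il == "U":
        return 1
    # Case 2: no negative labels in the CL, but the IL is negative.
    if all(v not in NEG for v in cl_labels) and il in NEG:
        return 2
    # Case 3: no positive labels in the CL, but the IL is positive.
    if all(v not in POS for v in cl_labels) and il in POS:
        return 3
    # Case 4: the CL is all positive or all negative, but the IL is a conflict.
    if (all(v in POS for v in cl_labels) or all(v in NEG for v in cl_labels)) and il == "C":
        return 4
    return None
```

Following the paper's intent, the check only flags clearly inconsistent judgments; a mixed CL with a conflict IL, for instance, is allowed.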
      </sec>
      <sec id="sec-3-2">
        <title>3.2 Inconsistencies with Judgment History</title>
        <p>When considering inconsistencies between old and new judgments over the
same intention, we compare the combination of labels (CL) in the new and previous
judgments, looking for cases where the combination of labels is the same, clearly
more positive, or clearly more negative, using the ordering of labels from (1). We use this
comparison to decide whether the new individual label (IL) is consistent with the old
individual label. An example of such a case is described in Section 1, where the
combination of labels is equal but the individual label is not. In another example, the
user decides that with incoming labels of PS(Security) and PD(Usability), Attract Users is
C(Attract Users). In the next round of evaluation, the incoming labels may be PS(Security)
and C(Usability). The new combination of labels is more positive than the previous, as
C > PD, so the new individual label should not be less than the previous individual label,
C, i.e. not PD or D.</p>
        <p>To aid in our definition of these cases we refer to ILnew and CLnew, the most
recent judgment for i ∈ I, and ILprev and CLprev, the previous judgments for i. We
define pseudocode to check for these types of inconsistencies as follows:</p>
        <p>For a judgment CLnew → ILnew (backward: ILnew → CLnew) over i ∈ I:
For each previous judgment CLprev → ILprev over i ∈ I:
//compare labels in the previous CL to labels in the new CL
For each vj(ij) ∈ CLprev,
For vk(ij) ∈ CLnew, compare vj(ij) to vk(ij)
Classify as: >, =, or &lt;
CLnew → ILnew is inconsistent with CLprev → ILprev if:
//the new CL is more positive, but the IL is more negative
All classifications are > or =, and ILnew &lt; ILprev
//the new CL is more negative, but the IL is more positive
All classifications are &lt; or =, and ILnew > ILprev
//the new and old CLs are identical, but the IL has changed
All classifications are = (CLprev = CLnew), and ILnew ≠ ILprev</p>
        <p>This work reinforces the semantics of i* by embedding rules into the interactive
analysis procedures which check for consistency amongst and between user judgments in the
model. We have been flexible and permissive in defining these checks, covering only
cases which are clearly inconsistent. For example, we could also include rules to
detect when a CL is mostly negative (many more negative labels than positive) and
check that the IL is at least partially negative.</p>
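        <p>The pseudocode above can be sketched in Python using the label ordering from (1); the function names and return values are our own illustrative choices:</p>

```python
# Labels in ascending desirability, per the total order of Eq. (1).
LABEL_ORDER = ["D", "PD", "C", "U", "PS", "S"]

def cmp_label(v1, v2):
    """Return -1, 0, or 1 as v1 is less, equally, or more desirable than v2."""
    r1, r2 = LABEL_ORDER.index(v1), LABEL_ORDER.index(v2)
    return (r1 > r2) - (r2 > r1)

def history_inconsistency(cl_new, il_new, cl_prev, il_prev):
    """Check a new judgment against one previous judgment over the same
    intention (Sec. 3.2). cl_*: dicts mapping source intention -> label.
    Returns a short reason string, or None if consistent.
    """
    if set(cl_new) != set(cl_prev):
        return None  # different source sets; the comparison assumes the same CL shape
    cmps = [cmp_label(cl_new[i], cl_prev[i]) for i in cl_new]
    il_cmp = cmp_label(il_new, il_prev)
    # The new and old CLs are identical, but the IL has changed.
    if all(c == 0 for c in cmps):
        return "identical CL, different IL" if il_cmp != 0 else None
    # The new CL is more positive, but the IL is more negative.
    if all(c >= 0 for c in cmps) and il_cmp < 0:
        return "CL more positive, IL more negative"
    # The new CL is more negative, but the IL is more positive.
    if all(c <= 0 for c in cmps) and il_cmp > 0:
        return "CL more negative, IL more positive"
    return None
```

Run on the Attract Users example from Section 3.2, the check flags a new IL of PD against the previous C once the CL has become more positive (C > PD for Usability), but accepts U or PS.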
        <p>Although we have defined inconsistent judgment situations, we have not specified
what actions to take when inconsistencies are found. In order to provide flexibility,
we do not recommend preventing users from making inconsistent judgments, but
instead suggest warning users, either when the judgment is made, or after the fact
using a judgment consistency checker. This feature would work similarly to a built-in
model syntax checker. Both the judgment consistency and model syntax checks are
currently being implemented in the OpenOME tool [11]. The GMF meta-model of
the tool has been expanded to include judgment and evaluation alternatives.</p>
        <p>As we are aiming for model iteration, future work should adapt these checks to
take frequent model changes into account. Studies involving experienced and new
i* users are needed to test the effectiveness of these checks in encouraging model
iteration through interactive analysis.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"Interactive Analysis of Agent-Goal Models in Enterprise Modeling,"</article-title>
          <source>International Journal of Information System Modeling and Design (IJISMD)</source>
          , vol.
          <volume>1</volume>
          ,
          <issue>2010</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"Finding Solutions in Goal Models: An Interactive Backward Reasoning Approach,"</article-title>
          <source>29th International Conference on Conceptual Modeling</source>
          , Springer-Verlag New York Inc.,
          <year>2010</year>
          , p.
          <fpage>59</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"Evaluating Goal Achievement in Enterprise Modeling – An Interactive Procedure and Experiences,"</article-title>
          <source>The Practice of Enterprise Modeling</source>
          , Springer,
          <year>2009</year>
          , pp.
          <fpage>145</fpage>
          -
          <lpage>160</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"A Qualitative, Interactive Evaluation Procedure for Goal- and Agent-Oriented Models,"</article-title>
          <source>CAiSE'09 Forum</source>
          , Vol-
          <volume>453</volume>
          , CEUR-WS.org,
          <year>2009</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"Qualitative, Interactive, Backward Analysis of i* Models,"</article-title>
          <source>3rd International i* Workshop</source>
          , CEUR-WS.org,
          <year>2008</year>
          , pp.
          <fpage>4</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"Interactive Goal Model Analysis Applied - Systematic Procedures versus Ad hoc Analysis,"</article-title>
          <source>The Practice of Enterprise Modeling, 3rd IFIP WG8.1 (PoEM'10)</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"Towards modelling and reasoning support for early-phase requirements engineering,"</article-title>
          <source>Proceedings of ISRE '97, 3rd IEEE International Symposium on Requirements Engineering</source>
          , vol.
          <volume>97</volume>
          ,
          <year>1997</year>
          , pp.
          <fpage>226</fpage>
          -
          <lpage>235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <article-title>i* Wiki</article-title>
          , http://istarwiki.org,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Giorgini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mylopoulos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Sebastiani</surname>
          </string-name>
          ,
          <article-title>"Goal-oriented requirements analysis and reasoning in the Tropos methodology</article-title>
          ,"
          <source>Engineering Applications of Artificial Intelligence</source>
          , vol.
          <volume>18</volume>
          ,
          <year>2005</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>"A Framework for Iterative, Interactive Analysis of Agent-Goal Models in Early Requirements Engineering</article-title>
          ," 4th International i* Workshop, submitted,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <article-title>OpenOME, an open-source requirements engineering tool</article-title>
          , http://www.cs.toronto.edu/km/openome/,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>