<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Analyzing the Influence of Certain Factors on the Acceptance of a Model-based Measurement Procedure in Practice: An Empirical Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nelly Condori-Fernández</string-name>
          <email>nelly@pros.upv.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oscar Pastor</string-name>
          <email>opastor@pros.upv.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centro de Investigación en Métodos de Producción de Software Universidad Politécnica de Valencia</institution>
          ,
          <addr-line>Camino de Vera s/n, 46022, Valencia</addr-line>
        </aff>
      </contrib-group>
      <fpage>61</fpage>
      <lpage>70</lpage>
      <abstract>
        <p>Fully automatic software measurement from conceptual models is now accepted by academics, although take-up of these model-based measurement procedures by software practitioners has been slow. To encourage acceptance in industry, an acceptance model for measurement procedures is proposed, identifying a set of factors that influence perceived usefulness and perceived ease of use when a user employs a measurement procedure. Analyzing the results of an empirical study carried out with software engineering academics, we determine which factors influence other factors. Using regression analysis, certain factors are identified that affect perceived usefulness and perceived ease of use, which in turn affect intention to use.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Although software measurement is recognized as a key element of engineering
science, it has not yet been widely accepted in practice by software practitioners. The
Software Engineering Measurement and Analysis (SEMA) group at the Software
Engineering Institute (SEI) concluded from a series of explorative studies carried out
from 2004-2005 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] that there is still a significant gap between the current and desired
state of software measurement. One of the reasons for this is the lack of programs
that use measures, and of empirical evidence assessing the practical relevance of such
programs.
      </p>
      <p>
        Nowadays, with the appearance of the model-driven development process, several
approaches have arisen which allow for fully automatic software measurement of
specific artifacts developed at early stages and in particular contexts [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
However, the question is whether these model-based measurement procedures would
be accepted in practice.
      </p>
      <p>
        According to Cooper and Zmud [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], acceptance is one of the stages in the diffusion
of technological innovations, and is defined from an employee perspective: an
organization’s personnel are induced to commit to using an Information Technology
application. Acceptance must not be confused with adoption, which is defined
as the stage where negotiations start in relation to the decision to adopt the
innovation and organizational and financial resources are mobilized for doing so [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        The acceptance of technology has been investigated in a number of different fields
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]; however, in the software measurement field there are few papers on this
subject in the literature.
      </p>
      <p>
        Umarji and Emurian [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] focus on the evaluation of the likelihood of acceptance of a
metrics program. Their model takes as input organizational culture, and the nature of
the metrics program. Gopal et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] researched the influence of institutional factors
on the assimilation of metrics in software organizations. They also identified a set of
determinants for metrics program success [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. These determinants are divided into
organizational and technical variables.
      </p>
      <p>
        Our proposal focuses on the acceptance of model-based measurement procedures
from a software practitioner’s perspective. A number of models exist for
evaluating the acceptance of new techniques and technologies, in particular the
Technology Acceptance Model (TAM) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The Method Evaluation Model (MEM)
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], which uses the same TAM constructs, was the first to be applied in the context
of Functional Size Measurement (FSM) procedures ([
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]). From preliminary
results obtained with MEM, a theoretical model was defined, which includes a set of
factors that affect practitioners’ perceptions, perceptions that will determine the user’s
intention to use the model-based measurement procedures [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>The aim of this paper is to analyze the influence of these factors on the acceptance of
RmFFP in practice, using the regression analysis technique. RmFFP is a measurement
procedure designed to automatically estimate the functional size of object-oriented
applications generated in an MDA environment.</p>
      <p>This paper is structured as follows: Section 2 introduces an acceptance model for
model-based measurement procedures; Section 3 presents an initial empirical study
carried out to analyze the causal relationships of the model; finally, our
conclusions are given and further work is suggested.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Evaluating the acceptance of measurement procedures</title>
      <p>
        In order to define our model for evaluating the acceptance of model-based measurement
procedures, we use the same TAM constructs, redefined in the
following way [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]:
• Perceived Ease of Use: the extent to which a person believes that using a
particular measurement procedure would be free of effort.
• Perceived Usefulness: the extent to which a person believes that a particular
measurement procedure will be effective in achieving intended objectives.
• Intention to Use: the extent to which a person intends to use a particular
measurement procedure.
      </p>
      <p>In addition, we identified the following factor types:
• Intrinsic Factors related to the intrinsic nature of a software measurement
procedure; these correspond to the quality and tangibility of results, and the
minimum number of actions required for calculating the measure using a
measurement procedure.
o Quality of results: extent to which a person believes that the results of using
a measurement procedure are accurate and convertible.
o Tangibility of results: extent to which a person believes that the results of
using a measurement procedure are observable and understandable.
o Minimum actions: extent to which a person believes that a particular
measurement procedure obtains results with the minimum number of
actions required.
• Extrinsic Factors that do not depend on the measurement procedure in itself;
these correspond to the experience and job relevance of the software practitioner.
o Job relevance: extent to which an individual believes that a measurement
procedure is applicable and relevant to his or her job.
o Experience: knowledge or skill gained in the use of measurement procedures
over a period of time.
• External Factors that depend on the organization as a whole; these include
whether the business follows market trends (driven by advertising and
marketing, or by peer-company use), the maturity level of the organization, and
business priorities that give rise to time or cost constraints.</p>
      <p>The causal relationships hypothesized between the TAM constructs and the factors of
the model are shown in Figure 1. In the next section, we present an empirical study to
analyze these causal relationships.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Analyzing causal relationships in the Acceptance of RmFFP</title>
      <p>
        RmFFP is a functional size measurement procedure designed on the basis of the
COSMIC standard method, which has been approved as ISO/IEC 19761 [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
RmFFP was proposed in order to automatically estimate the functional size of
object-oriented systems generated in an MDA environment [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The object to be measured is
the functional requirements specification obtained using the OO-Method requirements
model [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>This procedure starts with the definition of the measurement strategy, which
includes the purpose, the scope, and the measurement viewpoint. The scope of
RmFFP comprises the functionality to be included in a particular measurement. The
measurement viewpoint corresponds to the ‘analyst’ viewpoint, which will focus on a
requirements specification (object of interest).</p>
      <p>
        Then, RmFFP starts a mapping phase to identify the significant primitives of the
Requirements Model that contribute to the system’s functional size according to the
concepts of COSMIC [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. We defined sixteen mapping rules whose principal
purpose is to reduce misinterpretation of the generic COSMIC concepts and to
facilitate the automation of the RmFFP procedure. For instance, each use case is
identified as a functional process, and each message of the sequence diagram is identified
as a data movement type. The main outcome of this phase is the identification of the
data movements, which are the fundamental components of COSMIC.
      </p>
      <p>Once the data movements have been correctly identified, we proceed with the
measurement phase, whose purpose is to produce a quantitative value that represents
the functional size of a requirements specification. To do this, we apply the
measurement function, which consists of assigning a numerical value of 1 Cfsu
(Cosmic Functional Size Unit) to each data movement. We defined four rules to add
together these quantified data movements, using the relationship types
between use cases to calculate the size of each functional process (use case) and the
size of the entire system.</p>
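      <p>The measurement phase described above amounts to counting data movements and adding the counts up per functional process and for the whole system. As a minimal illustrative sketch (the data layout and example use cases below are hypothetical, not taken from RmFFP itself):</p>

```python
# Illustrative sketch of the COSMIC measurement function applied by RmFFP:
# each identified data movement contributes 1 Cfsu, and the sizes are summed
# per functional process (use case) and for the entire system.
# The dictionary layout and example data are hypothetical.

COSMIC_CFSU_PER_MOVEMENT = 1  # COSMIC assigns 1 Cfsu to each data movement

def functional_process_size(data_movements):
    """Size of one functional process (use case) in Cfsu."""
    return COSMIC_CFSU_PER_MOVEMENT * len(data_movements)

def system_size(use_cases):
    """Size of the entire system: sum over its functional processes."""
    return sum(functional_process_size(m) for m in use_cases.values())

# Hypothetical example: two use cases with their identified data movements
# (E = Entry, X = eXit, R = Read, W = Write).
car_rental = {
    "Rent car":   ["E", "R", "W", "X"],
    "Return car": ["E", "R", "W", "W", "X"],
}
print(functional_process_size(car_rental["Rent car"]))  # 4 Cfsu
print(system_size(car_rental))                          # 9 Cfsu
```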
      <sec id="sec-3-1">
        <title>3.1 Planning: Case study</title>
        <p>
          In order to define the goal of our empirical study, we used the
Goal/Question/Metric (GQM) template [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], which is described as follows:
To analyze the proposed Acceptance Model, for the purpose of evaluating RmFFP
with respect to its acceptance in practice, from the viewpoint of the researcher, in
the context of software engineering professors using a measurement procedure for
requirements specifications.
        </p>
        <p>From this goal, the following research questions were addressed by this study:
RQ1: Is the perceived usefulness of the RmFFP measurement procedure really influenced
by certain intrinsic factors?
RQ2: Is the perceived usefulness of the RmFFP measurement procedure really influenced
by certain extrinsic factors?
RQ3: Is the perceived ease of use of the RmFFP measurement procedure really influenced
by certain intrinsic factors?
RQ4: Is the intention to use really a result of the perceptions experienced by the
subjects using the RmFFP measurement procedure?
Selection of subjects. The subjects were 20 professors from various Peruvian
universities. They were enrolled in the United Nations summer school on “Advanced
Techniques in Software Development”, February - March 2007. The careful selection
of participants was based on academic qualifications, teaching or industrial
experience, technical background, and specific interest in software engineering. The
empirical study was organized as a part of the “Measurement and Software Quality”
course given during the summer school.</p>
        <p>
          Variables and Hypotheses. Using the framework proposed by Juristo and Moreno
[
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], we identified three types of variables:
• Response variables: variables that correspond to the outcomes of the empirical
study. For this study, we considered certain factors and constructs of the Model
as response variables: Perceived Ease of Use (PEOU), Perceived Usefulness
(PU), Intention to Use (IU), Job Relevance (JR), Quality of Results (QR),
Tangibility of Results (TR), and Minimum Actions (MA). We omitted the
extrinsic factor “experience” and the external factors; these will be considered
in further studies. As these outcomes should be measurable, we used a 5-point
Likert scale format.
        </p>
        <p>
          • Factors: variables that affect the response variables. In our study, this variable
corresponds to the model-based measurement procedure, with a single
treatment: the RmFFP procedure [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
• Parameters: variables that we do not want to influence the experimental results:
the level of the practitioner’s experience using a measurement procedure, and the
complexity of the conceptual models to be measured.
        </p>
        <p>The following hypotheses regarding the research questions were considered:
H1: Perceived Usefulness is determined by the quality of results of the RmFFP
measurement procedure.</p>
        <p>H2: Perceived Usefulness is determined by the tangibility of results of the RmFFP
measurement procedure.</p>
        <p>H3: Perceived Usefulness is determined by the job relevance of the RmFFP
measurement procedure for the software practitioner.</p>
        <p>H4: Perceived ease of use is determined by the minimum number of actions required
using the RmFFP measurement procedure.</p>
        <p>H5: Intention to use is determined by perceived usefulness.</p>
        <p>H6: Intention to use is determined by perceived ease of use.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2 The Data Collection Method</title>
        <p>First, we gave an introduction on how to apply the RmFFP measurement procedure
by means of illustrative examples. We then verified the knowledge acquired by the
participants by working through an assigned application. The time used for the
training session was 4 hours, distributed over two days. Next, each subject used the
RmFFP measurement guide to measure the requirements specification of a Car Rental
application with thirty-five use cases. The time allowed for this task was unlimited.</p>
        <p>Finally, each subject was asked to complete a specially-designed survey to
evaluate RmFFP acceptance. The time allowed for this task was also unlimited.</p>
        <p>Instrumentation. A survey instrument (available at http://www.dsic.upv.es/~nelly/survey2.pdf)
was designed to measure the response variables, with twenty closed questions: 6 items were used to
measure PEOU; 2 items to measure PU; 3 items to measure IU; 4 items to measure
JR; 2 items to measure QR; 1 item to measure TR; and 2 items to measure MA. Table
1 presents the four items used for the job relevance factor.</p>
        <p>Responses to the instrument were based on a 5-point Likert scale ranging from (1),
strongly disagree, to (5), strongly agree. The order of the items was randomized and
some questions were negated to avoid monotonous responses.</p>
        <p>We also used a set of training materials, such as: a set of instructional slides on
RmFFP procedure; an example of the application of RmFFP, and a measurement
guide.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3 Data Analysis and Interpretation</title>
        <p>As we can see in Figure 1, the intention to use a measurement procedure is
influenced by the perceptions of usefulness and ease of use, which in turn can be
influenced by certain types of factors. We identified several relationships, defined above
in the six hypotheses (H1-H6). In this section, we analyze them by applying the
regression analysis technique.</p>
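        <p>Each hypothesis is tested with a simple (one-predictor) linear regression of the form y = a + b*x. As a minimal, self-contained sketch of how the intercept, slope, and determination coefficient R² reported below are obtained (the Likert scores here are hypothetical and do not reproduce the study's data; significance testing of the slope is omitted):</p>

```python
def simple_regression(x, y):
    """Ordinary least squares fit of y = a + b*x.
    Returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)              # sum of squares of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                                      # slope
    a = my - b * mx                                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot                           # determination coefficient
    return a, b, r2

# Hypothetical 5-point Likert scores: predictor QR, response PU.
qr = [2, 3, 3, 4, 4, 5, 2, 3, 4, 5]
pu = [3, 3, 4, 4, 5, 5, 2, 4, 4, 4]
a, b, r2 = simple_regression(qr, pu)
print(f"PU = {a:.3f} + {b:.3f}*QR, R2 = {r2:.3f}")
```

In the study itself such a fit is accompanied by a significance test on the slope; with a library such as SciPy, `scipy.stats.linregress` returns the same slope and intercept together with the p-value.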
        <sec id="sec-3-3-1">
          <title>H1: Quality of results → Perceived usefulness</title>
          <p>The regression equation resulting from the analysis is: PU = 2.376 + 0.477*QR.</p>
          <p>The regression had a high significance level (p &lt; 0.01), which means that H1 was
confirmed. The determination coefficient (R2 = 0.316) showed that 31.6% of the total
variation in perceived usefulness can be explained by variation in quality of results.</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>H2: Tangibility of results → Perceived usefulness</title>
          <p>The regression equation resulting from the analysis is: PU = 3.208 + 0.236*TR.</p>
          <p>The regression had a null significance level (p &gt; 0.1), which means that H2 was not
confirmed.</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>H3: Job Relevance → Perceived usefulness</title>
          <p>The regression equation resulting from the analysis is: PU = 2.86 + 0.348*JR.</p>
          <p>The regression had a medium significance level (p &lt; 0.05), which means that H3
was confirmed. The determination coefficient (R2 = 0.186) showed that 18.6% of the
total variation in perceived usefulness can be explained by variation in job relevance.</p>
        </sec>
        <sec id="sec-3-3-4">
          <title>H4: Minimum actions → Perceived ease of use</title>
          <p>The regression equation resulting from the analysis is: PEOU = 2.733 + 0.314*MA.</p>
          <p>This regression had a null significance level (p &gt; 0.1), which means that H4 was not
confirmed.</p>
        </sec>
        <sec id="sec-3-3-5">
          <title>H5: Perceived usefulness → Intention to use</title>
          <p>The regression equation resulting from the analysis is: ITU = 1.628 + 0.577*PU.</p>
          <p>The regression had a medium significance level (p &lt; 0.05), which means that H5
was confirmed. The determination coefficient (R2 = 0.166) showed that 16.6% of the
total variation in intention to use can be explained by variation in perceived
usefulness.</p>
        </sec>
        <sec id="sec-3-3-6">
          <title>H6: Perceived ease of use → Intention to use</title>
          <p>The regression equation resulting from the analysis is: ITU = 2.881 + 0.298*PEOU.</p>
          <p>The regression had a null significance level (p &gt; 0.1), which means that H6 was not
confirmed.</p>
          <p>Table 2 below summarizes the regression analysis results in terms of the predictive
power (R2) and significance level of the model (p), and the confirmation of the causal
relationships.</p>
          <p>Note that three hypotheses out of six were confirmed using regression analysis
(H1, H3, and H5). This means that perceived usefulness is determined by the
quality of results and by the job relevance of RmFFP for the software practitioner.
In addition, the intention to use RmFFP is determined by perceived usefulness.</p>
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>3.4 Validity evaluation</title>
        <p>To ensure that the obtained results are valid, we present the most
important threats related to our empirical study in Table 3.
* Null: α &gt; 0.1, Low: α &lt; 0.1, Medium: α &lt; 0.05, High: α &lt; 0.01, Very high: α &lt; 0.001
Random heterogeneity of subjects: All the subjects
selected for the empirical study had approximately the
same level of background. We are aware that this
homogeneity reduces the external validity of our
empirical study.</p>
        <p>Reliability of measures: We are aware that measures
based on perceptions are less reliable than objective
measures, since the latter do not involve human judgment.</p>
        <p>However, to diminish this threat, we carried out a
reliability analysis on the survey used, which is explained
below.</p>
        <p>
          Inadequate pre-operational explanation of constructs: To
ascertain whether the constructs are sufficiently well defined,
and, hence, whether the experiment is sufficiently clear, we
conducted a reliability analysis on the survey, calculating
reliability with the Cronbach alpha technique. The
overall value obtained was 0.85, indicating that the items
included in the survey are reliable. However, a design
adjustment of the questions corresponding to the
constructs PU, MA and QR would be required for
further empirical studies, since their corresponding
Cronbach alpha values were lower than 0.7 ([
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]).
        </p>
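        <p>Cronbach's alpha for k items is α = (k/(k-1)) · (1 − Σ item variances / variance of the respondents' total scores). A minimal sketch of this computation (the responses below are hypothetical; the study's survey data are not reproduced here):</p>

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of scores
    from the same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score of each respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical 5-point Likert responses: 3 items x 5 respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(f"Cronbach alpha = {alpha:.2f}")
```

Values above 0.7 are conventionally taken as acceptable reliability, which is the threshold the study applies to the PU, MA and QR constructs.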
        <p>Instrumentation: This is the effect caused by the artefacts
used in the study execution. The requirements
specification of the Car Rental System was reviewed; and
the measurement guide was verified in advance with a
small group of people in order to improve its
understandability.</p>
        <p>Interaction of selection and treatment: This is the effect
of not having a representative population in the
experiment with which to generalize. In our case, we are
aware that more studies with a larger number of subjects
would be appropriate to reconfirm the initial results
obtained.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and further work</title>
      <p>This paper provides a brief introduction to a theoretical model to evaluate the
acceptance of measurement procedures from an individual perspective. The model
includes three types of factors that influence perceptions of usefulness and ease of use
(intrinsic, extrinsic and external factors). An empirical study has been carried out to
verify causal relationships that include the intrinsic and extrinsic factors. The analysis
shows that perceived usefulness is influenced by the job relevance of the people that
use a measurement procedure. With respect to the intrinsic factors, only the
quality of results was found to affect the perception of usefulness; perceived ease of use
was not determined by the minimum actions factor. Furthermore, the results show
that the intention to use a measurement procedure is influenced more strongly by
perceived usefulness than by perceived ease of use.</p>
      <p>We plan to make further adjustments to the questions on the survey to improve the
reliability of certain constructs, such as PU, MA, and QR. In addition, we are aware
that further experimentation with industry practitioners will be appropriate in order to
reconfirm these initial results. Finally, as further empirical studies, we also intend to
consider the influence of software practitioners’ experience on the acceptance of
model-based measurement procedures.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Kasunic</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <source>State of Software Measurement Practice Survey</source>
          , Carnegie Mellon, Software Engineering Institute,
          <year>2006</year>
          , www.sei.cmu.edu/sema/presentations/stateof-survey.pdf
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Abrahão</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomez</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Insfran</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendes</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <article-title>A Model-Driven Measurement Procedure for Sizing Web Applications</article-title>
          ,
          <source>Conference on Model-Driven Engineering Languages and Systems (MODELS</source>
          <year>2007</year>
          ), Nashville, TN, USA,
          <source>September 30-October 5</source>
          ,
          <year>2007</year>
          , LNCS Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Abrahao</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poels</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pastor</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <article-title>A Functional Size Measurement Method for Object-Oriented Conceptual Schemas: Design and Evaluation Issues</article-title>
          .
          <source>Software &amp; System Modelling</source>
          ,
          <volume>5</volume>
          (
          <issue>1</issue>
          ):
          <fpage>48</fpage>
          -
          <lpage>71</lpage>
          , Springer Verlag,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Azzouz</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abran</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>“A Proposed Measurement Role in the Rational Unified Process and its Implementation with ISO 19761: COSMIC-FFP” in Software Measurement European Forum</article-title>
          , Rome, Italy,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Condori-Fernández</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abrahão</surname>
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Pastor</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <article-title>On the Estimation of Software Functional Size from Requirements Specifications</article-title>
          ,
          <source>Journal of Computer Science and Technology (JCST)</source>
          , Springer,
          <volume>22</volume>
          (
          <issue>3</issue>
          ):
          <fpage>358</fpage>
          -
          <lpage>370</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Marín</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pastor</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giachetti</surname>
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Automating the Measurement of Functional Size of Conceptual Models in an MDA Environment</article-title>
          ,
          <source>9th International Conference on Product-Focused Software Process Improvement</source>
          , Italy,
          <year>June 2008</year>
          , pp.
          <fpage>215</fpage>
          -
          <lpage>229</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.B.</given-names>
            <surname>Cooper</surname>
          </string-name>
          and R.W. Zmud,
          <article-title>“Information Technology Implementation Research: A Technological Diffusion Approach”</article-title>
          , Management Science,
          <volume>36</volume>
          (
          <issue>2</issue>
          ):
          <fpage>123</fpage>
          -
          <lpage>139</lpage>
          ,
          <year>1990</year>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W. G.</given-names>
            <surname>Chismar</surname>
          </string-name>
          , S. Wiley-Patton,
          <article-title>Does the Extended Technology Acceptance Model Apply to Physicians?</article-title>
          ,
          <source>36th Annual Hawaii International Conference on System Sciences, IEEE Computer Society</source>
          , Big Island, USA,
          <year>January 2003</year>
          , pp.
          <fpage>160</fpage>
          -
          <lpage>167</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Chau</surname>
            <given-names>P.Y. K.</given-names>
          </string-name>
          ,
          <article-title>An empirical investigation on factors affecting the acceptance of CASE by systems developers</article-title>
          ,
          <source>Journal on Information and Management</source>
          , Elsevier,
          <volume>30</volume>
          (
          <issue>6</issue>
          ):
          <fpage>269</fpage>
          -
          <lpage>280</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Umarji</surname>
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Emurian</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <source>Acceptance Issues in Metrics Program Implementation, Proceedings of the 11th IEEE International Software Metrics Symposium METRICS 05, IEEE Computer Society</source>
          , 2005, Washington, USA, pp.
          <fpage>10</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Gopal</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krishnan</surname>
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mukhopadhyay</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <article-title>Impact of Institutional Forces on Software Metrics Programs</article-title>
          ,
          <source>IEEE Transactions on Software Engineering</source>
          ,
          <volume>31</volume>
          (
          <issue>8</issue>
          ):
          <fpage>679</fpage>
          -
          <lpage>695</lpage>
          , August
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Gopal</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krishnan</surname>
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mukhopadhyay</surname>
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Goldenson</surname>
          </string-name>
          ,
          <article-title>Measurement Programs in Software Development: Determinants of Success</article-title>
          ,
          <source>IEEE Transactions on Software Engineering</source>
          ,
          <volume>28</volume>
          (
          <issue>9</issue>
          ):
          <fpage>863</fpage>
          -
          <lpage>875</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Condori-Fernández</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pastor</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <article-title>Towards a Theoretical Model for Evaluating the Acceptance of Model-Driven Measurement Procedures</article-title>
          ,
          <source>Proceedings of the 20th International Conference on Software Engineering &amp; Knowledge Engineering (SEKE 2008)</source>
          , San Francisco, USA, July 1-3,
          <year>2008</year>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Davis</surname>
            <given-names>F. D.</given-names>
          </string-name>
          ,
          <article-title>Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology</article-title>
          ,
          <source>MIS Quarterly</source>
          ,
          <volume>13</volume>
          (
          <issue>3</issue>
          )
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Basili</surname>
            <given-names>V. R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Rombach</surname>
            <given-names>H. D.</given-names>
          </string-name>
          ,
          <article-title>The TAME Project: Towards Improvement-Oriented Software Environments</article-title>
          ,
          <source>IEEE Transactions on Software Engineering</source>
          ,
          <volume>14</volume>
          (
          <issue>6</issue>
          ):
          <fpage>758</fpage>
          -
          <lpage>773</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Juristo</surname>
            <given-names>N.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Moreno</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <source>Basics of Software Engineering Experimentation</source>
          , Kluwer Academic Publishers, Boston,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Condori-Fernández</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pastor</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <article-title>An Empirical Study on the Likelihood of Adoption in Practice of a Size Measurement Procedure for Requirements Specification</article-title>
          ,
          <source>Sixth International Conference on Quality Software (QSIC 2006)</source>
          , Beijing, China, October
          <year>2006</year>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Pastor</surname>
            <given-names>O.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Molina</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <source>Model-Driven Architecture in Practice</source>
          , Springer, Berlin Heidelberg New York,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Garson</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <article-title>Scales and standard measures</article-title>
          , from
          <source>Statnotes</source>
          , North Carolina State University,
          <year>1998</year>
          , last updated March 2008. http://www2.chass.ncsu.edu/garson/pa765/standard.htm.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          ISO, ISO/IEC 19761,
          <source>Software Engineering - COSMIC-FFP - A Functional Size Measurement Method</source>
          , International Organization for Standardization (ISO), Geneva,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Moody</surname>
            <given-names>D. L.</given-names>
          </string-name>
          ,
          <article-title>The method evaluation model: a theoretical model for validating information systems design methods</article-title>
          ,
          <source>11th European Conference on Information Systems (ECIS 2003)</source>
          , Naples, Italy, 16-21 June
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>