<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Evidence-Based Software Portfolio Management</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hennie Huijgens</string-name>
          <email>h.k.m.huijgens@tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Delft University of Technology</institution>
          ,
          <addr-line>Delft</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <fpage>21</fpage>
      <lpage>26</lpage>
      <abstract>
        <p>In this paper, we describe the research proposal for an approach for Evidence-Based Software Portfolio Management: a new way to help software companies steer their software portfolios based on cost, duration, and defects found on the one hand, and stakeholder satisfaction and perceived value on the other. The research approach is based on instruments such as a Cost / Duration Matrix, the identification of success and failure factors for software projects, and the collection of data on finalized software projects from portfolios of different companies in a research repository.</p>
      </abstract>
      <kwd-group>
        <kwd>Evidence-Based Software Engineering</kwd>
        <kwd>Software Portfolio Management</kwd>
        <kwd>Software Benchmarking</kwd>
        <kwd>Software Economics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>Where many other studies use either a quantitative approach (e.g.
analyze core metrics) or a qualitative approach (e.g. perform
surveys or interviews) to analyze software projects, we combine
both ways and look at a company’s software project portfolio from
a holistic point of view. The goal of our research is to combine a
quantitative, data-driven approach to the analysis of finalized software
project portfolios with a qualitative, survey-based approach in
order to identify factors related to project success and failure, in
combination with an approach to measure and analyze stakeholder
satisfaction and perceived value of software projects.</p>
      <p>Copyright © 2015 for this paper by its authors. Copying permitted for
private and academic purposes.</p>
      <p>In this paper, we describe the research proposal for the development
of evidence-based software portfolio management as a practical
approach to organizing and decision-making with regard to large
portfolios of software projects in information-intensive companies.
In particular, the main contributions in the current state of the
research are:</p>
      <p>1. We propose a Cost / Duration Matrix as an instrument for
analysis of good practice and bad practice in large,
companywide portfolios of software projects.</p>
      <p>2. We identify success and failure factors for software projects,
based on analysis of a large subset of data of finalized software
projects from three different companies.</p>
      <p>3. We analyze series of software releases in order to identify
additional factors that contribute to projects being
best-in-class.</p>
      <p>4. We propose a light-weight value measurement technique
based on quantitative analysis and post-project interviews.</p>
      <p>5. We provide data on industrial software projects for a
standardized set of metrics: project size, cost, duration, and
defects. We contrast these core metrics with collected data on
stakeholder satisfaction and perceived value, and look for
links between them.</p>
      <p>The remainder of this paper is organized in the following way: In
Section 2 we outline relevant prior work. In Section 3 we describe
our research objectives and questions. In Section 4 our research
approach is described. Section 5 describes the most important metrics
with regard to our research, and Section 6 the data analysis
methods and techniques that we apply. In Section 7 we evaluate
validity threats. Finally, Section 8 includes a summary of the
current status of our research and planned next steps.</p>
    </sec>
    <sec id="sec-2">
      <title>2. BACKGROUND AND RELATED WORK</title>
      <p>
        In this section we describe a brief survey of the background of our
research area and related work with regard to our research subject.
A common idea in much of the research performed in the previous
millennium is that success and failure of software projects are
interconnected with process-based activities: in other words, follow
the process and success will come [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. More
recent work emphasizes the success and failure factors of shorter
iterations due to an agile way of working [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
From the millennium onwards, concepts such as agile and added
value have become important factors in software engineering [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. One of the effects seen in industry nowadays is that the
instruments of the “old” world, such as algorithm-based estimation,
functional size measurement, and measurement and analysis, seem
not to go together with “new” tools such as story points, planning
poker, and a reduced focus on control and documentation.
      </p>
      <p>
        Shepperd argues that “the primary goal of more accurate cost
prediction systems remains largely unachieved” [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Software
engineering economics is likely to remain very challenging, as is
shown, for example, by recent research that undermines the
long-lasting application of algorithmic cost models [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Nevertheless,
the need for good economic models will grow rather than diminish
as software becomes increasingly ubiquitous [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Besides that,
combinations of effort estimation methods in many cases show
better results than single effort estimation methods [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
In practice many software companies perform benchmarking of
their software activities, often based on measurement of the
functional size of projects and software applications [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Yet, a
growing variety of available benchmarks, including large
differences in outcomes of analyses on different benchmark
sources, does not always help to make life easier for decision
makers involved in software development [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        Varieties of value-based software engineering have been examined [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. However, a clear link with existing approaches that focus
on measurements such as project size, project cost, project duration,
and number of defects is not to be found. A challenge in industrial
practice is that usually several approaches for estimation,
monitoring and control, and benchmarking of software projects are
in place, and that replacing an organizational process at once is not
feasible due to technical and social issues. However, the change
could be introduced incrementally, as is found for example in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
Recent research on motivation [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] shows that, although
it is difficult to quantify, motivation is considered to be an important
factor in software developer productivity. There are also
suggestions that low motivation is an important factor in software
development project failure [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. RESEARCH OBJECTIVES</title>
      <p>We define as our research objective: “an integrated approach that
links existing measurements based on project size, cost, durations,
and defects of software projects with a limited set of relatively easy
to collect additional measures on stakeholder satisfaction and
perceived value, and additional qualitative research on the
backgrounds of project success and failure”.</p>
      <p>Based on the above we define the following research questions:
RQ1: In what way are project size, project cost, project duration,
and process quality (measured in number of defects) in a software
project portfolio correlated?
RQ2: What factors can be found that influence a company’s
software project portfolio in a positive or negative way?
RQ3: What factors can be found that characterize ‘best-in-class’
software projects?
RQ4: Are function points (FPs) compatible with story points (SPs)
on agile projects?
RQ5: Can a statistical, empirical, evidence-based pricing approach
for software engineering be used as a single instrument (without a
connection with expert judgment) in distributed environments to
create cost transparency and performance management of software
project portfolios?
RQ6: In what way are project size, project cost, project duration,
and process quality (measured in number of defects) in a software
project portfolio correlated with stakeholder satisfaction of
finalized software projects?</p>
      <p>RQ6.1: Is process quality, measured in number of defects
found during a project, an early indicator for stakeholder
satisfaction?
RQ7: In what way are project size, project cost, project duration,
and process quality (measured in number of defects) in a software
project portfolio correlated with perceived value of finalized
software projects?</p>
      <p>RQ7.1: Is project size, measured in function points (FPs), an
early indicator for perceived value of finalized software
projects?</p>
    </sec>
    <sec id="sec-4">
      <title>4. PROPOSED APPROACH</title>
      <p>
        In this section we describe the approach that we use in order to
answer the research questions as stated above. Because we perform
our research in close cooperation with software companies (to be
read as information-intensive companies, such as banks, telecom
companies, governmental organizations), we set up the research in
a way that fits with practice. Where appropriate, we perform case
studies [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. The case studies that we perform are mixed
studies: we perform both quantitative and qualitative research on
the projects within a company’s or organization’s portfolio as a
whole. Our focus is not to study single software projects, but
instead to look at the effects of all software projects performed over a
period of time in a portfolio as a whole; by doing so we expect to
analyze both good practice projects and bad practice projects.
Where applicable we will use electronic surveys among
stakeholders of software projects, supplemented with
unstructured interviews, as techniques to challenge findings from the
quantitative analysis.
      </p>
      <p>A precondition that limits our research approach is the fact that we
perform research in real, live organizational environments.
Therefore, the approach must not interfere with the daily operation
of the studied software projects. Surveys should impose a limited
burden on people, and the analysis should preferably be useful for
improvement purposes in daily operations.</p>
    </sec>
    <sec id="sec-5">
      <title>5. IMPORTANT METRICS</title>
      <p>For our research we make use of an existing data set of 352
finalized software projects from three different organizations. This
research repository was collected over a period of four years, preceding
our research. Table 1 gives an overview of the organization of this
research repository. During our research we continue the collection
of data of finalized software projects; the research repository will
mature during the development of the research, both in number of
projects and in applied metrics.</p>
      <p>Based on the collected metrics as inventoried in Table 1 we
calculate three key performance indicators:
1. Cost per FP;
2. Duration per FP;
3. Defects per FP.</p>
      <sec id="sec-5-3">
        <title>Defects per FP.</title>
        <p>For all three indicators we use project size (FPs) as the weighting
factor (instead of number of projects). In order to measure
stakeholder satisfaction and perceived value we ask all stakeholders
of a finalized software project (e.g. project manager, business
representative, product owner, business analyst, scrum master,
developer, and tester) to rate scores for both metrics on a 5-point
scale. Stakeholder satisfaction is measured for both the project’s
process and the deliverables of the project (the product). Perceived
value is measured for four aspects: a company’s customer, financial,
internal process, and innovation aspects.</p>
        <p>Year when a project was finalized; the following Go Live years were applicable: 2008 (32),
2009 (59), 2010 (81), 2011 (131), 2012 (41), 2013 (10).</p>
        <p>Customer’s business sector; the following BD were applicable: Finance &amp; Risk (54),
Internet &amp; Mobile (54), Payments (50), Client &amp; Account Management (incl. CRM
systems) (46), Savings &amp; Loans (40), Organization (incl. HRM) (31), Call Centre Solutions
(21), Mortgages (21), Data warehouse &amp; BI (18), Front Office Solutions (17).</p>
        <p>Primary programming language; the following PPL were applicable: JAVA (154),
.NET (59), COBOL (55), ORACLE (29), SQL (9), 3GL (8; it is unknown what specific
languages were applicable here), Visual Basic (6), RPG (6), FOCUS (5), PowerBuilder (5),
PRISMA (4), MAESTRO (3). In the analysis 4th Generation (1), PL1 (1), JSP (1), C++ (1),
Clipper (1), Document (1), PL/SQL (1), Siebel (1) and Package (1; it is unknown what specific
language was applicable) were referred to as Other.</p>
        <p>Classification of the used delivery model; two DM were applicable: Structured (e.g.
Waterfall) (307) and Agile (Scrum) (45). One project reported as DM RUP is included in
the analysis of Structured.</p>
        <p>Classification of the development; the following DC were applicable: New development
(173), Major enhancement (25-75% new) (124), Minor enhancement (5-25% new) (27),
Conversion (28).</p>
        <p>Characteristics of a specific project (multiple keywords could be mapped to one project; on
one project no keyword was mapped); the following keywords were applicable:
Single-application (270), Business driven (150), Release-based (one application) (144), Once-only
project (122), Phased project (part of program) (65), Fixed, experienced team (62),
Technology driven (58), Steady heartbeat (49), Dependencies with other systems (41),
Migration (35), Rules &amp; Regulations driven (33), Multi-application release (21), Many
team changes, inexperienced team (17), Package with customization (16), Legacy (15),
Security (14), Pilot; Proof of Concept (10), Bad relation with external supplier (9), New
technology, framework solution (3), Package off-the-shelf (1).</p>
        <p>For every project in the repository the measurements indicated below are
inventoried. No occurrences are indicated for the five measures, because these differ for every
measured project.</p>
        <p>Size of a project in Function Points (FPs).</p>
        <p>Duration of a project in Months; measured from the start of Project Initiation to (technical)
Go Live.</p>
        <p>Cost of a project in Euros; measured from the start of Project Initiation to (technical) Go
Live.</p>
        <p>Effort spent in a project in Person Hours (PHRs); measured from the start of Project
Initiation to (technical) Go Live.</p>
        <p>The number of errors or faults found in a project from System Integration Test to
(technical) Go Live. Defects were not administrated for all projects; for 172 projects defect
info was recorded in the repository.</p>
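As a minimal sketch, the three FP-weighted indicators can be computed as follows; the project records and field names here are hypothetical illustrations, not the repository’s actual schema:

```python
# Minimal sketch of the three FP-weighted key performance indicators.
# The project records and field names are hypothetical illustrations.
projects = [
    {"fp": 100, "cost_eur": 80_000, "months": 6.0, "defects": 12},
    {"fp": 250, "cost_eur": 150_000, "months": 9.0, "defects": 20},
    {"fp": 40, "cost_eur": 50_000, "months": 4.0, "defects": 3},
]

def weighted_kpis(projects):
    """Weight by total FPs delivered rather than by number of projects."""
    total_fp = sum(p["fp"] for p in projects)
    return {
        "cost_per_fp": sum(p["cost_eur"] for p in projects) / total_fp,
        "duration_per_fp": sum(p["months"] for p in projects) / total_fp,
        "defects_per_fp": sum(p["defects"] for p in projects) / total_fp,
    }

kpis = weighted_kpis(projects)
```

Weighting by total FPs keeps one very small project from dominating an indicator the way an unweighted per-project average would.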
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. DATA ANALYSIS TECHNIQUES</title>
      <p>In our research we make use of different data analysis techniques,
as described in the following paragraphs.</p>
    </sec>
    <sec id="sec-7">
      <title>6.1 The Cost / Duration Matrix</title>
      <p>The most important data analysis instrument that we use is a
so-called Cost / Duration Matrix (see Figure 1). This matrix is a model
based on power regression of project cost (Euros) versus project
size (FPs) and project duration (months) versus project size (FPs).
For both regressions the percentage deviation from the mean is
calculated for each software project. These percentage deviations
for cost and duration are plotted in a scatter chart: the Cost /
Duration Matrix.</p>
      <p>
        As Figure 1 visualizes, the matrix shows larger or smaller dots,
depending on the size in FPs of a specific project. A color ranging
from blue to red indicates the process quality (number of defects
per FP) of each project, where blue stands for a good process quality
and red for a bad quality (meaning more than average defects per
FP for a specific project). The Cost / Duration Matrix (see Figure
1) is used as a model to visualize and assess the performance in
terms of cost, duration and quality of software projects based on
four quadrants that describe specific characterizations [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]:
Good Practice (upper right): This quadrant shows software
projects that scored better than average of the total repository
(or a specific subset of the repository) for both cost and
duration.
      </p>
      <p>Cost over Time (bottom right): In this quadrant software
projects are reported that scored better than the average of the
total repository (or a specific subset of the repository) for cost,
yet worse than average for duration.</p>
      <p>Bad Practice (bottom left): This quadrant holds software
projects that scored worse than average of the total repository
(or a specific subset of the repository) for both cost and
duration.</p>
      <p>Time over Cost (upper left): In this quadrant software projects
are plotted that scored better than average of the total
repository (or a specific subset of the repository) for duration,
and worse than average for project cost.</p>
      <p>Keep in mind that the underlying denominator for all software
projects in the Cost / Duration Matrix is functional size
(measured in FPs). Due to this we can compare the performance in
terms of cost, duration, defects found, satisfaction, and value of
projects with different sizes with each other.</p>
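The construction of the matrix can be sketched as follows. This is a minimal illustration under assumed conventions (a log-log least-squares fit for the power regression, and a negative deviation, i.e. cheaper or faster than the trend, counted as better than average), not the exact implementation used in our research:

```python
import math

def power_fit(sizes, values):
    """Least-squares fit of value = a * size**b, done in log-log space."""
    lx = [math.log(x) for x in sizes]
    ly = [math.log(y) for y in values]
    n = len(sizes)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

def pct_deviation(size, value, a, b):
    """Percentage deviation of one project from the fitted trend line."""
    expected = a * size ** b
    return 100.0 * (value - expected) / expected

def quadrant(dev_cost, dev_duration):
    # Assumed sign convention: a negative deviation means cheaper or
    # faster than the repository trend, i.e. better than average.
    if dev_cost < 0 and dev_duration < 0:
        return "Good Practice"
    if dev_cost < 0:
        return "Cost over Time"   # cheaper, but slower
    if dev_duration < 0:
        return "Time over Cost"   # faster, but more expensive
    return "Bad Practice"
```

Each project is then plotted at its (cost deviation, duration deviation) coordinates, with dot size proportional to its FPs and color mapped to defects per FP.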
    </sec>
    <sec id="sec-8">
      <title>6.2 A Software Project Benchmark Tool</title>
      <p>We develop a Software Project Benchmark Tool that enables
practitioners in industry to benchmark a subset of
finalized software projects against our research repository. The tool
is based on the Cost / Duration Matrix and makes it possible for
measurement practitioners in industry to upload a subset of
finalized software projects from their own organization and to
benchmark their performance in terms of cost, duration, and defects
found against that of comparable projects in our research repository.</p>
    </sec>
    <sec id="sec-9">
      <title>6.3 Stakeholder Satisfaction and Value</title>
      <p>In order to add new metrics such as Stakeholder Satisfaction and
Perceived Value of finalized software projects to our research
repository, we build a survey questionnaire that is sent to
stakeholders once software projects are finalized. We add the
metrics resulting from this questionnaire to our research repository
and relate the outcomes to software projects in the four quadrants
of the Cost / Duration Matrix.</p>
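A minimal sketch of how such questionnaire results could be aggregated per project; the response records, role names, and field names are made-up examples, not the actual questionnaire design:

```python
# Hypothetical survey responses: each stakeholder of a finalized project
# rates satisfaction and perceived value on a 5-point scale.
responses = {
    "project-A": [
        {"role": "product owner", "satisfaction": 4, "value": 5},
        {"role": "developer", "satisfaction": 3, "value": 4},
        {"role": "tester", "satisfaction": 4, "value": 4},
    ],
}

def project_scores(responses):
    """Average the 5-point ratings per project across its stakeholders."""
    scores = {}
    for project, answers in responses.items():
        n = len(answers)
        scores[project] = {
            "satisfaction": sum(r["satisfaction"] for r in answers) / n,
            "value": sum(r["value"] for r in answers) / n,
        }
    return scores

scores = project_scores(responses)
```

The per-project averages can then be related to the quadrant in which the project falls in the Cost / Duration Matrix.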
    </sec>
    <sec id="sec-10">
      <title>6.4 Prioritize projects in a portfolio</title>
      <p>Finally, we develop and describe Evidence-Based Software
Portfolio Management as an approach for software companies to
prioritize software activities within their software project portfolio,
based on quantification of project size, cost, duration, defects
found, stakeholder satisfaction, and perceived value of finalized
software projects. We test this approach as a whole in a different
information-intensive organization to examine whether the
outcomes correlate with those of the companies already
represented in our research repository, and to analyze whether the approach can
be used in practice as a valuable addition to tools already in place
for a software project portfolio management capability in
organizations.</p>
    </sec>
    <sec id="sec-11">
      <title>7. VALIDITY THREATS</title>
      <p>With regard to validity constraints we assess the following threats.
We use function point analysis (FPA) as a way to normalize
software projects and to make it possible to compare the performance
of projects with different sizes. We use functional documentation
as a source for FPA. A consequence is that low-quality
documentation can lead to low-quality FPAs. However, we
thoroughly review all sets of documentation on completeness and
correctness and have FPAs performed by experienced, and in many
cases certified, experts. FPAs are reviewed by different experts than
the ones that performed the count itself, to prevent bias. With
regard to data quality we argue that all project data is reviewed by
the applicable project managers. All data is discussed with the
applicable company management and the financial controller.
By normalizing all project data with the functional size in FPs we
safeguard internal validity, the extent to which a causal conclusion is
warranted by our study. Due to this we can objectively compare
the performance of all software projects, in order to minimize
systematic error. The effect of outliers is limited and the risk of
bias is mitigated by the diversity of projects and
business domains within each subject company, the number of
software projects, and the fact that we measure and analyze
software project portfolios as a whole in an empirical way.</p>
    </sec>
    <sec id="sec-12">
      <title>8. CURRENT STATUS AND NEXT STEPS</title>
    </sec>
    <sec id="sec-13">
      <title>8.1 Finalized research</title>
      <p>At this moment the following research results are in place:</p>
      <sec id="sec-13-1">
        <title>8.1.1 Good Practice versus Bad Practice (RQ1/RQ2)</title>
        <p>
          We analyzed a dataset containing 352 finalized software projects,
with the goal to discover what factors affect software project
performance, and what actions can be taken to increase project
performance when building a software project portfolio. The
software projects are classified in four quadrants of a cost/duration
matrix: analysis is performed on factors that are strongly related to
two of those quadrants, Good Practices and Bad Practices. A
ranking is performed on the factors based on statistical significance,
resulting in an inventory of ‘what factors should be embraced when
building a project portfolio?’ (Success Factors), and ‘what factors
should be avoided when doing so?’ (Failure Factors). This research
result is documented in a paper that was accepted at ICSE 2014,
SEIP-track [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ].
        </p>
      </sec>
      <sec id="sec-13-2">
        <title>8.1.2 Best-in-class software projects (RQ3)</title>
        <p>
          We aimed to identify distinguishing factors in software releases.
For this purpose we analyzed the metrics of 26 software projects.
These projects were release-based deliveries from two stable,
experienced development teams in a Banking company. During the
measurement period both teams transformed from a plan-driven
delivery model (waterfall) to an agile approach (Scrum). Overall,
we observed that these small release-based projects differ largely
from non-release-based projects. Our research indicates that a
combination of release-based working, a fixed and experienced
development team, and a steady heartbeat contributes to
performances that can be characterized as best practice. This
research result is documented in a paper that was accepted at
IWSM-Mensura 2013 [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ].
        </p>
        <p>A case study that replicates the research above, with additional
qualitative research on a series of best-in-class releases from
another Telecom company is described in a paper that is to be
submitted.</p>
      </sec>
      <sec id="sec-13-3">
        <title>8.1.3 Story Points versus Function Points (RQ4)</title>
        <p>
          In order to find differences and similarities between two widely
used size metrics, we replicated a study on the relation between story
points and function points performed in 2011 by a group of
Brazilian researchers. We used data collected in a Banking
organization. Based on a statistical correlation test we conclude that
it appears too early to make generic claims on the relation between
function points and story points; in fact FSM-theory seems to
underpin that such a relationship is a spurious one. The results of
this research were published in a paper that was accepted at
WETSoM 2014 [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ].
        </p>
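The kind of statistical correlation test applied here can be illustrated with a Spearman rank correlation; the implementation below is a generic standard-library sketch, and the sample data is invented for illustration, not the study’s dataset:

```python
def _ranks(xs):
    """Rank values from 1 upward, averaging the ranks of ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: function point and story point sizes of six projects.
fps = [120, 45, 300, 80, 150, 60]
sps = [34, 13, 55, 21, 40, 21]
rho = spearman(fps, sps)
```

A high rank correlation in one organization’s sample would still not license a generic claim about the FP/SP relationship, which is the caution drawn in the study above.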
      </sec>
      <sec id="sec-13-4">
        <title>8.1.4 Pricing via functional size (RQ5)</title>
        <p>
          We analyzed how a medium-sized west-European telecom
company experienced a worsening trend in performance, indicating
that the organization did not learn from history, in combination with
much time and energy spent on preparation and review of project
proposals. In order to create more transparency in the supplier
proposal process a pilot was started on Functional Size
Measurement pricing (FSM-pricing). In our research we evaluated
the implementation of FSM-pricing in the software engineering
domain of the company, as an instrument useful in the context of
software management and supplier proposal pricing. We found that
a statistical, empirical, evidence-based pricing approach for
software engineering, as a single instrument (without a connection
with expert judgment), can be used in distributed environments to
create cost transparency and performance management of software
project portfolios. A research paper on our research results is
accepted at ESEM 2015 [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-14">
      <title>8.2 Studies in preparation</title>
      <p>The following research topics are to be studied in the remaining
part of the research period:</p>
      <sec id="sec-14-1">
        <title>8.2.1 Software project benchmark tool</title>
        <p>
          We developed a tool based on the cost/duration matrix as used in
[
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] and [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] with the purpose of supporting software companies in
benchmarking the performance of their own software delivery against
400 finalized software projects in our research repository. We
validate the tool by analyzing the performance of a subset of
finalized software projects from the ISBSG repository [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ].
        </p>
      </sec>
      <sec id="sec-14-2">
        <title>8.2.2 Stakeholder Satisfaction and Perceived Value (RQ6 and RQ7)</title>
        <p>As a point on the horizon we will focus on research on the
quantification of Stakeholder Satisfaction and Perceived Value of
software projects, related to the cost/duration matrix that we
defined in earlier research (see Figure 1). We enrich this model by
mapping Stakeholder Satisfaction and Perceived Value with regard
to a company’s customers, financial, internal process and
innovation aspects to cost, duration and quality of finalized
software projects. A paper on this subject, including a survey on
five finalized projects in a Telecom company is in preparation.</p>
      </sec>
    </sec>
    <sec id="sec-15">
      <title>ACKNOWLEDGMENT</title>
      <p>I thank Arie van Deursen and Rini van Solingen for their great
support and work as advisors for my PhD activities. Furthermore I
thank all companies that support our research for their generosity
in allowing us to use company data for research purposes.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rainer</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Baddoo</surname>
          </string-name>
          ,
          <article-title>"Implementing Software Process Improvement: An Empirical Study,"</article-title>
          <source>Software Process Improvement and Practice</source>
          , vol.
          <volume>7</volume>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Reel</surname>
          </string-name>
          ,
          <article-title>"Critical Success Factors in Software Projects," IEEE Software, Vols</article-title>
          . May-June, pp.
          <fpage>18</fpage>
          -
          <lpage>23</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Dyba</surname>
          </string-name>
          ,
          <article-title>"An Empirical Investigation of the Key Factors for Success in Software Process Improvement,"</article-title>
          <source>IEEE Transactions on Software Engineering</source>
          , vol.
          <volume>31</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>410</fpage>
          -
          <lpage>424</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Dybå</surname>
          </string-name>
          ,
          <article-title>"Factors of Software Process Improvement Success in Small and Large Organizations: an Empirical Study in the Scandinavian Context,"</article-title>
          <source>in ESEC/FSE</source>
          , Helsinki, Finland,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Niazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wilson</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Zowghi</surname>
          </string-name>
          ,
          <article-title>"Critical Success Factors for Software Process Improvement Implementation: An Empirical Study,"</article-title>
          <source>Software Process Improvement and Practice</source>
          , vol.
          <volume>11</volume>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>211</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rainer</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <article-title>"Key success factors for implementing software process improvement: a maturity-based analysis,"</article-title>
          <source>Journal of Systems and Software</source>
          , vol.
          <volume>62</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>84</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Stelzer</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Mellis</surname>
          </string-name>
          ,
          <article-title>"Success Factors of Organizational Change in Software Process Improvement,"</article-title>
          <source>Software Process Improvement and Practice</source>
          , vol.
          <volume>4</volume>
          , pp.
          <fpage>227</fpage>
          -
          <lpage>250</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Chow</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.-B.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <article-title>"A survey study of critical success factors in agile software projects,"</article-title>
          <source>The Journal of Systems and Software</source>
          , vol.
          <volume>81</volume>
          , pp.
          <fpage>961</fpage>
          -
          <lpage>971</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Misra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          and
          <string-name>
            <given-names>U.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>"Identifying some important success factors in adopting agile software development practices,"</article-title>
          <source>The Journal of Systems and Software</source>
          , vol.
          <volume>82</volume>
          , pp.
          <fpage>1869</fpage>
          -
          <lpage>1890</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sutherland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Viktorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Blount</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Puntikov</surname>
          </string-name>
          ,
          <article-title>"Distributed Scrum: Agile Project Management with Outsourced Development Teams,"</article-title>
          <source>in 40th International Conference on System Sciences, Hawaii</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Boehm</surname>
          </string-name>
          ,
          <article-title>"A View of 20th and 21st Century Software Engineering,"</article-title>
          in IEEE International Conference on Software Engineering (ICSE), Shanghai, China,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shepperd</surname>
          </string-name>
          ,
          <article-title>"Software project economics: a roadmap,"</article-title>
          <source>in Future of Software Engineering (FOSE)</source>
          , Minneapolis, USA,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Suelmann</surname>
          </string-name>
          ,
          <article-title>"Putnam's Effort-Duration Trade-Off Law: Is the Software Estimation Problem Really Solved?,"</article-title>
          <source>in IEEE IWSM-Mensura</source>
          , Rotterdam, The Netherlands,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kocaguneli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Menzies</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Keung</surname>
          </string-name>
          ,
          <article-title>"On the value of ensemble effort estimation,"</article-title>
          <source>IEEE Transactions on Software Engineering</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>1403</fpage>
          -
          <lpage>1416</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Minkiewicz</surname>
          </string-name>
          ,
          <article-title>"The Evolution of Software Size: A Search for Value,"</article-title>
          <source>Software Engineering Technology</source>
          , vol. March/April, pp.
          <fpage>23</fpage>
          -
          <lpage>26</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <article-title>"Sources of Software Benchmarks,"</article-title>
          Capers Jones &amp; Associates,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Boehm</surname>
          </string-name>
          ,
          <article-title>"Value-Based Software Engineering,"</article-title>
          <source>ACM SIGSOFT Software Engineering Notes</source>
          , vol.
          <volume>28</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Biffl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aurum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Boehm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Erdogmus</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Grünbacher</surname>
          </string-name>
          ,
          <source>Value-Based Software Engineering</source>
          , Berlin Heidelberg: Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Faulk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Harmon</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Raffo</surname>
          </string-name>
          ,
          <article-title>"Value-Based Software Engineering (VBSE): A Value-Driven Approach to Product-Line Engineering,"</article-title>
          <source>in First International Conference on Software Product-Line Engineering</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.</given-names>
            <surname>Keung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jeffery</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kitchenham</surname>
          </string-name>
          ,
          <article-title>"The challenge of introducing a new software cost estimation technology into</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Beecham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baddoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Robinson</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <article-title>"Motivation in Software Engineering: A Systematic Literature Review,"</article-title>
          <source>Elsevier - Information and Software Technology</source>
          , vol.
          <volume>51</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>233</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baddoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Beecham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hall</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Robinson</surname>
          </string-name>
          ,
          <article-title>"Models of motivation in software engineering,"</article-title>
          <source>Elsevier - Information and Software Technology</source>
          , vol.
          <volume>51</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>233</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Verner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Babar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Cerpa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hall</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Beecham</surname>
          </string-name>
          ,
          <article-title>"Factors that motivate software engineering teams: A four country empirical study,"</article-title>
          <source>Elsevier - The Journal of Systems and Software</source>
          , vol.
          <volume>92</volume>
          , pp.
          <fpage>115</fpage>
          -
          <lpage>127</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Host</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rainer</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Regnell</surname>
          </string-name>
          ,
          <source>Case Study Research in Software Engineering; Guidelines and Examples</source>
          , Hoboken, New Jersey, USA: John Wiley &amp; Sons,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>R.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <source>Case Study Research - Design and Methods</source>
          , Los Angeles, USA: Sage Publications,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>H.</given-names>
            <surname>Huijgens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. v.</given-names>
            <surname>Solingen</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. v.</given-names>
            <surname>Deursen</surname>
          </string-name>
          ,
          <article-title>"How To Build a Good Practice Software Project Portfolio?,"</article-title>
          <source>ICSE Companion 2014: Companion Proceedings of the 36th International Conference on Software Engineering (SEIP)</source>
          , IEEE, pp.
          <fpage>64</fpage>
          -
          <lpage>73</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>H.</given-names>
            <surname>Huijgens</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. v.</given-names>
            <surname>Solingen</surname>
          </string-name>
          ,
          <article-title>"Measuring Best-in-Class Software Releases,"</article-title>
          <source>IWSM-MENSURA 2013 Joint Conference of the 23rd International Workshop on Software Measurement and the 2013 Eighth International Conference on Software Process and Product Measurement</source>
          , IEEE, pp.
          <fpage>137</fpage>
          -
          <lpage>146</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>H.</given-names>
            <surname>Huijgens</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. v.</given-names>
            <surname>Solingen</surname>
          </string-name>
          ,
          <article-title>"A replicated study on correlating agile team velocity measured in function and story points,"</article-title>
          <source>WETSoM 2014 Proceedings of the 5th International Workshop on Emerging Trends in Software Metrics</source>
          , ACM, pp.
          <fpage>30</fpage>
          -
          <lpage>36</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>H.</given-names>
            <surname>Huijgens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gousios</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. v.</given-names>
            <surname>Deursen</surname>
          </string-name>
          ,
          <article-title>"Pricing via Functional Size: A Case Study of 77 Outsourced Projects,"</article-title>
          <source>in IEEE 9th International Symposium on Empirical Software Engineering and Measurement (ESEM) (in press)</source>
          , Beijing, China,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>ISBSG</surname>
          </string-name>
          , "International Software Benchmarking Standards Group,"
          <year>1997</year>
          . [Online]. Available: http://www.isbsg.org/isbsgnew.nsf/webpages/~GBL~Home. [Accessed
          <year>2014</year>
          ].
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>