<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Baselining Wireless Internet Service Development - An Experience Report</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fabio Bella</string-name>
          <email>bella@iese.fraunhofer.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jürgen Münch</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexis Ocampo</string-name>
          <email>ocampo@iese.fraunhofer.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Fraunhofer Institute for Experimental Software Engineering (IESE)</institution>
          ,
          <addr-line>Sauerwiesen 6, 67661, Kaiserslautern</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2004</year>
      </pub-date>
      <abstract>
        <p>New, emerging domains such as the engineering of wireless Internet services are characterized by a lack of experience based on quantitative data. Systematic tracking and observation of representative pilot projects can be seen as one means to capture experience, get valuable insight into a new domain, and build initial baselines. This helps to improve the planning of real development projects in business units. This article describes an approach to capture software development experience for the wireless Internet services domain by conducting and observing a series of case studies in the field. Initial baselines concerning effort distribution from the development of two wireless Internet pilot services are presented. Furthermore, major domain-specific risk factors are discussed based on the results of project retrospectives conducted with the developers of the services.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 INTRODUCTION</title>
      <p>The engineering of wireless Internet services is an
emerging application domain characterized by quickly
evolving technology, upcoming new devices, new
communication protocols, support for new, different types
of media, and varying and limited communication
bandwidth, together with the need for new business models that
will fit in with completely new service portfolios. Examples
of new wireless Internet services can be expected in the
domains of mobile entertainment, telemedicine, travel
services, tracking and monitoring services, or mobile trading
services.</p>
      <p>Due to its novelty, this domain lacks explicit,
quantitatively based experience related to technologies,
techniques, and suitable software development process
models. Unreliable project planning, incorrect effort
estimates, and high risk with respect to process, resource,
and technology planning, as well as with regard to the
quality of the resulting product are inevitable consequences
of this lack of experience. One means to capture experience
and get valuable insight into a new domain is systematic
tracking and observation of representative pilot projects.</p>
      <p>This paper presents a study consisting of two case
studies aimed at quantitative baselining. Additionally, the case
studies were used to gain qualitative experience. The article
aims at giving managers and developers a sense of the
behavior of projects in the wireless Internet domain.</p>
      <p>The approach followed in this study is based on a
combination of descriptive process modeling, GQM-based
measurement, and collection of lessons learned: Descriptive
process modeling is applied in order to understand and
improve the software development process as applied
within the observed organizations; GQM-based
measurement is practiced to gather quantitative experience,
whereas qualitative aspects are addressed by the
retrospective-based collection of lessons learned. Therefore, this
study should be seen as a challenging attempt to
characterize a promising new application domain not only from a
qualitative, but also from a quantitative point of view. Of
particular interest are the first effort
distribution baselines gathered from the development of suitable
pilot services.</p>
      <p>Section 2 introduces the methodologies applied within
the study and explains how they relate to it. Section 3
discusses the context in which the case studies were
performed; the overall approach applied to gather quantitative
as well as qualitative experience, and the results in terms of
effort distribution baselines and major domain-specific
risks observed. Section 4 summarizes the article and sketches
future work to be performed.</p>
    </sec>
    <sec id="sec-2">
      <title>2 BACKGROUND</title>
      <p>
        This study is based on a combination of descriptive process
modeling [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], GQM-based measurement [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], and
retrospective-based collection of lessons learned [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This section
gives an overview of the methodologies applied and
explains how they relate to the study.
      </p>
      <p>
        The main idea of descriptive process modeling is to
explicitly document the development processes as they are
applied within a given organization: A so-called process
engineer observes, describes, and analyzes the software
development process and its related activities, and provides
descriptions of the processes to the process performers.
Since the processes are usually complex, support is needed
for both process engineers and process performers.
Descriptive process modeling is applied within the context of
the study with the help of the Spearmint® environment.
The architecture of Spearmint® and its features for a
flexible definition of views, used for retrieving filtered and
tailored presentations of process models, is presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
One distinct Web-based view, namely the Electronic
Process Guide (EPG), is used for disseminating process
information and guiding process performers, e.g., project managers
and developers.
      </p>
      <p>
        The Goal/Question/Metric (GQM) approach is applied
to define measurement goals and a proper measurement
infrastructure [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. During the first two steps, business
and improvement goals are analyzed and metrics are defined
according to the process model elicited through
Spearmint®. The results of this first phase are GQM plans that
comprise all the metrics defined.
      </p>
      <p>In the following step, the project plan and the process
model are used to determine by whom, when, and how
data are to be collected according to the metrics. The data
collection procedures are the results of this
instrumentation.</p>
      <p>Raw data are collected according to the data collection
procedures. The collected raw data are analyzed and
interpreted according to the GQM plan and the feedback
provided by the interested parties.</p>
      <p>In the next step, the interested parties draw
consequences based on the analysis and their interpretations.</p>
      <p>Finally, the analyses, interpretations, and consequences are
summarized in the measurement results and stored as
experience in the experience database for future reuse.</p>
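      <p>The GQM steps described above can be sketched as a small data model. This is an illustrative sketch only: the class and field names are assumptions, not the actual WISE tooling or GQM plan format.</p>
```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str          # e.g., "effort per phase in person-hours"
    collected_by: str  # who records the raw data (from the instrumentation step)
    when: str          # collection point in the process

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class GQMPlan:
    goal: str  # an analyzed business/improvement goal
    questions: list = field(default_factory=list)

    def all_metrics(self):
        # A GQM plan comprises all metrics defined for its goal.
        return [m for q in self.questions for m in q.metrics]

# Example: one goal refined into one question with one metric.
plan = GQMPlan(goal="Characterize effort distribution of pilot service 1")
q = Question(text="How is effort distributed over the process phases?")
q.metrics.append(Metric(name="effort per phase (person-hours)",
                        collected_by="developers", when="monthly"))
plan.questions.append(q)
print(len(plan.all_metrics()))  # 1
```
      <p>The goal-question-metric refinement is what later makes the collected numbers interpretable: each raw datum traces back to a question and a goal.</p>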
      <p>
        In addition to the measurement of quantitative data, the
collection of qualitative data is driven by project
retrospectives [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Therefore, meetings and interviews with the
participants of different work packages are conducted
regularly to elicit lessons learned and improvement potentials.
Concerning more specific wireless-related topics, published
experience reports from the field were an important source of
information, particularly at the beginning of the study. An
extensive overview of related work is given by [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>3 CHARACTERIZING EFFORT IN THE WIRELESS INTERNET SERVICES ENGINEERING DOMAIN</title>
      <p>In the following, the context of the study, the approach
applied, and the main related results are described.</p>
      <sec id="sec-4-1">
        <title>3.1 Context of the Case Studies</title>
        <p>The present study was conceived as an integral part of the
evaluation of the Wireless Internet Service Engineering
(WISE) project. The project produces integrated methods
and components (COTS and open source) to engineer
services on the wireless Internet. The production of methods
and components is driven by the development of pilot
services.</p>
        <p>The methods already produced include a reference
architecture, a reference development process model, as well
as guidelines for handling heterogeneous mobile devices.</p>
        <p>The components include a service management
component and an agent-based negotiation component.</p>
        <p>Three pilot services, i.e., a financial information service,
a multi-player game, and a data management service, are
being developed by different organizations. The data from
the development processes of two of the pilot services are
the basis of this study.</p>
        <p>The duration of the project is 30 months and an iterative,
incremental development style is applied: three iterations
are performed, of roughly 9 months each.</p>
        <p>In iteration 1, a first version of the planned pilot services
was built using GPRS. At the same time, a first version of
methods and tools was developed.</p>
        <p>In iteration 2, a richer second version of the pilots was
developed on GPRS, using the first version of methods and
tools. In parallel, an improved second version of methods
and tools was developed.</p>
        <p>In iteration 3, the final version of the pilots is being
developed on UMTS, using methods and tools from the
second iteration. Also, a final version of methods and tools is
being developed.</p>
        <p>Currently, the third iteration is still running. The case
studies discussed in this article refer to data from the first
two iterations.</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2 Case Studies Method</title>
        <p>This subsection presents the process-centric approach
applied within the WISE project for gathering experience in
the new domain.</p>
        <p>As mentioned in the previous subsection, in parallel to
the development of the pilot services, a measurement
infrastructure was defined in order to evaluate the effects of the
method and tools applied to develop these services. The
infrastructure is based not only on measures but also on
interviews and any other available evidence.</p>
        <p>Figure 1 sketches the strategy applied iteratively during
each of the three iterations to gather, package, and maintain
experience from the development of the pilot services. The
experience acquisition process is depicted using the
Spearmint® notation: The circles represent activities, the
rectangles artifacts, the arrows indicate produces/consumes
relationships between activities and products.</p>
        <p>At the beginning of each iteration, software development
processes are elicited as applied by the organizations; the
descriptive process models (in Figure 1, Software Process
Model) are used to set up effort measurement programs
(Measurement Plan).</p>
        <p>During the development of the pilot services, the pilot
performers collect data according to the measurement
plans. The data is validated and stored. At the end of the
development cycle, baselines are built, i.e., the data
collected are aggregated and quality models are built (Set of
Baselines).</p>
        <sec id="sec-4-2-18">
          <p>
            During post-mortem analysis sessions the baselines are
discussed with the involved parties, then interpretations
and consequences for the next iteration are worked out
(e.g., the possible evolution of the surrounding
development process). In order to get more insights of a qualitative
nature, lessons learned (Set of LL) are collected regularly by
interviewing project participants at project meetings or by
phone. Many lessons were also gathered through the
analysis and interpretation of baselines. Therefore, within the
context of the WISE project, different experience models
were applied (see Figure 2), which are an adaptation of
basic principles of the Experience Factory [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ] and QIP [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ]
approaches.
          </p>
          <p>All kinds of software engineering experience are
regarded as experience elements: process and product
models, quantitative quality models (i.e., baselines), and
qualitative experience (such as lessons learned). For each
experience element, the scope of its validity is described.</p>
          <p>The scope consists of a characterization vector and the
significance. The characterization vector characterizes the
environment in which the experience element is valid, i.e.,
the context surrounding a given project (see Table 1). The
significance describes how the experience element has been
validated and to which extent (e.g., validation through
formal experiments, single case study, or survey).</p>
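          <p>The scope notion described above, a characterization vector plus a significance, can be sketched as follows. The field names, context factors, and significance wording are assumptions chosen for illustration, not the project's actual schema.</p>
```python
from dataclasses import dataclass

@dataclass
class Scope:
    characterization: dict  # context factors in which the element is valid
    significance: str       # how, and to which extent, the element was validated

@dataclass
class ExperienceElement:
    kind: str     # "process model", "product model", "quality model", "lesson learned"
    content: str
    scope: Scope

# A quantitative quality model (an effort baseline) with its validity scope.
baseline = ExperienceElement(
    kind="quality model",
    content="effort distribution baseline, pilot service 1, iteration 2",
    scope=Scope(
        characterization={"domain": "wireless Internet services",
                          "life cycle": "iterative",
                          "requirements volatility": "high"},
        significance="single case study",
    ),
)
print(baseline.scope.significance)  # single case study
```
          <p>Attaching the scope to every element is what allows later reuse decisions: a baseline validated by a single case study carries less weight than one validated by formal experiments.</p>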
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>3.3 Results</title>
        <p>This subsection presents the results gathered from the first
two development iterations. The discussion of results
focuses upon effort baselines from the development of the
pilot services and major domain-specific risks observed.
3.3.1 Effort Baselines related to the Development of the Pilot Services</p>
        <p>This subsection discusses quality models concerning effort
distribution. The quality models are gathered from the
development of two pilot services.</p>
        <p>Case Study 1
Context: Pilot service 1 provides a solution for real time
stock tracking on mobile devices: the user can view real
time quotes concerning a whole market or define his/her
own watch lists. The partner responsible for this
development is a provider of high end trading services on the
Internet, aimed at banks and brokers. The pilot is the
adaptation of an existing Web-based information service.
Critical usability issues arise due to the huge amount of data
needed by a financial operator to perform an analysis and
the small-sized display of mobile devices. Furthermore,
since Internet traffic on mobile devices is paid for by the
end user based on data volume rather than on connection
time, and since frequent refreshes of a large amount of
financial data are required, the adoption of push technology
instead of pull technology is an important issue,
because push avoids unnecessary data refreshes for the user.</p>
        <p>Most of the usability issues were addressed during the first
iteration. The second iteration was mainly concerned with
implementing a solution based on the push technology.</p>
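        <p>Because the user pays by data volume, the advantage of push over pull can be made concrete with a back-of-the-envelope comparison. All numbers below are hypothetical, chosen only to illustrate the argument.</p>
```python
# Hypothetical figures: a quote snapshot of 2 KB, a one-hour trading session.
snapshot_kb = 2
session_min = 60

# Pull: the client polls every minute and re-downloads snapshots
# that may not have changed since the last poll.
pull_kb = session_min * snapshot_kb

# Push: the server sends a snapshot only when quotes actually change,
# say 12 times per hour.
updates_per_hour = 12
push_kb = updates_per_hour * snapshot_kb

print(pull_kb, push_kb)  # 120 24
```
        <p>Under these assumptions, pull transfers five times the volume of push for the same information, which translates directly into end-user cost.</p>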
        <p>The life cycle model applied for developing the pilot
service during each iteration is an iterative process model
consisting of three phases: a requirements phase, a
development / coding phase, and a testing phase. The ad-hoc
process is characterized by extensive use of verbal
communication within the development team, and little use of
explicit documentation. Another important characteristic of
the development process is the absence of an explicit design
phase. This can be seen as a consequence of the fact that the
overall system architecture and the related interfaces were
known at the beginning of the project, since this was
mainly the same client server architecture used to provide
the service on the traditional Internet. The client side was a
prototype developed using the Wireless Markup Language
(WML); during the second iteration, the client was
developed using the Java 2 platform, Micro Edition (J2ME). In
both cases, the prototype and its high-level design were
documented after development.</p>
        <p>Analysis: The analysis of the effort distribution observed
during the first iteration and represented in Figure 3 shows
that most of the effort (approx. 84%) was spent on the
development phase, i.e., the creation of the first prototype.</p>
        <p>Only approx. 15% of the overall effort was spent on the
requirements phase. This can be explained as follows: the
functional requirements were described at a high degree of
abstraction, which was possible since they were derived
from the available Internet service and they were therefore
well understood; the more challenging non-functional
requirements, e.g., usability issues, were not formalized at all,
since they were not understood at the beginning of the
project and they were to be investigated with the WML
prototype.</p>
        <p>Fig. 3. Effort distribution, pilot service 1, iteration 1 (requirements phase 15.47%, development phase 84.38%, test phase 0.15%)</p>
        <p>Since more effort than planned was spent on the
development phase, little effort remained to be spent on the
testing phase.</p>
        <p>During the first iteration, the development of pilot
service 1 required about 340 man-days.</p>
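        <p>A distribution like the one in Figure 3 is simply each phase's share of the total effort. The man-day split below is hypothetical, chosen only to approximately reproduce the reported percentages over the reported total of about 340 man-days.</p>
```python
# Hypothetical raw effort bookings (man-days) per phase.
effort = {"requirements": 52.6, "development": 286.9, "test": 0.5}

total = sum(effort.values())  # 340.0 man-days
distribution = {phase: round(100 * days / total, 2)
                for phase, days in effort.items()}
print(distribution)  # {'requirements': 15.47, 'development': 84.38, 'test': 0.15}
```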
        <p>Figure 4 shows the effort distribution observed during
the second iteration. The consequences of a more accurate
description of the development process and, at the same
time, of the stabilization of the process enacted by the
development team became visible and, therefore, a different,
more balanced, effort distribution can be observed. The
greater amount of effort collected in the requirements phase
can be attributed to a change of the underlying process
model description. During the first iteration, it was noticed
that a part of the effort collected in the development phase
was spent on performing some feasibility studies rather
than on implementing the prototype. The goal of the
studies was to evaluate different mobile devices and WML
constructs with respect to usability requirements. Therefore,
for the second iteration, it was decided to collect the effort
related to the feasibility studies as requirements phase.</p>
        <p>Fig. 4. Effort distribution, pilot service 1, iteration 2 (requirements phase 33.01%, coding phase 51.93%, testing phase 15.06%)</p>
        <p>Although more effort was spent on integration testing
than during the first iteration, most of the effort collected as
testing phase was spent on documenting the integrated
code. Due to the fact that the coding phase was
underestimated, most of the system test was shifted to the third
iteration.</p>
        <p>During the second iteration, the development of pilot
service 1 required about 200 man-days.</p>
        <p>Case Study 2
Context: Pilot service 2 is concerned with the new
development of a multi-player online game for mobile devices:
many users interact in a shared environment, i.e., a virtual
labyrinth. The players can collect different items, chat, and
fight against enemies and against each other. From a
business point of view, games and entertainment could be, after
voice and SMS, the next killer application on the wireless
Internet. The development is distributed between two
different teams / organizations: one organization is
responsible for the development of the client on the mobile device
and provides a multimedia-messaging stack on the
terminal part; the other organization customizes the multimedia
layer on the server side.</p>
        <p>The organization responsible for the client side has reached
CMM maturity level 3. An iterative life cycle model
consisting of four phases (requirements phase, design phase,
coding phase, and testing phase) was followed in this case
within the context of each single iteration. The process is
characterized by extensive use of verbal communication as
well as of explicit formal documentation.</p>
        <p>Fig. 5. Effort distribution, pilot service 2 (client side), iteration 1 (requirements phase 12.87%, design phase 29.22%, coding phase 46.16%, testing phase 11.75%)</p>
        <p>Analysis (client side): During the first iteration, the
following effort distribution was observed (see Figure 5):
approx. 13% of the development effort was spent on the
requirements phase, 29% on design, 46% on coding, and
12% on testing.</p>
        <p>Unexpected problems were reported in the requirements
and the design phase: problems in determining which
organization should develop the server side led to
unexpectedly low effort being spent on the definition of the
requirements, while problems with the use of TCP/IP as transport
protocol led to unexpectedly great effort in designing an
alternative protocol on the basis of UDP. The problematic
behavior of the TCP/IP protocol is a good example of the
unexpected issues that may occur when applying common
Internet technologies within the wireless context.</p>
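        <p>The idea of replacing TCP with a lightweight UDP-based exchange can be sketched with a minimal datagram round trip. The message framing below is purely illustrative and is not the project's actual protocol.</p>
```python
import socket

# A UDP socket avoids TCP's connection setup and retransmission delays,
# which proved too slow over GPRS; reliability must then be handled at
# the application level (e.g., via sequence numbers).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"1|MOVE|x=3,y=7", addr)  # illustrative framing: seq|type|payload

data, _ = server.recvfrom(1024)
seq, kind, payload = data.decode().split("|")
print(seq, kind, payload)  # 1 MOVE x=3,y=7
client.close(); server.close()
```
        <p>The trade-off is that ordering, loss detection, and retransmission become application concerns, which is part of the extra design effort reported above.</p>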
        <p>Finally, it was reported that less effort than planned was
spent on testing.</p>
        <p>During the first iteration, the development of the client
side of pilot service 2 required about 140 man-days.</p>
        <p>As depicted in Figure 6, during the second iteration,
approx. 28% of the development effort was spent on the
requirements phase, 15% on design, 50.5% on coding, and
6.5% on testing. In this case, too, unexpected problems were
reported during the requirements phase, since the
organization in charge of developing the server side left the
project. On the other hand, due to a redesign of the graphic
library that led to simplification of the further design, less
effort than planned had to be spent on the design phase. It
was also reported that, due to organizational issues, less
effort than planned was spent on testing. As a consequence,
an extensive system test must be performed during the
third iteration.</p>
        <p>Fig. 6. Effort distribution, pilot service 2 (client side), iteration 2 (requirements phase 27.74%, design phase 15.19%, coding phase 50.52%, testing phase 6.55%)</p>
        <p>During the second iteration, the development of the
client side of pilot service 2 required about 130 man-days.</p>
        <p>It should be noted that effort estimates were provided at
the beginning of each iteration. In order to obtain more
accurate estimates for the second iteration, the effort
distribution data from the first iteration were used, together with
the first estimates, as the basis for the estimation process. Figure
7 shows how the new values for the new estimates were
chosen from within a range between the data estimated
before the beginning of the first iteration and the data
gathered during the first iteration.</p>
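        <p>One way to formalize "choosing a new estimate between the previous estimate and the measured value" is a weighted blend. The weight below is an assumption for illustration; the paper reports that the actual values were chosen by expert judgment, also considering the critical issues expected in the second iteration.</p>
```python
def blend(previous_estimate, measured, weight=0.5):
    """A new estimate between the old estimate and the measured value:
    weight 1.0 trusts the old estimate fully, 0.0 trusts the measurement."""
    return weight * previous_estimate + (1 - weight) * measured

# Requirements phase, pilot service 2 (client side), in % of total effort:
estimate_it1, effort_it1, estimate_it2 = 26.31, 12.87, 16.74

# The published iteration-2 estimate indeed lies between the two anchors:
lies_between = sorted([estimate_it1, effort_it1, estimate_it2])[1] == estimate_it2
print(lies_between)  # True
print(round(blend(estimate_it1, effort_it1, weight=0.3), 2))  # 16.9, close to the published 16.74
```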
        <p>Fig. 7. Estimated vs. actual effort distribution (% of total effort), pilot service 2 (client side):
Phase                 Estimate It. 1   Effort It. 1   Estimate It. 2   Effort It. 2
Requirements Phase    26.31%           12.87%         16.74%           27.74%
Design Phase           9.98%           29.22%         22.36%           15.19%
Coding Phase          36.09%           46.16%         45.69%           50.52%
Testing Phase         27.62%           11.75%         15.21%            6.55%</p>
        <p>Concerning the requirements phase, for example,
approx. 26% was the estimate for the first iteration, 13%
was the effort actually spent on this phase during the first
iteration, and 17% was estimated for the second iteration.</p>
        <p>The new estimated value is less than the estimate from the
first iteration, but greater than the value actually measured.</p>
        <p>The estimation values were also chosen according to the
critical issues expected in the second iteration.</p>
        <p>Fig. 8. CPI comparison between iterations 1 and 2</p>
        <p>
          The comparison of the Cost Performance Indices (CPI =
planned effort / actual effort [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]) computed during the two
iterations and represented by Figure 8 shows that the effort
estimates for the second iteration were more accurate than
the estimates for the first iteration (according to the
definition of CPI, an estimate is very accurate for CPI values close
to 1, like the estimate concerning the requirements phase of
the second iteration; values greater than 1 indicate
overestimation, as in the case of the requirements phase during
the first iteration).
        </p>
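        <p>The CPI definition cited above can be made concrete with a small helper. The planned and actual person-hour figures below are hypothetical, chosen only to illustrate how the index separates accurate estimates from overestimation.</p>
```python
def cpi(planned_effort, actual_effort):
    """Cost Performance Index: values close to 1 mean an accurate estimate,
    values above 1 indicate overestimation, values below 1 underestimation."""
    return planned_effort / actual_effort

# Hypothetical planned/actual person-hours for a single phase:
print(round(cpi(200.0, 100.0), 2))  # 2.0  (strong overestimation)
print(round(cpi(105.0, 100.0), 2))  # 1.05 (accurate estimate)
```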
        <p>Furthermore, during the first iteration, much additional
effort was spent on management-related activities, like
configuration management, project planning / tracking, and
project support. Due to this, the effort spent on these
activities was measured during the second iteration: it was seen
that approx. 82% of the overall effort (1007.5 hours) was
spent on development in the strict sense, whereas 18% (223
hours) was spent on management-related activities.</p>
        <p>Analysis (server side): As mentioned above, two
different organizations were in charge of developing the server
part of pilot service 2. During the second iteration, the
second organization extended the system developed during
the first iteration.</p>
        <p>Due to organizational issues, the requirements were
managed by the organization responsible for the client side.</p>
        <p>As a consequence, both organizations in charge of the
server side spent little effort on defining the requirements.</p>
        <p>During the first iteration, an iterative life cycle model
was adopted. As shown in Figure 9, effort was spent on
design (32.5%), coding (52%), and integrating the client
with the server part (15.5%). No requirements phase and no
acceptance test were performed. Unexpected problems
were reported during the design phase, which were caused
by the TCP/IP protocol, whose latency was too high when
used on GPRS.</p>
        <p>During the first iteration, the development of the pilot
service 2 server side required about 130 man-days.</p>
        <p>Fig. 9. Effort distribution, pilot service 2 (server side), iteration 1 (design phase 32.53%, coding phase 51.82%, integration phase 15.66%, requirement phase 0%, acceptance test phase 0%)</p>
        <p>During the second iteration, the organization involved in
the development of the server side tried to apply an
approach based on extreme programming. This makes it
difficult to compare the effort data from the first and second
iteration of the server part.</p>
        <p>Fig. 10. Effort distribution, pilot service 2 (server side), iteration 2 (release phase 87.27%, exploration phase 12.73%, planning phase 0.00%)</p>
        <p>Furthermore, for various reasons, extreme programming
was not followed strictly. Figure 10 shows, for example,
that the planning phase was not performed, since all
requirements and their related priorities had already been
defined by the organization responsible for the client side.
Also, many difficulties were encountered in deploying the
server developed by the first organization in the new
organization's environment; these facts did not allow the many
short development cycles and related releases foreseen by
the XP approach. As a consequence, the whole development
was performed at once in one big cycle. Moreover, the lack
of experience with the test-first technique led to unexpected
effort and, although the technique was recognized to be
very interesting, it had to be given up.</p>
        <p>During the second iteration, the development of the
server side of pilot service 2 required about 80 man-days.</p>
        <p>Comparative Analysis
In both case studies, the requirements phase was difficult to
control due to the novelty of the domain and the fact that
low-level requirements and, particularly, usability-related
requirements (e.g., how to represent large tables on small
displays) were often not well understood at the beginning.
The feasibility studies introduced in the second iteration
proved to be a good means to make explicit and handle the
related uncertainty.</p>
        <p>This uncertainty is one of the reasons why the effort
spent on the design phase was in all observed cases less
than the effort spent on coding (at most 33%, in the case of
the development of the server side of pilot service 2 during
the first iteration).</p>
        <p>The effort data from the development of the pilot
services showed that all organizations spent most of the
development effort on coding (46% - 84% of development
effort). This seems plausible considering the many open
issues that could be addressed only at the coding level.</p>
        <p>Testing proved very challenging due to the great diversity
of devices available on the market, the unreliability of
device specifications, the low degree of automation of the
testing procedures on real devices, and the unreliability of
the available emulators. The effort spent until the end of the
second iteration is considered insufficient by all the involved
organizations, with the consequence that most of the testing
will be performed during the last iteration.</p>
        <p>3.3.2 Domain-specific Risks
During the first two iterations, qualitative experience was
collected by interviewing people involved in the development
of the pilot services. Due to the novelty of the domain,
the pilot partners had to deal with several risks that were
unknown or at least not well understood at the beginning
of the project. In the following, the main domain-specific
risk factors are discussed.</p>
        <p>R1: … increasing demand for appealing applications.</p>
        <p>R2: Java's promise of code working on every platform is
difficult to achieve: different levels of compliance with the
J2ME specification among the virtual machines implemented
by different device manufacturers can lead to great
variations in the performance and behavior of the same
application running on different mobile devices.</p>
        <p>R3: The maturity of the technologies specific to the
wireless domain should be carefully considered: many quality
aspects of mobile devices (file system, network access
capabilities, memory, etc.) are of a much lower level than those
of regular desktop systems. This has consequences in terms
of the predictability of the quality of the services and of the
development process.</p>
        <p>R4: Technologies proven to be reliable when applied
within the context of the traditional Internet may turn out
to be unreliable or perform poorly when used within the
context of the wireless world.</p>
        <p>R5: Testing wireless Internet services proved very
challenging for several reasons. The first is the many usability
issues (e.g., consistent interfaces, navigation, access, etc.)
related to the great diversity of devices available on the
market; most of the usability issues have to be further
researched due to the novelty of the domain. Another reason
is development for announced future devices: device
specifications are subject to change without notice and are
usually unreliable. A further reason is that a lot of effort has
to be spent on setting up a proper test environment.
Emulators represent an unsatisfactory but necessary alternative;
their main advantage is the automation of the testing
procedures, whereas their unreliable behavior is their greatest
disadvantage.</p>
        <p>3.3.3 Limits of the Study
Concerning the validity of the quantitative part of the
study, i.e., the characterization of the effort distribution,
Spearmint® EPGs played a major role in assuring
consistent views on the different development processes. These
views and the GQM approach were very helpful in
defining sound measurement programs that proved suitable for
providing correct and meaningful data on a monthly basis.</p>
        <p>The training of the developers responsible for collecting
data was challenging due to the widely distributed project
environment and some personnel changes that occurred
between the two iterations.</p>
        <p>Concerning the comparability of the quantitative data, it
is not possible to directly compare either the numerical data
from the different pilots or all data from different iterations.
This is due to the different surrounding processes applied
to develop the pilot services and the evolution of the
processes during the whole project life cycle. Moreover, despite
risks observed during the development of the services are an extensive literature search, no studies could be found
presented. with a similar focus on effort baselines.</p>
        <p>R1: The first issue to be considered is the great diversity Concerning the generality of the results, the context of
of target devices in terms of display size and mode (i.e., the single case studies defines the scope of validity of the
resolution and number of colors), memory capacity, proces- baselines presented. Transforming the results for similar
sor performance, and interaction mechanisms with the user contexts should be done with careful analysis of the
exter(i.e., keyboard, jog dial, cursor buttons, joystick, touch nal validity.
screen, voice control, etc.). This heterogeneity makes it very Referring to the validity of the qualitative part of the
difficult to reconcile the need for portability with the in- study, i.e., the collection of lessons learned, the roles played
within the pilots by the persons interviewed (mainly
developers) and the focus of the respective pilots influenced the
lessons reported.</p>
        <p>Additionally, the domain-specific risks presented in this
study should be regarded as being of high significance,
since they were generalized from the lessons learned
provided by the individual organizations involved in the
development of the pilots.
4</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>CONCLUSIONS</title>
      <p>This study aimed at providing effort baselines for managers
and developers in order to give them a sense of the
behavior of projects in the field of wireless Internet service
engineering. Of course, it is important to mention that each
project is different, and that the context in which the pilots
were developed must be taken into consideration before
drawing any analogies.</p>
      <p>The effort data from the development of the pilot
services showed that all organizations spent most of the
development effort on coding. As expected, the requirements
phase was characterized by a great degree of uncertainty
concerning performance and availability of related
technologies as well as many usability issues related to the
great heterogeneity of the devices on the market.</p>
      <p>Testing proved very challenging due the great diversity
of devices available on the market, the unreliability of
device specifications, the low degree of automation of the
testing procedures on real devices, and the unreliability of the
available emulators. As a consequence, defect
characterization is a difficult task, and a great amount of the effort
planned for the third iteration will be spent on it. It is still
unclear how to characterize defects concerning usability
issues. For this purpose, usability reports will be
introduced in the next iteration.</p>
      <p>The descriptive process modeling approach supported
by the Spearmint® environment played a key role in
stabilizing the processes, eliciting accurate process models, and
disseminating process information to the process
performers. These are all necessary preconditions for meaningful
effort tracking and planning.</p>
      <p>As expected, and in spite of the accurate process models,
effort estimation proved to be a challenging process at the
beginning. During the first iteration, the organizations
involved either could not deliver effort estimates at all or
delivered estimates that turned out to be inaccurate at the end
of the iteration. On the other hand, the effort tracking
performed during the first iteration, together with estimation
processes based on the collected effort data, provided more
accurate effort estimates for the second iteration.</p>
      <p>How to characterize the complexity and/or size of a system is
still an open issue. Complexity/size metrics can be useful for
deriving effort estimates, and defect density measures can be
built on them for controlling the testing process. In any case,
much more research is needed before effort can be estimated on
the basis of system size or complexity. For example, regarding a
metric like the number of lines of code (LOC), it was observed
that code written for mobile devices is deliberately kept short
in order to improve performance; likewise, a low number of
classes is often the result of a great optimization effort and
not necessarily evidence of a simpler module with fewer
features.</p>
    </sec>
    <sec id="sec-6">
      <title>ACKNOWLEDGMENT</title>
      <p>We would like to thank the WISE consortium, especially
the pilot partners, for their fruitful cooperation. We would
also like to thank Sonnhild Namingha from the Fraunhofer
Institute for Experimental Software Engineering (IESE) and
Jussi Ronkainen from VTT Electronics for reviewing the
first version of the article.</p>
      <p>F. Bella received his MS in Computer Science from the Technical
University of Kaiserslautern, Germany, in 2002. His MS thesis, “Design
and Implementation of a Similarity Analysis between Process Models
in Spearmint/EPG”, was developed at the Fraunhofer Institute for
Experimental Software Engineering (IESE) in Kaiserslautern. Since
October 2002, Fabio Bella has been a research scientist at IESE in
the department of Quality and Process Engineering, and since June
2003, he has been a Provisional SPICE Assessor. Bella’s research
interests in software engineering include: (1) modeling and measurement of
software processes and resulting products, (2) software quality
assurance and control, (3) technology transfer methods, and (4) software
process assessments.</p>
      <p>J. Münch received his PhD degree (Dr. rer. nat.) in Computer Science
from the University of Kaiserslautern, Germany. Dr. Münch is
Department Head and Competence Manager for Quality and Process
Engineering at the Fraunhofer Institute for Experimental Software
Engineering (IESE), Kaiserslautern. Since November 2001, Dr. Münch has been
an executive board member of the temporary research institute SFB
501 "Development of Large Systems with Generic Methods" funded by
the German Research Foundation (DFG). Dr. Münch’s research
interests in software engineering include: (1) modeling and measurement of
software processes and resulting products, (2) software quality
assurance and control, (3) technology evaluation through experimental
means and simulation, (4) generic methods for the development of
large systems, and (5) technology transfer methods. He has been teaching
and training in both university and industry environments, and also
has significant R&amp;D project management experience. Jürgen Münch is
a member of the IEEE Computer Society and the German Computer
Society (GI).</p>
      <p>A. Ocampo received his MSc degree in Computer Science from Los
Andes University, Colombia, in 1999 and his title as Systems Engineer
from Industrial University of Santander, Colombia, in 1997. Since
February 2002, Alexis Ocampo has been a research scientist at the
Fraunhofer Institute for Experimental Software Engineering
(IESE), Kaiserslautern, in the department of Quality and Process
Engineering. Before that, he worked for 5 years as a research developer on
new technologies and methodologies with the software company
Heinsohn Associates, Bogota, Colombia. His master thesis, entitled
“Implementation of PSP in the Colombian Industry: A Case Study”, was
developed within this company. He also worked as an instructor at the
University of Los Andes in the Department of Systems and Computation.
Alexis Ocampo’s research interests in software engineering include: (1)
modeling and measurement of software processes and resulting
products, (2) software quality assurance and control, and (3) technology
transfer methods.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Basili</surname>
            ,
            <given-names>V.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caldiera</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rombach</surname>
            ,
            <given-names>H.D.</given-names>
          </string-name>
          :
          <article-title>The Experience Factory</article-title>
          , in
          <source>Encyclopedia of Software Engineering</source>
          (John J. Marciniak, Ed.), John Wiley &amp; Sons, Inc., Vol.
          <volume>1</volume>
          , pp.
          <fpage>469</fpage>
          -
          <lpage>476</lpage>
          (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Basili</surname>
            ,
            <given-names>V.R.</given-names>
          </string-name>
          ,
          <article-title>Quantitative Evaluation of Software Engineering Methodology</article-title>
          ,
          <source>in Proceedings of the First Pan-Pacific Computer Conference</source>
          , Melbourne, Australia (
          <year>1985</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Becker-Kornstaedt</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boggio</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muench</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ocampo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palladino</surname>
          </string-name>
          , G.:
          <article-title>Empirically Driven Design of Software Development Processes for Wireless Internet Services</article-title>
          .
          <source>Proceedings of the Fourth International Conference on Product-Focused Software Processes Improvement (PROFES)</source>
          (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Becker-Kornstaedt</surname>
          </string-name>
          , U., Hamann, D.,
          <string-name>
            <surname>Kempkens</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rösch</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verlage</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Webby</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zettel</surname>
          </string-name>
          , J.:
          <article-title>Support for the Process Engineer: The Spearmint Approach to Software Process Definition and Process Guidance</article-title>
          .
          <source>Proceedings of the Eleventh Conference on Advanced Information Systems Engineering (CAISE '99)</source>
          , pp.
          <fpage>119</fpage>
          -
          <lpage>133</lpage>
          . Lecture Notes in Computer Science, Springer-Verlag. Berlin Heidelberg New York (
          <year>1999</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Briand</surname>
            ,
            <given-names>L.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Differding</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rombach</surname>
            ,
            <given-names>H.D.</given-names>
          </string-name>
          :
          <article-title>Practical Guidelines for Measurement-Based Process Improvement</article-title>
          .
          <source>Software Process Improvement and Practice</source>
          <volume>2</volume>
          , No.
          <issue>4</issue>
          , pages
          <fpage>253</fpage>
          -
          <lpage>280</lpage>
          (
          <year>1996</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Münch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Muster-basierte Erstellung von SoftwareProjektplänen</article-title>
          ,
          <source>PhD Theses in Experimental Software Engineering</source>
          , Vol.
          <volume>10</volume>
          , ISBN: 3-8167-6207-7, Fraunhofer IRB
          Verlag (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Kerth</surname>
            ,
            <given-names>N.L.</given-names>
          </string-name>
          :
          <article-title>Project Retrospectives: A Handbook for Team Reviews</article-title>
          . Dorset House Publishing,
          <source>ISBN: 0-932633-44-7</source>
          , New York (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Humphrey</surname>
            ,
            <given-names>W. S.:</given-names>
          </string-name>
          <article-title>A Discipline for Software Engineering (SEI Series in Software Engineering)</article-title>
          . Carnegie Mellon University, ISBN: 0-201-54610-8. Addison-Wesley Publishing Company (
          <year>1995</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Ocampo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Boggio</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Münch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Palladino</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Toward a Reference Process for Developing Wireless Internet Services</article-title>
          .
          <source>In: IEEE Transactions on Software Engineering</source>
          <volume>29</volume>
          (
          <year>2003</year>
          ),
          <issue>12</issue>
          ,
          <fpage>1122</fpage>
          -
          <lpage>1134</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Jedlitschka</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Nick</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Software Engineering Knowledge Repositories</article-title>
          . In: Conradi, Reidar (Ed.) u.a.:
          <article-title>Empirical Methods and Studies in Software Engineering : Experiences from ESERNET</article-title>
          . Berlin : Springer-Verlag,
          <year>2003</year>
          ,
          <fpage>55</fpage>
          -
          <lpage>80</lpage>
          (
          <source>Lecture Notes in Computer Science</source>
          <volume>2765</volume>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Cockburn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <source>Agile Software Development</source>
          . Addison-Wesley Pub. Co; ISBN: 0201699699; 1st edition
          (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Beck</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Extreme Programming Explained: Embrace Change</article-title>
          . Addison-Wesley
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Boehm</surname>
            ,
            <given-names>B.W.:</given-names>
          </string-name>
          <article-title>A Spiral Model for Software Development and Enhancement</article-title>
          ,
          <source>IEEE Computer</source>
          , vol.
          <volume>21</volume>
          , No 5, pp.
          <fpage>61</fpage>
          -
          <lpage>72</lpage>
          (
          <year>1988</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Buchanan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farrant</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thimbleby</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marsden</surname>
          </string-name>
          , G.,
          <string-name>
            <surname>Pazzani</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          :
          <article-title>Improving Mobile Internet Usability</article-title>
          . In
          <source>Proceedings World Wide Web 10</source>
          , pp.
          <fpage>673</fpage>
          -
          <lpage>680</lpage>
          (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>van Solingen</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Berghout</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <source>The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development</source>
          . London, McGraw-Hill
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>