=Paper=
{{Paper
|id=Vol-537/paper-14
|storemode=property
|title=Federated Application Lifecycle Management Based on an Open Web Architecture
|pdfUrl=https://ceur-ws.org/Vol-537/D4F2009_Paper12.pdf
|volume=Vol-537
}}
==Federated Application Lifecycle Management Based on an Open Web Architecture==
Florian Matthes, Christian Neubert, Alexander Steinhoff
Lehrstuhl für Informatik 19 (sebis)
Technische Universität München
Boltzmannstr. 3
85748 Garching
{matthes, neubert, steinhof}@in.tum.de
Abstract: In software maintenance, one of the most important assets is a coherent, consistent and up-to-date documentation of the application. In practice, this information is often not accessible, outdated, or scattered over a multitude of special-purpose systems. We propose to abandon the idea of a perfect documentation in a central repository, and to apply the successful principles and technologies of the open web architecture to arrive at a federated architecture for capturing, connecting and querying distributed information over the full lifecycle of an application.
The contributions of the paper are a concise problem analysis, an identification of relevant related work in software engineering, and lessons learned from open web architectures. Based on these, we highlight how our approach can contribute to substantial quality and productivity improvements in the maintenance phase. The paper concludes with a discussion of research and development issues that need to be addressed to realize and validate such a federated application lifecycle management.
1 Motivation
For long-lived business applications, maintenance efforts and expenses exceed by far the costs of the initial development [Leh91]. As indicated in Figure 1, during the full lifecycle of a large business application, numerous stakeholders with different roles carry out highly specialized activities that generate and use digital artifacts of various types, which are managed using problem-specific tools and repositories. If one looks at the system maintenance and evolution phase, change requests and bug reports are generated which are managed using tools like ClearQuest or Bugzilla. To assess the change requests and to resolve bugs, a deep understanding of the software application and its past evolution is required. Artifacts generated by other activities and managed in other tools and repositories (e.g., the Rational Software Architect or a wiki) have to be consulted and connected to gain this understanding. As an example, an explanation of the rationale for a design decision helps to choose evolution options consistent with the initial design.
Figure 1: Activities, stakeholders, artifacts and tools in the application lifecycle. [Figure: lifecycle activities (project management; requirements determination and system analysis; object-oriented architecture design; design of subsystems; implementation; test and quality assurance; software maintenance and evolution) are related to stakeholders (project manager, requirements engineer, system analyst, software architect, senior and junior developers, tester, QA team, maintenance staff), to the artifacts they exchange (requirement specification, use case diagram, architecture description, conceptual class, class, component, sequence and interaction, and deployment diagrams, source code, metrics, test cases, bug reports, change requests) and to the tools managing them (a wiki, Rational Software Architect, Subversion, ConQAT, Seapine, ClearQuest, Bugzilla), with feedback flowing between the activities.]

However, even if the system was initially well documented, the quality of the documentation degrades over time since new or changing requirements lead to an evolution of the
implementation, and the associated documentation is often not updated accordingly, making it a useless and dead artifact in the long run [FRJ09]. To prevent this, it is necessary to detect anomalies and inconsistencies (e.g., invalid references) as early as possible and thereby keep the effort for further documentation updates low.
Our paper is structured as follows: Section 2 analyzes the problems related to software maintenance and documentation in more detail. In Section 3 we briefly cover related work in software engineering addressing these problems. In Section 4 we outline web technologies inspiring our approach of a network of federated systems with web-based interfaces, which is then presented in Section 5. In Section 6 we identify further research issues deduced from the suggested open distributed architecture, and we conclude with Section 7.
2 Problem Statement
Availability, completeness, quality and a holistic view of all artifacts containing documentation of the software, especially of its architecture, are critical for the success and efficiency of software maintenance activities. In the following, we identify potential factors which constrain these activities and may compromise their success.
In the literature, various models can be found describing the activities and tasks during the software development process, e.g., the Waterfall Model [Boe76] and the V-Modell XT [RB08]. All of these approaches have in common that different stakeholders contribute to the project during different phases of the development. Stakeholders involved in earlier phases of the process (e.g., requirements engineering, design, implementation) are often either no longer available when the software is maintained, or do not remember details of the documentation artifacts. Their knowledge is only present in those artifacts. Furthermore, different views and different understandings of the participating stakeholders may lead to inconsistencies [FRJ09].
For the above-mentioned reasons, software documentation artifacts are especially important for software maintenance activities [Leh91]. A common problem is that these artifacts are either incomplete or do not exist at all. This is often caused by tight and short-term project plans and a limited budget [Par94]. Since these are very common and fundamental problems, the maintenance staff usually have to be content with the available artifacts created during the execution of the particular development phases and have to make the best use of those resources. Most of them are strongly interrelated and have an architectural documentation character. For instance, the software architecture itself is documented in an object-oriented model; the description of software components and their dependencies is captured in textual documents (e.g., wiki pages); and the architecture is realized by source code elements, which are coupled with the object-oriented models, the component descriptions and the source code documentation. To create and manage these various kinds of artifacts, a large set of highly specialized tools is utilized. Purposes of these tools include, but are not limited to, requirements management, bug tracking, change request management, version control for source code, workflow management, creation of graphical architecture and design models, build and release management, and testing. Additionally, important information resides directly in the source code as inline comments.
A system evolves over time in response to changes in its environment, requirements and implementation technologies [RLL98]. As a result, different versions of the artifacts emerge, and it is challenging to keep all artifacts up to date and synchronized with regard to these versions. For instance, assume that the architecture of a software component is documented on a wiki page, and that for all public interfaces of this component hyperlinks to the Java interface files are provided. The versions of those files are stored in a source code repository. If the signature of an interface needs to be modified, e.g., because of a change request, and thus the version of the file within the repository changes, the reference in the documentation wiki might need to be updated with regard to this version.
When considering artifacts during maintenance activities, it is helpful to have references to other resources relevant for the current context available. Unfortunately, artifacts are often separated and disconnected. Links between artifacts of different tools cannot be created because of incompatible APIs, heterogeneous protocols and different storage formats. Moreover, documentation artifacts often cannot be found because no unified search over all content is provided. To bridge these gaps between distributed tool-specific repositories, various vendors try to consolidate functionality from other tools in a single uniform platform and provide APIs to enable data exchange. This endeavor is challenging because of the resulting system complexity, the enormous integration effort, and the lack of generally accepted interchange formats and protocols [Wie09].
In the following sections we present a web-based approach that provides solutions for some of these problems.
3 Related Work
In [WTA+08], Weber et al. describe their approach of using Web 2.0 technologies to manage the information involved in the software development process. They introduce the concept of a central wiki-based tool called a Software Organization Platform (SOP). Focusing on small and medium-sized organizations, their goal is to better leverage the collective knowledge associated with a single project as well as knowledge about processes that can be reused in later projects.
The Jazz initiative by IBM Rational Software aims at supporting team collaboration through information integration and at providing guidance for the development process [Fro07]. The initiative offers an IDE-centered approach backed by a server-side central data repository. Further, it allows the integration of third-party tools but requires them to conform to certain web-based interfaces to access the Jazz services. Complementing the individual tool functionalities, the Jazz platform provides a set of generic cross-tool capabilities like querying, content discovery and process administration.
Rooted in the experiences with the Jazz platform, the Open Services for Lifecycle Collaboration (OSLC) community aims at establishing common, less restrictive, web-based standards among software engineering tools [Wie09]. Inspired by the internet, the community is working on shared protocols and formats allowing independent tools to connect and easily access contents. This is very close to our approach presented in Section 5, but differs insofar as we focus on opportunities not requiring universal standards beyond those currently established on the web, although we share the same long-term vision.
The SYSIPHUS project focuses on collaboration in geographically distributed teams. Its goal is to capture knowledge as a side effect of development and to structure it for long-term use [BDW06]. It provides a uniform framework for models and collaboration artifacts, supporting the awareness of relevant stakeholders and the traceability of requirements. All artifacts reside in a single shared repository, which can be accessed through a variety of tools, including a web-based client providing a hypertext view of the contents.
4 Web Technology – State of the Art and Trends
In this section, starting from the characteristics of hypertext systems, we outline which fundamental kinds of tools and applications have emerged in the past, as well as current trends that we consider relevant for a network of federated software engineering tools.
4.1 General characteristics and trends of open hypertext systems
The most significant characteristic of an open hypertext system like the world wide web is that any document can easily link to any other document in the system. This is true for human-readable documents as well as for more structured ones interpretable by machines [BL06]. Furthermore, the uniformity of the presentation format allows navigating the complete system using a single tool, namely a web browser. However, the term document is misleading insofar as it suggests that only static content is considered. Instead, we are by now used to seeing dynamically generated content everywhere on the web. Today there is even a clear trend towards complete desktop applications, i.e., applications traditionally perceived as desktop software like email clients and word processors, moving into the browser, facilitated by frameworks such as Eclipse RAP (http://www.eclipse.org/rap). We expect that even integrated development environments (IDEs) will soon be realized as web applications, which would be particularly interesting for our approach (see Section 5).
4.2 Indexing and search
The most common approach to retrieving relevant documents on the web is full-text search over all accessible documents via search engines like Google. One important aspect here is that not only the text contents of the documents can be considered, but also the structure of the links between them. Additionally, there exist so-called meta-search engines [MYLL02], which combine the results of other search engines and thereby intend to improve the relevance of the retrieved documents.
Manually building structured catalogues of the available web content is made difficult by the sheer amount of documents on the one hand; on the other hand, it is infeasible to keep track of changes and new additions. Nevertheless, companies like Yahoo offer a hierarchical catalogue of the most popular websites, and for particular topics there exist special directories of relevant resources.
A more recent phenomenon is services allowing users to upload their personal bookmarks of interesting sites and share them with others, known as social bookmarking [Neu07]. The idea is that users assign labels, also called tags, to the bookmarks for personal organization. Additional value is generated by aggregating the tags of all users and thus creating a repository of URLs categorized by these tags and ranked by popularity. Furthermore, it is possible to subscribe to certain tags and that way effectively monitor new additions to the repository, filtered according to personal interests.
4.3 Content syndication and mashups
Making the contents of websites available for use in the context of external applications is known as content syndication. The data providers make part or all of their data available mainly to increase their reach. Subscribers have the opportunity to enrich their sites by providing the user with more relevant information and thus enhance their attractiveness. Common means for data exchange in this context are XML-based formats and protocols like RSS and Atom, used especially by news portals and blogs, mainly to inform about
new additions to their contents. However, non-textual information like geographical maps (Google Maps2), videos (YouTube3) and even mind maps (MindMeister4) or diagrams (gliffy5) is also made available for use in other applications by the hosting sites, which usually provide libraries for the scripting language JavaScript or Flash plug-ins to embed the content in another site. While these mechanisms effectively determine the presentation of the data, lightweight and mostly XML-based RESTful interfaces, i.e., interfaces conforming to the REST architectural style [Fie00], allow retrieving and modifying the available data in an abstract format.
Content syndication enables a new class of applications, so-called mashups. These applications combine or enrich the contents provided by other applications and thereby add value for the user. A typical example is integrating small advertisements or photos with geographical maps (e.g., Panoramio6).
4.4 Advances in browser technology
In addition to functionality embedded directly into the document by the hosting site, in recent years a trend has emerged towards enhancing the browser itself with additional functionality. Most browsers offer a plug-in interface allowing developers to create arbitrary kinds of feature-rich add-ons, realized for example as additional toolbars or menu items. Minor extensions can also come in the form of small portions of JavaScript code stored in and triggered as a browser bookmark, so-called bookmarklets.
Applications range from generic tools, for example for quickly looking up words in a dictionary or filtering advertisements out of the displayed page, to functionality that is related to specific services such as social bookmarking sites or social networks. A good example showing how far the integration of the contents of these specific services with arbitrary documents on other sites can go is diigo7, a social bookmarking website. Diigo offers a browser extension in the form of a toolbar which allows the user to place sticky notes and comments on any page, highlight text for future reference, and even start discussions with other users directly in the context of any piece of text or resource on the page.
Modifying the contents of a page by filtering out parts of it or adding further information is also known as augmented browsing or client-side web personalization [AV08]. Moreover, there are browser extensions specializing in changing web pages via scripts each time before they are loaded, thus forming the basis for other plug-ins making use of this functionality. A popular such extension for the Firefox browser is Greasemonkey8.
2 http://maps.google.com. Visited on August 30th 2009
3 http://www.youtube.com. Visited on August 30th 2009
4 http://www.mindmeister.com. Visited on August 30th 2009
5 http://www.gliffy.com. Visited on August 30th 2009
6 http://www.panoramio.com. Visited on August 30th 2009
7 http://www.diigo.com. Visited on August 30th 2009
8 https://addons.mozilla.org/en-US/firefox/addon/748. Visited on August 30th 2009
5 Improving software maintenance using a web-based architecture
We consider the following kinds of interfaces elementary for software engineering tools to be an effective part of a federated network that facilitates the software maintenance process. Our requirements are very similar to those described by the OSLC community (see Section 3):
1. Every resource that is managed by or resides in one of the tools is accessible via a distinct and persistent URL. On the one hand, this URL serves as an identifier for the respective resource; on the other hand, the document behind the URL provides some human-readable information about it.
2. A tool provides information about recent updates of its content base via RSS feeds.
It is possible to filter these feeds according to the internal structure of the content
base. For example, if a tool organizes artifacts in a hierarchical folder structure, it is
possible to subscribe to changes in a particular subtree of this hierarchy.
3. Extended access to the contained artifacts is enabled by lightweight RESTful APIs. This extended access includes listing all artifacts, searching for them by particular properties and, optionally, modifying them.
Fulfilling the first requirement can be considered the absolute minimum for a tool to participate effectively in a federated network, while the last is not likely to be found in all tools in the network. In contrast to OSLC, we do not require shared resource formats among the tools. However, this issue will be covered later in Section 6.
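The three requirements above can be sketched as plain functions emitting the URL, feed and listing a tool would serve over HTTP. The bug tracker at bugs.example.org, its artifact fields and its folder structure are purely illustrative assumptions, not part of any real tool.

```python
# Minimal sketch of the three interface kinds for one hypothetical tool.
import json
from xml.sax.saxutils import escape

BASE = "https://bugs.example.org"   # illustrative tool address

def resource_url(artifact_id: str) -> str:
    """Requirement 1: a distinct, persistent URL identifying each artifact."""
    return f"{BASE}/artifacts/{artifact_id}"

def change_feed(changes, folder=None) -> str:
    """Requirement 2: an RSS feed of recent changes, filterable by folder."""
    items = [c for c in changes if folder is None or c["folder"] == folder]
    body = "".join(
        f"<item><title>{escape(c['title'])}</title>"
        f"<link>{resource_url(c['id'])}</link></item>"
        for c in items)
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>Recent changes</title>{body}</channel></rss>")

def list_artifacts(store, **filters) -> str:
    """Requirement 3: a RESTful listing, searchable by properties."""
    hits = [a for a in store
            if all(a.get(k) == v for k, v in filters.items())]
    return json.dumps(hits)

changes = [{"id": "17", "title": "NPE in parser", "folder": "core"},
           {"id": "18", "title": "UI glitch", "folder": "gui"}]
print(change_feed(changes, folder="core"))
print(list_artifacts(changes, folder="gui"))
```

A real tool would serve these three responses under stable paths; the sketch only shows the contract each response has to fulfil.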
Figure 2 illustrates how additional services make use of the interfaces provided by basic tools while providing the same interfaces themselves. Although it is not explicitly displayed in the figure for the sake of clarity, tools on the same level are allowed to use each other's interfaces, for example to provide direct links to related resources.
Inspired by applications which have proven successful on the web (see Section 4), in the following we describe some opportunities for additional services that build on the interfaces suggested above.
5.1 Discovering relevant content
Perhaps the most important benefit of the outlined approach is that it allows indexing the resources contained in all tools, effectively enabling a full-text search over all contents. However, an additional requirement for the searched systems is that they not only provide URLs for each resource, but that these resources are at least indirectly reachable via HTML links starting from one dedicated page of each system.
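The crawl-and-index step can be sketched as follows; the two wiki pages stand in for real HTTP responses, and the URLs are illustrative.

```python
# Sketch: index all tool contents by following HTML links from one
# dedicated entry page; PAGES replaces real network fetches.
import re
from collections import defaultdict

PAGES = {
    "https://wiki.example.org/start":
        '<a href="https://wiki.example.org/billing">billing component</a>',
    "https://wiki.example.org/billing":
        "The billing component handles invoice creation.",
}

def crawl(entry: str):
    """Breadth-first traversal over href targets reachable from entry."""
    seen, queue = set(), [entry]
    while queue:
        url = queue.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        html = PAGES[url]
        yield url, html
        queue += re.findall(r'href="([^"]+)"', html)

def build_index(entry: str):
    """Inverted index: lower-cased word -> set of URLs containing it."""
    index = defaultdict(set)
    for url, html in crawl(entry):
        text = re.sub(r"<[^>]+>", " ", html)   # strip markup
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(url)
    return index

index = build_index("https://wiki.example.org/start")
print(sorted(index["billing"]))
```

The requirement from the text shows up directly in the code: only pages reachable via links from the entry page end up in the index.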
In contrast to automatic indexing, it is possible to build systems allowing users to bookmark certain resources and label them with tags, effectively categorizing the contents.

Figure 2: Additional connecting services built on simple web interfaces. [Figure: the maintenance staff use a web browser to access connecting services (unified search, change monitor, metadata/tags), which build on the URL, RSS and REST interfaces exposed by the basic SE tools (code repository, change requests, requirements, bugs).]

This can be seen as the equivalent of social bookmarking (see Section 4.2) in the context of software documentation artifacts. Since the user base is rather limited, not all advantages of this approach may unfold. However, we consider this simple and lightweight categorization of resources residing in separate systems a substantial advantage, since in addition to its usefulness for content discovery it can serve as the basis for additional services described later in this section.
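The aggregation step of such a bookmarking service can be sketched in a few lines; users, URLs and tags below are invented examples.

```python
# Sketch of tag aggregation across users, as in social bookmarking:
# a bookmark is (user, url, tags); aggregation yields per-tag URL
# rankings by popularity.
from collections import Counter, defaultdict

bookmarks = [
    ("alice", "https://wiki.example.org/billing", {"billing", "architecture"}),
    ("bob",   "https://wiki.example.org/billing", {"billing"}),
    ("bob",   "https://bugs.example.org/17",      {"billing", "bug"}),
]

def aggregate(marks):
    """tag -> Counter(url -> number of users who applied the tag)."""
    by_tag = defaultdict(Counter)
    for _user, url, tags in marks:
        for tag in tags:
            by_tag[tag][url] += 1
    return by_tag

repo = aggregate(bookmarks)
# most popular resources for a tag, best first
print(repo["billing"].most_common())
```

Subscribing to a tag then amounts to watching new entries of the corresponding counter.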
5.2 Connecting contents of separate systems
The most fundamental link between two resources of web-based tools is evidently an HTML link from within the representation of one resource to the URL of another. For example, in the context of a bug report it is possible to reference the documentation of the affected components or a processed change request that possibly caused the bug.
Another option is to directly embed views on artifacts of one system in the context of documents residing in another system. This can be established through content elements like images, either static or generated upon each HTTP request, or through special Flash plug-ins provided by the system containing the artifacts to embed. For instance, the online diagram software gliffy (see Section 4.3) allows embedding views on its diagrams as images into any HTML document. This way, manual exports of graphical representations and the updating of links become obsolete, and it is easier to keep different separate documentation artifacts consistent with each other.
When linking or embedding contents directly within a document is not possible or appropriate, another possibility is to connect contents on the client side by enhancing the presentation in the browser. The technologies presented in Section 4.4 allow, for example, parsing a document's content each time before it is displayed and adding a link to the Javadoc class documentation to any word matching the respective class name.
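The rewriting logic behind such a browser extension can be sketched as follows; a Greasemonkey script would implement it in JavaScript, and the documentation root, class names and page content are invented for illustration.

```python
# Sketch of client-side augmentation: rewrite a page so that every known
# class name becomes a link to its Javadoc page.
import re

JAVADOC = "https://docs.example.org/javadoc"       # hypothetical doc root
KNOWN_CLASSES = {"InvoiceService", "TaxCalculator"}

def add_javadoc_links(html: str) -> str:
    """Wrap each whole-word occurrence of a known class name in a link."""
    pattern = r"\b(" + "|".join(map(re.escape, KNOWN_CLASSES)) + r")\b"
    return re.sub(
        pattern,
        lambda m: f'<a href="{JAVADOC}/{m.group(1)}.html">{m.group(1)}</a>',
        html)

page = "<p>The InvoiceService delegates to the TaxCalculator.</p>"
print(add_javadoc_links(page))
```

In a real extension the list of known class names would itself be fetched from the code repository's RESTful interface.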
Another opportunity for bringing contents together on the client side is to make use of a central bookmarking and tagging application, as mentioned above, and to display links to related resources in a browser sidebar. The relation of the documents would in this case be established via the tags of the current document and matching tags of other resources. Tags in this context can be considered not only classifiers but rather implicit links. By applying the same tag, for instance the name of a component in the application architecture, to several resources, a relation between these resources is established without directly manipulating them.
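The implicit-link idea reduces to a small ranking over shared tags; the resources and tags below are invented examples.

```python
# Sketch of tags as implicit links: resources sharing a tag with the
# current document are offered as related, ranked by tag overlap.
tags = {
    "https://wiki.example.org/billing":  {"billing", "architecture"},
    "https://bugs.example.org/17":       {"billing", "bug"},
    "https://wiki.example.org/shipping": {"shipping"},
}

def related(url: str):
    """Other resources ranked by the number of shared tags, best first."""
    own = tags.get(url, set())
    scored = [(len(own & t), other)
              for other, t in tags.items() if other != url]
    return [other for score, other in sorted(scored, reverse=True) if score]

print(related("https://wiki.example.org/billing"))
```

A sidebar would simply render this list next to the currently displayed resource.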
5.3 Awareness
If changes are made to related pieces of information in separate places by different people, it is very hard to keep this information consistent without a mechanism for monitoring these changes in order to react if necessary. The resulting inconsistencies always bear the risk of errors introduced because developers make wrong assumptions based on outdated documentation. Therefore, persons with particular responsibilities concerning the maintained application need tools keeping them informed of changes with regard to these responsibilities, whether those relate to certain parts of the application or to global aspects such as usability and security.
Content syndication mechanisms like RSS can help to meet these information needs by allowing systems to provide web feeds about changes to the resources they contain. Since we demand that the provided feeds can be filtered, it is possible to subscribe to task-specific subsets of the resources of each system. However, we only required that the native filtering provided by a tool resembles the internal organization of its resources, which in general cannot accommodate every appropriate view on the contents. Here it can be useful to rely instead on tags attached to the URLs of the resources for filtering. For aggregating the various feeds, one option is to use desktop applications like feed readers, email clients or a web browser capable of viewing RSS feeds. Additionally, the feeds can be viewed using special tools that automatically aggregate a configurable selection of feeds on a website, effectively generating a dashboard containing the current information about content changes relevant for a particular user.
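Tag-based feed filtering of this kind can be sketched with the standard RSS item/category elements; the feed content is an illustrative stand-in for a real tool's feed.

```python
# Sketch of task-specific feed filtering: parse an RSS feed and keep
# only items whose <category> matches tags a maintainer subscribed to.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel><title>Changes</title>
<item><title>Billing interface changed</title>
  <link>https://scm.example.org/r101</link><category>billing</category></item>
<item><title>New logo</title>
  <link>https://wiki.example.org/d7</link><category>design</category></item>
</channel></rss>"""

def filter_feed(feed_xml: str, subscribed: set):
    """Yield (title, link) of items carrying at least one subscribed tag."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        cats = {c.text for c in item.findall("category")}
        if cats & subscribed:
            yield item.findtext("title"), item.findtext("link")

print(list(filter_feed(FEED, {"billing"})))
```

A dashboard would run this filter over the aggregated feeds of all federated tools.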
5.4 Connecting to external knowledge
During software maintenance, especially when trying to understand existing code, familiarity with the application domain plays an important role [SV98]. However, software developers often have insufficient knowledge of the application domain, so being able to access information concerning the meaning of particular concepts is important for them when editing a program. Additionally, it might be necessary to identify persons having the respective expertise. Provided that at least part of this information is stored in, e.g., an enterprise wiki, a web-based architecture makes it easy to connect to this content in a way similar to those suggested above. The connection and exchange of knowledge can even cross the boundaries of different organizations, for example by leveraging the knowledge provided by projects like The Open Model Initiative [KSF06], a current effort to collaboratively create and refine conceptual models in a public process.
6 Research issues in federated application lifecycle management
Keeping track of different versions of artifacts becomes a serious issue when links exist between them, especially when they reside in separate systems having different versioning mechanisms. For example, for each link it should be possible to distinguish whether it points to a specific version of a resource or whether it has to be adjusted when a new version of the resource is created. Considerable efforts have to be made to motivate and establish common standards among tools regarding the traceability of changes and the notion and representation of versions.
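The pinned-versus-floating distinction for cross-tool links can be sketched as a small data structure; the repository address and revision history are illustrative assumptions.

```python
# Sketch of version-aware links: a pinned link keeps pointing at one
# revision, a floating link always resolves to the latest one.
from dataclasses import dataclass
from typing import Optional

# hypothetical revision history of one artifact, oldest to newest
revisions = {"Billing.java": ["r1", "r2", "r3"]}

@dataclass
class Link:
    target: str
    pinned_rev: Optional[str] = None   # None means "follow the latest version"

    def resolve(self) -> str:
        rev = self.pinned_rev or revisions[self.target][-1]
        return f"https://scm.example.org/{self.target}?rev={rev}"

print(Link("Billing.java", pinned_rev="r2").resolve())  # stays on r2
print(Link("Billing.java").resolve())                   # follows the newest
```

The open question raised in the text is precisely how such a `pinned_rev` notion could be standardized across tools with different versioning models.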
With no central system but a variety of federated tools, the identification and authorization of users becomes another important challenge. On the one hand, it is desirable that the identity of a user does not have to be managed for each individual system; on the other hand, a user's roles and access rights have to be present when a resource is accessed locally. Decentralized authentication standards like OpenID (http://www.openid.net) provide a solution for part of this problem. However, since a system can contain links and metadata referring to resources of another system, more complicated problems arise that require further research: one example is that a user having no rights for a particular resource must not be granted access to meta information about this resource residing in another system.
While providing a distinct identification mechanism for resources and facilitating the discovery of related content items is a major benefit of the presented approach, for many tasks this is not sufficient. When, for example, searching for contents that were created or modified at a particular date or by a particular author, access to structured metadata of the resources is required. To accomplish this, the systems in the network have to expose at least part of their internal schema. Further, the corresponding concepts of the various models have to be identified and mapped onto each other to make use of the provided schema information [Ste04]. Here we distinguish two types of schemas. On the one hand, there are data schemas provided by tools for specific purposes, e.g., the definition of requirements. An advantage of those schemas is that the concepts and their relationships are predefined and static; therefore, mappings and data exchange protocols between multiple tools can be created in advance. On the other hand, some tools, e.g., wikis, are not intended to address a specific purpose. However, some of them allow the definition of metadata by adding semantic annotations to the contents. In this case, the data schema and the formal definition of the concepts and their relationships emerge bottom-up over time. Because of the
volatility of those schemas, a predefined resource mapping is infeasible. Yet, technologies like RDF (the Resource Description Framework10) provide means for dynamically making semantic information available to other applications. However, since we do not expect a single data interchange format and semantic representation for resources among all tools in the future, it remains challenging to combine data from various schemas to provide services like global search and querying, particularly if some schemas evolve dynamically.
As mentioned above, tags can serve as a generic means of connecting resources of different systems. While research has been done on large tagging systems and their potential (e.g., [GH06] or [HKGM08]), tagging in a comparatively small environment like a software project is not well studied yet. An interesting research question in this context is, for example, to what extent tagging can contribute to establishing a shared vocabulary of technical terms as well as concepts of the application domain among the developers. Additionally, solutions have to be found to combine the tagging functionality of different systems, which is not trivial because they can differ in various ways, for instance in the possibility to assign weights or visibility restrictions to tags. Furthermore, the identification of synonymous tags among separate systems has to be accomplished. For this problem, the mapping of tags to URLs, particularly wiki pages [HSB07], might provide a solution.
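The tag-to-URL idea from [HSB07] can be sketched as a lookup table; the mapping entries are invented examples of how two differently named tags could be reconciled.

```python
# Sketch of reconciling synonymous tags across systems by mapping each
# tag onto a canonical wiki page URL.
CANONICAL = {
    "billing":   "https://wiki.example.org/Billing",
    "invoicing": "https://wiki.example.org/Billing",        # synonym
    "gui":       "https://wiki.example.org/UserInterface",
}

def same_concept(tag_a: str, tag_b: str) -> bool:
    """Two tags are synonyms if they map to the same wiki page."""
    a, b = CANONICAL.get(tag_a), CANONICAL.get(tag_b)
    return a is not None and a == b

print(same_concept("billing", "invoicing"))   # synonyms via the wiki mapping
print(same_concept("billing", "gui"))
```

The hard research part is of course building such a mapping table collaboratively rather than by hand.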
7 Conclusion
Software maintenance is a complex and costly task relying heavily on access to up-to-date documentation of the maintained system. To better accommodate this information need, we suggested that all tools used during the application lifecycle offer certain web-based interfaces that allow them to form a federated network making all created documentation artifacts accessible in a hypertext format. Inspired by current trends and successful applications on the web, we demonstrated the opportunities this approach offers for the field of application lifecycle management. Finally, we identified the particular challenges of our approach and topics for future research.
References
[AV08] Anupriya Ankolekar and Denny Vrandečić. Kalpana – enabling client-side web personalization. In Proceedings of the Nineteenth ACM Conference on Hypertext and Hypermedia, 2008.
[BDW06] B. Bruegge, A. H. Dutoit and T. Wolf. Sysiphus: Enabling informal collaboration in global software development. In IEEE International Conference on Global Software Engineering (ICGSE'06), 2006.
[BL06] Tim Berners-Lee. Linked Data. Website, July 2006. Available online at http://www.w3.org/DesignIssues/LinkedData.html; visited on August 27th 2009.
10 http://www.w3.org/RDF. Visited on August 30th 2009
[Boe76] Barry W. Boehm. Software Engineering. IEEE Trans. on Computers, C-25(12), 1976.
[Fie00] Roy Thomas Fielding. Architectural styles and the design of network-based software
architectures. Dissertation, University of California, Irvine, 2000.
[FRJ09] Martin Feilkas, Daniel Ratiu and Elmar Jürgens. The Loss of Architectural Knowledge during System Evolution: An Industrial Case Study. In Proceedings of the 17th International Conference on Program Comprehension, 2009.
[Fro07] Randall Frost. Jazz and the Eclipse Way of Collaboration. IEEE Software, 24(6), 2007.
[GH06] Scott A. Golder and Bernardo A. Huberman. Usage patterns of collaborative tagging systems. Journal of Information Science, 32(2), 2006.
[HKGM08] Paul Heymann, Georgia Koutrika and Hector Garcia-Molina. Can social bookmarking improve web search? In WSDM '08: Proceedings of the International Conference on Web Search and Web Data Mining, New York, NY, USA, February 2008. ACM.
[HSB07] Martin Hepp, Katharina Siorpaes and Daniel Bachlechner. Harvesting Wiki Consensus: Using Wikipedia Entries as Vocabulary for Knowledge Management. IEEE Internet Computing, 11(5), 2007.
[KSF06] Stefan Koch, Stefan Strecker and Ulrich Frank. Conceptual Modelling as a New Entry in the Bazaar: The Open Model Approach. In Proceedings of the Second International Conference on Open Source Systems, Berlin, 2006. Springer.
[Leh91] Franz Lehner. Softwarewartung – Management, Organisation und methodische Unterstützung. Carl Hanser Verlag, München, Wien, 1991.
[MYLL02] Weiyi Meng, Clement Yu and King-Lup Liu. Building efficient and effective metasearch engines. ACM Computing Surveys, 34(1), 2002.
[Neu07] Dagmar Neuberger. Social Bookmarking – Entwicklungen, Geschäftsmodelle und Einsatzmöglichkeiten. VDM Verlag, January 2007.
[Par94] David Lorge Parnas. Software Aging. In ICSE, 1994.
[RB08] Andreas Rausch and Manfred Broy. Das V-Modell XT: Grundlagen, Erfahrungen und Werkzeuge. dpunkt.verlag, 2008.
[RLL98] David Rowe, John Leaney and David Lowe. Defining Systems Evolvability – A Taxonomy of Change. In Conference on Engineering of Computer-Based Systems (ECBS '98), Los Alamitos, CA, USA, 1998. IEEE Computer Society.
[Ste04] Ulrike Steffens. Metadaten-Management für kooperative Anwendungen. Dissertation,
TU Hamburg-Harburg, 2004.
[SV98] Teresa M. Shaft and Iris Vessey. The relevance of application domain knowledge: characterizing the computer program comprehension process. Journal of Management Information Systems, 15(1), 1998.
[Wie09] John Wiegand. The Case for Open Services. Website, May 2009. Available online at http://open-services.net/html/case4oslc.pdf; visited on August 27th 2009.
[WTA+08] Sebastian Weber, Ludger Thomas, Ove Armbrust, Eric Ras, Jörg Rech, Özgür Uenalan, Martin Wessner, Marcel Linnenfelser and Björn Decker. The Software Organization Platform (SOP): Current Status and Future Vision. In Proceedings of the 10th International Workshop on Learning Software Organizations (LSO 2008), June 2008.