<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Six Strategies for Building High Performance SOA Applications</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Uwe</forename><surname>Breitenbücher</surname></persName>
							<email>uwe.breitenbuecher@iaas.uni-stuttgart.de</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Architecture of Application Systems (IAAS)</orgName>
								<orgName type="institution">University of Stuttgart</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oliver</forename><surname>Kopp</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Architecture of Application Systems (IAAS)</orgName>
								<orgName type="institution">University of Stuttgart</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Frank</forename><surname>Leymann</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Architecture of Application Systems (IAAS)</orgName>
								<orgName type="institution">University of Stuttgart</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Reiter</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Architecture of Application Systems (IAAS)</orgName>
								<orgName type="institution">University of Stuttgart</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dieter</forename><surname>Roller</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Architecture of Application Systems (IAAS)</orgName>
								<orgName type="institution">University of Stuttgart</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tobias</forename><surname>Unger</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Architecture of Application Systems (IAAS)</orgName>
								<orgName type="institution">University of Stuttgart</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Six Strategies for Building High Performance SOA Applications</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">254DC1DA2CF934EC75A0CE0051B4A5FB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:21+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Service-oriented architecture</term>
					<term>High Performance</term>
					<term>Strategies</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Service-oriented architecture (SOA) concepts such as loose coupling may have a negative impact on the overall execution performance of a single request. However, there are ways to build high performance applications that benefit from this architectural style while compensating significantly for the overhead it causes. This paper gives an overview of six high-level strategies to improve the performance of SOAs with a central service bus and shows how to apply them to build high performance service-oriented applications without corrupting the SOA paradigm and its concepts on the technical level.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>The key concepts of service-oriented architectures (SOAs) such as loose coupling, interoperability, or abstraction may have a negative impact on the overall performance of applications. The reasons are the additional costs of time-consuming operations such as message format transformations, dynamic service discovery, etc. <ref type="bibr" target="#b9">[10]</ref>. In this paper we present six different improvement strategies that may increase the performance and show how to apply them to build high performance SOA applications. As the presented strategies are applied on a higher level than the operations causing the overhead, they compensate for this time consumption and additionally increase the overall performance.</p><p>In this paper we use two metrics for assessing performance: throughput and response time. Throughput denotes the maximum number of requests a SOA application can process in a certain period. Response time measures the time an application needs to respond to a request <ref type="bibr" target="#b13">[14]</ref>.</p><p>The remainder of this paper is structured as follows: Section 2 discusses related work. Section 3 presents six strategies to improve the performance and shows how to apply them in an abstract service-oriented architecture with a central service bus. Finally, Section 4 concludes and provides an outlook on future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>This paper is a first attempt to show how a set of high-level strategies can be applied to improve the performance of a SOA application without corrupting the underlying SOA concepts. Other work in the area of SOA performance improvement focuses on the technical level. One example of such technical improvements are performance best practices considering optimization strategies for message processing, message structure, and message design of XML-based protocols <ref type="bibr" target="#b10">[11]</ref>. These optimizations differ from the strategies presented in this paper in their level of abstraction: the six strategies presented here are applied at a high level of abstraction, while the best practices propose optimizations for concrete technologies. FastSOA <ref type="bibr" target="#b11">[12]</ref> is an architecture and software coding practice which considers optimization through native XML environments, a mid-tier service cache, and the use of native XML persistence. It combines the caching strategy presented in this paper with the best practices by Endrei et al. <ref type="bibr" target="#b10">[11]</ref>, but does not apply the other high-level strategies in order to gain a higher overall performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">High Performance Strategies</head><p>In this section we present six strategies which enable high performance service-oriented applications without corrupting the underlying SOA concepts <ref type="bibr" target="#b3">[4]</ref>. We do not claim that these strategies are complete: they are inspired by the experiences in our research projects -mainly SimTech 1 -and have to be seen as a first attempt to design high performance SOA applications, one that will have to be extended. We present Parallel Processing, Caching, Dynamic Service Discovery, Dynamic Service Migration, Multiple Service Instantiation, and Multiple Service Invocation. Each strategy targets performance issues on an architectural level. The strategies focus on SOAs having a service bus as central component (see Fig. <ref type="figure" target="#fig_1">1</ref>): A service is an application that processes request messages and may return response messages. A client is any application that sends request messages, which have to be processed by services, to a central component called the service bus (a client can be a service, too). The service bus is a middleware component providing an integration platform to connect clients with services <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b6">[7]</ref>. It uses a service registry, which stores all available services combined with a description of their functionality, to look up appropriate services <ref type="bibr" target="#b6">[7]</ref>. All messages sent by a client are routed through the service bus, which looks up an appropriate endpoint and sends the message to the selected service. After the service finishes the processing, response messages may be routed back to the requesting client.</p><p>The following subsections describe the six strategies. Each strategy has a goal describing the strategy's impact on the performance in one sentence. The description explains in more detail how to apply the strategy and why the performance is improved. The assumptions paragraph describes preconditions which have to be met to apply the strategy successfully. The benefits of applying each strategy are summarized, as are downsides and problems, in separate paragraphs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Parallel Processing</head><p>Goal. The goal of this strategy is to increase the internal throughput of requests in the application to improve the overall application performance.</p><p>Description. An application implemented as a SOA consists of different services orchestrated together to provide new functionality: the application receives a request and invokes several services, thereby delegating to them the tasks needed to provide the overall application functionality. This concept is called "Programming in the large" <ref type="bibr" target="#b8">[9]</ref>. If the invocations are independent of each other, they can be performed in parallel, which increases throughput and therefore the overall processing performance. The distributed computing paradigm of service-oriented architectures enables this feature. The strategy has to be implemented in clients (Fig. <ref type="figure" target="#fig_1">1</ref>, Point 1).</p><p>Assumptions. It is assumed that each service is hosted on its own physical environment and is therefore isolated from the others regarding performance. Thus, multiple concurrent service executions do not influence each other's performance.</p><p>Benefits. The application of this strategy does not need a special performance-optimized service bus to achieve high throughput.</p></div>
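The effect of this strategy can be sketched with a hypothetical client that orchestrates independent service calls; the service names and delays below are illustrative assumptions, not taken from the paper, and the simulated invocations stand in for requests routed through the service bus.

```python
import asyncio

async def invoke_service(name: str, delay: float) -> str:
    """Simulates a request routed through the service bus; the delay
    stands in for network and processing time."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def orchestrate_sequential() -> list:
    # Baseline: each independent request waits for the previous response.
    return [await invoke_service(n, 0.05) for n in ("solve", "render", "store")]

async def orchestrate_parallel() -> list:
    # Strategy 3.1: independent requests are sent concurrently, so the
    # total latency approaches that of the slowest single invocation
    # instead of the sum of all invocations.
    return await asyncio.gather(
        invoke_service("solve", 0.05),
        invoke_service("render", 0.05),
        invoke_service("store", 0.05),
    )

results = asyncio.run(orchestrate_parallel())
```

Both orchestrations return the same responses; only the total latency differs, which is exactly the point of the strategy.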
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Downsides and Problems.</head><p>The client has to be able to send multiple requests at the same time to the service bus and has to wait for multiple responses, which may arrive in arbitrary order. This needs special programming effort, as this kind of requesting has to be done asynchronously. There are technologies enabling this kind of service orchestration; one example is BPEL <ref type="bibr" target="#b7">[8]</ref>. Another difficulty is identifying which requests can be processed in parallel and which requests have to be processed sequentially.</p><p>When applying this strategy to existing applications, the application flow may have to be changed, which can lead to modifications of the overall application architecture.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Application Example.</head><p>Examples for applying this strategy are all scenarios where requests can be processed independently of each other. For instance, in simulations there are often multiple matrix equations which may be solved at the same time. These equations are independent of each other as they are self-contained: no external information is needed to solve them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Caching</head><p>Goal. The goal of this strategy is to avoid processing identical requests multiple times in order to speed up the application's performance.</p><p>Description. One opportunity to improve the performance of an application's request processing is to avoid the actual request processing altogether by exploiting caching. The service bus is the central component responsible for any primary service request message consumption: clients send request messages to the bus, which routes the messages to selected services and the responses back to the corresponding requestors <ref type="bibr" target="#b2">[3]</ref>. For certain requests the responses are always the same, e.g. a matrix equation solving service always returns the same solution for the same requested equation. Responses to such requests can be cached by the service bus to decrease the response time <ref type="bibr" target="#b0">[1]</ref>.</p><p>The strategy has to be implemented in the service bus (Fig. <ref type="figure" target="#fig_1">1</ref>, Point 2).</p><p>Assumptions. The requests have to be comparable in a way that identical requests can be recognized.</p><p>Benefits. The application of this strategy is transparent for the requesting client. Thus, this strategy can be applied without the need to modify already existing components (of course, they have to send all requests to the service bus). If a request is served by the cache, the whole service system is relieved.</p></div>
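A minimal sketch of such a cache inside the service bus follows; the JSON canonicalization and the unbounded in-memory dictionary are simplifying assumptions, and a real bus would additionally need cache invalidation and size limits.

```python
import json
from typing import Any, Callable

class CachingServiceBus:
    """Routes requests to a service and returns cached responses for
    requests recognized as identical (Strategy 3.2)."""

    def __init__(self, service: Callable):
        self.service = service
        self.cache = {}
        self.hits = 0

    def _key(self, request: dict) -> str:
        # Canonicalize so that semantically identical requests compare equal.
        return json.dumps(request, sort_keys=True)

    def handle(self, request: dict) -> Any:
        key = self._key(request)
        if key in self.cache:
            self.hits += 1        # served from cache: no service invocation
            return self.cache[key]
        response = self.service(request)
        self.cache[key] = response
        return response

# Usage: a deterministic "equation solver" whose responses are cacheable.
calls = []
def solver(request: dict) -> float:
    calls.append(request)         # records actual service invocations
    return sum(request["coefficients"])

bus = CachingServiceBus(solver)
r1 = bus.handle({"coefficients": [1, 2, 3]})
r2 = bus.handle({"coefficients": [1, 2, 3]})  # identical request, hits the cache
```

The second, identical request never reaches the service, which is what relieves the service system.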
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Downsides and Problems.</head><p>The identification of cacheable request-response pairs is difficult and causes overhead at the design time of the application. A request which cannot be served by the cache causes additional overhead for cache lookup and management tasks and even decreases the performance of processing this request.</p><p>Application Example. In the scientific domain, experiments are executed many times with only slightly modified input values, and therefore internal data within a simulation is often identical across runs <ref type="bibr" target="#b1">[2]</ref>. Thus, requests depending on this internal simulation data are also identical and can be cached for further experiment executions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Dynamic Service Discovery</head><p>Goal. The goal of this strategy is to choose the fastest service for a certain request at runtime to decrease the response time.</p><p>Description. One can distinguish between two different binding techniques: static binding and dynamic binding <ref type="bibr" target="#b9">[10]</ref>. The first enables the client to explicitly define which service should be used, while the latter sends the request to the service bus, which discovers a service matching the functional requirements of the request and then sends the request to this selected service <ref type="bibr" target="#b2">[3]</ref>. This service discovery can be enriched by also taking non-functional requirements expressing capabilities into account <ref type="bibr" target="#b5">[6]</ref>: if there are functionally equivalent services, the non-functional capabilities of the services are analyzed to select the service guaranteeing the fastest response time. This enables optimized load balancing, too. The Dynamic Service Migration strategy (see Section 3.4) may be applied to optimize the services before comparing them. The strategy has to be implemented in the service bus to enrich the service discovery (Fig. <ref type="figure" target="#fig_1">1</ref>, Point 3).</p><p>Assumptions. To select the fastest service, all available suitable services have to be comparable in their performance for processing a certain request. These performance values have to be predictable automatically (either by the service bus or by the respective services).</p><p>Benefits. The application of this strategy is transparent for the requesting client. Thus, this strategy can be applied without the need to modify already existing legacy components (of course, they have to send all requests to a service bus).</p></div>
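Under the assumption that each registered service can report a predicted response time for a concrete request, the enriched discovery step can be sketched as follows; the registry structure, endpoints, and linear prediction functions are hypothetical illustrations.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ServiceEntry:
    endpoint: str
    capability: str                      # functional description in the registry
    predict_ms: Callable                 # assumed per-request performance predictor

def discover_fastest(registry: list, capability: str, request: dict) -> str:
    """Strategy 3.3: among functionally equivalent services, select the
    one with the lowest predicted response time for this request."""
    candidates = [s for s in registry if s.capability == capability]
    if not candidates:
        raise LookupError(f"no service offers {capability!r}")
    return min(candidates, key=lambda s: s.predict_ms(request)).endpoint

# Hypothetical registry: two functionally equivalent solvers with
# different predicted cost per matrix row.
registry = [
    ServiceEntry("http://fast.example/solve", "solve",
                 lambda r: 10 * len(r["matrix"])),
    ServiceEntry("http://slow.example/solve", "solve",
                 lambda r: 25 * len(r["matrix"])),
]
chosen = discover_fastest(registry, "solve", {"matrix": [[1, 2], [3, 4]]})
```

As the downsides paragraph notes, this only pays off if evaluating the predictors is cheaper than the time the faster service saves.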
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Downsides and Problems.</head><p>This strategy only improves the performance if the overhead caused by the discovery is lower than the time saved by the faster service. Otherwise, dynamic service discovery even slows down request processing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Application Example.</head><p>An example is a simulation orchestrating services for complex calculations whose response time depends on the requested quality of the output data. Some algorithms offer only low data quality but guarantee a fast calculation. Other algorithms are designed to achieve high data quality but are more time-consuming. Depending on the required quality of data (a non-functional requirement), the fastest service can be chosen.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Dynamic Service Migration</head><p>Goal. The goal of this strategy is to achieve the fastest response time for processing a request by adapting the environment and location a service is hosted on.</p><p>Description. There are services whose response time for processing a request depends on the power of the environment they are hosted on. The Dynamic Service Migration strategy moves services from less powerful machines to more powerful ones to scale up <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b14">15]</ref>. Other scenarios increasing the performance are migrating a service so that it is co-located with another service to cut down network costs, or migrating other services hosted on the same environment to different environments to free resources. This component-based migration is possible because of the loose coupling concept of SOA. The strategy may be implemented in the service bus, which triggers and manages the migration (Fig. <ref type="figure" target="#fig_1">1</ref>, Point 4).</p><p>Assumptions. To apply this strategy, the migration of services has to be feasible.</p><p>Benefits. The application of this strategy is transparent for the requesting client. Thus, this strategy can be applied without the need to modify already existing components. Recall that we assume that the services send all requests to a service bus.</p></div>
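The migration decision the managing component has to make can be sketched as a simple cost model; the linear transfer model, the speedup factor, and all concrete numbers below are illustrative assumptions, not measurements from the paper.

```python
def should_migrate(remaining_work_s: float,
                   speedup: float,
                   data_gb: float,
                   transfer_gb_per_s: float = 0.1) -> bool:
    """Strategy 3.4: migrate only if the time saved on the faster target
    machine outweighs the cost of moving the service's local data."""
    time_on_source = remaining_work_s
    migration_cost_s = data_gb / transfer_gb_per_s       # time to move the data
    time_on_target = migration_cost_s + remaining_work_s / speedup
    return time_on_target < time_on_source

# A long-running service with little local data is worth moving ...
move_small = should_migrate(remaining_work_s=600, speedup=2.0, data_gb=1)
# ... but a service tied to a huge data set is not, as the transfer
# time dominates the savings (the problem named under "Downsides").
move_big = should_migrate(remaining_work_s=600, speedup=2.0, data_gb=100)
```

The sketch makes the trade-off explicit: with 1 GB to move, migration costs 10 s and saves 300 s; with 100 GB it costs 1000 s and cannot pay off.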
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Downsides and Problems.</head><p>The migration of a running service from one machine to another is complex and needs management operations which have to be implemented. Handling local data is especially difficult: even if a service can be migrated to another machine, a huge amount of data which also has to be transferred can lead to problems. The time savings achieved by the more powerful environment may be too small, and the overall processing time for a single request (including the time needed for migration) may even increase. To avoid this, the design of the services and the overall architecture of the application have to take into account that this strategy may be applied. This causes additional overhead at development time and is generally difficult. Finding out whether migrating a service at runtime leads to a faster response time for processing a certain request is difficult and depends on many factors. The component managing this migration has to calculate predictions of the scenarios and constellations in which a migration makes sense.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Application Example.</head><p>One example taken from our experiences with bone remodeling simulation workflows is the processing of big data sets. The simulations typically process a huge amount of data by sequentially invoking services and passing the data from one service to the next. Because the services are used by multiple simulations, it is not possible to host all services and store all needed data on a single environment. Thus, migrating services at runtime so that they are co-located with the data to be processed may improve the performance in terms of response time, because network costs are cut down.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Multiple Service Invocation</head><p>Goal. The goal of this strategy is to choose the fastest service for a certain request at runtime to decrease the response time.</p><p>Description. The selection of the fastest service can be difficult, especially if there are completely different ways to process a single request. There are situations where the Dynamic Service Discovery strategy cannot be applied to discover the fastest service because the values required to compare the different services are not calculable. SOA offers a solution to achieve maximum request processing performance: sending the request to all available appropriate services concurrently and taking the response returned by the first responding service. This decreases the response time to an ideal value, as the fastest available service is implicitly chosen. To make this work, the different services have to be isolated in a way that they do not affect each other's response time. The strategy has to be implemented in the service bus (Fig. <ref type="figure" target="#fig_1">1</ref>, Point 5).</p></div>
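The race inside the service bus can be sketched with simulated services whose delays are illustrative; in a real bus the slower in-flight requests would have to be cancelled or their late responses discarded, which the sketch approximates by cancelling the pending tasks.

```python
import asyncio

async def service(name: str, delay: float) -> str:
    """Simulates one of several functionally equivalent services."""
    await asyncio.sleep(delay)
    return name

async def invoke_first_responder(invocations) -> str:
    """Strategy 3.5: send the request to all appropriate services
    concurrently and take the first response that arrives."""
    tasks = [asyncio.ensure_future(c) for c in invocations]
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()                 # discard the slower services
    return done.pop().result()

# Two different solving techniques whose speed for this request is
# not predictable in advance; the race implicitly picks the faster one.
winner = asyncio.run(invoke_first_responder([
    service("numerical-solver", 0.20),
    service("algebraic-solver", 0.05),
]))
```

The response time equals that of the fastest service, at the price of the redundant workload discussed under "Downsides and Problems".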
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Assumptions.</head><p>It is assumed that the multiple service invocations have no negative impact on the performance of processing other requests.</p><p>Benefits. The application of this strategy is transparent for the requesting client. Thus, this strategy can be applied without the need to modify already existing components. Recall that we assume that the services send all requests to a service bus.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Downsides and Problems.</head><p>The concurrent invocation of multiple functionally identical services basically produces unnecessary workload for the whole service-oriented environment. To avoid a negative impact on other services in terms of performance, cloud technology may be used for the provisioning of new services to relieve the system (i.e., applying the Multiple Service Instantiation strategy, see Section 3.6).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Application Example.</head><p>One example from the mathematics domain is solving a matrix equation using numerical or algebraic techniques. For a numerical algorithm that starts with random values and tries to converge towards the solution by executing multiple iterations, the number of steps, and thus the time needed to calculate the solution, is not predictable. Thus, for certain equations, data sets, and algorithms it is impossible to determine the fastest solving algorithm in advance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6">Multiple Service Instantiation</head><p>Goal. The goal of this strategy is to increase the performance by invoking only services having free capacity.</p><p>Description. The performance of the overall system decreases if services cannot process a large number of requests any more. If there is no other possibility to balance the outstanding workload, this strategy solves the problem by instantiating additional services functionally equivalent to the overloaded ones <ref type="bibr" target="#b14">[15]</ref>. The newly instantiated services can be invoked in parallel, and hence the application scales out. The distribution of the workload relieves overloaded services and thus increases the throughput. The strategy may be implemented in the service bus (Fig. <ref type="figure" target="#fig_1">1</ref>, Point 6).</p><p>Assumptions. The current workload and utilization of a service have to be visible to the service bus to enable the selection of appropriate services, and there have to be free resources for hosting the newly instantiated services, isolated in a way that concurrent executions do not decrease each other's performance.</p><p>Benefits. The application of this strategy is transparent for the requesting client. Thus, this strategy can be applied without the need to modify already existing components. Recall that we assume that the services send all requests to a service bus.</p></div>
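A sketch of the scale-out logic in the service bus follows; the per-instance capacity, the load counters, and the instant, cost-free cloning are simplifying assumptions, whereas real instantiation would provision a new host (e.g. via cloud infrastructure) and incur the costs named under "Downsides and Problems".

```python
class ScalingServiceBus:
    """Strategy 3.6: route requests to instances with free capacity and
    instantiate a new, functionally equivalent instance when all
    existing ones are overloaded."""

    def __init__(self, capacity_per_instance: int = 2):
        self.capacity = capacity_per_instance
        self.load = [0]                   # start with one instance (index 0)

    def route(self) -> int:
        """Returns the index of the instance chosen for the next request."""
        idx = min(range(len(self.load)), key=self.load.__getitem__)
        if self.load[idx] >= self.capacity:
            # Even the least loaded instance is at capacity: scale out.
            self.load.append(0)
            idx = len(self.load) - 1
        self.load[idx] += 1
        return idx

bus = ScalingServiceBus(capacity_per_instance=2)
assigned = [bus.route() for _ in range(5)]   # five concurrent requests
```

With a capacity of two concurrent requests per instance, the five requests are spread over three instances, two of which are created on demand.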
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Downsides and Problems.</head><p>The application of this strategy only improves the performance if the additional time needed for instantiation is less than the time saved by invoking the cloned service. Especially if local data also has to be cloned and transferred to another hosting environment, this leads to additional time consumption caused by network costs. One solution to this data migration problem is to use stateless services. Applying this strategy requires additional resources in terms of hardware or virtualized systems. Thus, it has to be ensured that this has no negative impact on the performance of other services (e.g. through cloud technology).</p><p>Application Example. In service-oriented environments where multiple SOA applications run at the same time, certain services may be overloaded because the functionality they offer is so general that it is used by many of these applications. In our simulation experiments, matrix solving services are frequently overloaded, for example.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion and Outlook</head><p>We presented six high-level strategies to increase the overall performance of service-oriented applications and showed how to apply them to build high performance SOA applications. As five of the six strategies can be implemented in a performance-driven service bus, we plan to implement this bus and integrate existing migration prototypes <ref type="bibr" target="#b4">[5]</ref>. This bus enables performance optimization which is transparent to the orchestrating component and provides a basis for evaluation scenarios showing that the applied strategies also increase the overall performance in practice.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Six high performance strategies applied to a SOA with central service bus (based on [3]) 1 http://simtech.uni-stuttgart.de</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Message Oriented Middleware Cache Pattern -a Pattern in a SOA Environment</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">Y</forename><surname>Rao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Fourth &quot;Killer Examples&quot; for Design Patterns and Objects First Workshop</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Next Generation Interactive Scientific Experimenting Based On The Workflow Technology</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sonntag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Karastoyanova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21st IASTED International Conference on Modelling and Simulation</title>
				<meeting>the 21st IASTED International Conference on Modelling and Simulation</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Patterns: Implementing an SOA Using an Enterprise Service Bus</title>
		<author>
			<persName><forename type="first">M</forename><surname>Keen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
	<note type="report_type">IBM Redbooks</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Service-Oriented Architecture: A Field Guide to Integrating XML and Web Services</title>
		<author>
			<persName><forename type="first">T</forename><surname>Erl</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Prentice Hall PTR</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">CMotion: A Framework for Migration of Applications into and between Clouds</title>
		<author>
			<persName><forename type="first">T</forename><surname>Binz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of SOCA</title>
				<meeting>SOCA</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The (Service) Bus: Services Penetrate Everyday Life</title>
		<author>
			<persName><forename type="first">F</forename><surname>Leymann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICSOC</title>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Enterprise service bus. Theory in Practice</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Chappell</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>O&apos;Reilly Media</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Web Services Platform Architecture: SOAP, WSDL, WS-Policy, WS-Addressing, WS-BPEL, WS-Reliable Messaging, and More</title>
		<author>
			<persName><forename type="first">S</forename><surname>Weerawarana</surname></persName>
		</author>
		<imprint>
			<publisher>Prentice Hall PTR</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Programming-in-the-Large Versus Programming-in-the-Small</title>
		<author>
			<persName><forename type="first">F</forename><surname>Deremer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kron</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Software Engineering</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="80" to="86" />
			<date type="published" when="1976">1976</date>
		</imprint>
	</monogr>
	<note>IEEE Transactions on</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Web Services: Principles and Technology</title>
		<author>
			<persName><forename type="first">M</forename><surname>Papazoglou</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>Pearson Prentice Hall</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Patterns: Service-Oriented Architecture and Web Services</title>
		<author>
			<persName><forename type="first">M</forename><surname>Endrei</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
	<note type="report_type">IBM Redbooks</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">FastSOA</title>
		<author>
			<persName><forename type="first">F</forename><surname>Cohen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Morgan Kaufmann</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Dynamic Service and Data Migration in the Clouds</title>
		<author>
			<persName><forename type="first">W</forename><surname>Hao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer Software and Applications Conference</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery</title>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Software Approaches to Assuring High Scalability in Cloud Computing</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 7th International Conference on e-Business Engineering (ICEBE)</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
