<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Decomposition of Test Cases in Model-Based Testing</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Marcel</forename><surname>Ibe</surname></persName>
							<email>marcel.ibe@tu-clausthal.de</email>
							<affiliation key="aff0">
<orgName type="institution">Clausthal University of Technology</orgName>
								<address>
									<settlement>Clausthal-Zellerfeld</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Decomposition of Test Cases in Model-Based Testing</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7CB1A4754FE1210FB529D97F25CB201C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:06+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>model-based testing</term>
					<term>model decomposition</term>
					<term>test case decomposition</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For decades, software testing has been a fundamental part of software development. In recent years, model-based testing has become more and more important. Model-based testing approaches enable the automatic generation of test cases from models of the system to be built. However, manually derived test cases are still more efficient in finding failures. To reduce the effort while keeping the advantages of manually derived test cases, a decomposition of test cases is introduced. This decomposition has to be adapted to the decomposition of the system model. The objective of my PhD thesis is to analyse these decompositions and to develop a method to transfer them to the test cases. This allows manually derived test cases to be reused at different phases of a software development project.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>During a software development project, testing is one of the most important activities for ensuring the quality of a software system. About 30 to 60 per cent of the total effort within a project is spent on testing <ref type="bibr" target="#b18">[19]</ref>, <ref type="bibr" target="#b13">[14]</ref>. This figure has not changed over the last three decades, even though testing is a key research topic and constantly improving methods and tools are available. One fundamental problem of testing is that it is impossible to show the complete absence of errors in a software system <ref type="bibr" target="#b6">[7]</ref>. Nevertheless, by executing enough test cases a certain level of correctness can be ensured. The number of test cases must not become too large, however, otherwise testing the system is no longer efficient. One of the most important challenges is therefore to create a good set of test cases: the number of test cases should be minimal, but they should cover as much of the system's behaviour as possible. Model-based testing is one technique that addresses this problem. A potentially infinite set of test cases is generated from the test model, an abstract model of the system to be constructed. Based on a test case specification, a finite set of these generated test cases can be selected <ref type="bibr" target="#b15">[16]</ref>. These test cases can be executed manually or automatically.</p></div>
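The generate-then-select workflow described above can be sketched in a few lines. This is a hypothetical illustration, not a tool from the paper: the transition system, the `generate_paths` enumeration and the `select` criterion are all invented for the sketch.

```python
# Illustrative sketch (hypothetical, not from the paper): a tiny test model
# given as a labelled transition system, from which test cases (paths) are
# enumerated and then filtered by a simple test case specification.

MODEL = {  # state -> list of (action label, successor state)
    "idle": [("insertCard", "auth"), ("pressKey", "idle")],
    "auth": [("enterPin", "menu"), ("cancel", "idle")],
    "menu": [("withdraw", "idle")],
}

def generate_paths(state, depth):
    """Enumerate all label sequences of length <= depth starting in `state`.
    This truncates the potentially infinite set of test cases the model induces."""
    if depth == 0:
        return [[]]
    paths = [[]]
    for label, nxt in MODEL.get(state, []):
        for tail in generate_paths(nxt, depth - 1):
            paths.append([label] + tail)
    return paths

def select(paths):
    """A simple test case specification: keep only paths that exercise
    the 'withdraw' action (a coverage-style selection criterion)."""
    return [p for p in paths if "withdraw" in p]

# Finite suite selected from the generated candidates.
suite = select(generate_paths("idle", 3))
```

The same two-phase shape (unbounded generation, criterion-driven selection) is what the cited model-based testing approaches automate on realistic models.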
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>In <ref type="bibr" target="#b19">[20]</ref> a distinction is made between four levels of testing: acceptance testing, system testing, integration testing and component or unit testing. The focus here is on integration testing and unit testing. Furthermore, testing approaches can be classified by the kind of model from which the test cases are generated <ref type="bibr" target="#b5">[6]</ref>.</p><p>Tretmans describes an approach to generate test cases from labelled transition systems <ref type="bibr" target="#b16">[17]</ref>. He introduces the ioco-testing theory, which makes it possible to define, for example, when a test case has passed, and presents an algorithm for generating test cases from labelled transition systems. Several tools implement the ioco-testing theory, for example TorX <ref type="bibr" target="#b17">[18]</ref>, TestGen <ref type="bibr" target="#b9">[10]</ref> and the AGEDIS tool <ref type="bibr" target="#b8">[9]</ref>. Jaffuel and Legeard presented an approach in <ref type="bibr" target="#b11">[12]</ref> that generates test cases for functional testing. The test model is described in the B-notation <ref type="bibr" target="#b0">[1]</ref>, and different coverage criteria allow the selection of test cases. Another approach was described in <ref type="bibr" target="#b12">[13]</ref> by Katara and Kervinen. It is based on so-called action machines and refinement machines, which are also labelled transition systems with keywords as labels. Keyword-based scenarios are defined by use cases; they are then mapped to the action machines and detailed by the refinement machines. An approach that generates test cases for service-oriented software systems from activity diagrams is introduced in <ref type="bibr" target="#b7">[8]</ref>. Test stories are derived from activity diagrams, and these test stories are the basis for generating test code. 
Several coverage criteria can be checked by constraints. In <ref type="bibr" target="#b14">[15]</ref> Ogata and Matsuura describe an approach that is also based on activity diagrams. It allows the creation of test cases for integration testing. Use cases from use case diagrams are refined by activity diagrams. For every system or component involved in an activity diagram there is a separate partition, so it is possible to select only those actions from the diagram that define the interface between the systems or the components. The test cases can then be generated from these actions. Blech et al. describe an approach in <ref type="bibr" target="#b3">[4]</ref> which allows test cases to be reused at different levels of abstraction. For that purpose, relations between more abstract and more concrete models are introduced, and it is then proved that the more concrete model is in fact a refinement of the more abstract model. That approach is based on the work of Aichernig <ref type="bibr" target="#b1">[2]</ref>, who used the refinement calculus of Back and von Wright <ref type="bibr" target="#b2">[3]</ref> to create test cases from requirements specifications by abstraction. In <ref type="bibr" target="#b4">[5]</ref> Briand et al. introduce an approach for impact analysis, which makes it possible to determine which test cases are affected by changes to the model. The test cases are divided into different categories and can be handled accordingly. That approach was developed to determine the reusability of test cases for regression testing after changes to the specification.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Problem</head><p>Nowadays there are many approaches that can automatically generate test cases for different kinds of tests from a model of a system. The advantage of these approaches is that the effort for test case generation is low. Furthermore, a test specification can ensure that the test cases meet several test coverage criteria. What these approaches do not consider is the efficiency of the test cases: a set of generated test cases can cover more or less the whole system, but a huge number of test cases may be necessary to do so. Manually derived test cases, in contrast, can be much more efficient. Because of his or her experience, a test architect is able to derive test cases that target the most error-prone parts of a system, so a smaller set of test cases can cover a large and important part of the system. But such a manual test case derivation is more expensive than an automatic generation, and this additional effort is not outweighed by the smaller number of test cases that have to be executed. During a software development project there are different kinds of tests that test the system or parts of it, and for every kind of test new test cases are necessary. From the creation of test cases for system testing from the requirements to the creation of test cases for integration and unit testing from the architecture, at every test case creation there is the choice between manual test case derivation and automatic test case generation, with all their advantages and disadvantages. The more complex the model of the system gets, the greater the advantage of automatic generation over manual derivation, because at some point a model is no longer manageable for a person. Generally, the requirements model is much smaller than the architecture, because the latter contains much more additional information about the inner structure and behaviour of the system. 
Therefore, a manual test case derivation is more reasonable for system testing than for integration or unit testing. But then the advantages of the manually derived test cases are limited to system testing. The approach introduced in the following section should automatically transfer the advantages of manually derived test cases for system testing to test cases for integration and unit testing. This is done by decomposing the test cases. In this way, the information that was added to the test cases during derivation can be reused for further test cases, without the effort of another manual test case derivation. The question that should be answered in the doctoral thesis is: Can the advantages of manually derived test cases over automatically generated ones be transferred to another level of abstraction by an automatic decomposition of these test cases?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Proposed Solution</head><p>To use the advantages of manually derived test cases at least once in the project, one set of test cases has to be derived manually. As stated above, the test cases for system testing are suitable for this. They can be derived from the requirements model, which does not contain details about the system such as its internal structure or behaviour, so the test architect can focus solely on the functions of the complete system. Hence, one obtains a set of test cases to test the system against its requirements. Based on the requirements, an architecture of the system is then created. This architecture is refined and decomposed step by step. For example, the system itself can be decomposed into several components, which can in turn be decomposed into subcomponents. The functions can be decomposed analogously into subfunctions, which are provided by the components. To test the particular subfunctions and the interaction of the components, integration and unit tests are executed. Therefore, test cases are required again. They could be derived from the architecture, but that would entail much additional effort. Another option is automatic generation, but that would mean losing the advantages of manually derived test cases. A third option is to reuse the manually derived test cases from system testing. To do this, the following problem has to be solved. In the meantime, additional information has been added to the architecture, for example about the internal composition or the subfunctions of the system. The test cases also need this information: for instance, it is not possible to test a function if no test case has information about the existence of that function. Hence, the refinements and decompositions that were made to the architecture must also be made to the test cases. That means the test cases also have to be decomposed. 
After that, the test cases from system testing can be used as a basis for test cases for integration and unit testing; manually re-deriving test cases is no longer necessary. Figure <ref type="figure" target="#fig_0">1</ref> shows this process schematically. To illustrate what such a decomposition of test cases could look like, it is shown with the Common Component Modelling Example (CoCoME) <ref type="bibr" target="#b10">[11]</ref>. CoCoME is the component-based model of a trading system of a supermarket. Here we focus only on the CashDesk component of the trading system. The requirements model contains the trading system itself as well as other systems and actors of its environment. Besides the models that describe the static structure of the system, the behaviour is described by use cases. One such use case is the handling of the express mode of the cash desk. Under certain conditions a cash desk can switch into express mode. That means that a customer can buy a maximum of eight products at that cash desk and has to pay cash; card payment is no longer allowed. The cashier can always switch off the express mode at his cash desk. Figure <ref type="figure" target="#fig_1">2</ref> shows the system environment and an excerpt of the use case Manage Express Checkout. A test case that tests the management of the express mode could consist of the following three steps:</p><p>1. The cashier presses the button DisableExpressMode at his cash desk. 2. The cash desk ensures that the colour of the light display changes to black. 3. The cash desk ensures that the card reader accepts credit cards again and card payment is allowed. In the next step the complete system is decomposed into several components. One of these components is the CashDeskLine, which in turn contains a set of CashDesk subcomponents. 
Also the description of the behaviour, in this case the management of the express mode, is decomposed into smaller steps (see figure <ref type="figure" target="#fig_2">3</ref>).</p><p>Similarly, the test case defined above has to be decomposed to test the new subfunctions. After the decomposition, the steps at the top level are identical to those of the original test case. Because of the information about the additional components and the communication between them, a few new test steps are necessary at the second level. Now these new components and their communication can also be tested by this test case. So the test case for system testing that was created from the requirements can also be used for integration and unit testing. To perform this test case decomposition, the following challenges have to be addressed:</p><p>-Definition of associations between the requirements or the first architecture and the manually derived test cases. This is necessary to transfer the decompositions made to the architecture to the test cases. -Tracing of the elements of the architecture during the further development.</p><p>In this way it can be decided which elements of the requirements or architecture are decomposed. -Definition of the decomposition of the test cases. Once it has been established how the elements of the architecture are decomposed and the corresponding test cases have been identified, it can be analysed how the test cases have to be decomposed according to the decomposition of the architecture elements. -Automatic transfer of the decomposition steps from the architecture to the test cases. For this, all possible decomposition steps have to be analysed and classified. After that they can be detected automatically and the corresponding test cases can be decomposed accordingly.</p></div>
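The decomposition idea above can be sketched with the express-mode test case from the example. The step texts follow the paper's CashDesk scenario; the data structures and the `decompose` helper are hypothetical illustrations, not the author's implementation.

```python
# Illustrative sketch (hypothetical): a system-level test case is a list of
# steps; the architecture refinement is recorded as a mapping from a
# system-level step to the sub-steps that the new components contribute.

system_test = [
    "The cashier presses the button DisableExpressMode at his cash desk.",
    "The cash desk ensures that the colour of the light display changes to black.",
    "The cash desk ensures that card payment is allowed again.",
]

# Trace recorded while the system is decomposed into CashDeskLine/CashDesk
# components: step 1 is refined into cash box / controller interactions.
refinement = {
    system_test[0]: [
        "The cashier presses the button DisableExpressMode at his cash box.",
        "The cash box sends an ExpressModeDisableEvent to the cash box controller.",
    ],
}

def decompose(test_case, mapping):
    """Return (step, sub_steps) pairs: the top-level steps stay identical,
    while refined steps gain the sub-steps needed for integration/unit testing."""
    return [(step, mapping.get(step, [])) for step in test_case]

decomposed = decompose(system_test, refinement)
```

The point of the sketch is that the system-level suite is preserved unchanged while the traced refinements automatically yield the second-level steps, which is exactly the reuse the thesis aims for.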
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Contribution, Evaluation and Validation</head><p>The objective of this PhD thesis is to develop an approach for decomposing test cases analogously to the decomposition of the corresponding system under test. That approach is based upon findings about how the decomposition of a system influences the corresponding test cases. In a further step the approach shall be implemented as a prototype. After that, the prototype can be evaluated and compared to other implementations of model-based testing approaches. Within the next year, the decomposition steps of a system and their influence on the corresponding test cases shall be analysed. For this, the changes of individual model elements during detailed design have to be traced, in particular how they are extended with information about their internal structure and behaviour. Another important aspect is the relation between model elements and test steps. With this knowledge it is possible to adapt the test cases after a decomposition of the system in such a way that the test cases also cover the added information about structure and behaviour.</p><p>In the following six months a first prototype shall be implemented. It is intended to evaluate this prototype within a student project. To see how efficient the test cases derived with this approach are, a set of manually derived test cases is compared with a set of automatically generated ones. After this, the manually derived test cases are decomposed. In the next step the decomposed test cases are compared with newly generated test cases. In each case, the average number of failures detected by the test cases and the severity of these failures for the function of the system are compared. The findings from this first evaluation are integrated into the approach and the prototype during the following six months. 
After that finalisation, the new prototype should be set up in an industrial project and compared with other model-based tools in use.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Creation of test cases during project</figDesc><graphic coords="4,152.06,430.69,311.25,210.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Example of the system environment and one use case</figDesc><graphic coords="5,134.77,370.31,345.83,111.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Example of the decomposed structure of the system and its behaviour</figDesc><graphic coords="6,134.77,116.83,345.83,101.25" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The cashier presses the button DisableExpressMode at his cash desk. (a) The cashier presses the button DisableExpressMode at his cash box. (b) The cash box sends an ExpressModeDisableEvent to the cash box controller.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">The B-Book: Assigning Programs to Meanings</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Abrial</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005-11">Nov 2005</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Test-design through abstraction -a systematic approach based on the refinement calculus</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">K</forename><surname>Aichernig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">j-jucs</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="710" to="735" />
			<date type="published" when="2001-08">Aug 2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Refinement Calculus: A Systematic Introduction</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J R</forename><surname>Back</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998-01">Jan 1998</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Reusing test-cases on different levels of abstraction in a model based development tool</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">O</forename><surname>Blech</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ratiu</surname></persName>
		</author>
		<idno>arXiv e-print 1202.6119</idno>
		<ptr target="http://arxiv.org/abs/1202.6119" />
		<imprint>
			<date type="published" when="2012-02">Feb 2012. 2012</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="13" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Automating impact analysis and regression test selection based on UML designs</title>
		<author>
			<persName><forename type="first">L</forename><surname>Briand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Labiche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Soccar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Software Maintenance</title>
				<imprint>
			<date type="published" when="2002">2002. 2002</date>
			<biblScope unit="page" from="252" to="261" />
		</imprint>
	</monogr>
	<note>Proceedings.</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A survey on model-based testing approaches: a systematic review</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Dias Neto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Subramanyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vieira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">H</forename><surname>Travassos</surname></persName>
		</author>
		<idno type="DOI">10.1145/1353673.1353681</idno>
		<ptr target="http://doi.acm.org/10.1145/1353673.1353681" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st ACM international workshop on Empirical assessment of software engineering languages and technologies: held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE) 2007</title>
				<meeting>the 1st ACM international workshop on Empirical assessment of software engineering languages and technologies: held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE) 2007<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="31" to="36" />
		</imprint>
	</monogr>
	<note>WEASELTech &apos;07</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The humble programmer</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">W</forename><surname>Dijkstra</surname></persName>
		</author>
		<idno type="DOI">10.1145/355604.361591</idno>
		<ptr target="http://doi.acm.org/10.1145/355604.361591" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="859" to="866" />
			<date type="published" when="1972-10">Oct 1972</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Model-driven system testing of service oriented systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Felderer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chimiakopoka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Breu</surname></persName>
		</author>
		<ptr target="http://www.dbs.ifi.lmu.de/~fiedler/publication/FZFCB09.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proc. of the 9th International Conference on Quality Software</title>
				<meeting>of the 9th International Conference on Quality Software</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The AGEDIS tools for model based testing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hartman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nagin</surname></persName>
		</author>
		<idno type="DOI">10.1145/1007512.1007529</idno>
		<ptr target="http://doi.acm.org/10.1145/1007512.1007529" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2004 ACM SIGSOFT international symposium on Software testing and analysis</title>
				<meeting>the 2004 ACM SIGSOFT international symposium on Software testing and analysis<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="129" to="132" />
		</imprint>
	</monogr>
	<note>ISSTA &apos;04</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Protocol-inspired hardware testing</title>
		<author>
			<persName><forename type="first">J</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Turner</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-0-387-35567-2_9</idno>
		<ptr target="http://link.springer.com/chapter/10.1007/978-0-387-35567-2_9" />
	</analytic>
	<monogr>
		<title level="m">IFIP The International Federation for Information Processing</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Csopaki</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Dibuz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Tarnay</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer US</publisher>
			<date type="published" when="1999-01">Jan 1999</date>
			<biblScope unit="page" from="131" to="147" />
		</imprint>
	</monogr>
	<note>Testing of Communicating Systems</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">CoCoME-the common component modeling example</title>
		<author>
			<persName><forename type="first">S</forename><surname>Herold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Klus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Welsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Deiters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rausch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Reussner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Krogmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Koziolek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mirandola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hummel</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-85289-6_3</idno>
		<ptr target="http://link.springer.com/chapter/10.1007/978-3-540-85289-6_3" />
	</analytic>
	<monogr>
		<title level="m">The Common Component Modeling Example</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="16" to="53" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">LEIRIOS test generator: automated test generation from B models</title>
		<author>
			<persName><forename type="first">E</forename><surname>Jaffuel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Legeard</surname></persName>
		</author>
		<idno type="DOI">10.1007/11955757_29</idno>
		<ptr target="http://dx.doi.org/10.1007/11955757_29" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th international conference on Formal Specification and Development in B</title>
				<meeting>the 7th international conference on Formal Specification and Development in B<address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="277" to="280" />
		</imprint>
	</monogr>
	<note>B&apos;07</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Making model-based testing more agile: A use case driven approach</title>
		<author>
			<persName><forename type="first">M</forename><surname>Katara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kervinen</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-70889-6_17</idno>
		<ptr target="http://link.springer.com/chapter/10.1007/978-3-540-70889-6_17" />
	</analytic>
	<monogr>
		<title level="m">Hardware and Software, Verification and Testing</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">E</forename><surname>Bin</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Ziv</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Ur</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007-01">Jan 2007</date>
			<biblScope unit="page" from="219" to="234" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">The art of software testing</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Myers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Badgett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Thomas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sandler</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>John Wiley &amp; Sons</publisher>
			<pubPlace>Hoboken, N.J.</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A method of automatic integration test case generation from UML-based scenario</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ogata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Matsuura</surname></persName>
		</author>
		<ptr target="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.175.5822&amp;rep=rep1&amp;type=pdf" />
	</analytic>
	<monogr>
		<title level="j">WSEAS Trans Inf Sci Appl</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="598" to="607" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">10 methodological issues in model-based testing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pretschner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Philipps</surname></persName>
		</author>
		<idno type="DOI">10.1007/11498490_13</idno>
		<ptr target="http://link.springer.com/chapter/10.1007/11498490_13" />
	</analytic>
	<monogr>
		<title level="m">Model-Based Testing of Reactive Systems</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">M</forename><surname>Broy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Jonsson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Katoen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Leucker</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Pretschner</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2005-01">Jan 2005</date>
			<biblScope unit="volume">3472</biblScope>
			<biblScope unit="page" from="281" to="291" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Model based testing with labelled transition systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Tretmans</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-78917-8_1</idno>
		<ptr target="http://link.springer.com/chapter/10.1007/978-3-540-78917-8_1" />
	</analytic>
	<monogr>
		<title level="m">Formal Methods and Testing</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Hierons</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Bowen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Harman</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008-01">Jan 2008</date>
			<biblScope unit="volume">4949</biblScope>
			<biblScope unit="page" from="1" to="38" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">TorX: automated model-based testing</title>
		<author>
			<persName><forename type="first">J</forename><surname>Tretmans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Brinksma</surname></persName>
		</author>
		<imprint>
			<biblScope unit="page" from="31" to="43" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Practical Model-Based Testing: A Tools Approach</title>
		<author>
			<persName><forename type="first">M</forename><surname>Utting</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Legeard</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010-07">Jul 2010</date>
			<publisher>Morgan Kaufmann</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Standard glossary of terms used in software testing</title>
		<author>
			<persName><forename type="first">E</forename><surname>Van Veenendaal</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<publisher>International Software Testing Qualifications Board</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
