<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On the Importance of Supporting Multiple Stakeholders Points of View for the Testing of Interactive Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alexandre</forename><surname>Canny</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">ICS-IRIT</orgName>
								<orgName type="institution">University Paul Sabatier -Toulouse III</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Elodie</forename><surname>Bouzekri</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">ICS-IRIT</orgName>
								<orgName type="institution">University Paul Sabatier -Toulouse III</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Célia</forename><surname>Martinie</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">ICS-IRIT</orgName>
								<orgName type="institution">University Paul Sabatier -Toulouse III</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Philippe</forename><surname>Palanque</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">ICS-IRIT</orgName>
								<orgName type="institution">University Paul Sabatier -Toulouse III</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">On the Importance of Supporting Multiple Stakeholders Points of View for the Testing of Interactive Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">69590A5E82189E2BA0352A24F1BADABC</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T19:05+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Interactive-System Testing</term>
					<term>Stakeholders in Testing</term>
					<term>Testing Activities</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Testing is the activity meant to demonstrate that systems are fit for purpose and to detect their defects. On interactive systems, checking the fitness for purpose requires proper knowledge of the users' activities and profiles as well as of the context of use. Moreover, defects may be present in software, input/output device hardware or in the way interaction techniques are handled. Comprehensively testing interactive systems thus requires a large set of skills provided by usability experts, software engineers, human-factor specialists, etc. So far, these stakeholders conduct testing activities using processes from their respective areas of expertise that do not take advantage of others stakeholders' expertise effectively. This paper discusses the contribution of each stakeholders in current testing activities and highlights that a common view of the interactive system under test can serve as a mediating tool for each stakeholder to share information and identify/execute more relevant test suites.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The testing of interactive system is known to be a complex activity that cannot be exhaustive <ref type="bibr" target="#b3">[4]</ref>. Indeed, testing requires finding the system's defects and demonstrating it is fit for purposes <ref type="bibr" target="#b8">[9]</ref>, which is made difficult by the nature of interactive systems that integrates hardware, software and humans. On such systems, defects may be found in the code of applications as well as in the way the input/output devices and interaction techniques are handled in changing context (e.g. when an aircraft enters an area of turbulences), etc. Moreover, demonstrating that interactive systems are fit for purpose requires the ability to demonstrate that they let the users accomplish their goals and also that they are compliant with domain-specific constraints (e.g. is a videogame matching constraints imposed by rating organizations such as ESRB and PEGI?).</p><p>Researchers and practitioners in fields such as Software Engineering and Human-Computer Interaction developed processes and tools for supporting the testing of interactive systems using coverage criterions relevant in their respective areas of expertise. Furthermore, authorities and rating organizations introduced documentations geared towards systems manufacturers to let them know how fitness to purpose is checked for domain-specific aspects. Unfortunately, testing remains conducted by stakeholders focusing on their own areas of expertise who are not working in close collaboration with stakeholders from other areas. This may lead, for instance, to software engineers making some assumptions on the way the user will interact with the application. By doing so, they may design test cases/suites that do not properly take into account the human capabilities when searching for defects (e.g. 
the SteamVR motion tracking system was not tested with expert players in mind <ref type="bibr" target="#b4">[5]</ref>) even though some exchanges with usability experts could have help identifying correct ones. We claim that by allowing the various stakeholder in the testing activities of an interactive system to collaborate, designing relevant test cases would be easier.</p><p>In this paper, we first present the stakeholders in generic process for testing usability and software. Then, we present the stakeholders in the testing and validation of three kinds of interactive systems. The third section discusses the testing problem with architect view in mind and highlights how integrated the testing of interactive system should be. The fourth section highlights the need for exchange of information between stakeholders and for associated processes. The fifth section concludes the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Process View on the Testing of Interactive Systems</head><p>In the fields of HCI and of Software Engineering, the testing activities have different objectives and are thus organized by different processes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Testing in HCI</head><p>In the field of HCI, testing is associated to user evaluation, which aims at ensuring that the interactive system fulfills user needs in terms of usability, user experience and learnability. A good level of usability is always required because users have to be able to accomplish their tasks in an efficient way <ref type="bibr" target="#b10">[11]</ref>. User testing takes place at various stages of the design and development process. The alternation of prototyping and user testing phases aims to capture the maximum of user needs and to ensure that user tasks and user behavior are compatible with the interactive system presentation and behavior. The usability design process (Fig. <ref type="figure" target="#fig_0">1</ref>) <ref type="bibr" target="#b6">[7]</ref> presents a set of steps that aims at developing an usable interactive system. The main characteristics of the usability design process (see Fig. <ref type="figure" target="#fig_0">1</ref>) are: an early user involvement, an iterative and incremental set of design steps, empirical measurements, evaluation of the use in context and multi-disciplinary design teams. Users are involved since the beginning of the design process and are then regularly solicited for the evaluation of mock ups and for the testing of prototypes. 
Several stakeholders thus contribute to the testing activities:</p><p> Users: formulate their needs, accomplish given actions with the prototypes and give their opinion on the prototypes in terms of the perceived usability,  Designer: gather user needs and produce mock ups and prototypes, ensure that the mock ups and prototypes are legible and functional for user review and user testing,  Programmers: program high-fidelity prototypes and/or deploy the interactive system, ensure that the prototypes are reliable to be tested and used by users,  Usability experts: observe and interview users, produce experimental evaluation protocols, manage the experiments and analyze the results. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Testing in Software Engineering</head><p>In the field of software engineering, testing "consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain" <ref type="bibr" target="#b7">[8]</ref>. During the software development process, several types of testing activities aim at ensuring that the produced software behaves as specified and is free of defects. Fig. <ref type="figure" target="#fig_1">2</ref> depicts the ordering of the development and testing phases in the V software development process <ref type="bibr" target="#b1">[2]</ref>. Fig. <ref type="figure" target="#fig_1">2</ref> illustrates the different types of testing activities required to verify software systems. These activities are defined in the Software Engineering Body of Knowledge (SWEBOK) <ref type="bibr" target="#b7">[8]</ref>. "Module test" (in Fig. <ref type="figure" target="#fig_1">2</ref>) or unit test refers to the independent testing of each function and procedure. Integration test refers to the testing of several parts of the software that interact together. System test refers to the testing of the entire software. Acceptance test or validation test refers to the testing of the entire software in the context of use.</p><p>The testing of a software application involves different stakeholders <ref type="bibr" target="#b7">[8]</ref> such as:</p><p> Software engineers: produce specifications of requirements, specifications of (high level) software design and specification of system tests and integration tests. They also integrate the software components, perform the integration tests and build the entire software.  Programmers: produce component (low level) software specifications, program the components and perform the unit tests for the components they have produced. 
 Testers: execute the system tests, produce test reports and raise defects in case there are.</p></div>
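The distinction between these test levels can be sketched with a toy two-module example of our own (not taken from SWEBOK): a unit test exercises one function in isolation, while an integration test exercises the modules interacting together.

```python
# Minimal illustration of the "module test" vs. "integration test" levels
# of Fig. 2, on a hypothetical parser + formatter pair.

def parse_temperature(raw: str) -> float:
    """Module under unit test: parse '21.5C' into degrees Celsius."""
    if not raw.endswith("C"):
        raise ValueError(f"unsupported unit in {raw!r}")
    return float(raw[:-1])

def format_for_display(celsius: float) -> str:
    """Second module: render a value for the user interface."""
    return f"{celsius:.1f} °C"

# Unit ("module") test: one function, in isolation.
assert parse_temperature("21.5C") == 21.5

# Integration test: the two modules interacting together.
assert format_for_display(parse_temperature("21.5C")) == "21.5 °C"
```

A system test would, by contrast, drive the assembled application through its real interface rather than calling functions directly.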
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Application Domain View on Interactive System Testing</head><p>Beyond the generic nature of the processes presented in the previous section lies application domain-specific constraints and uses that may deeply influence the way to conduct the testing activities. Testing the compliance with regulatory obligations or guidelines are amongst the activity that may cause the involvement of specific stakeholders in the testing of an interactive system. In this section, we present some stakeholders involved in the testing of i) desktop application with GUI, ii) videogames and iii) safety critical systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Testing of GUI-Based Applications</head><p>Graphical User Interfaces (GUI) are known to be impossible to test exhaustively as the number of sequences of events that can be performed on their widgets is infinite <ref type="bibr" target="#b2">[3]</ref>. Thus, the main challenge in testing GUIs is to identify the relevant event sequences to execute on GUI widgets <ref type="bibr" target="#b9">[10]</ref>. Indeed, Banerjee et al. <ref type="bibr" target="#b2">[3]</ref> define GUI testing as "solely by performing sequences of events (e.g. "click on button", "enter text", "open menu") on GUI widgets (e.g. "button", "text-field", "pull-down menu")". Banerjee et al. <ref type="bibr" target="#b2">[3]</ref> present several types of GUI testing techniques (script-based testing, capture/replay testing and model-based testing). For each technique, different stakeholders are involved:</p><p> Programmer: program the GUI application. In script-based testing approaches, the programmer additionally writes scripts describing the event sequence to execute and the expected state of the GUI either between each events or after the complete sequence.</p><p> Users: accomplish given actions with the GUI applications. In capture/replay testing approaches, the users' interaction with the application are recorded. They are then used later for non-regression testing.  Software engineers: execute the tests. The capture/replay approach allows to record relevant sequences (the ones that users actually performs), its main drawback is that these recordings become outdated as soon as a GUI element changes (e.g. while adding a tab in a settings window).  Test automation managers and test automation engineers: are involved for model-based testing approaches. 
They select and apply techniques to build models of the GUI behavior from the results of the reverse engineering of the application <ref type="bibr" target="#b9">[10]</ref> or from the requirements and specifications of the application <ref type="bibr" target="#b14">[15]</ref>. They build models describing all the executable event sequences of up to a given length (as selected by test automation manager). The models are then used to generate relevant event sequences (e.g. all the sequences leading to the "Save as" dialog).</p><p>Besides the event-driven nature of the GUI, some organizations may want to verify that their GUI complies with specific guidelines such as accessibility one (e.g. <ref type="bibr" target="#b15">[16]</ref>). While the automation of some of these tests is possible, usability experts may be involved in this process.</p></div>
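The model-based generation step can be illustrated with a small sketch of our own (the three-screen model and all event names below are hypothetical; they are not taken from GUITAR [10] or the paper): enumerate every executable event sequence up to a chosen length, then select the sequences reaching a given target, here a "Save as" dialog.

```python
# Hypothetical GUI behavior model: states are screens, transitions
# are events on widgets (all names are illustrative).
gui_model = {
    "main":           {"open_file_menu": "file_menu", "type_text": "main"},
    "file_menu":      {"click_save_as": "save_as_dialog", "press_escape": "main"},
    "save_as_dialog": {"press_escape": "main"},
}

def executable_sequences(model, start, max_length):
    """Enumerate all executable event sequences of up to max_length,
    returning (sequence, final state) pairs."""
    sequences = []
    frontier = [(start, [])]
    for _ in range(max_length):
        next_frontier = []
        for state, seq in frontier:
            for event, target in model[state].items():
                new_seq = seq + [event]
                sequences.append((new_seq, target))
                next_frontier.append((target, new_seq))
        frontier = next_frontier
    return sequences

# Test selection: keep only the sequences reaching the "Save as" dialog.
to_save_as = [seq for seq, end in executable_sequences(gui_model, "main", 3)
              if end == "save_as_dialog"]
```

With `max_length` set to 3, this model yields exactly two such sequences: `open_file_menu, click_save_as` and `type_text, open_file_menu, click_save_as`; the combinatorial growth of the other sequences is precisely why the length bound chosen by the test automation manager matters.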
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Testing of Games</head><p>Testing of games shares quality concerns with software applications. However, for the development of games, there is a common agreement in the community that successful games rely on an iterative development approach. Usability evaluation is an important aspect in games development: if a game is not usable (e.g. the interaction technology does not allow easy learning how to play the game), a game is typically not successful. Novak <ref type="bibr" target="#b11">[12]</ref> makes a distinction between testing activities and quality assurance activities in game development. The game testing activities focus on the usability and user experience of the game. Whereas, the quality assurance activities include process monitoring, game evaluation and auditing according to the developer and publisher standards. Novak <ref type="bibr" target="#b11">[12]</ref> identifies the following stakeholders involved in the testing of games:</p><p> Unit testing manager is the responsible of the testing of multiple game projects.  Lead tester is the testing team supervisor and manager. In addition, the lead tester must identify some types of errors (i.e. modeling or texturing errors)  Compatibility and format testers work for a publisher. They focus on the crossplatform game compatibility.  Production (developer), quality assurance (publisher), regression testers usually work together. They make suggestions to improve, to add or delete game features. They take into account prospective competing titles. Regression testers focus on severe bugs.</p><p> Playability, usability and beta testers are involved during the Beta phase. The Beta Testers are volunteers who test the game in-house. They are members of game's target users. 
 Focus testers are target users who test the game with the marketing department.</p><p>These tests are similar to focus group <ref type="bibr" target="#b13">[14]</ref>.</p><p>Rating organizations (e.g. PEGI, ESRB, etc.) are also part of the validation process of a game. They are responsible for rating the game prior to their release and intervene at pre-production stage to attribute provisional ratings (found in game trailers), during the main production to adjust the rating to the game changes and in postproduction to deal, for instance, with the rating of additional game content.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Testing of Safety Critical Systems</head><p>In safety critical systems, several quality factors deeply influence the development process such as reliability, fault-tolerance or security. The nature and high cost related to the evaluation of critical systems makes it necessary to test the whole system before its deployment, contrary to non-critical systems that can be patched. This constraint leads to plan certification very early in the development process of the system. To do so, the certification authority and the applicant commit an agreement as soon as a new project enters an active development phase. Then, each part of the system is tested and revised until it matches the certification requirements. In order to make the testing activities dependable, the principles of fault tolerance as detailed in <ref type="bibr" target="#b5">[6]</ref> can be applied to the testing activity. For instance, assigning people of different organizations to the development and to the system testing covers the diversity and segregation principles. Hereinafter, we present the stakeholders involved in the testing and validation process of an aircraft, as listed by the Federal Aviation Administration <ref type="bibr" target="#b0">[1]</ref> certification authority:</p><p> FAA (certification authority): authority supplies requirements (regulation and policy) and associated means of compliance to the applicant, determine conformity and airworthiness.  Applicant's inspectors and designees: must demonstrate the compliance of the system to be certified with these requirements.  Applicant's flight test pilots: conduct flight tests to show compliance.  FAA (certification authority) aircraft evaluation group and designees evaluate conformance to operations and maintenance requirements.</p><p>Because safety critical systems are large-scale systems with multiple components, the testing process needs some automation. 
Moreover, the applicant's engineers may use formal model-based methods to model the system with automated property checking by using model-checkers as described in the DO-178C supplement 333 <ref type="bibr" target="#b12">[13]</ref>. To avoid unnecessary tests, if a part of the already certified system is reused and unchanged in a new system to be certified, the already certified system part does not need to be tested.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Architectural View on Interactive-System Testing</head><p>In the previous sections, we highlighted that testers work with various considerations in mind. Thus, a key to support multiple stakeholder points of view in the testing of an interactive system is to benefit from a mediating view that bridges the gaps between those considerations. As architectures are meant to describe the conceptual structure and logical organization of a system, they are prime candidate to serve as mediating tools. While most architectures are domain specific (i.e. network architecture, software architecture, etc.), the H-MIODMIT architecture <ref type="bibr" target="#b3">[4]</ref> (Fig. <ref type="figure">3</ref>) highlights the presence of the human (left of Fig. <ref type="figure">3</ref>) and the software (right part of Fig. <ref type="figure">3</ref>). Moreover, this architecture considers hardware by explicitly mentioning "Input Devices" and "Output Devices".</p><p>Fig. <ref type="figure">3</ref>. The H-MIODMIT architecture (from <ref type="bibr" target="#b3">[4]</ref>)</p><p>Thanks to such architecture, it is possible to reason at a higher level of abstraction than with any domain-specific architecture. Thus, a usability expert (bringing knowledge about the human capabilities) may state that "a properly motivated human using a light enough controller can turn their wrist at up to 3600 degrees/sec" in a Virtual Reality experience <ref type="bibr" target="#b4">[5]</ref>. Looking at this statement over the H-MIODMIT architecture, we identify that it relies on the knowledge of the "Motor Processor" (leftmost component in Fig. <ref type="figure">3</ref>) and serves as an input knowledge for the testing of the "Input Devices" (i.e. the controllers motion sensors must be capable of handling rotation speed of up to 3600 degrees/sec). 
Moreover, this means that "Drivers and Libraries" must be able to produce relevant high-level events from the controller data (e.g. considering the way the controller samples information, is a "byte" sufficient to convey the delta angle?). Ultimately, such Usability Expert statement will translates into test specifications for components throughout H-MIODMIT. This architecture remains however insufficient to distribute all the testing requirements as it does not highlights, for instance, the existence of the context of use and its impact on the various systems' components.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>Designing reliable and usable interactive systems is complex and involves multiple stakeholders. This position paper presents some of the stakeholders involved in interactive system testing. It highlights that the stakeholders from different areas of expertise may benefit from the knowledge of each other during the testing activities. This backs our claim that processes and tools supporting multiple stakeholders' points of view in the testing of interactive systems are required. Such processes and tools should provide ways for each stakeholders to the high-level test requirements defined in other areas of expertise and ways to trace-back them from refined requirements to propagate changes if the architecture or purpose of the interactive system evolves. Furthermore, they should be able to cope with application domainspecific requirements to design as comprehensive as possible test suites.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. The usability design process (from [7])</figDesc><graphic coords="3,176.90,240.05,241.20,177.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Development and testing phases in the "V" development process (from [2])</figDesc><graphic coords="3,189.25,545.55,228.15,128.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="7,136.05,294.40,330.05,166.39" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Aea</forename><surname>Aia</surname></persName>
		</author>
		<author>
			<persName><surname>GAMA</surname></persName>
		</author>
		<title level="m">the FAA Aircraft Certification Service and Flight Standards Services: The FAA and Industry Guide to Product Certification</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note>Third Edition</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An Introduction to Software Testing</title>
		<author>
			<persName><forename type="first">L</forename><surname>Baresi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pezzè</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.entcs.2005.12.014</idno>
		<ptr target="https://doi.org/10.1016/j.entcs.2005.12.014" />
	</analytic>
	<monogr>
		<title level="j">Electronic Notes in Theoretical Computer Science</title>
		<imprint>
			<biblScope unit="volume">148</biblScope>
			<biblScope unit="page" from="89" to="111" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Graphical user interface (GUI) testing: Systematic mapping and repository</title>
		<author>
			<persName><forename type="first">I</forename><surname>Banerjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Garousi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Memon</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.infsof.2013.03.004</idno>
		<ptr target="https://doi.org/10.1016/j.infsof.2013.03.004" />
	</analytic>
	<monogr>
		<title level="j">Information and Software Technology</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="1679" to="1694" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Rationalizing the Need of Architecture-Driven Testing of Interactive Systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Canny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bouzekri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Martinie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Palanque</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Human-Centered and Error-Resilient Systems Development</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Dent</surname></persName>
		</author>
		<ptr target="https://www.engadget.com/2019/02/12/beat-saber-players-too-fast-for-steam-vr/" />
		<title level="m">Beat Saber&quot; players were so fast that they broke Steam VR</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Fault-Tolerant User Interfaces for Critical Systems: Duplication, Redundancy and Diversity as New Dimensions of Distributed User Interfaces</title>
		<author>
			<persName><forename type="first">C</forename><surname>Fayollas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Martinie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Navarre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Palanque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fahssi</surname></persName>
		</author>
		<idno type="DOI">10.1145/2677356.2677662</idno>
		<ptr target="https://doi.org/10.1145/2677356.2677662" />
	</analytic>
	<monogr>
		<title level="m">Presented at the Proceedings of the 2014 Workshop on Distributed User Interfaces and Multimodal Interaction</title>
				<imprint>
			<date type="published" when="2014-01-07">January 7. 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The usability design process -integrating usercentered systems design in the software development process</title>
		<author>
			<persName><forename type="first">B</forename><surname>Göransson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gulliksen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Boivie</surname></persName>
		</author>
		<idno type="DOI">10.1002/spip.174</idno>
		<ptr target="https://doi.org/10.1002/spip.174" />
	</analytic>
	<monogr>
		<title level="j">Improvement and Practice</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="111" to="131" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
	<note>Software Process</note>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Ieee Computer ; Bourque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Fairley</surname></persName>
		</author>
		<title level="m">Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0</title>
				<meeting><address><addrLine>Los Alamitos, CA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society Press</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="https://glossary.istqb.org/search/" />
		<title level="m">International Software Testing Qualification Board: ISTQB Glossary</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">GUITAR: an innovative tool for automated testing of GUI-driven software</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">N</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Robbins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Banerjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Memon</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10515-013-0128-9</idno>
		<ptr target="https://doi.org/10.1007/s10515-013-0128-9" />
	</analytic>
	<monogr>
		<title level="j">Autom Softw Eng</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="65" to="105" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Nielsen</surname></persName>
		</author>
		<title level="m">Usability Engineering</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Game Development Essentials: An Introduction</title>
		<author>
			<persName><forename type="first">J</forename><surname>Novak</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Cengage Learning</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><surname>RTCA</surname></persName>
		</author>
		<idno>DO-178C</idno>
		<title level="m">Software Considerations in Airborne Systems and Equipment Certification</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Stewart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Shamdasani</surname></persName>
		</author>
		<title level="m">Focus Groups: Theory and Practice</title>
				<imprint>
			<publisher>SAGE Publications</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A taxonomy of model-based testing approaches</title>
		<author>
			<persName><forename type="first">M</forename><surname>Utting</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pretschner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Legeard</surname></persName>
		</author>
		<idno type="DOI">10.1002/stvr.456</idno>
		<ptr target="https://doi.org/10.1002/stvr.456" />
	</analytic>
	<monogr>
		<title level="j">Softw. Test. Verif. Reliab</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="297" to="312" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<ptr target="https://www.w3.org/WAI/standards-guidelines/wcag/" />
		<title level="m">W3C Web Accessibility Initiative: Web Content Accessibility Guidelines (WCAG) Overview</title>
				<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
