<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Importance of Supporting Multiple Stakeholders Points of View for the Testing of Interactive Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexandre Canny</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elodie Bouzekri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Célia Martinie</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philippe Palanque</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ICS-IRIT, University Paul Sabatier - Toulouse III</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Testing is the activity meant to demonstrate that systems are fit for purpose and to detect their defects. On interactive systems, checking the fitness for purpose requires proper knowledge of the users' activities and profiles as well as of the context of use. Moreover, defects may be present in software, input/output device hardware or in the way interaction techniques are handled. Comprehensively testing interactive systems thus requires a large set of skills provided by usability experts, software engineers, human-factor specialists, etc. So far, these stakeholders conduct testing activities using processes from their respective areas of expertise that do not take advantage of others stakeholders' expertise effectively. This paper discusses the contribution of each stakeholders in current testing activities and highlights that a common view of the interactive system under test can serve as a mediating tool for each stakeholder to share information and identify/execute more relevant test suites.</p>
      </abstract>
      <kwd-group>
        <kwd>Interactive-System Testing</kwd>
        <kwd>Stakeholders in Testing</kwd>
        <kwd>Testing Activities</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The testing of interactive systems is known to be a complex activity that cannot be
exhaustive [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Indeed, testing requires finding the system’s defects and
demonstrating that it is fit for purpose [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which is made difficult by the nature of interactive
systems, which integrate hardware, software and humans. On such systems, defects may be
found in the code of applications as well as in the way the input/output devices and
interaction techniques are handled in a changing context (e.g. when an aircraft enters an
area of turbulence). Moreover, demonstrating that interactive systems are fit for
purpose requires the ability to demonstrate both that they let the users accomplish their
goals and that they comply with domain-specific constraints (e.g. does a
videogame match the constraints imposed by rating organizations such as ESRB and
PEGI?).
      </p>
      <p>
        Researchers and practitioners in fields such as Software Engineering and
Human-Computer Interaction have developed processes and tools supporting the testing of
interactive systems using coverage criteria relevant in their respective areas of
expertise. Furthermore, authorities and rating organizations have introduced documentation
geared towards systems manufacturers to let them know how fitness for purpose is
checked for domain-specific aspects. Unfortunately, testing remains conducted by
stakeholders focusing on their own areas of expertise who do not work in close
collaboration with stakeholders from other areas. This may lead, for instance, to
software engineers making assumptions about the way the user will interact with the
application. By doing so, they may design test cases/suites that do not properly take
into account human capabilities when searching for defects (e.g. the SteamVR
motion tracking system was not tested with expert players in mind [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]), even though
some exchanges with usability experts could have helped identify correct ones. We
claim that enabling the various stakeholders in the testing activities of an interactive
system to collaborate would make designing relevant test cases easier.
(Copyright © 2019 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).)
      </p>
      <p>In this paper, we first present the stakeholders in the generic processes for testing
usability and software. Then, we present the stakeholders in the testing and validation of
three kinds of interactive systems. The third section discusses the testing problem
from an architectural point of view and highlights how integrated the testing of interactive
systems should be. The fourth section highlights the need for exchanges of information
between stakeholders and for associated processes. The fifth section concludes the
paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Process View on the Testing of Interactive Systems</title>
      <p>In the fields of HCI and of Software Engineering, testing activities have different
objectives and are thus organized by different processes.</p>
      <sec id="sec-2-1">
        <title>Testing in HCI</title>
        <p>
          In the field of HCI, testing is associated with user evaluation, which aims at ensuring
that the interactive system fulfills user needs in terms of usability, user experience and
learnability. A good level of usability is always required because users have to be able
to accomplish their tasks in an efficient way [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. User testing takes place at various
stages of the design and development process. The alternation of prototyping and user
testing phases aims to capture as many user needs as possible and to ensure that user
tasks and user behavior are compatible with the interactive system’s presentation and
behavior. The usability design process (Fig. 1) [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] presents a set of steps that aim at
developing a usable interactive system.
        </p>
        <p>
          The main characteristics of the usability design process (see Fig. 1) are: early
user involvement, an iterative and incremental set of design steps, empirical
measurements, evaluation of use in context, and multi-disciplinary design teams. Users
are involved from the beginning of the design process and are then regularly solicited
for the evaluation of mock-ups and for the testing of prototypes. Several stakeholders
thus contribute to the testing activities:
 Users: formulate their needs, accomplish given actions with the prototypes and
give their opinion on the prototypes in terms of perceived usability,
 Designers: gather user needs, produce mock-ups and prototypes, and ensure that the
mock-ups and prototypes are legible and functional for user review and user
testing,
 Programmers: program high-fidelity prototypes and/or deploy the interactive
system, and ensure that the prototypes are reliable enough to be tested and used by users,
 Usability experts: observe and interview users, produce experimental evaluation
protocols, manage the experiments and analyze the results.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Testing in Software Engineering</title>
        <p>
          In the field of software engineering, testing “consists of the dynamic verification that
a program provides expected behaviors on a finite set of test cases, suitably selected
from the usually infinite execution domain” [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. During the software development
process, several types of testing activities aim at ensuring that the produced software
behaves as specified and is free of defects. Fig. 2 depicts the ordering of the
development and testing phases in the V software development process [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Several stakeholders contribute to these testing activities:
 Software engineers: produce specifications of requirements, specifications of
(high-level) software design and specifications of system tests and integration tests.
They also integrate the software components, perform the integration tests and
build the entire software.
 Programmers: produce component (low-level) software specifications, program
the components and perform the unit tests for the components they have produced.
 Testers: execute the system tests, produce test reports and raise defects when
they find any.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Application Domain View on Interactive System Testing</title>
      <p>Beyond the generic nature of the processes presented in the previous section lie
application domain-specific constraints and uses that may deeply influence the way
the testing activities are conducted. Testing compliance with regulatory obligations or
guidelines is amongst the activities that may cause the involvement of specific
stakeholders in the testing of an interactive system. In this section, we present some of the
stakeholders involved in the testing of i) desktop applications with GUIs, ii) videogames and
iii) safety-critical systems.</p>
      <sec id="sec-3-1">
        <title>Testing of GUI-Based Applications</title>
        <p>
          Graphical User Interfaces (GUI) are known to be impossible to test exhaustively as
the number of sequences of events that can be performed on their widgets is infinite
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Thus, the main challenge in testing GUIs is to identify the relevant event
sequences to execute on GUI widgets [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Indeed, Banerjee et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] define GUI
testing as testing performed “solely by performing sequences of events (e.g. “click on button”, “enter text”,
“open menu”) on GUI widgets (e.g. “button”, “text-field”, “pull-down menu”)”.
Banerjee et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] present several types of GUI testing techniques (script-based
testing, capture/replay testing and model-based testing). For each technique, different
stakeholders are involved:
 Programmers: program the GUI application. In script-based testing approaches,
the programmer additionally writes scripts describing the event sequences to
execute and the expected state of the GUI, either between each event or after the
complete sequence.
 Users: accomplish given actions with the GUI applications. In capture/replay
testing approaches, the users’ interactions with the application are recorded. They are
then used later for non-regression testing.
 Software engineers: execute the tests. While the capture/replay approach allows the
recording of relevant sequences (the ones that users actually perform), its main drawback is
that these recordings become outdated as soon as a GUI element changes (e.g.
when adding a tab to a settings window).
 Test automation managers and test automation engineers: are involved in
model-based testing approaches. They select and apply techniques to build models
of the GUI behavior from the results of the reverse engineering of the application
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] or from the requirements and specifications of the application [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. They build
models describing all the executable event sequences up to a given length (as
selected by the test automation manager). The models are then used to generate relevant
event sequences (e.g. all the sequences leading to the “Save as” dialog).
Besides the event-driven nature of GUIs, some organizations may want to verify
that their GUIs comply with specific guidelines, such as accessibility ones (e.g. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]).
While the automation of some of these tests is possible, usability experts may be
involved in this process.
        </p>
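        <p>
          The model-based approach described above can be sketched as follows. Assuming a hypothetical GUI model given as an event graph (which events may follow which), enumerating all executable event sequences of up to a bounded length is a simple breadth-first expansion. The model, event names and function names below are illustrative assumptions, not artifacts from the paper or from GUITAR:
        </p>
        <preformat>
```python
from collections import deque

# Hypothetical event-flow model of a small GUI: for each event, the
# events that may immediately follow it (names are illustrative).
FOLLOWS = {
    "open menu": ["click Save as", "close menu"],
    "click Save as": ["enter text", "click Cancel"],
    "enter text": ["click OK"],
    "click Cancel": [],
    "click OK": [],
    "close menu": [],
}
INITIAL_EVENTS = ["open menu"]

def event_sequences(max_length):
    """Enumerate all executable event sequences of up to max_length events."""
    sequences = []
    queue = deque([e] for e in INITIAL_EVENTS)
    while queue:
        seq = queue.popleft()
        sequences.append(seq)
        if len(seq) < max_length:
            for nxt in FOLLOWS[seq[-1]]:
                queue.append(seq + [nxt])
    return sequences

# Select relevant sequences, e.g. all sequences of up to 3 events
# that reach the "Save as" dialog.
suites = [s for s in event_sequences(3) if "click Save as" in s]
```
        </preformat>
        <p>
          In a real model-based approach, each generated sequence would then be replayed on the application while checking the expected GUI state; the test automation manager controls the bound on sequence length to keep the suite tractable.
        </p>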
      </sec>
      <sec id="sec-3-2">
        <title>Testing of Games</title>
        <p>
          The testing of games shares quality concerns with software applications. However, for the
development of games, there is a common agreement in the community that
successful games rely on an iterative development approach. Usability evaluation is an
important aspect of game development: if a game is not usable (e.g. the interaction
technology does not allow players to easily learn how to play), the game is typically
not successful. Novak [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] makes a distinction between testing activities and quality
assurance activities in game development. The game testing activities focus on the
usability and user experience of the game, whereas the quality assurance activities
include process monitoring, game evaluation and auditing according to the developer
and publisher standards. Novak [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] identifies the following stakeholders involved in
the testing of games:
 Unit testing manager: is responsible for the testing of multiple game projects.
 Lead tester: is the testing team supervisor and manager. In addition, the lead tester
must identify some types of errors (e.g. modeling or texturing errors).
 Compatibility and format testers: work for a publisher. They focus on
cross-platform game compatibility.
 Production (developer), quality assurance (publisher) and regression testers:
usually work together. They make suggestions to improve, add or delete game
features. They take into account prospective competing titles. Regression testers focus
on severe bugs.
 Playability, usability and beta testers: are involved during the Beta phase. The
beta testers are volunteers who test the game in-house. They are members of the
game’s target audience.
 Focus testers: are target users who test the game with the marketing department.
        </p>
        <p>
          These tests are similar to focus groups [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
      <p>Rating organizations (e.g. PEGI, ESRB) are also part of the validation process
of a game. They are responsible for rating the game prior to its release and
intervene at the pre-production stage to attribute provisional ratings (found in game trailers),
during the main production to adjust the rating to the game changes, and in
post-production to deal, for instance, with the rating of additional game content.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Testing of Safety Critical Systems</title>
        <p>
          In safety-critical systems, several quality factors, such as reliability, fault-tolerance or
security, deeply influence the development process. The nature and high cost of the
evaluation of critical systems make it necessary to test the whole system
before its deployment, contrary to non-critical systems, which can be patched. This
constraint leads to planning certification very early in the development process of the system.
To do so, the certification authority and the applicant commit to an agreement as soon as
a new project enters an active development phase. Then, each part of the system is
tested and revised until it matches the certification requirements. In order to make the
testing activities dependable, the principles of fault tolerance detailed in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] can be
applied to the testing activity. For instance, assigning people from different
organizations to the development and to the system testing covers the diversity and
segregation principles. Hereinafter, we present the stakeholders involved in the testing and
validation process of an aircraft, as listed by the Federal Aviation Administration (FAA) [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
certification authority:
 FAA (certification authority): supplies requirements (regulation and
policy) and the associated means of compliance to the applicant, and determines conformity
and airworthiness.
 Applicant’s inspectors and designees: must demonstrate the compliance of the
system to be certified with these requirements.
 Applicant’s flight test pilots: conduct flight tests to show compliance.
 FAA (certification authority) aircraft evaluation group and designees: evaluate
conformance to operations and maintenance requirements.
        </p>
        <p>
          Because safety-critical systems are large-scale systems with multiple components, the
testing process needs some automation. Moreover, the applicant’s engineers may use
formal model-based methods to model the system, with automated property checking
performed by model checkers, as described in the DO-178C formal methods supplement DO-333 [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. To avoid
unnecessary tests, a part of an already certified system that is reused unchanged in
a new system to be certified does not need to be tested again.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Architectural View on Interactive-System Testing</title>
      <p>
        In the previous sections, we highlighted that testers work with various considerations
in mind. Thus, a key to supporting multiple stakeholders’ points of view in the testing of
an interactive system is to benefit from a mediating view that bridges the gaps
between those considerations. As architectures are meant to describe the conceptual
structure and logical organization of a system, they are prime candidates to serve as
mediating tools. While most architectures are domain-specific (e.g. network
architectures, software architectures), the H-MIODMIT architecture [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] (Fig. 3) highlights
the presence of the human (left part of Fig. 3) and of the software (right part of Fig. 3).
Moreover, this architecture considers hardware by explicitly mentioning “Input
Devices” and “Output Devices”.
      </p>
      <p>
        Thanks to such an architecture, it is possible to reason at a higher level of abstraction
than with any domain-specific architecture. For instance, a usability expert (bringing
knowledge about human capabilities) may state that “a properly motivated human
using a light enough controller can turn their wrist at up to 3600 degrees/sec” in a
Virtual Reality experience [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Looking at this statement over the H-MIODMIT
architecture, we identify that it relies on knowledge of the “Motor Processor” (leftmost
component in Fig. 3) and serves as input knowledge for the testing of the “Input
Devices” (i.e. the controllers’ motion sensors must be capable of handling rotation
speeds of up to 3600 degrees/sec). Moreover, this means that the “Drivers and Libraries”
must be able to produce relevant high-level events from the controller data (e.g.
considering the way the controller samples information, is a “byte” sufficient to convey
the delta angle?). Ultimately, such a usability-expert statement translates into test
specifications for components throughout H-MIODMIT. This architecture remains
however insufficient to distribute all the testing requirements, as it does not highlight,
for instance, the existence of the context of use and its impact on the various system
components.
      </p>
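      <p>
        The “byte” question above can be made concrete with a back-of-the-envelope check. The sampling rate and angular resolution below are illustrative assumptions for the sketch, not figures from the paper or from SteamVR:
      </p>
      <preformat>
```python
# Can one unsigned byte convey the per-sample delta angle of a controller
# rotating at up to 3600 degrees/sec? (Sampling rate and resolution are
# illustrative assumptions, not figures from the paper.)
MAX_ROTATION_DEG_PER_SEC = 3600

def delta_fits_in_byte(sampling_rate_hz, resolution_deg):
    """True if the worst-case per-sample delta angle, quantized at the
    given angular resolution, fits in an unsigned byte (0..255 steps)."""
    max_delta_deg = MAX_ROTATION_DEG_PER_SEC / sampling_rate_hz
    steps = max_delta_deg / resolution_deg
    return steps <= 255

# At 250 Hz, the worst-case delta is 14.4 degrees per sample:
# about 144 steps at 0.1 degree resolution (fits in a byte), but
# about 288 steps at 0.05 degree resolution (does not fit).
```
      </preformat>
      <p>
        Such a check is precisely the kind of test input that the usability-expert statement provides to the engineers testing the “Input Devices” and “Drivers and Libraries” components.
      </p>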
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>Designing reliable and usable interactive systems is complex and involves multiple
stakeholders. This position paper presents some of the stakeholders involved in
interactive system testing. It highlights that stakeholders from different areas of
expertise may benefit from each other’s knowledge during the testing activities. This
backs our claim that processes and tools supporting multiple stakeholders’ points of
view in the testing of interactive systems are required. Such processes and tools
should provide ways for each stakeholder to understand the high-level test
requirements defined in other areas of expertise, and ways to trace them back from refined
requirements in order to propagate changes if the architecture or purpose of the interactive
system evolves. Furthermore, they should be able to cope with application
domain-specific requirements to design test suites that are as comprehensive as possible.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. AIA, AEA, GAMA, and the FAA Aircraft Certification Service and Flight Standards Services:
          <article-title>The FAA and Industry Guide to Product Certification (Third Edition)</article-title>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baresi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pezzè</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>An Introduction to Software Testing</article-title>
          .
          <source>Electronic Notes in Theoretical Computer Science</source>
          .
          <volume>148</volume>
          ,
          <fpage>89</fpage>
          -
          <lpage>111</lpage>
          (
          <year>2006</year>
          ). https://doi.org/10.1016/j.entcs.2005.12.014.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Banerjee</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garousi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Memon</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          :
          <article-title>Graphical user interface (GUI) testing: Systematic mapping and repository</article-title>
          .
          <source>Information and Software Technology</source>
          .
          <volume>55</volume>
          ,
          <fpage>1679</fpage>
          -
          <lpage>1694</lpage>
          (
          <year>2013</year>
          ). https://doi.org/10.1016/j.infsof.2013.03.004.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Canny</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouzekri</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinie</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palanque</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Rationalizing the Need of Architecture-Driven Testing of Interactive Systems</article-title>
          . In:
          <article-title>Human-Centered and Error-Resilient Systems Development</article-title>
          . Springer, Cham (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dent</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>“Beat Saber” players were so fast that they broke Steam VR</article-title>
          . https://www.engadget.com/2019/02/12/beat-saber-players-too-fast-for-steam-vr/ (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Fayollas</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinie</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Navarre</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palanque</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fahssi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Fault-Tolerant User Interfaces for Critical Systems: Duplication, Redundancy and Diversity as New Dimensions of Distributed User Interfaces</article-title>
          .
          <source>Presented at the Proceedings of the 2014 Workshop on Distributed User Interfaces and Multimodal Interaction January</source>
          <volume>7</volume>
          (
          <year>2014</year>
          ). https://doi.org/10.1145/2677356.2677662.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Göransson</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gulliksen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boivie</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>The usability design process - integrating usercentered systems design in the software development process</article-title>
          .
          <source>Software Process: Improvement and Practice</source>
          .
          <volume>8</volume>
          ,
          <fpage>111</fpage>
          -
          <lpage>131</lpage>
          (
          <year>2003</year>
          ). https://doi.org/10.1002/spip.174.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. IEEE Computer Society,
          <string-name>
            <surname>Bourque</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fairley</surname>
            ,
            <given-names>R.E.</given-names>
          </string-name>
          :
          <article-title>Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0</article-title>
          . IEEE Computer Society Press, Los Alamitos, CA, USA (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. International Software Testing Qualification Board: ISTQB Glossary, https://glossary.istqb.org/search/.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>B.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Robbins</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Banerjee</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Memon</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>GUITAR: an innovative tool for automated testing of GUI-driven software</article-title>
          .
          <source>Autom Softw Eng</source>
          .
          <volume>21</volume>
          ,
          <fpage>65</fpage>
          -
          <lpage>105</lpage>
          (
          <year>2014</year>
          ). https://doi.org/10.1007/s10515-013-0128-9.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Nielsen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Usability Engineering</source>
          . Elsevier (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Novak</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Game Development Essentials: An Introduction</source>
          . Cengage Learning (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. RTCA: DO-178C,
          <source>Software Considerations in Airborne Systems and Equipment Certification</source>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Stewart</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shamdasani</surname>
            ,
            <given-names>P.N.</given-names>
          </string-name>
          :
          <article-title>Focus Groups: Theory and Practice</article-title>
          .
          <source>SAGE Publications</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Utting</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pretschner</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Legeard</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>A taxonomy of model-based testing approaches</article-title>
          .
          <source>Softw. Test. Verif. Reliab</source>
          .
          <volume>22</volume>
          ,
          <fpage>297</fpage>
          -
          <lpage>312</lpage>
          (
          <year>2012</year>
          ). https://doi.org/10.1002/stvr.456.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <article-title>W3C Web Accessibility Initiative: Web Content Accessibility Guidelines (WCAG) Overview</article-title>
          , https://www.w3.org/WAI/standards-guidelines/wcag/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>