Execution of UTP test cases using fUML

Marc-Florian Wendland, Fraunhofer FOKUS, Berlin, Germany, marc-florian.wendland@fokus.fraunhofer.de
Niels Hoppe, Fraunhofer FOKUS, Berlin, Germany, niels.hoppe@fokus.fraunhofer.de

ABSTRACT
The UML Testing Profile (UTP) is a standardized modeling language that offers concepts relevant to specify test cases, test data and even entire test automation architectures including test environments. Just recently, in June 2018, the OMG adopted the official version 2.0 of UTP. It was primarily designed to support both manual and automated activities of the dynamic test process, in particular the design and specification of test cases, test data and test suites. Basically, such UTP-based test specifications remain on a platform-independent level and leave it open how the test cases shall be executed in the end. In this paper, we describe an approach to executing UTP test cases via the executable UML standards fUML and PSCS. Therefore, we map platform-independent UTP test cases, expressed as Interactions, into executable fUML test cases for eventual execution against fUML systems. It is a first step towards an executable representation of UTP.

KEYWORDS
UML Testing Profile, UTP, executable UML, executable UTP, automated test execution, fUML, fUML-based test automation, architecture

1 INTRODUCTION
In system engineering, executable specifications serve the purpose of validating the feasibility of a system architecture as well as the consistency of systems/software requirements specifications (SRS) through simulation. A simulation of systems requirements specifications enables engineers to evaluate the efficiency of competing architectural solutions as well as the correctness and completeness of requirements. Furthermore, system simulations help in understanding how the system behaves under varying (simulated) environmental conditions or in situations that are deemed critical but costly or even impossible to provoke in a real environment (e.g., failure of a flight or train control system while operating, validation of functional countermeasures in a nuclear power station, etc.). With the advent of the executable UML standards, which currently consist of Semantics of a Foundational Subset of Executable UML (fUML), Precise Semantics of Composite Structures (PSCS) and Precise Semantics of State Machines (PSSM), it is possible to build executable specifications with UML.

From a tester's point of view, testing an executable SRS is the earliest point in time where dynamic testing is feasible. Dynamic testing mainly consists of designing and executing test cases against the system under test (SUT). This can be a simulation of the system or its eventual implementation; however, testing both kinds of realizations does not vary much. In both cases, test cases are (usually) derived from an SRS and executed against the SUT. It is expected that both the simulated and the implemented SUT behave in a functionally equivalent way. Thus, test cases designed for testing an executable specification of the system ought to be reused for testing the eventual implementation of the system in a dedicated test environment or in the field.

Ideally, a methodology for continuous dynamic testing fosters reuse of test cases across target technologies and test interfaces. A prerequisite for such a continuous test design approach is a test specification language that enables the specification of platform-independent test models, consisting of a test architecture and test cases, for later generation of executable test cases for the respective testing target environment. The UML Testing Profile (UTP) [10] is such a test modeling language.

As an extension to UML, it offers test-specific concepts on top of UML, used in particular to support dynamic testing. The intention of using UTP for testing is, similar to using UML for system analysis and design, to facilitate communication and easy comprehensibility of test specifications. It does not claim to be an executable modeling language, even though it allows for specifying entire test automation architectures, including support for test design, test execution, verdict calculation and test log analysis.

In this paper, we concentrate on a mapping from a platform-independent UTP test model to a fUML test automation solution and executable test cases for testing fUML systems. Therefore, we assign an executable semantics to a subset of UTP concepts necessary for our methodology by mapping the respective concepts to executable fUML concepts. We assume that the reader has basic knowledge about UML, UTP and fUML. The contributions of this paper are:
• Defining the structures for executable, yet platform-independent UTP test models; and
• Describing a mapping of structural and behavioral aspects of such UTP test models to fUML test models for eventual test case execution.

The remainder of this paper is outlined as follows: Section 2 summarizes existing work that relates to or influenced our work. Section 3 introduces the UTP concepts relevant for fUML-based test execution. The executable semantics for these concepts is provided by a mapping to their fUML counterparts. Section 4 describes mapping UTP test architectures to executable test environments, whereas Section 5 explains the mapping from UTP test case behavior to fUML test cases. Section 6 briefly describes the tools and technologies we used to implement and run the approach. Section 7 eventually concludes this paper and outlines future work.
2 RELATED WORK
UTP has been subject to publication for over a decade now. Baker et al. wrote a comprehensive book that accompanied the first version of the standard [1]. UTP has been used for model-based generation of test cases for service-oriented architectures [11] as well as for testing product lines [4]. The work from Zander et al. [14] is closest to our work, for it specifies a mapping from UTP test cases to executable TTCN-3 test cases. Besides being based on the earlier UTP version 1.0, as opposed to our work, which is based on the current version 2.0, Zander et al.'s work targets solely the mapping of test cases without addressing aspects like test sets, scheduling or arbitration. These parts of the test automation architecture were assumed to be realized by an existing TTCN-3 execution engine. Thus, the work described by Zander et al. concentrated on the mapping of Interactions to TTCN-3 in the first place, whereas our work also addresses the generation of (parts of) the components of the fUML test automation architecture, such as scheduling and logging.

To the best of our knowledge, there is no previous work that targets a mapping from UTP test models to fUML models for the sake of black-box testing of fUML models. However, testing of fUML models was subject to previous work. Mijatov et al. [5] proposed a framework for testing fUML models that is based on a proprietary domain-specific language (DSL) for describing assertions on the execution flow of fUML Activities. Apart from not being based on UTP, this approach differs from our work because it is capable of white-box testing, e.g., by asserting the execution order of the executed ActivityNodes or the input/output values of single ActivityNodes within an executed Activity. This is neither possible nor intended in our approach, which concentrates on a pure black-box approach to continuous test design.

Craciun et al. [2] addressed testing of fUML models using a rewrite-based executable semantic framework called K. The limited information available about that early work makes it hard to relate it to our approach. Unfortunately, the work was not continued, so that no further information about the approach can be stated here.

The executable UML standard PSCS [9] contains a conformance test suite written in fUML and the Action Language for Foundational UML (ALF) [7] that resembles assertion-oriented unit test frameworks such as JUnit or NUnit¹. Additionally, the executable UML standard PSSM [8] defines another conformance testing framework based on fUML and ALF, which compares string collections of expected events with a captured string collection of executed event sequences for a set of input events. Both approaches utilize fUML and ALF to specify test cases for fUML models. Our approach, however, utilizes fUML only for test execution. Test design, logging and verdict calculation happen on a higher level of abstraction, i.e., on UTP test model level, independent from any executable language, to facilitate continuous test design.

¹ See https://junit.org and http://nunit.org, respectively.

Wendland [12] described an approach towards the definition of a precise semantics for UML Interactions by mapping them to fUML Activities. Even though that work was independent of UTP or testing in general, it influenced and inspired the mapping of UTP test cases to an fUML-based representation described in the current paper.

Summarizing the related work, it can be stated that different parts of our approach (e.g., fUML, PSCS, UTP, testing of fUML models) were subject to past research work. Yet the approach to provide an executable semantics for a subset of UTP by mapping the respective concepts to fUML, and finally to execute those executable-made UTP test models against a fUML-based system model for the sake of early validation of its requirements, is new.
3 UTP- AND FUML-BASED TEST MODELS
UTP is a standardized graphical modeling language that enhances UML with test-specific concepts for designing, visualizing, specifying, analyzing, constructing, and documenting artifacts commonly used in and required for expressing test specifications. As an open-ended standard, UTP prescribes neither the test modeling methodology, the application domain, the testing tool, nor the eventual target technology that is used for carrying out testing activities. In June 2018, a new version of UTP, i.e., UTP 2.0 (henceforth called UTP 2), was adopted by the OMG. The work described in this paper uses UTP 2 test models as input for the mapping to an executable representation.

In our approach, two kinds of test models are distinguished: the UTP test model and the fUML test model. Inspired by the principles of the Model-Driven Architecture (MDA) [3], the UTP test model is henceforth called platform-independent test model (PITM), the fUML test model platform-specific test model (PSTM). The PITM specifies test cases and test architectures on a higher level of abstraction and makes no assumptions on the eventual target environment or implementation of the SUT. It provides a set of logical test cases formalized as sequence diagrams, which are used to generate executable test cases for different target environments (e.g., JUnit, TTCN-3, fUML) or test levels (e.g., simulation testing, component testing, system testing). In contrast, the PSTM merely represents an executable version of the PITM test cases. Comparable to the bytecode of a JUnit test case, the PSTM is, in theory, of less interest to the test engineer, as the semantics of the test case is defined on PITM level, whereas the PSTM is completely derived and thereby rendered transient and transparent. Thus, there is usually no need to generate a user-friendly variant of the PSTM, e.g. by using ALF instead of fUML, as test engineers are not required to understand the specifics of the execution. This is another benefit of continuous test design.

Since UTP is not per se an executable language, it is necessary to define an executable semantics for a set of required concepts. This executable semantics could either be specified in the same way as fUML, PSCS or PSSM were composed, in which case a dedicated runtime environment would be required to actually run the UTP test cases. As a more concise and convenient way, we chose to rely on the already existing executable semantics of fUML concepts (PSSM and PSCS base their executable semantics on fUML, too) and provide a mapping for every necessary UTP concept to an executable fUML concept. This mapping approach is also consistent with mapping PITM test cases to other executable test languages such as JUnit or TTCN-3.

Similar to fUML, which represents an executable subset of UML, not all UTP concepts are deemed necessary for execution. For the definition of the mapping rules from PITM to PSTM it is necessary to identify those concepts from UTP for which an executable semantics is required. Figure 1 illustrates a (simplified) PITM featuring a subset of UTP concepts that are necessary for test execution and will be used for describing the mapping to the PSTM. These required UTP concepts are explained in greater detail subsequently. The SUT shown in Figure 1 represents an fUML model of an elevator system. Internals of the elevator system are not important for this paper, since it concentrates on technical aspects of the mapping from UTP test models to an fUML test automation architecture and test cases instead of actual verification of the elevator system.

Figure 1: PITM test model example

The essential concept in our approach is the test case (Component with «TestCase» applied), which shall be contained by test sets (Package with «TestSet» applied). Test sets serve as containers for a set of test cases sharing a certain purpose (e.g., regression testing, smoke testing, etc.). The behavior of a test case is given as an Interaction, usually visualized by a sequence diagram, but this is not required, for any UML Behavior can be used to specify test cases. Consequently, only the elements Lifeline, MessageOccurrenceSpecification (MOS), Message and GeneralOrdering are of interest.

UTP introduces a dedicated abstract procedural language based on stereotypes that abstract from the underlying UML metaclasses in order to simplify the construction and analysis of test cases without the need to know about the underlying UML behavior. The central elements of that procedural language are so-called test actions. In this paper, we will focus on test actions for sending stimuli to (Message with «CreateStimulusAction» applied) and expecting responses from (Message with «ExpectResponseAction» applied) the SUT. See the UTP specification [10], Clause 8.5.2 ProceduralElements and Clause 8.5.3 Test-specific actions, for further details.

A test case must always be executed on a composite structure called test configuration (i.e., a Component stereotyped with «TestConfiguration»), which defines communication channels between its parts, which are called test configuration roles. They represent either test components or test items, distinguished by the stereotypes «TestItem» and «TestComponent». A test item represents the SUT, whereas the test components belong to the test environment and drive a test case's behavior by sending stimuli to or expecting responses from the SUT. Test configurations can be shared across multiple test cases. Test cases are then linked with a respective test configuration by establishing a Generalization relationship between themselves and the test configuration.
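To make this subset concrete, the following sketch gives a deliberately simplified, hypothetical in-memory view of the PITM concepts the mapping relies on — test sets, test cases, configuration roles and the two test actions. It is illustrative only (plain Java records rather than the UTP/UML metamodel), and the elevator-flavored names are invented for the example.

```java
import java.util.List;

// Hypothetical, simplified in-memory view of the PITM concepts used by the
// mapping; names are illustrative and not taken from the UTP metamodel.
public class PitmConcepts {

    // Package with «TestSet» applied: groups test cases sharing a purpose.
    record TestSet(String name, List<TestCase> testCases) {}

    // Component with «TestCase» applied; its behavior is an Interaction.
    record TestCase(String name, String testConfiguration, List<TestAction> actions) {}

    // Roles of the test configuration: «TestComponent» or «TestItem».
    enum RoleKind { TEST_COMPONENT, TEST_ITEM }
    record ConfigurationRole(String name, RoleKind kind) {}

    // The two test actions considered in this paper, both realized as Messages.
    sealed interface TestAction permits CreateStimulus, ExpectResponse {}
    record CreateStimulus(String sender, String signal, List<?> arguments) implements TestAction {}
    record ExpectResponse(String receiver, String signal) implements TestAction {}

    public static void main(String[] args) {
        TestCase callElevator = new TestCase("TC_CallElevator", "ElevatorTestConfiguration",
                List.of(new CreateStimulus("tester", "CallRequest", List.of(3)),
                        new ExpectResponse("tester", "DoorOpened")));
        TestSet smokeTests = new TestSet("SmokeTests", List.of(callElevator));
        System.out.println(smokeTests);
    }
}
```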
4 GENERATING THE EXECUTABLE TEST ENVIRONMENT

4.1 Mapping test sets
As opposed to PITM test sets, PSTM test sets are executable entities, for they schedule the execution and logging of the test cases they contain. Therefore, each PITM test set is translated into a fUML Class whose classifier behavior, i.e., the test set scheduler, is responsible for instantiating and eventually executing the respective test cases, as illustrated in Figure 2. Test cases are represented as nested classes of the PSTM test set and executed by the test set scheduler, whose behavior consists of the two phases initialization and execution of test cases.

Figure 2: Test set (scheduler) behavior

During initialization of the test set, an instance of a test set log is created. This test set log links all the test case logs of the executed test cases in the end. The created object is then passed as an argument to the invoked (i.e., executed) test case.

During execution of test cases, each test case that belongs to the PITM test set is scheduled for execution. This scheduling manifests in a generated flow consisting of a CreateObjectAction and a StartObjectBehaviorAction for each test case. Since PITM test cases are currently unordered in a PITM test set², there is no possibility to specify the eventual execution order of test cases in the PSTM, leaving it up to the transformation engine.

² There is a concept in UTP called «TestExecutionSchedule» that enables ordering of test case execution; however, this concept is not part of our work yet.
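Read operationally, the generated test set scheduler realizes roughly the behavior of the following Java sketch. The classes TestSetLog, TestCaseLog, GeneratedTestCase and TestSetScheduler are hypothetical stand-ins; in the PSTM the same steps are modeled as a CreateObjectAction and a StartObjectBehaviorAction per test case rather than as Java code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the semantics realized by the generated test set
// scheduler; all classes and names are illustrative.
class TestSetLog {
    final List<TestCaseLog> testCaseLogs = new ArrayList<>();
}

class TestCaseLog {
    final String testCaseName;
    TestCaseLog(String testCaseName) { this.testCaseName = testCaseName; }
}

abstract class GeneratedTestCase {
    // Corresponds to the classifier behavior of a generated PSTM test case class.
    abstract void run(TestSetLog testSetLog);
}

class TestSetScheduler {
    void execute(List<GeneratedTestCase> testCases) {
        // Initialization phase: create the test set log that links all test case logs.
        TestSetLog testSetLog = new TestSetLog();
        // Execution phase: instantiate and start each test case; the order is not
        // prescribed by the PITM and is chosen by the transformation engine.
        for (GeneratedTestCase testCase : testCases) {
            testCase.run(testSetLog);
        }
    }
}
```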
4.2 Setting up the executable test environment
4.2.1 Mapping test components. On PITM level, test components are represented as Components without any classifier behavior. The behavior of a test component is determined by every Lifeline that represents the test component within a test case. On PSTM level, however, a test component's behavior must be precisely and unambiguously defined, but since a test component must only exist during the execution of a test case, our mapping towards PSTM test components is fairly simple: For each test configuration role with «TestComponent» applied, a dedicated fUML Class is generated that represents the executable test component for a single test case. It is represented by an active Class with an Activity as its classifier behavior. This classifier behavior is derived from the covering InteractionFragments of the Lifeline representing the test component in the corresponding PITM test case and is described later to a greater extent.

4.2.2 Mapping the test configuration. On PITM level, reuse of test components and test configurations is encouraged by our methodology to ensure high maintainability of the test cases. On PSTM level, maintainability is not important, for the executable artifacts are generated entirely from the PITM. Each test case is translated into a dedicated Class that directly contains the PSTM representation of the test configuration by processing the Generalization between the PITM test case and the PITM test configuration. Thus, the structure of the test configuration is replicated for each separate test case in the PSTM (Figure 5). The test item, however, is not generated, for it already exists as an fUML Class that is merely integrated into the test configuration.

Figure 5: Compiled PSTM executable test configuration

The instantiation and activation of the generated PSTM test cases and their test configuration roles take advantage of the CS_DefaultConstructStrategy from PSCS, making explicit creation, assembly and activation of objects obsolete. Anyhow, preparing the execution of a PSTM test case also consists of actions for preparing logging, starting the test item and coordinating the test components.

Since the test components are active Classes, i.e., they possess their own thread of control and run in parallel once they are instantiated, coordination of the test component behaviors is required. In particular, the test components must not send signals to the test item before the test item is itself instantiated and its behavior started. This coordination is achieved on PSTM level by generating another Class, the so-called (test case) coordinator, which is integrated into a test case's test configuration and connected with the test components through dedicated synchronization Ports and an n-ary Connector, the so-called synchronization bus, which is responsible for the transmission of synchronization messages to the test components. The coordinator also serves as an intermediary for exchanging data between the test case and the test components.

4.2.3 Preparing the test case execution. Figure 3 illustrates the preparation of a test case execution prior to running the actual PITM test case behavior. In this regard, the classifier behavior of the PSTM test case creates a test case log by means of a CreateObjectAction 'Create test case log' and a CallBehaviorAction 'Call CreateTestCaseLog' and distributes it to the test components through the coordinator. The mechanism of creating test case logs is out of scope of this paper.

Figure 3: Test case behavior

In order to prevent a potential loss of signals to or from the test item, it is important to ensure the complete setup of the test environment before the test item behavior commences. Therefore, we require the test item to be modeled as a passive (not active) class, even though this is in violation of fUML.³ Consequently, the test item must be started manually by a ReadStructuralFeatureAction 'Read test item', which passes the resulting object to a StartObjectBehaviorAction 'Start test item'.

³ See fUML [6], Clause 7.2.2.2.3 Class, Additional Constraint 1: Only active classes may have classifier behaviors. We found out that the fUML engine we used allows for setting classifier behaviors for passive classes, too, but those passive classes are not started by the instantiation of the composite structure parts. We are aware that we need to find a better solution for this purpose in the future.

4.2.4 Coordinating the test case components. Starting and synchronizing the test components is the responsibility of the (test case) coordinator (see Figure 4). For this purpose, it sends a start signal over the synchronization bus. The test component behaviors wait for this start signal before they commence their actual behavior. Added as an argument to the start signal is a reference to the test case log. This log object is required to capture the test actions executed by the test components.

Figure 4: Coordinator behavior

Following this initial broadcast is a flow consisting of parallel AcceptEventActions for all SignalEvents corresponding to the completion signals of the involved test components. Only after all completion signals have been received does the coordinator behavior complete, and the test case can enter the finalization phase.
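In programming terms, the coordinator implements a small barrier protocol. The sketch below approximates it with Java threads and a CountDownLatch; the interface and names are hypothetical, and in the PSTM the same protocol is realized with SendSignalActions and AcceptEventActions over the synchronization bus rather than with Java threads.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Hypothetical approximation of the coordinator protocol; all names are
// illustrative and do not stem from the UTP or fUML specifications.
class Coordinator {

    interface TestComponent {
        // Corresponds to a test component waiting for the start signal, running
        // its test action flows, and finally sending its completion signal.
        void runTestBehavior(Object testCaseLog, Runnable completionSignal);
    }

    void coordinate(List<TestComponent> components, Object testCaseLog) throws InterruptedException {
        CountDownLatch completed = new CountDownLatch(components.size());
        // Broadcast the start signal, carrying the test case log as argument.
        for (TestComponent component : components) {
            new Thread(() -> component.runTestBehavior(testCaseLog, completed::countDown)).start();
        }
        // Parallel AcceptEventActions: wait until every completion signal has
        // arrived; only then may the test case enter its finalization phase.
        completed.await();
    }
}
```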
4.2.5 Finalizing the test case execution. After the completion of all test components and the associated termination of the action 'Start coordinator', the initially created test case log is finalized by a CallBehaviorAction calling the OpaqueBehavior 'FinalizeTestCaseLog'. Again, the mechanism of finalizing test case logs is out of scope of this paper.

5 MAPPING THE TEST CASE BEHAVIOR
The Interactions that represent the behavior of a PITM test case undergo the most extensive transformation. The resulting PSTM behavior is distributed among the test case itself, which sets up the test environment including the coordinator, and the individual test components. Since the first mapping has already been described in the previous section, the subsequent sections deal with the generation of the individual test component behaviors.

Examples of the Activities implementing the test case and the coordinator behavior are shown in Figures 3 and 4. The individual phases of the displayed behavior are described subsequently.

5.1 Mapping test actions
Each PITM Lifeline yields an ordered list of associated MessageOccurrenceSpecifications (MOS), which, together with the corresponding Messages, represent the test actions to be taken.

In order to derive a test component's behavior from a Lifeline, these covering MOS are iterated over and each test action is transformed into a flow. The sequence of such flows, enclosed by an initializing and a concluding flow for synchronization and logging, defines the ultimate behavior of the test component. Depending on the messageSort and whether the MOS represents a sendEvent or a receiveEvent, different mappings are applied to generate the appropriate actions. Each flow representing a test action can be divided into four phases, each represented by a sub-flow:
(1) Signaling for synchronization (where applicable)
(2) Communication with the test item
(3) Logging
(4) Signaling for synchronization (where applicable)
Details on the different phases are given in the subsequent sections. Section 5.1.1 covers the communication with the test item (2), Section 5.1.2 covers the logging phase (3) and Section 5.1.3 covers the synchronization mechanism in (1) and (4).
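As a rough illustration of how a test component behavior is assembled from a Lifeline, the following Java sketch mimics the transformation loop. Flow and TestAction are hypothetical stand-ins for the generated fUML sub-flows and the stereotyped Messages; the sketch only shows the ordering of the four phases, not the actual QVTo rules.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the per-lifeline transformation; Flow stands in for a
// generated fUML sub-flow (a group of actions connected by control/object flows).
class TestComponentBehaviorBuilder {

    record Flow(String description) {}
    interface TestAction {}  // derived from a MOS plus its corresponding Message

    List<Flow> build(List<TestAction> orderedTestActions) {
        List<Flow> flows = new ArrayList<>();
        flows.add(new Flow("initializing flow: wait for start signal, receive test case log"));
        for (TestAction action : orderedTestActions) {
            // Phase 1: synchronization prefix (toBefore GeneralOrderings, if any)
            flows.add(new Flow("accept synchronization signals for " + action));
            // Phase 2: communication with the test item (send or accept a Signal)
            flows.add(new Flow("communicate with test item for " + action));
            // Phase 3: logging of the executed test action
            flows.add(new Flow("create and persist log entry for " + action));
            // Phase 4: synchronization suffix (toAfter GeneralOrderings, if any)
            flows.add(new Flow("send synchronization signals for " + action));
        }
        flows.add(new Flow("concluding flow: send completion signal to coordinator"));
        return flows;
    }
}
```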
5.1.1 Sending and receiving Signals. For sendEvents of Messages whose messageSort is asynchSignal, the transformation results in a SendSignalAction with one InputPin for every argument of the respective Message, as well as appropriate ValueSpecificationActions connected by ObjectFlows. For receiveEvents of such Messages, the transformation results in an unmarshalling AcceptEventAction with one OutputPin for every argument of the respective Message. The values of the latter will be used to create an appropriate log entry as described in Section 5.1.2. Examples of both flows are shown in Figures 6 and 7.

Figure 6: Execution and logging of a CreateStimulusAction
Figure 7: Execution and logging of an ExpectResponseAction

5.1.2 Logging. Whenever a test action of type «CreateStimulusAction» or «ExpectResponseAction» is executed, a matching test log entry should be created. The implementing flow starts with a CreateObjectAction, creating an object of the type of the TestLogEntryStructure that belongs to the test action. Subsequently, AddStructuralFeatureValueActions populate the fields of the test log entry. In case of a «CreateStimulusAction», the values are specified by ValueSpecificationActions in accordance with the sent Signal (see Figure 6). In case of an «ExpectResponseAction», the values are read from the actually received Signal (see Figure 7). Finally, the object is passed to a CallBehaviorAction, calling the appropriate OpaqueBehavior to persist the log entry.
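Operationally, the generated logging flow corresponds to something like the following Java sketch, where TestLogEntry stands in for an instance of the test action's TestLogEntryStructure and adding to a list stands in for the persisting OpaqueBehavior; all names are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the logging flow generated for a single test action;
// TestLogEntry stands in for the TestLogEntryStructure of the test action.
class TestActionLogging {

    record TestLogEntry(String kind, String signal, Map<String, Object> arguments) {}

    // «CreateStimulusAction»: field values come from ValueSpecificationActions,
    // i.e., the literal arguments that were sent with the stimulus.
    TestLogEntry logStimulus(String signal, Map<String, Object> sentArguments, List<TestLogEntry> log) {
        TestLogEntry entry = new TestLogEntry("CreateStimulusAction", signal, new LinkedHashMap<>(sentArguments));
        log.add(entry);  // CallBehaviorAction persisting the entry
        return entry;
    }

    // «ExpectResponseAction»: field values are read from the Signal instance
    // actually received by the unmarshalling AcceptEventAction.
    TestLogEntry logResponse(String signal, Map<String, Object> receivedArguments, List<TestLogEntry> log) {
        TestLogEntry entry = new TestLogEntry("ExpectResponseAction", signal, new LinkedHashMap<>(receivedArguments));
        log.add(entry);
        return entry;
    }
}
```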
5.1.3 Synchronization of GeneralOrderings. In order to reproduce the effect of GeneralOrderings (see Figure 1) in the test case behavior, a signaling mechanism was implemented. For that purpose, each GeneralOrdering is transformed into a corresponding Signal and SignalEvent. Test components whose Lifelines are source or target of GeneralOrderings can then send or wait for such Signals and SignalEvents on the synchronization bus in order to synchronize their actions. When transforming a test action, the toBefore and toAfter associations of the corresponding MOS are evaluated to determine whether the test action must be synchronized. In case there are one or more GeneralOrderings found on the toBefore association, a flow consisting of parallel AcceptEventActions for all SignalEvents corresponding to the respective GeneralOrderings is inserted at the beginning of the test action flow. Accordingly, in case there are one or more GeneralOrderings found on the toAfter association, a flow consisting of parallel SendSignalActions for all Signals corresponding to the respective GeneralOrderings is inserted at the end of the test action flow.
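Using the same hypothetical Flow stand-in as above, the evaluation of the toBefore and toAfter associations can be pictured as wrapping the test action flow with synchronization sub-flows, as in the following sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of how GeneralOrderings become synchronization sub-flows
// around a single test action flow; Flow is an illustrative stand-in only.
class GeneralOrderingSynchronization {

    record Flow(String description) {}
    record GeneralOrdering(String name) {}

    List<Flow> wrapTestActionFlow(Flow testActionFlow,
                                  List<GeneralOrdering> toBefore,
                                  List<GeneralOrdering> toAfter) {
        List<Flow> result = new ArrayList<>();
        if (!toBefore.isEmpty()) {
            // Parallel AcceptEventActions for the SignalEvents derived from the
            // GeneralOrderings that must occur before this test action.
            result.add(new Flow("accept signals " + toBefore + " on the synchronization bus"));
        }
        result.add(testActionFlow);
        if (!toAfter.isEmpty()) {
            // Parallel SendSignalActions for the Signals derived from the
            // GeneralOrderings that this test action precedes.
            result.add(new Flow("send signals " + toAfter + " on the synchronization bus"));
        }
        return result;
    }
}
```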
6 IMPLEMENTATION AND EXECUTION
We implemented the described mapping with Eclipse QVTo. As execution engine, we utilized Eclipse Papyrus Moka. As Moka implements the CS_DefaultConstructStrategy from PSCS, it only offers support for binary Connectors by default. In order to also support n-ary Connectors, as they are an integral part of our synchronization mechanism, we implemented a custom construction strategy.

We successfully executed the PITM test cases against the elevator use case. Test logs of the test case execution were successfully captured during execution. Details of UTP test logs were subject to our previous work [13]. Verdict calculation was done using an external Java implementation of the default UTP arbitration specification, but had to be excluded from this paper due to space limitations.
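For illustration, a verdict arbitration along the lines of UTP's default arbitration can be sketched in Java as follows. This is a simplified reading in which the test case verdict is the most severe verdict reported during execution; it mirrors our external arbiter only conceptually and is not the normative specification.

```java
import java.util.List;

// Simplified, hypothetical sketch of default verdict arbitration: the verdict
// of a test case is taken to be the most severe verdict reported while it ran.
class DefaultArbiter {

    // Ordered by increasing severity (assumption made for this sketch).
    enum Verdict { NONE, PASS, INCONCLUSIVE, FAIL, ERROR }

    Verdict arbitrate(List<Verdict> reportedVerdicts) {
        Verdict result = Verdict.NONE;
        for (Verdict v : reportedVerdicts) {
            if (v.ordinal() > result.ordinal()) {
                result = v;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        DefaultArbiter arbiter = new DefaultArbiter();
        System.out.println(arbiter.arbitrate(List.of(Verdict.PASS, Verdict.INCONCLUSIVE, Verdict.PASS)));
        // prints INCONCLUSIVE
    }
}
```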
7 CONCLUSION
In this paper, we described an approach to generate fUML test models from UTP test models for eventual testing of fUML systems. We addressed both mapping rules to set up the test environment and mapping rules to execute the test case behavior. Therefore, we described a mapping from PITM Interactions to PSTM Activities. Using UTP for testing fUML systems is a new approach.

Even though the result demonstrates the feasibility of our approach towards executable UTP, a few aspects need closer discussion and further improvement.

We currently do not distinguish between test and system interfaces. The test configuration relies on the interfaces offered by the fUML system. This is not necessarily a shortcoming; however, continuous test design should rather define dedicated test interfaces to remain independent of any technical details of the SUT and to ensure easier maintenance of the test cases. Maintenance is a key success factor in continuous test design. The reason why we do not utilize test interfaces is the lack of an adaptation layer that is capable of mapping logical test interfaces to technical system interfaces. Supporting test interfaces and providing an adaptation layer has a high priority for our approach in the future.

Furthermore, we only support Signals and Receptions for communication between test components and the test item. Signal sending was for a long time the only kind of event supported by fUML. With the advent of PSSM, fUML was extended to support CallEvents, too. Therefore, future work will be spent on integrating Operations and CallEvents in addition.

Finally, we do not support the UTP concept test execution schedule. A test execution schedule is able to base the execution order of test cases within a test set on certain conditions. This might be as simple as a sequence of execution (e.g., test case 1 must be executed before test case 2) or as complex as conditional execution (e.g., if test case 1 concludes with verdict pass, execute test case 3, otherwise test case 2). The related «TestExecutionSchedule» extends Behavior, so that test execution schedules could be provided as fUML-compliant Activities as well. This would in fact decrease the effort for supporting them in our approach.

ACKNOWLEDGMENTS
The work described in this paper was funded by the ITEA 3 TESTOMAT Project (no. 16032).

REFERENCES
[1] Paul Baker, Zhen Ru Dai, Jens Grabowski, Oystein Haugen, Ina Schieferdecker, and Clay Williams. 2007. Model-Driven Testing – Using the UML Testing Profile. Springer, Heidelberg.
[2] Florin Craciun, Simona Motogna, and Ioan Lazar. 2013. Towards Better Testing of fUML Models.
[3] David Frankel. 2003. Model-Driven Architecture. OMG Press.
[4] Beatriz Lamancha, Macario Usaola, and Mario Velthius. 2009. Towards an automated testing framework to manage variability using the UML Testing Profile. In 2009 IEEE International Workshop on Automation of Software Test (AST'09), co-located with the International Conference on Software Engineering (ICSE) 2009. Vancouver, Canada.
[5] Stefan Mijatov, Philip Langer, Tanja Mayerhofer, and Gerti Kappel. 2013. A Framework for Testing UML Activities Based on fUML. In 2013 10th International Workshop on Model Driven Engineering, Verification and Validation (MoDeVVa), co-located with the 16th International Conference on Model Driven Engineering Languages and Systems (MODELS 2013). Miami, USA.
[6] Object Management Group (OMG). 2012. Semantics of a Foundational Subset for Executable UML Models (fUML). http://www.omg.org/spec/FUML/
[7] Object Management Group (OMG). 2017. Action Language for Foundational UML (ALF). https://www.omg.org/spec/ALF/1.1/
[8] Object Management Group (OMG). 2017. Precise Semantics of UML State Machines (PSSM). https://www.omg.org/spec/PSSM/1.0/Beta1
[9] Object Management Group (OMG). 2018. Precise Semantics of UML Composite Structures (PSCS). https://www.omg.org/spec/PSCS/1.1/
[10] Object Management Group (OMG). 2018. UML Testing Profile (UTP). https://www.omg.org/spec/UTP/2.0/Beta1
[11] Alin Stefanescu, Marc-Florian Wendland, and Sebastian Wieczorek. 2010. Using the UML testing profile for enterprise service choreographies. In 2010 IEEE 36th EUROMICRO Conference.
[12] Marc-Florian Wendland. 2016. Towards Executable UML Interactions based on fUML. In Proceedings of the 4th International Conference on Model-Driven Engineering and Software Development. SCITEPRESS - Science and Technology Publications, 405–411. https://doi.org/10.5220/0005809804050411
[13] Marc-Florian Wendland, Niels Hoppe, Martin Schneider, and Steven Ulrich. 2018. Extending the UML Testing Profile with a fine-grained test logging model. In 2018 IEEE 11th International Conference on Software Testing, Verification and Validation (ICST). Vasteras, Sweden.
[14] Justyna Zander, Zhen Ru Dai, Ina Schieferdecker, and George Din. 2005. From U2TP Models to Executable Tests with TTCN-3 – An Approach to Model Driven Testing. In F. Khendek and R. Dssouli (eds.), Testing of Communicating Systems (TestCom 2005), Lecture Notes in Computer Science, vol. 3502. Springer, Berlin, Heidelberg.