ENACTEST project - European Innovation Alliance for Testing Education

Beatriz Marín1, Tanja E. J. Vos1,2, Monique Snoeck3, Ana C. R. Paiva4 and Anna Rita Fasolino5
1 Universitat Politècnica de València (UPV), Camino de Vera s/n, Valencia, 46021, Spain
2 Open Universiteit (OU), The Netherlands
3 KU Leuven, Naamsestraat 69, box 3500, 3000 Leuven, Belgium
4 Faculty of Engineering of the University of Porto & INESC TEC, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
5 Università degli Studi di Napoli Federico II, DIETI, Via Claudio 21, Italy

RPE@CAiSE'23: Research Projects Exhibition at the International Conference on Advanced Information Systems Engineering, June 12-16, 2023, Zaragoza, Spain
bmarin@dsic.upv.es (B. Marín); tvos@dsic.upv.es (T. E. J. Vos); monique.snoeck@kuleuven.be (M. Snoeck); apaiva@fe.up.pt (A. C. R. Paiva); fasolino@unina.it (A. R. Fasolino)
ORCID: 0000-0001-8025-0023 (B. Marín); 0000-0002-6003-9113 (T. E. J. Vos); 0000-0002-3824-3214 (M. Snoeck); 0000-0003-3431-8060 (A. C. R. Paiva); 0000-0001-7116-019X (A. R. Fasolino)

Abstract
The significance of software testing cannot be overstated: poor testing practice often leads to problematic and faulty software applications. This problem stems from a mismatch between the skills required by industry, the learning requirements of students, and the way testing is currently taught in higher and vocational education institutes. The ENACTEST project aims to create seamless teaching materials for testing education that are in line with industry standards and learning needs. Given the diverse socio-economic environment that will benefit from the project, a consortium of partners ranging from universities to small businesses has been assembled. The project starts with research into sense-making and the cognitive models involved in learning and doing testing. In parallel, a study will identify the training and knowledge-transfer requirements for testing within industry. Based on the research findings and study outcomes, teaching capsules for software testing will be developed that take into account the cognitive models of students and the needs of industry. Once their effectiveness has been validated, the capsules and the accompanying instructional material will be made available to other researchers and professors to improve testing education.

Keywords: Software testing, education, cognitive models, industrial needs

1. Introduction

Software quality is becoming increasingly important as society relies more and more on software in daily life. The impact of software failures is significant: the Software Fail Watch report [2] identifies 548 failures affecting billions of people and trillions of dollars in assets, and the total cost of poor software quality (CPSQ) in the US alone was estimated at $2.08 trillion in 2020 [1]. Testing is currently the most important quality assurance technique used in industry, and it must cope with the increasing complexity of software and software development. To keep pace with increasing quality requirements, organisations need to systematise and automate testing throughout the software and systems lifecycle.
Despite the importance of and need for good testing practices, there is a lack of testing culture and awareness among both practitioners in companies [3] and students in academia [4, 5], leading to poor software. Programmers may understand the importance of testing but postpone it because of pressure to deliver quickly [4, 6]. Testing requires students to use multiple cognitive resources, making it challenging to teach [7]. In addition, testing is not sufficiently integrated into computer science curricula, so students do not test because they can still get away with not doing so [8]. Knowledge transfer between projects is also lacking, so the same testing challenges have to be faced repeatedly [3]. Moreover, the quality of the test cases designed is affected by the domain knowledge and testing expertise of individuals [9], highlighting the need for knowledge-transfer strategies within teams.

In short, software testing is very important, but it is not being done well, resulting in problematic and flawed software applications. The cause is a skills mismatch between what industry needs, what students need to learn, and the way testing is currently taught in higher education and vocational training. For example, industry needs better prepared students who can improve software production through efficient testing. To improve education, educational institutions need to understand industry's testing needs and adapt their curricula accordingly, and to teach testing effectively they need to understand how their students learn. We summarise all these skills mismatches as three gaps in testing education: the gap between academia and students, the gap between industry and academia, and the gap between graduates and industry. The ENACTEST project aims to fill these three gaps by looking at testing education from three perspectives: students, companies and teachers. The goal of the project is to identify and design seamless teaching materials for testing that are aligned with industry and learning needs.

2. Summary of project objectives

The ENACTEST project (2022-2025) aims to create coherent and timely teaching materials for testing, taking into account both industry needs and students' cognitive models. The project will assess the feasibility of integrating these materials into the curricula of the higher education and vocational training partners, as well as into the training practices of the small and medium-sized enterprise partners. The aim is to improve students' learning performance while reducing industry's training requirements for testing. The result will be improved knowledge transfer in testing between teams, professionals, academia and new engineers.

The teaching materials developed, known as capsules, will be bite-sized and easy to integrate into existing courses without imposing additional workload on trainers. These teaching materials will also improve industry training for new engineers. The term 'capsules' refers to their two main characteristics, bite-sized and seamlessly integrable, which will allow the results of the project to be widely adopted. Therefore, the following specific objectives have been defined for the ENACTEST project:

SO1: To identify the cognitive models for testing used by students and experts when they have to deal with testing, especially when designing test cases (a minimal illustrative sketch of such a test-case design task is given at the end of this section).
SO2: To categorise industry's needs and concerns regarding testing techniques and skills, in order to identify fundamental topics and skills to be included in academic curricula.

SO3: To design and develop new, specific, bite-sized testing materials (capsules) to be incorporated into education as early and seamlessly as possible. These will take into account students' cognitive models and industry needs.

SO4: To provide evidence of improved learning outcomes for students and improved knowledge transfer to industry.

We have defined six work packages that directly address these specific objectives and therefore the main goal of the project; they are schematised in Figure 1 together with their deliverables:

WP1: Management.
WP2: Learning needs and cognitive models of students (D2.1: Design of initial cognitive model of learning; D2.2: Cognitive model of learning testing).
WP3: Industry needs for testing education (D3.1: Classification of training needs and knowledge transfer processes at industry; D3.2: Identification of voids that must be fulfilled by the testing capsules).
WP4: Teaching capsules and modules/tools (D4.1: Analysis of current practices and resources used for software testing education; D4.2: Testing module/capsule design; D4.3: Testing course design and capsules).
WP5: Empirical validation (D5.1: Experimental evaluation plan; D5.2: Experimental evaluation report #1; D5.3: Experimental evaluation report #2).
WP6: Dissemination and exploitation.

Figure 1: ENACTEST work packages and corresponding deliverables (https://enactest-project.eu).
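To make concrete the kind of test-case design task referred to in SO1 and WP2, the following minimal sketch shows how equivalence partitioning and boundary-value analysis drive the choice of test cases for a simple pricing rule. It is purely illustrative and not taken from the project materials: the discount function, its thresholds and the pytest test cases are hypothetical.

```python
# Hypothetical example of test-case design using equivalence partitioning and
# boundary-value analysis; the business rule and its values are illustrative only.
import pytest


def discount(amount: float) -> float:
    """Return the discount rate for a purchase amount (hypothetical rule)."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount < 100:      # partition 1: small purchases, no discount
        return 0.0
    if amount < 500:      # partition 2: medium purchases
        return 0.05
    return 0.10           # partition 3: large purchases


# One representative value per partition plus the values on and around each boundary.
@pytest.mark.parametrize("amount, expected", [
    (0, 0.0),        # lower boundary of the valid domain
    (99.99, 0.0),    # just below the first threshold
    (100, 0.05),     # exactly on the first threshold
    (499.99, 0.05),  # just below the second threshold
    (500, 0.10),     # exactly on the second threshold
    (1000, 0.10),    # representative large purchase
])
def test_discount_partitions_and_boundaries(amount, expected):
    assert discount(amount) == expected


def test_negative_amount_is_rejected():
    with pytest.raises(ValueError):
        discount(-1)
```

Whether learners select such boundary values intuitively, and how they reason about the partitions, is the kind of behaviour that studies of cognitive models for test design can observe.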
3. Partners

The project consortium comprises a varied group of partners, consisting of four universities, one vocational centre and four small enterprises, to ensure that the outcomes benefit the entire socio-economic landscape. The partners of the ENACTEST project are:

• Universitat Politècnica de València (UPV), Spain
• Katholieke Universiteit Leuven (KU Leuven), Belgium
• Universidade do Porto (UP), Portugal
• Università degli Studi di Napoli Federico II (UNINA), Italy
• Research Institutes of Sweden (RISE), Sweden
• Centro Superior de Formacion Europa-Sur (CESUR), Spain
• NEXO QA, Spain
• INOVA+, Portugal
• CTG, Belgium

4. Summary of expected results

A summary of the expected outcomes of ENACTEST is:

• The description of the cognitive model that students and practitioners use to design test cases, i.e. how they decide what to test and how. The resulting cognitive model will be based on empirical evidence of the intuitive testing approaches of students (from VET, Vocational Education and Training, and HE, Higher Education) and practitioners.
• A repository that clearly represents the practice of training, testing and knowledge transfer between teams in industry. This repository will be populated with information from focus groups with expert testers, observations of testing practices in industry, and interviews with key players. We will also consider the training and knowledge-transfer testing practices published yearly in well-known reports such as those by Gartner, the Standish Group and Failwatch.
• The identification of the gaps that training materials for testing need to fill.
• A repository of current practices used in teaching testing, filled with information from a mapping of standard testing syllabi, a mapping review of academic publications related to teaching and learning testing, and the observation of materials and interviews with teachers of BSc and MSc courses.
• The teaching capsules, including the teaching materials (e.g. code, test examples, quizzes, information on design procedures, etc.) and the documentation artefacts that enable their use at university level (undergraduate and masters), in vocational education, and in training at companies.

5. Ongoing research

To reach the objectives of the ENACTEST project, we have undertaken several initiatives. Firstly, we have developed a case study and designed a protocol that will be used to perform various experiments at vocational and higher education centers. The purpose of these experiments is to understand the sense-making of students when testing software. Furthermore, we have initiated focus groups to identify the gaps and industrial requirements in knowledge transfer within testing teams. We have also conducted a comprehensive review of testing courses across the countries of the innovation alliance, and we are currently interviewing professors to understand the challenges and needs of teaching testing in practice in computer science curricula.

In addition, we have conducted a systematic review of the literature on techniques and tools that can enhance testing education. Based on the preliminary results on students' sense-making, industrial needs and academic needs, we have designed the capsules presented in Table 1; an illustrative sketch of the kind of exercise one of these capsules could build on is given at the end of this section.

Table 1
Foreseen capsules of the ENACTEST innovation alliance

Description | Main Partner
Online game for the early introduction of testing in computational thinking for initial programmers | UPV
Semi-structured clinical interviews to teach testing and promote A-HA moments | UPV
Collection of analogue games for practicing essential testing skills and how to use them in the classroom | UPV
Mutation Testing Game | U Porto
Educational Game for white-box test case design | U Porto
Educational Module for practicing Test Smell Detection/Removal | UNINA
Educational Game for "Man vs. Automated Testing Tools" challenges | UNINA
Model-based coverage | KU Leuven
Requirements-based testing with a focus on negative testing | KU Leuven
Automated state-based testing | RISE
BDD Acceptance Testing | NexoQA
Cyber Security Web Testing | NexoQA

The upcoming steps involve the final implementation and empirical evaluation of the capsules to determine their effectiveness and their perceived usability among students, professors and practitioners. To this end, we will conduct empirical studies to gather feedback from these groups; this feedback will be used to refine and improve the capsules so that they meet the needs and expectations of the community. Once the capsules have been validated and refined, we will publish them, along with all relevant materials and resources, on the project website, so that the wider community can access and use them for their own educational and professional purposes.
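As an example of the kind of exercise a capsule such as the Mutation Testing Game (Table 1) could build on, the sketch below shows an original function, a hand-written mutant of the kind a mutation tool would generate automatically, and the boundary test that kills it. This is a minimal illustration under our own assumptions, not actual capsule content; the function and the mutant are hypothetical.

```python
# Hypothetical mutation-testing illustration (not actual ENACTEST capsule content).
# Mutation tools generate mutants automatically; here one mutant is written by hand
# to make explicit what it means for a test to "kill" it.

def is_adult(age: int) -> bool:
    """Original implementation under test."""
    return age >= 18


def is_adult_mutant(age: int) -> bool:
    """Relational-operator mutant: '>=' replaced by '>'."""
    return age > 18


def test_boundary_value_kills_the_mutant():
    # age == 18 is the only input on which the two versions disagree, so a test
    # suite that checks this boundary passes on the original implementation...
    assert is_adult(18) is True
    # ...and would fail if the original were replaced by the mutant, i.e. the
    # mutant is killed. A suite that only checked, say, 10 and 30 would let
    # this mutant survive.
    assert is_adult_mutant(18) is False
```

In a game-based capsule, learners could, for instance, compete to write the smallest test suite that kills all mutants of such a function.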
6. Relevance to CAiSE

Software is at the heart of information systems. Software quality is critical to the correct use of information systems and is determined by the processes used to develop them, including testing. Unfortunately, testing is often poorly executed due to a mismatch between industry needs, students' learning needs, and current testing curricula. ENACTEST will provide a comprehensive approach to address these gaps through testing capsules. These capsules will enhance students' learning and improve their testing skills, which are becoming increasingly important in digital job profiles across the labour market. Ultimately, this will improve the quality of the software on which our digitalised society relies. The topic is particularly relevant to the CAiSE community, which aims to bring together researchers, engineers and practitioners and to provide opportunities to share and disseminate knowledge about a specific aspect of information systems engineering: software testing.

7. Project Information

• Full name: European Innovation Alliance for Testing Education
• Acronym: ENACTEST
• Duration: September 2022 to August 2025
• Funding Agency: ERASMUS+ Programme of the European Union
• URL: https://enactest-project.eu

Acknowledgments

This project has been funded by the ERASMUS+ Programme of the European Union under the call ERASMUS-EDU-2021-PI-ALL-INNO, project number 101055874 (2022-2025).

References

[1] The cost of poor software quality in the US: A 2020 report, 2020. URL: https://www.it-cisq.org/pdf/CPSQ-2020-report.pdf.
[2] The software fail watch, 2018. URL: https://www.tricentis.com/blog/software-fail-watch-q2-2018/.
[3] V. Garousi, M. Felderer, M. Kuhrmann, K. Herkiloğlu, S. Eldh, Exploring the industry's challenges in software testing: An empirical study, Journal of Software: Evolution and Process 32 (2020) e2251.
[4] L. P. Scatalon, J. C. Carver, R. E. Garcia, E. F. Barbosa, Software testing in introductory programming courses: A systematic mapping study, in: Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 2019, pp. 421–427.
[5] V. Garousi, A. Rainer, P. Lauvås Jr, A. Arcuri, Software-testing education: A systematic literature mapping, Journal of Systems and Software 165 (2020) 110570.
[6] A. Afzal, C. Le Goues, M. Hilton, C. S. Timperley, A study on challenges of testing robotic systems, in: 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), IEEE, 2020, pp. 96–107.
[7] E. Enoiu, G. Tukseferi, R. Feldt, Towards a model of testers' cognitive processes: Software testing as a problem solving approach, in: 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), IEEE, 2020, pp. 272–279.
[8] T. E. J. Vos, Zoeken naar fouten: op weg naar een nieuwe manier om software te testen [Searching for errors: towards a new way of testing software], 2017.
[9] K. Juhnke, M. Tichy, F. Houdek, Challenges concerning test case specifications in automotive software testing: assessment of frequency and criticality, Software Quality Journal 29 (2021) 39–100.