Using Online Working Sessions as an Evaluation Technique for Research in SPE: Experience and Lessons Learned

Floriment Klinaku, Alireza Hakamian and Steffen Becker
Software Quality and Architecture, University of Stuttgart, Germany

Keywords: online working sessions, model-based software engineering, software performance engineering

Context: Evaluating tools, languages and frameworks is crucial in software engineering research. A common user group for such evaluations are bachelor and master students enrolled in related courses. Two problems with such studies remain: the incentive and the format. The recent pandemic affected both, especially the format of the study.

Objective: In this talk we share our experience and the lessons we learned from designing and conducting an online working session for master students in the Model-Driven Software Engineering course. Our design decisions were based on two requirements: (1) students should reinforce the lecture material, and (2) they should evaluate a language for modelling scaling policies for cloud applications. The first requirement provided the incentive for the students, whereas the second was the "no free lunch" part.

Method: We designed a two-day working session with two distinct goals: reinforcing the course material and evaluating a prototype language for modelling scaling policies for cloud applications. Instead of multiple sheets, we used a single shared document for collaborative work. Hence, there was no screen sharing, and all material was stored in one document. The collaborative document started empty and was filled incrementally; in some parts, students had to contribute.

SSP 2021: 12th Symposium on Software Performance 2021
floriment.klinaku@iste.uni-stuttgart.de (F. Klinaku); alireza.hakamian@stud.uni-stuttgart.de (A. Hakamian); steffen.becker@iste.uni-stuttgart.de (S. Becker)
© 2021 Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073)

Results: On the first and second day we had 5 and 7 participants, respectively. At the end of each day we asked the students to fill in a survey containing questions of interest for both goals. The results are promising for both requirements. As lessons learned, we emphasize:

• Such a hands-on workshop with two goals requires effort in design, but it pays off if the goals align content-wise.
• Two working sessions of 90 minutes each proved to be enough to reach the teaching and evaluation goals with a group of students.
• An important aspect is the expected level of skills and domain expertise of the target users of the artefact being evaluated. If this contradicts the teaching goals, then such working sessions are not the way to go.
• The main effort should go into preparing simple examples and tooling to support the online session.

Furthermore, in the talk we will share the feedback given by the students on both goals. To summarize the feedback:

• One or more students (1) were not able to complete the hands-on part due to overloaded content, (2) had technical problems with their development environment, and (3) managed to follow through to the end with all tasks completed.
• All students were able to follow the rationale of the modelling language.
• Most students (1) were able to come up with examples for different language concepts, and (2) found the format of the workshop suitable.
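The abstract does not show the prototype language itself. As a rough illustration of what a scaling policy expresses, a minimal threshold-based rule can be sketched in plain Python; all names, thresholds and replica bounds below are hypothetical and do not reflect the language evaluated in the study:

```python
# Hypothetical sketch of a threshold-based scaling policy (the kind of
# behaviour students modelled). Names and values are invented for this
# illustration and are not part of the evaluated prototype language.

def scaling_decision(cpu_utilization: float, replicas: int,
                     scale_out_at: float = 0.8, scale_in_at: float = 0.3,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Return the new replica count for an observed CPU utilization."""
    if cpu_utilization > scale_out_at and replicas < max_replicas:
        return replicas + 1   # scale out under high load
    if cpu_utilization < scale_in_at and replicas > min_replicas:
        return replicas - 1   # scale in when under-utilized
    return replicas           # stay put inside the hysteresis band
```

For example, `scaling_decision(0.9, 2)` adds a replica, while an observed utilization between the two thresholds leaves the deployment unchanged.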