Using iStar to Describe Human-Robot Collaborations: Exploring Different Ways of Goal Model Usage

Jeshwitha Jesus Raja, Marian Daun
Center for Robotics, Technical University of Applied Sciences Würzburg-Schweinfurt, Schweinfurt, Germany

iStar'24: The 17th International i* Workshop, October 28, 2024, Pittsburgh, Pennsylvania, USA
jeshwitha.jesusraja@study.thws.de (J. Jesus Raja); marian.daun@thws.de (M. Daun)
ORCID: 0009-0008-7886-7081 (J. Jesus Raja); 0000-0002-9156-9731 (M. Daun)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.

Abstract
In human-robot collaboration, humans and robots work closely together in a manufacturing process. To ensure proper and efficient execution of the manufacturing process while considering human safety and avoiding damage to the robot and the work product, advanced planning of the collaborative manufacturing process is important. Goal models can be used already in the early phases to specify and analyze human-robot collaborations. However, as model-based development, along with other established software engineering practices, does not belong to the core of roboticists' training, guidance is needed for the creation and usage of goal models for human-robot collaborations. In this paper, we investigate different ways in which goal modeling can be used to specify human-robot collaborations, in order to determine a set of best practices and recommendations in the future. To do so, we compare the results of two teams who, after initial training on goal models, applied goal modeling to specify the same human-robot collaboration system. The outcome shows that multiple useful ways of using goal modeling for human-robot collaborations exist, which need to be considered in the future.

Keywords
Goal Model, Human-Robot Collaboration, Assembly Process

1. Introduction
Human-robot collaboration (HRC) is an emerging development in the field of industrial and service robotics, integral to the Industry 4.0 strategy [1]. Some production tasks cannot be automated, or only at an unacceptable cost. For example, assembly of flexible lines is still a problematic task for robots. Another example is the automation of small batch sizes, which is commonly not achieved cost-efficiently [2]. Therefore, humans and robots collaborate, so that the human takes care of tasks that are difficult to automate or steps that vary between different products, while the robot executes the repetitive tasks. However, establishing human-robot collaboration in industrial practice is challenging due to its safety-criticality. Human and robot collaborate closely on the same work piece, with partly overlapping movement trajectories. Therefore, early planning and advanced analysis of human-robot collaborations are needed already in early development stages.

Goal modeling allows for a systematic specification of tasks and allows for early analysis [3]. In particular, iStar [4] allows specifying actors, their goals and tasks, and their relationships, and is thus well-suited to investigate complex collaborative systems [5]. Thus, goal modeling has already been successfully applied to human-robot collaboration (e.g., [6]). However, roboticists are oftentimes not familiar with model-based concepts and analyses, as advanced software engineering is often not part of their core curriculum. Therefore, guidance is needed to support engineers in developing iStar models of human-robot collaborations. For all modeling languages, but particularly for modeling used in early development phases, there exists a multitude of ways in which the modeling language can be used, and model creators can develop their own style.
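The iStar concepts named above (actors, their goals and tasks, and the relationships between them) can be made concrete in code. The following is a minimal illustrative sketch in Python, not an official iStar metamodel; all class and attribute names are our own, and the model fragment is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """An intentional element inside an actor's boundary."""
    name: str
    kind: str  # "goal", "task", "resource", or "quality"
    children: list["Element"] = field(default_factory=list)  # AND-decomposition

@dataclass
class Actor:
    """An iStar actor with the elements it wants to achieve or perform."""
    name: str
    elements: list[Element] = field(default_factory=list)

@dataclass
class Dependency:
    """A social dependency: the depender relies on the dependee for the dependum."""
    depender: str
    dependee: str
    dependum: str

# Hypothetical fragment of a human-robot collaboration model: the cobot
# depends on the engineer to signal that an assembly task is complete.
cobot = Actor("Cobot", [Element("Perform safe operation", "goal")])
engineer = Actor("Engineer", [Element("Notify task completion", "task")])
dep = Dependency(depender="Cobot", dependee="Engineer",
                 dependum="Notify task completion")
```

Even a structure this small already supports simple analyses, such as listing every dependum a given actor relies on.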
Therefore, in this paper, we take a look at different ways of developing iStar goal models for human-robot collaborations using a concrete case system. Comparing the different modeling approaches and their outcomes shall allow us in the future to define guidelines for developing goal models for human-robot collaborations.

The paper is structured as follows: Section 2 discusses related work. Section 3.1 describes the study setup along with the case example used, Section 3.2 presents the results, and Section 3.3 presents the major findings. These are discussed in Section 4, which also concludes the paper.

2. Related Work
Approaches for modeling human-robot collaboration often focus on the human behavior inside the collaboration, which is challenging to sketch and, therefore, oftentimes needs the combination of modeling approaches from different perspectives [7]. The term human-robot collaboration, however, subsumes different levels of autonomy and interaction between human and robot [8]. As a result, modeling human behavior in human-robot collaborations is still a challenge [9]. Furthermore, these behavior-focused approaches can typically only be applied at later stages of development. In the very early stages, the exact behavior is unknown; rather, a proof of concept is needed, narrowing down the expected behavior of human and robot in the collaboration to be precisely specified later on. An abstract modeling language for the early phases that already allows analyses is goal modeling [10, 3]. Goal models can be used in requirements engineering to document high-level requirements, to reason over fundamental design decisions, and to identify conflicts [11].
Goal modeling seems particularly fitting for specifying human-robot collaboration, as here one of the first steps is to set a common goal for the collaboration that considers human preferences, task knowledge (including objects), and the capabilities of both the human and the robot [12]. In previous work, we have shown that GRL goal models are an adequate means for modeling human-robot collaborations in requirements engineering [13], particularly concerning safety aspects [6]. Therefore, we have proposed a GRL profile for capturing safety aspects of human-robot collaborations [14]. This profile is based on a previous GRL-compliant iStar extension to model collaborative cyber-physical systems [15].

3. Investigating Possible Uses of iStar to Model Human-Robot Collaborations

3.1. Study Setup
The goal of this study is to investigate different uses of iStar to model human-robot collaborations. This shall later on serve as a foundation for defining best practices and guidelines. To do so, we recruited robotics students from a sixth-semester elective requirements engineering course. As part of the course's curriculum, iStar and GRL were taught. In addition, our extensions for modeling collaborative systems, particularly human-robot collaborations, were presented to the students and also used for the tasks. The course was split into two groups, which were given the opportunity to create an iStar model for a human-robot collaboration system for extra credit. As a case example, a collaborative assembly station was chosen. The existing case system could be observed by the students, and the responsible engineers were available to answer detailed questions.

3.2. Preliminary Results

3.2.1. Goal Model 1
Figure 1 shows the goal model created by the first team, which emphasizes how the workstation should be set up.
The goal model features one main actor, 'Overall Workstation,' which has the goal of 'Output Toy Cars.' This goal can be achieved through the following components: the task 'Information and Safety Layer,' the task 'Manipulation Tasks,' the soft goal 'Task Shall Be Distributed Based on the Type of Interaction,' and the goal 'Design Specified Workspace.' The main actor also includes sub-actors: 'Ulixes A600,' which is part of the workstation and includes a projector for displaying instructions, cameras for monitoring, and a depth sensor for the engineer to signal task completion; 'UR5e,' the collaborative robot (cobot); and the 'Engineer.' The goal of these sub-actors is to be prepared for executing the assembly process. The main actor and the sub-actors have different goals that need to be fulfilled.

3.2.2. Goal Model 2
Figure 2 shows the goal model from the second team, which places more emphasis on the production process. This group also decided to define one main actor and decompose it into further actors. The main actor is 'Toy Car Production with HRC.' This actor has only one goal, namely 'Assembly of Toy Car for Kids in HRC Assembly Line,' whose fulfillment depends on the goals of all sub-actors. Note that we do not show the main actor and its boundary line in Figure 2 in order to improve the clarity of the figure. As shown in Figure 2, the main actor consists of five sub-actors, namely 'Cobot,' 'Collaboration,' 'Camera Systems,' 'Human Operator,' and 'Safety System.' The main actor has one main goal that needs to be fulfilled, which is done through the fulfillment of the sub-actors' goals. The goals include tasks that need to be performed before and during the assembly process.
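The structural difference between the two models can be summarized by their actor decomposition alone. A small sketch (actor names taken from the two models as described above; the Python representation itself is our own simplification):

```python
# Team 1: one main actor, with safety handled inside each sub-actor.
model_1 = {"Overall Workstation": ["Ulixes A600", "UR5e", "Engineer"]}

# Team 2: safety and collaboration appear as dedicated sub-actors.
model_2 = {
    "Toy Car Production with HRC": [
        "Cobot", "Collaboration", "Camera Systems",
        "Human Operator", "Safety System",
    ],
}

def sub_actors(model: dict[str, list[str]]) -> set[str]:
    """Collect all sub-actors of the single main actor."""
    return {sub for subs in model.values() for sub in subs}

# Only Team 2 reifies safety as an actor of its own.
print("Safety System" in sub_actors(model_2))  # True
print("Safety System" in sub_actors(model_1))  # False
```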
Figure 1: Goal Model 1 (diagram not reproduced here)

Figure 2: Goal Model 2 (diagram not reproduced here)

3.3. Major Findings
The important aspects of the assembly process include the two main actors (the human and the cobot), the monitoring system, and safety. All these aspects are featured in both goal models, but not in the same way.
When discussing safety, Figure 1 represents it within each sub-actor, showcasing how the human and the cobot must individually maintain safety aspects. Figure 2, on the other hand, shows safety as a separate actor, encompassing all the safety aspects of both the human and the cobot. With regard to the monitoring system, both goal models include it as a separate actor that features cameras for monitoring and a projector for displaying instructions. Despite using different labels, the representation of the monitoring system remains consistent in both models. The main focus of the assembly process is the collaboration between the human and the cobot. Figure 1 illustrates this collaboration through tasks within the individual actors and their communication via the monitoring system 'Ulixes A600.' Figure 2, on the other hand, shows the same with the use of a separate actor; the tasks within this actor are either dependent on or influence tasks from the 'Human Operator' and 'Cobot' actors. This demonstrates how the goal model from Team 1 (Figure 1) encompasses all safety and collaborative aspects within the respective actors, while the goal model from Team 2 (Figure 2) separates safety and collaboration into distinct actors. Regarding the elements used, both teams incorporate basic goals, tasks, decompositions, and resources. Additionally, Team 2 makes more use of contributions and soft goals. Regardless of the approach taken to create the goal models, both models represent the collaborative workspace for manufacturing toy cars, including the human, the cobot, the monitoring system, and workspace safety. In conclusion, although the two teams used different approaches to goal modeling for specifying human-robot collaborations, both successfully represented the complete collaboration and met the intended specifications.
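One way to compare such structuring choices systematically would be to extract aspect-specific projections from each model, e.g., everything related to safety regardless of which actor holds it. The following is a speculative sketch of such a filter, not an existing tool; the keyword matching and all element names are illustrative:

```python
def view(model: dict[str, list[str]], keyword: str) -> dict[str, list[str]]:
    """Project a goal model onto one aspect: keep only matching elements
    and drop actors that contribute nothing to the aspect."""
    filtered = {actor: [e for e in elements if keyword.lower() in e.lower()]
                for actor, elements in model.items()}
    return {actor: elems for actor, elems in filtered.items() if elems}

# Illustrative model mixing safety, process, and monitoring concerns.
model = {
    "UR5e": ["Perform safe operation", "Pick parts", "Place parts"],
    "Engineer": ["Pass safety training", "Perform assembly tasks"],
    "Camera System": ["Monitor workspace"],
}

safety_view = view(model, "safe")
print(safety_view)
# {'UR5e': ['Perform safe operation'], 'Engineer': ['Pass safety training']}
```

A view concept along these lines would let a safety engineer inspect safety concerns across all actors without committing to either of the two structuring styles observed in the study.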
It is well known in modeling that different modelers will end up with different models, by using different modeling elements, modeling at different levels of granularity, or preferring a different layout. In addition, given the degrees of freedom, modelers might select a different focus for a model depending on its intended purpose. In our case, we did not make specific requirements regarding the purpose of the modeling, other than that the model should adequately specify the case example. Therefore, it is not surprising that both models look different, but it shows that multiple aspects are relevant for capturing the important parts of human-robot collaboration, and these were covered in both models. However, depending on the particular intention, e.g., giving safety the visual importance of an actor in contrast to showing how safety plays a vital role within all actors, these aspects can be treated very differently.

For the future, it remains to investigate further approaches to modeling human-robot collaborations with goal models and to analyze the usefulness of the possible approaches for different purposes. It can particularly be questioned whether a view concept is needed, as human-robot collaboration involves a set of vital aspects that must all be covered to completely understand the collaboration and appropriately specify the system. For instance:

• the production or assembly process is needed as the main context constraint, limiting the solution space;
• safety must be considered as a vital factor to enable real-world application of human-robot collaborations;
• the physical actors, i.e., the human and the robot, both need to be given specific tasks aligned with each other;
• the collaboration itself serves as a source of constraints for aligning the actions of the human and the robot;
• monitoring and planning systems, as well as the production systems, are relevant because, aside from the robot itself, human-robot collaborations rely on further technical systems.

4. Conclusion
Human-robot collaboration is an evolving field in industrial robotics, allowing for the semi-automation of complex production processes. To ensure successful collaboration in terms of product quality and safety, advanced planning of the collaboration process is needed. Goal modeling can aid in the specification and analysis of human-robot collaborations already in the early phases. However, there is currently a lack of guidance for roboticists on how best to create and use goal models for human-robot collaborations.

In this paper, we reported on a first study to shed light on the use of goal models for human-robot collaborations. Two teams were tasked with creating iStar goal models for an existing human-robot collaboration system. The results substantiate the assumption that iStar goal modeling is applicable and useful in this scenario. The two groups produced completely different models, placing emphasis on different aspects. This highlights the need for future research to identify what is most useful to investigate in early requirements engineering for human-robot collaborations. Furthermore, it might indicate that a view concept is needed to emphasize multiple aspects of human-robot collaboration. In addition, since the approach was tailored to a specific human-robot collaboration use case and applied only to a particular group of engineering students, its generalizability cannot be assumed. Thus, for future work, it is important to explore more case studies involving a wider variety of robotic collaborations and different types of interactions.

References
[1] A. Vysocky, P. Novak, Human-robot collaboration in industry, MM Science Journal 9 (2016) 903–906.
[2] A. Pichler, C. Wögerer, Towards robot systems for small batch manufacturing, in: 2011 IEEE International Symposium on Assembly and Manufacturing (ISAM), IEEE, 2011, pp. 1–6.
[3] J. Horkoff, F. B. Aydemir, E. Cardoso, T. Li, A. Maté, E. Paja, M. Salnitri, L. Piras, J.
Mylopoulos, P. Giorgini, Goal-oriented requirements engineering: an extended systematic mapping study, Requirements Engineering 24 (2019) 133–160.
[4] F. Dalpiaz, X. Franch, J. Horkoff, iStar 2.0 language guide, arXiv preprint arXiv:1605.07767 (2016).
[5] D. Amyot, J. Horkoff, D. Gross, G. Mussbacher, A lightweight GRL profile for i* modeling, in: Advances in Conceptual Modeling - Challenging Perspectives: ER 2009 Workshops CoMoL, ETheCoM, FP-UML, MOST-ONISW, QoIS, RIGiM, SeCoGIS, Gramado, Brazil, November 9-12, 2009, Proceedings 28, Springer, 2009, pp. 254–264.
[6] M. Daun, M. Manjunath, J. Jesus Raja, Safety analysis of human robot collaborations with GRL goal models, in: International Conference on Conceptual Modeling, Springer, 2023, pp. 317–333.
[7] J. Karwowski, W. Dudek, M. Wegierek, T. Winiarski, HuBeRo: a framework to simulate human behaviour in robot research, Journal of Automation, Mobile Robotics and Intelligent Systems 15 (2021) 31–38.
[8] R. E. Yagoda, M. D. Coovert, How to work and play with robots: an approach to modeling human-robot interaction, Computers in Human Behavior 28 (2012) 60–68.
[9] E. Kindler, Model-based software engineering: the challenges of modelling behaviour, in: Proceedings of the Second International Workshop on Behaviour Modelling: Foundation and Applications, 2010, pp. 1–8.
[10] A. Van Lamsweerde, Goal-oriented requirements engineering: a guided tour, in: Fifth IEEE International Symposium on Requirements Engineering, IEEE, 2001, pp. 249–262.
[11] A. M. Grubb, M. Chechik, Formal reasoning for analyzing goal models that evolve over time, Requirements Engineering 26 (2021) 423–457.
[12] S. C. Akkaladevi, M. Plasch, A. Pichler, B. Rinner, Human robot collaboration to reach a common goal in an assembly process, in: STAIRS 2016, IOS Press, 2016, pp. 3–14.
[13] J. Jesus Raja, M. Manjunath, M. Daun, Towards a goal-oriented approach for engineering digital twins of robotic systems, in: ENASE, 2024, pp. 466–473.
[14] M. Manjunath, J. Jesus Raja, M.
Daun, Early model-based safety analysis for collaborative robotic systems, IEEE Transactions on Automation Science and Engineering (2024).
[15] M. Daun, J. Brings, L. Krajinski, V. Stenkova, T. Bandyszak, A GRL-compliant iStar extension for collaborative cyber-physical systems, Requirements Engineering 26 (2021) 325–370.