A Proposal for Measuring Understandability of
Business Process Models
Rosa Velasquez1 , Rene Noel1,2 , Jose Ignacio Panach3 and Oscar Pastor1
¹ PROS-VRAIN: Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València, Camí de Vera, s/n 46022, València, Spain
² Escuela de Ingeniería Informática, Universidad de Valparaíso, General Cruz 222, Valparaíso, Chile
³ Escola Tècnica Superior d'Enginyeria, Departament d'Informàtica, Universitat de València, Avinguda de la Universitat, s/n 46100 Burjassot, València, Spain


Abstract
Different factors affect the understandability of business process models, which depends not only on the characteristics of the model but also on the knowledge and skills of the model users. Researchers have conducted experiments to find relationships among factors and indicators, collecting data through surveys and quizzes in problem-solving experimental tasks. However, collecting data from a critical number and variety of model users through experimental replications can be an expensive approach. This article proposes an understandability measurement approach that collects data through a survey. The proposal builds on existing quality models and instruments, which are analysed to design a minimal instrument that jointly collects information on multiple understandability factors. The proposed measurement approach is part of ongoing research on discovering relationships between the multiple factors and business process model understandability indicators using data analytics and machine learning techniques.

Keywords
understandability, instrument, business process models




1. Introduction
Comprehension is the primary goal of the pragmatic quality dimension in conceptual models
[1]. In particular, the understandability of business process models addresses how easy it is
to understand the information contained in a process model [2]. Understandability indicators
of a model can be measured with an objective approach, i.e., by asking a model user about
the information represented in the model, or with a subjective approach in terms of perceived
understandability [3].
   On the other hand, many factors, related both to the conceptual model and to the personal
characteristics of the model user, can affect understandability. Experimental research [2, 4, 5]
on these factors and their effect on understandability indicators has confirmed the need for an
empirical approach, but it has also revealed the difficulty of controlling many levels of many
variables at the same time. Moreover, since some factors depend on the characteristics of the
model users, covering a significant number and diversity of users would require a great number
of experimental replications. Our approach is to collect data on understandability factors and
indicators through a survey that can be applied continuously to increase the number and variety
of model users incrementally. However, carrying out such a long-term effort requires carefully
selecting the measurements related to the personal factors and the understandability outcomes.
In this article, we propose a measurement approach to collect multiple understandability factors
and indicators in a minimalist way, aiming at the long-term collection of understandability
data.

Joint Proceedings of RCIS 2022 Workshops and Research Projects Track, May 17-20, 2022, Barcelona, Spain
rvelasquez@pros.upv.es (R. Velasquez); rnoel@pros.upv.es (R. Noel); joigpana@uv.es (J. I. Panach); opastor@dsic.upv.es (O. Pastor)
ORCID: 0000-0001-6817-1517 (R. Velasquez); 0000-0002-3652-4645 (R. Noel); 0000-0002-7043-6227 (J. I. Panach); 0000-0002-1320-8471 (O. Pastor)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org


2. Background and Related Work
In 2020, Dikici et al. [3] proposed an understandability framework for business process models
based on a systematic literature review. The framework groups the understandability factors
into process model and personal factors and defines understandability indicators. Process model
factors concern the inner characteristics of the model; for instance, a model with more concepts
and relationships could be harder to understand than a simpler model. Personal factors concern
the background of the model user; for instance, if the problem domain or the modelling notation
is known to the model user, it is easier to extract information from the model. The
understandability indicators capture whether the model user can extract information from the
model and can be measured objectively or self-reported. Figure 1 depicts the framework's factors
and indicators from [3].




Figure 1: Understandability factors selected from Dikici’s understandability framework.
   Some of the above factors have been measured in experimental assessments using different
instruments. For instance, process model factors and measurements for the model graph are
proposed by Mendling in [6], while [7] adds the evaluation of the labelling style of the
model elements. Sánchez-González et al. [8] added complexity measurements for process models.
In [9], the process diagram size is measured as the number of elements; however, this raises the
question of how the size of the model layout affects understandability. Regarding the personal
factors, Recker [10] studied the impact of the individual characteristics of model users using
demographic surveys. Finally, understandability indicators were also measured by Mendling et
al. [11] through quizzes about the information represented in business process models. The
questions are designed ad hoc for each model, requiring the model user to interpret a set of
model elements used in combination and to follow the process's flow through decision points.


3. Understandability Measurement Proposal
This proposal aims to support collecting data on the factors and indicators of understandability
of business process models from as many model users as possible. To this end, we aim to move
from an experimental approach to data collection to a survey approach. The goal is to cover a
wide variety of personal profiles that may affect understandability. In a survey context, the set of
questions about the model user’s background and profile must be minimal to reduce dropout risk.
On the other hand, process models must be characterised in terms of their understandability
factors. However, the understandability characteristics of models should be independent of
the business process modelling tool since it is impossible to control the respondents’ settings.
Finally, understandability indicators need to be collected through quizzes.
   The proposed understandability measurement approach is presented in Figure 1. As shown,
we base the proposal on the quality framework by Dikici et al. [3], although some factors and
indicators are not considered. In the following subsections, we comment on the procedure and
measurements for each step.

3.1. Step 1: Initial Survey
This step collects personal information about the model user. We ask the model users
to characterise themselves for each of the personal factors in Figure 1. The following factors
are collected using 5-point Likert scale questions: modelling expertise, knowledge of the process
modelling notation, cognitive abilities, and domain familiarity. We followed this approach since
it is the same adopted in experimental instruments [2, 5]. The professional background, in turn,
is selected from a list.
   The factors discarded in the first step relate to learning style research: PF5 - Learning
Style, PF6 - Learning Motive, and PF7 - Learning Strategy. The assumption that people can be
grouped into different learning style categories has scarce support from objective studies [12].
These elements are struck through in orange in Figure 1.
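As an illustration, a respondent record from this initial survey could be represented as follows; the field names, example values, and list of professional backgrounds are assumptions for the sketch, not the instrument's actual wording:

```python
# Hypothetical Step 1 record: four personal factors on a 5-point Likert
# scale, plus a professional background chosen from a list.
LIKERT = range(1, 6)  # valid Likert answers: 1..5

respondent = {
    "modelling_expertise": 4,
    "notation_knowledge": 3,
    "cognitive_abilities": 5,
    "domain_familiarity": 2,
    "professional_background": "software engineer",  # picked from a list
}

# Basic validation of the Likert-scaled answers.
likert_fields = ("modelling_expertise", "notation_knowledge",
                 "cognitive_abilities", "domain_familiarity")
assert all(respondent[f] in LIKERT for f in likert_fields)
```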
3.2. Step 2: Model Review
In this step, the model users review a business process model. The survey can present several
business process models to the model user sequentially, as shown in Figure 1. The business
process models considered in the current version of the instrument were carefully designed to
present combinations of different values of the process model factors of the framework. The
factors taken into account are PMF2 - Structural complexity, PMF5 - Visual layout, PMF6 - Model
element labelling, and PMF10 - Modelling construct type used, which are starred in Figure 1. The
combination of different levels of these factors generated seven different models. However, the
number of models could be increased by considering the non-starred factors in the model design,
which is future work.
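The factor-level combinations behind the model variants can be sketched as a factorial enumeration. The factor names come from the framework, but the concrete levels below are illustrative assumptions (the paper does not enumerate them), and the seven models used would be a selected subset of the combinations:

```python
from itertools import product

# Illustrative two-level settings for the four starred process model factors.
factors = {
    "PMF2_structural_complexity": ["low", "high"],
    "PMF5_visual_layout": ["compact", "spread"],
    "PMF6_element_labelling": ["verb-object", "action-noun"],
    "PMF10_construct_type": ["basic", "extended"],
}

# Full factorial design: every combination of factor levels is a
# candidate model variant.
variants = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(variants))  # 16 candidate combinations with these assumed levels
```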

3.3. Step 3: Understandability Assessment
Finally, in Step 3 we measure the understandability indicators. The objective indicators of
understandability are measured through a quiz that tests whether the model user can extract
information from the model. The questions are true/false, and at most six questions per model
are designed. The understandability effectiveness indicator for each reviewed model is calculated
as the number of correct responses. To make different models comparable, the quizzes for the
different models must have the same number of questions (four questions per model) and a similar
difficulty level. The response time for each answer is recorded to calculate the understandability
task efficiency as the mean time over all the correct answers. The true/false questions as well
as the effectiveness measurements are based on the works presented in [5] and [11]. Since the
authors did not report validity threats, we replicated their approach.
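A minimal sketch of these two indicator computations follows; the response format (one record per quiz question, with correctness and time in seconds) is an assumption for illustration:

```python
# Hypothetical quiz responses for one reviewed model.
responses = [
    {"correct": True, "time_s": 12.0},
    {"correct": False, "time_s": 30.5},
    {"correct": True, "time_s": 18.0},
    {"correct": True, "time_s": 9.5},
]

# Effectiveness: number of correct responses for the reviewed model.
effectiveness = sum(r["correct"] for r in responses)

# Efficiency: mean response time over the correct answers only.
correct_times = [r["time_s"] for r in responses if r["correct"]]
efficiency = sum(correct_times) / len(correct_times)

print(effectiveness, round(efficiency, 2))  # 3 13.17
```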
   To consider the perceived understandability factors, i.e., cognitive load, perceived usefulness,
ease of use, and intention to use, it would be necessary to introduce 13 questions, as presented
in [10]. Since we believe this would add an overwhelming load to the model user, we opted
not to consider self-reported understandability scores in order to better utilize the subjects’
time to review more models. This decision is shown in the blue striped factors in Figure 1. The
final survey instrument and a sample questionnaire are available in an open repository 1 , and
implemented in the Model Comprehensibility Survey System (MUSS)2 .


4. Conclusions and Future Work
In this paper, we presented the process for designing a survey-based measurement approach for
understandability. Based on existing understandability quality frameworks and experimental
instruments, we designed a three-step process and a 37-question instrument to collect
understandability indicators. We also implemented a support tool for the process that
automatically measures the model characteristics. Future work focuses on applying the
measurement approach to capture a significant amount of data for analysis with data analytics
and machine learning techniques, and on updating the instrument based on the findings.


   ¹ https://doi.org/10.5281/zenodo.6391543
   ² http://muss.informatica.uv.cl/
Acknowledgments
This work has been developed with the financial support of the Spanish State Research Agency
and the Generalitat Valenciana under the projects MICIN/AEI/10.13039/501100011033, GV/2021/
072, and INNEST/2021/57, and co-financed with ERDF and the European Union NextGen-
erationEU/ PRTR, the National Agency for Research and Development (ANID)/ Scholarship
Program/ Doctorado Becas Chile/ 2020-72210494 and Santiago Grisolía fellowship under the
project GRISOLIAP/2020/096.


References
 [1] O. I. Lindland, G. Sindre, A. Solvberg, Understanding quality in conceptual modeling, IEEE
     Software (1994) 42–49.
 [2] H. A. Reijers, J. Mendling, A Study Into the Factors That Influence the Understandability
     of Business Process Models, IEEE Transactions on Systems, Man, and Cybernetics - Part
     A: Systems and Humans (2011) 449–462.
 [3] A. Dikici, O. Turetken, O. Demirors, Factors influencing the understandability of process
     models: A systematic literature review, Information and Software Technology (2018)
     112–129.
 [4] R. Gabryelczyk, A. Jurczuk, Does Experience Matter? Factors Affecting the Understand-
     ability of the Business Process Modelling Notation, Procedia Engineering 182 (2017).
 [5] J. Mendling, M. Strembeck, J. Recker, Factors of process model comprehension—Findings
     from a series of experiments, Decision Support Systems (2012) 195–206.
 [6] J. Mendling, Metrics for Process Models: Empirical Foundations of Verification, Error Pre-
     diction, and Guidelines for Correctness, Lecture Notes in Business Information Processing,
     Springer-Verlag, 2008.
 [7] J. Mendling, H. Reijers, J. Recker, Activity labeling in process modeling: Empirical insights
     and recommendations, Information Systems 35 (2010) 467–482. Vocabularies, Ontologies
     and Rules for Enterprise and Business Process Modeling and Management.
 [8] L. Sánchez-González, F. García, F. Ruiz, J. Mendling, Quality indicators for business process
     models from a gateway complexity perspective, Information and Software Technology 54
     (2012) 1159–1174.
 [9] H. Störrle, On the impact of layout quality to understanding UML diagrams: Size matters,
     in: Model-Driven Engineering Languages and Systems, Springer, Cham, 2014, pp. 518–534.
[10] J. Recker, Continued use of process modeling grammars: the impact of individual difference
     factors, Eur J Inf Syst 19 (2010) 76–92.
[11] J. Mendling, J. Recker, H. A. Reijers, H. Leopold, An Empirical Review of the Connection
     Between Model Viewer Characteristics and the Comprehension of Conceptual Process
     Models, Information Systems Frontiers 21 (2019) 1111–1135.
[12] P. A. Kirschner, Stop propagating the learning styles myth, Computers & Education 106
     (2017) 166–171.