<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Enhanced Data-Driven Decision-Making</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vânia Sousa</string-name>
          <email>vania.sousa@ccg.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristiana Vieira</string-name>
          <email>avieira@dsi.uminho.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ana Lavalle</string-name>
          <email>alavalle@dlsi.ua.es</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>António Vieira</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maribel Yasmina Santos</string-name>
          <email>maribel@dsi.uminho.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <kwd-group>
          <kwd>Analytical Requirements</kwd>
          <kwd>Business Process</kwd>
          <kwd>Performance Indicators</kwd>
          <kwd>Process Analytics</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>ALGORITMI Research Centre, University of Minho, Campus de Azurém</institution>
          ,
          <addr-line>4800-058 Guimarães</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>CCG/ZGDV ICT Innovation Institute, Campus de Azurém</institution>
          ,
          <addr-line>4800-058 Guimarães</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Lucentia Research Group (DLSI), University of Alicante</institution>
          ,
          <addr-line>San Vicent del Raspeig</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>28</fpage>
      <lpage>31</lpage>
      <abstract>
        <p>By integrating Business Analytics and Process Analytics, organizations can gain a deeper understanding of the relationship between process inefficiencies and business outcomes, leading to improved data-driven decision-making. This integration, however, is often overlooked, with limited methodological guidance for systematically combining these two analytical domains. Motivated by this challenge, this paper proposes a methodological approach for the identification of analytical visualizations that allow the aforementioned integration to be achieved systematically. The proposed approach is tested in a running example, which includes an instantiation of a Data Warehouse system for supporting data integration and analysis for business process analytics.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>Business Analytics (BA) and Process Analytics (PA) have traditionally been pursued as separate
domains. While data warehouses integrate data that allows the analysis of relevant historical
data for business insights, process mining tools have focused on extracting process models from
event logs. Such a siloed approach can hinder a comprehensive understanding of organizational
performance. As stated by [4], Process Science, which includes PA, requires contributions from
various disciplines, offering perspectives such as human, social, environmental, economic, and
technological (including Data Science, which encompasses BA). By integrating these diverse
perspectives, process science can provide a more comprehensive view of processes and their
impact on businesses. Thus, to unlock the full potential of data-driven decision-making, a more
integrated approach is needed.</p>
      <p>This paper proposes a methodology for the integration of PA and BA. We outline a conceptual
approach for identifying relevant indicators for the business and operational processes and
determining the types of visualizations to use in their analysis. With this approach, we aim to
empower decision-makers with a comprehensive perspective on organizational performance
through the integration of performance indicators and effective visualization techniques.</p>
      <p>This paper is structured as follows. Section 2 summarizes the related work. Section 3 presents
the proposed methodological approach and delves into the running example. Section 4 evaluates
and discusses the approach. Section 5 concludes with the main findings and future work.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>Organizations are increasingly recognizing the importance of metrics-driven management in
achieving their strategic goals, reflecting a broader trend toward data-driven decision-making
and performance optimization. While the academic and practical significance of these metrics
has long been acknowledged, as evidenced by studies such as [5], their role is now being
viewed through a new lens, emphasizing their critical importance in contemporary business
environments. As organizations navigate an increasingly complex landscape, the ability to
harness and interpret vast amounts of information has become crucial for maintaining a
competitive advantage and achieving strategic ambitions. [6] and [7] further state that “the synergy
between business performance metrics and process efficiency indicators has the potential to provide
a comprehensive understanding of organizational dynamics”.</p>
      <p>In 2010, [5] introduced a framework integrating Business Process (BP) and Information
Technology (IT) management with Business Intelligence (BI) at all decision-making levels (strategic,
tactical, and operational). This framework enables real-time monitoring and analysis to optimize
operations and align business activities with strategic goals. It connects business process data
with operational activities, implements business rules and Key Performance Indicators (KPIs),
and generates automatic alerts to prevent issues. Despite limitations, such as the absence of
practical use cases, insufficient technical detail, and a theoretical focus, the study emphasises a
more comprehensive approach to business performance management.</p>
      <p>Research in subsequent years has refined this integration, focusing more on practical
applications. For example, [8] proposes a process-oriented approach using the Business Process
Model and Notation (BPMN) to integrate interrelated business processes into an analytical data
model, encompassing behavioral aspects and performance measures. This approach results
in a comprehensive analytical data model aligned with both strategic goals and operational
execution. However, the study does not fully address the integration of detailed operational data
or practical implementation challenges. Similarly, [9] advances this discussion by presenting a
Data Warehouse (DW) model specifically for analyzing business process data. This study uses
BPMN 2.0 to emphasize the symbiotic relationships between business process management and
business intelligence, proposing an analytical data model for DW systems. While this study
emphasizes the relationship between business process management and BI, it lacks a detailed
integration of operational data with business performance indicators.</p>
      <p>By 2020, the focus shifted toward integrating business and process data within complex
technological environments. [10] explored this integration by identifying various intra- and
inter-organizational collaborative process scenarios and proposed an integrated meta-model
to address data matching and integration challenges. This model-driven approach provides a
unified view that captures all data consistently, facilitating process extraction and data mining
techniques, but may fall short in providing a detailed framework for practical implementation
within specific organizational contexts.</p>
      <p>More recently, [11] proposed the Business and Process Analytics Data Warehouse
MetaModel (B&amp;PADWM), which integrates the traditional business perspective in a DW system
with the process perspective. This meta-model details the activities supporting business
processes, enabling an analysis of both business operations’ efficiency and the effectiveness of
supporting processes. By highlighting the cause-and-effect relationships between these
dimensions, this approach helps organizations assess operational efficiency and process effectiveness.
The B&amp;PADWM meta-model supports strategic decision-making based on a comprehensive
understanding of operational knowledge.</p>
      <p>Building on the work of [11], this paper proposes a methodology to bridge the existing gaps.
This methodology provides a structured approach to integrating business performance indicators
with operational data by defining analytical requirements, proposing a multidimensional Data
Warehouse model, and generating visualizations for decision-making. This approach aims to
enhance organizational efficiency and effectiveness by addressing the limitations of previous
studies, including the need for a comprehensive and actionable data analysis approach that
integrates both business and operational perspectives effectively.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Integrating Business and Process Analytics</title>
      <p>The integrated analysis of business and process data requires a methodological approach to guide
and support the identification of a data model that integrates these two perspectives, highlighting
how they impact each other. Improving organizational efficiency and efficacy requires analyzing
the data associated with the business processes and also the data associated with the operational
processes, which include the activities performed by the several actors involved in the business
processes. As can be seen in Figure 1, this work considers that Business Process Analytics
includes the business processes and their indicators to unveil how the business is performing.
These business processes produce data that needs to be processed to highlight useful
information about the business indicators. Decision-makers use this information to devise
strategies and actions that impact business performance, intending to improve it. Besides the data
from the business processes, vast amounts of data are collected from the supporting operational
processes, raising concerns addressed by PA. In this paper, Business Process Analytics considers
the analysis of business indicators (the BA perspective) and operational indicators (the PA
perspective) and intends to highlight how business and operational processes impact each other.</p>
      <sec id="sec-4-1">
        <title>3.1. Proposed Methodological Approach</title>
        <p>The proposed methodological approach considers that, to properly establish analytics that relates
process and business data, there is the need to identify the associated analytical requirements.
These will integrate business and operational concerns in improving efficiency and efficacy.
Figure 2 details the methodological approach, which contributes to the field by systematically
transforming raw process data into insights through a series of well-defined steps. After
collecting the data or having access to it, including event logs, transaction records, or any other
type of process-related information, the steps that follow are:
1. Process Model. This step requires a representation of the process model using, for
instance, Process Mining techniques, as they allow for the extraction of a process model
from the event logs. This model visually represents the flow of activities, the objects, the
sequence of events, and the interactions between different entities within the process.
These activities and related events can be modeled following the OCEL 2.0 standard for
Object-Centric Process Mining [12], as this facilitates the identification of patterns,
dependencies, and inefficiencies in the operational processes [13]. In OCEL 2.0, objects
are related to the different activities through their events.
2. Conceptual Data Model. This step involves identifying the key concepts (entities or
classes), relationships, and data attributes that are central to the process. By analyzing the
process model, it is possible to identify the types of activities carried out by the objects
involved and how these activities relate to the underlying data structures, providing a
high-level abstraction of the data.
3. Analytical Requirements. This step is related to the identification of the Analytical
Requirements to be fulfilled, considering the background knowledge provided by the
process and conceptual data models. This involves determining the analytical tasks needed
to support decision-making. It is a crucial step, as it ensures that the analytical model and
supporting visualizations are aligned with the strategic goals of the organization.
4. Analytical Data Model. In this step, an Analytical Data Model is identified to support the
analytical requirements. It organizes the data in a way that facilitates efficient querying
and reporting, often involving data warehouses, data marts, or other data systems.
5. Analytical Visualizations. This step involves developing visualizations based on the
analytical requirements and the supporting analytical data model. These transform possibly
vast amounts of complex data into intuitive insights, using tools such as dashboards.</p>
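        <p>As a minimal sketch of step 1, consider a simplified, hypothetical event log with one object per event; the directly-follows relation derived below per object is the starting point of many process-discovery techniques (a full OCEL 2.0 log would relate each event to several objects):</p>
        <preformat>
```python
from collections import defaultdict

# Hypothetical OCEL-style events as (timestamp, activity, object_id) triples.
# OCEL 2.0 allows several objects per event; one object per event keeps
# this sketch short.
events = [
    (1, "place order", "o1"),
    (2, "confirm order", "o1"),
    (3, "pick item", "o1"),
    (4, "create package", "o1"),
    (1, "place order", "o2"),
    (2, "confirm order", "o2"),
    (3, "create package", "o2"),
]

def directly_follows(events):
    """Derive the directly-follows relation per object, a common first
    step when extracting a process model from an event log."""
    traces = defaultdict(list)
    for ts, activity, obj in sorted(events):
        traces[obj].append(activity)
    relation = set()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            relation.add((a, b))
    return relation

print(sorted(directly_follows(events)))
```
        </preformat>
        <p>In practice, a process-mining library would be applied to real event logs; even this toy relation already shows that create package can follow confirm order directly or only after pick item.</p>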
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Running Example</title>
        <p>For detailing the methodology, the running example outlines the management of customer
orders [14] (Figure 3), covering the registration, payment, packing, and shipping of orders. Staff
from the sales, warehousing, and shipment departments are involved. Customers place orders,
which are assigned to a dedicated sales representative for registration and payment processing.
Warehousing staff check stock, reorder if necessary, and prepare items for shipment. Packages
are compiled and shipped, often with some policy deviations in loading assistance, and deliveries
may fail repeatedly before success. In the orders, each product has a price that increases with
time due to inflation, negatively impacting customers’ orders. After placing an order (place
order), sales employees register the order (confirm order) and handle the payment processing
(payment reminder, pay order). For preparing the orders, a warehousing employee verifies
the availability of the ordered items and, when necessary (item out of stock), reorders items of
products (reorder item) to have items available to pick (pick item). Packages are prepared
(create package) for customers, including the items available for shipment (send package), and
additional packages may be sent for items of the same or different orders. As deliveries may fail
(failed delivery), packages are resent until successful delivery (package delivered).</p>
      </sec>
      <sec id="sec-4-3">
        <title>3.3. From Data to Visualizations</title>
        <p>Following the proposed approach, the Process Model (already available, as shown in subsection
3.2) supports the identification of a Conceptual Data Model that includes the concepts (objects in
OCEL 2.0) of the domain addressed in the operational processes and the relationships between
those concepts gathering the information that flows in the control or material/resource flows.
If no Process Model exists for a given data set, it has to be designed using techniques such as
process discovery. The Process Model depicted in Figure 3 includes six objects: Order, Customer,
Employee, Package, Item, and Product. When the behavior of the Process Model, regarding its
activities, is represented as relationships, Figure 4 depicts the obtained Class Diagram. This
model is abstracted and represented using an approximation of the Unified Modelling Language
(UML) Class Diagram notation. The representation of the classes has been simplified using a
square to represent the main concepts and no attributes have been specified to improve the
readability of the model. In this model, an extended version from [11], two types of relationships
are considered: those that result from the activities of the supporting operational processes
and the way they were defined and articulated, and those relationships that relate common
concepts of the domain. Additionally, this model makes explicit an association class between
Item and Product, as the data contains the evolution of prices over time.</p>
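        <p>As an illustration of the association class between Item and Product, the following hypothetical sketch (class names and attributes are assumptions; the full model also covers Order, Customer, Employee, and Package) shows how the Item-Product link can carry a time-dependent price:</p>
        <preformat>
```python
from dataclasses import dataclass, field

@dataclass
class Product:
    product_id: str
    # price history as (valid_from_day, price) pairs; prices rise over time
    price_history: list = field(default_factory=list)

    def price_on(self, day):
        """Return the price in force on a given day."""
        current = None
        for valid_from, price in sorted(self.price_history):
            if valid_from > day:
                break
            current = price
        return current

@dataclass
class Item:
    item_id: str
    product: Product
    ordered_on: int

    def charged_price(self):
        # association-class behaviour: the price depends on when the
        # item was ordered, not only on the product
        return self.product.price_on(self.ordered_on)

p = Product("p1", price_history=[(0, 10.0), (30, 12.0)])
print(Item("i1", p, ordered_on=45).charged_price())
```
        </preformat>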
        <p>With the Conceptual Data Model and to identify the Analytical Data Model, it is essential to
define the Analytical Requirements. These will guide the identification of the logical model of the
data system to be implemented. In this running example, the implementation of a DW system
is considered, justifying an Analytical Data Model represented by a multidimensional
model with fact tables, their level of detail, their dimension tables, and their relationships. The
Analytical Requirements must consider the business processes addressed in the Conceptual Data
Model, namely Manage Orders, Manage Packages, and Manage Prices, and the operational
processes that support the daily operation of the organization (activities and their events). To
effectively integrate the business and the operational processes, the key question to be addressed
is: which analytical requirements should be considered? This paper highlights the use of i*, as this
allows the specification of Strategic, Decision, and Information Goals [15] when implementing
a DW system. Analytical requirements cannot focus exclusively on Strategic Goals (SGs) for
the business processes, but must take into account that SGs might be supported by Decision
Goals (DGs) with business and operational concerns. Consequently, the Information Goals (IGs)
will focus on the integrated analysis of business indicators and operational indicators.</p>
        <p>As an example, consider an SG associated with Orders, such as Decrease Processing Costs. To
achieve this goal, with this integrated perspective between the business and supporting activities,
DGs can address decreasing the number of packages, the processing time, and the shipping time.
While from the business perspective decreasing the number of packages will decrease costs,
from the operational perspective it requires more efficient activities in confirming orders (as
this will speed up the warehousing and shipping operational processes), creating packages, and
picking items, among others. As shown in Figure 5, the defined DGs are supported by several
IGs. For each IG, the ReqViz framework [16] is used to obtain the most suitable analytical
visualizations that will support the decision-making process. To achieve this, we complete
the requirements model (Figure 5) with the visualization context (represented in yellow for IG
17). This context includes the goals that the data visualization aims to reflect (VG), the type
of interaction that users will need to have with the visualization (IT), and the elements from
the DW that will populate the visualization. These are represented in categories (Cat) and
measures (M). It should be noted that both the VGs and the ITs are chosen from a predefined
list by following the guidelines presented in [16].</p>
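        <p>The visualization context attached to an IG can be recorded as a simple structure. The values below follow the context described for IG 17 in the text (VGs Comparison and Trend, IT Overview, categories Month and Phase, measure Duration); the encoding itself is an illustrative assumption, not part of ReqViz:</p>
        <preformat>
```python
# Hypothetical encoding of a ReqViz visualization context; keys follow the
# paper's abbreviations: VG (visualization goal), IT (interaction type),
# Cat (categories), M (measures).
ig17_context = {
    "information_goal": "IG 17",
    "VG": ["Comparison", "Trend"],   # chosen from ReqViz's predefined list
    "IT": "Overview",                # likewise predefined
    "Cat": ["Month", "Phase"],       # DW dimensions feeding the chart
    "M": ["Duration"],               # DW measures feeding the chart
}

def is_complete(context):
    """A context is usable only when every ReqViz element is filled in."""
    required = ("VG", "IT", "Cat", "M")
    return all(context.get(key) for key in required)

print(is_complete(ig17_context))
```
        </preformat>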
        <p>With the identified analytical requirements (Figure 5), it is now possible to derive the Analytical
Data Model for the DW system. Although the proposed approach is here specifically used for
this data system, its use is not mandatory. Different data systems can be used, as long as their
modeling principles are followed. In a DW, we need to consider the inclusion of fact tables
and dimension tables [17]. The fact tables address the business processes at different levels of
detail. As previously mentioned, in the Conceptual Data Model we identified Manage Orders,
Manage Packages, and Manage Prices, but from the operational perspective, the Manage
Activities process needs to be considered, facilitating the analysis of highly detailed data about
the activities and events of the business processes. Considering the analytical requirements and
the need to derive a multidimensional model for the supporting DW, Figure 6 shows how the
different IGs, the objects included in the process model, and the timestamp object implicit in the
events can support the identification of the required fact tables and their indicators. This
mapping of the specified IGs and objects also supports the association of the dimension tables
with the identified fact tables. As shown in Figure 6, for each IG, there is the need to identify the
indicator/measure to be considered, how it is computed in case it needs to be derived, and the
aggregation function that should be used for analysing the data.</p>
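        <p>Such a mapping can be sketched as a table from each IG to its fact table, measure, and aggregation function; the entries and helper below are illustrative assumptions, not the actual assignments of Figure 6:</p>
        <preformat>
```python
# Hypothetical IG-to-measure mapping: fact table, measure, and the
# aggregation function used when analysing the data.
ig_to_measure = {
    "IG 12": {"fact": "Orders", "measure": "out_of_stock", "agg": "rate"},
    "IG 16": {"fact": "Activities", "measure": "picking_time", "agg": "avg"},
}

def aggregate(values, agg):
    """Apply the aggregation function named in the mapping."""
    if agg == "avg":
        return sum(values) / len(values)
    if agg == "rate":  # share of truthy flags, e.g. out-of-stock items
        return sum(1 for v in values if v) / len(values)
    raise ValueError(agg)

# e.g. four ordered items, one of them out of stock
print(aggregate([True, False, False, False], "rate"))  # 0.25
```
        </preformat>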
        <p>The formalization of the Analytical Data Model considers the knowledge explicit in Figure 5
and the detail of the Conceptual Data Model (Figure 4), as the granularity of the fact tables may
impose different relationships between the fact and dimension tables or different indicators.</p>
        <p>For instance, as can be seen in Figure 7, since the Orders fact table includes one record per order,
there is the need to include global indicators of the orders, such as the number of reminders
sent, the number of days the customer took to pay, or the number of days to confirm the order.</p>
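        <p>The roll-up from event-level data to such order-level indicators can be sketched as follows. The event records and the reference points for the day counts (e.g., counting the days to pay from order confirmation) are illustrative assumptions:</p>
        <preformat>
```python
from datetime import date

# Hypothetical events for a single order; the Orders fact table holds one
# row per order, so these must be rolled up into global indicators.
order_events = [
    ("place order", date(2024, 1, 2)),
    ("confirm order", date(2024, 1, 5)),
    ("payment reminder", date(2024, 1, 12)),
    ("payment reminder", date(2024, 1, 20)),
    ("pay order", date(2024, 1, 25)),
]

def order_indicators(events):
    when = {}
    reminders = 0
    for activity, day in events:
        if activity == "payment reminder":
            reminders += 1
        else:
            when[activity] = day
    return {
        "reminders_sent": reminders,
        "days_to_confirm": (when["confirm order"] - when["place order"]).days,
        "days_to_pay": (when["pay order"] - when["confirm order"]).days,
    }

print(order_indicators(order_events))
```
        </preformat>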
        <p>Now that we have analytical tasks and the supporting DW, the visualizations that will support
the decision-making process can be developed. As [16] states, the type of analysis (AT) element
(Figure 5) determines the visualizations grouped within the same dashboard. Consequently,
these analytical requirements indicate that three dashboards are required. However, due to space
constraints, we will focus on the dashboard developed for the SG Decrease Processing Costs.</p>
        <p>Having the visualization context for each IG defined (yellow part of Figure 5), we can identify
the most suitable visualization type that will compose the dashboard. In this running example,
we exemplify the visualization context and suitability scores for IG 17. To determine the best
visualization type for achieving IG 17, the authors in [18] establish a classification system based on 7
coordinates, which are represented in Table 1. First, the (1) User is classified as Tech because the
decision-maker has skills in data visualization. If the user had trouble understanding complex
data visualizations, it would have been classified as Lay. Next, the (2) VG and (3) IT come
from the Analytical Requirements (Figure 5). The VGs have been defined as Comparison, as
the purpose is to compare values, and as Trend, so that similarities and dissimilarities can be
identified. The IT has been defined as Overview to provide an overview of the entire data
collection. Then, the (4) Dimensionality is established as n-dim since there are more than two
variables to visualize, and the (5) Cardinality as Low since the data contains few items to represent.
Next, the (6) Independent type (referring to Month and Phase) is defined as Nominal, as it is
qualitative and each variable is assigned to one category. Last, the (7) Dependent type (referring
to Duration) is defined as Ratio since it is quantitative with a unique and non-arbitrary zero
point. After defining the 7 coordinates that represent the visualization context, we applied
the suitability function presented in [18]. This function evaluates the degree of suitability (fit,
acceptable, discouraged, unfit) of each visualization type for each coordinate value.</p>
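        <p>Such a suitability evaluation can be sketched as follows; the per-coordinate verdicts below are illustrative placeholders (the actual scores are those of [18] and Table 1), and the overall suitability is taken here as the worst verdict across coordinates:</p>
        <preformat>
```python
# Suitability levels from [18]; the numeric values are an assumption
# made here so that verdicts can be ranked.
LEVELS = {"fit": 3, "acceptable": 2, "discouraged": 1, "unfit": 0}

# Illustrative verdict per (chart, coordinate); unlisted pairs default
# to "acceptable".
VERDICTS = {
    ("multiple line chart", "VG:Trend"): "fit",
    ("multiple line chart", "Dim:n-dim"): "fit",
    ("bar chart", "VG:Trend"): "discouraged",
    ("scatter plot", "Card:Low"): "discouraged",
}

def suitability(chart, coordinates):
    """Overall suitability: the worst verdict across all coordinates."""
    return min(
        LEVELS[VERDICTS.get((chart, c), "acceptable")] for c in coordinates
    )

# The 7 coordinates of IG 17's visualization context, as set in the text.
coords = ["User:Tech", "VG:Trend", "IT:Overview", "Dim:n-dim",
          "Card:Low", "Indep:Nominal", "Dep:Ratio"]
ranked = sorted(
    ["multiple line chart", "bar chart", "scatter plot"],
    key=lambda chart: suitability(chart, coords),
    reverse=True,
)
print(ranked[0])
```
        </preformat>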
        <p>We present three types of visualizations for brevity, though all were evaluated. Based on
the suitability scores in Table 1, the most appropriate visualization for IG 17 is a Multiple Line
Chart.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Running Example Evaluation and Discussion</title>
      <p>In this section, we evaluate and discuss the effectiveness of the proposed methodology through
the analysis of the obtained dashboard. This dashboard is designed to support the analytical
tasks identified in the methodology and demonstrate how the DGs are effectively supported.</p>
      <p>Overall, the dashboard (Figure 8) presents insights into the evolution of package and item
quantities over time (IG 13 and IG 14), out-of-stock (IG 12) and failed deliveries (IG 20) rates,
processing times at various stages of the order fulfilment process (IG 15, IG 16, IG 18 and IG
19), and the evolution of package creation time and out-of-stock days for items (IG 17). These
visualizations provide key insights into organizational performance and process efficiency,
highlighting how different factors interact and affect overall operational effectiveness.</p>
      <p>Figure 8 shows the relationship between the IGs defined above. IG15, IG16, IG18, and
IG19 compare the average days taken for diferent stages of order processing, such as order
confirmation, item picking, order sending, and package creation. This comparative analysis
reveals that the time taken to create a package is significantly higher, indicating a bottleneck
in the process. This bottleneck is further examined through IG17, which shows the impact of
out-of-stock items on package creation time. The correlation identified here demonstrates how
inventory issues can lead to delays in package processing, contributing to overall inefficiency.
By addressing these stock-out issues, the organization can streamline package creation, thereby
reducing overall processing times. Although the costs of each process are not reflected in
the data, it is assumed that this reduction in time results in lower costs for the organization.
These insights directly contribute to the SG of decreasing processing costs. The organization can
implement targeted improvements by addressing the inefficiencies highlighted in IG15, IG16,
IG18, IG19, and IG17. Reducing the time taken for package creation, minimizing stock-outs,
and improving inventory management are key steps toward achieving this strategic goal.</p>
      <p>From an operational perspective, achieving this SG does not require a complete redefinition
of the process model but may involve adjusting specific elements. For example, introducing a
“Stock Verification” activity before the “Confirm Order” step could enhance process efficiency.
This adjustment ensures that inventory levels are checked before accepting orders, which can
help reduce package creation time, as it guarantees that there is sufficient stock for all accepted
orders. Such targeted modifications can improve overall operational effectiveness without
requiring a comprehensive overhaul of the existing process model. As Figure 8 shows for IG 12,
the global rate of out-of-stock items is ≈21%, complementing the findings highlighted for IG 17.</p>
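      <p>The proposed “Stock Verification” gate before “Confirm Order” can be sketched as a pre-check
that rejects orders whose items cannot be fulfilled from current stock. The function names and
data shapes below are illustrative assumptions, not part of the evaluated process model.</p>

```python
def verify_stock(order_items, inventory):
    """Return the items (and quantities) that cannot be fulfilled from stock."""
    return {item: qty for item, qty in order_items.items()
            if inventory.get(item, 0) < qty}

def confirm_order(order_items, inventory):
    """Accept an order only when every item is in stock (the proposed gate)."""
    missing = verify_stock(order_items, inventory)
    if missing:
        return ("rejected", missing)
    # Reserve the stock so package creation never waits on availability.
    for item, qty in order_items.items():
        inventory[item] -= qty
    return ("confirmed", {})
```

Placing the check before confirmation means every accepted order is immediately packageable,
which is the mechanism by which the adjustment would reduce package creation time.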
      <p>IG 20 points to another inefficiency in the operational process: a failed-delivery rate above
27% (Figure 8). Failed deliveries not only incur the financial cost of re-sending the packages but
also negatively impact the image of the organization through delays in delivering the products
to customers.</p>
      <p>For IG 13 and IG 14, while the number of packages sent remains stable (IG 13 - blue line),
the number of items per package varies significantly (IG 14 - red line). This suggests that
some packages may be sent with only a few items, likely due to stock-outs. Filtering by the
highest average number of items per package shows that around 16% of the products were
out of stock, while for the lowest average this value rises to around 22%, highlighting the
influence of stock-outs on package composition. To avoid this, the items’ availability could be
communicated to customers when confirming orders, avoiding sending multiple packages for the
same order. Although this may slightly delay delivery, the reduction in shipping costs and the
adoption of more sustainable policies are clear benefits. It would also be important to monitor
the impact of these changes on customer satisfaction and behavior, especially concerning the
volume and frequency of orders. For urgent products, separate shipping of packages can be
maintained, ensuring operational benefits without compromising the customer experience.</p>
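      <p>The comparison of stock-out rates across package sizes (IG 12 versus IG 14) can be sketched
as a grouped rate computation. The records and the size threshold below are hypothetical and do
not reproduce the 16% and 22% figures from the dashboard.</p>

```python
# Hypothetical per-package records: (items_in_package, items_out_of_stock_for_order).
packages = [
    (10, 1), (9, 2), (10, 1),   # larger packages
    (2, 1), (3, 1), (2, 0),     # smaller packages, often split by stock-outs
]

def out_of_stock_rate(records):
    """Share of ordered items that were out of stock for the given packages."""
    items = sum(n for n, _ in records)
    missing = sum(m for _, m in records)
    return missing / items

# Compare the rate for larger vs. smaller packages (threshold is illustrative).
large = [r for r in packages if r[0] >= 5]
small = [r for r in packages if r[0] < 5]
print(round(out_of_stock_rate(large), 2), round(out_of_stock_rate(small), 2))
```

A higher stock-out rate among small packages, as in this sample, would support the claim that
stock-outs drive package fragmentation.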
      <p>Without this analytical approach, the nuanced insights into bottlenecks and inefficiencies
would remain unknown, hindering the ability to make targeted improvements. The
methodology and dashboard analysis have been crucial in identifying specific areas for operational
enhancement that would otherwise go unrecognized. This evaluation of the proposed approach
highlighted how highly detailed operational data can complement business process data,
enhancing decision-making that targets specific SGs. From the analytical perspective, these SGs
aim for descriptive, diagnostic, or prescriptive analyses for DGs that enhance efficiency and
efficacy. The proposed approach guides these improvements by considering IGs for the analysis
of specific business or process indicators. As seen in Figure 8, all IGs were properly integrated
into a dashboard considering best practices for analytical visualization.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>This paper proposed a methodological approach to integrate BA and PA to enhance data-driven
decision-making within organizations. By systematically combining these analytical domains,
our approach enables a deep understanding of how process inefficiencies impact business
performance and vice versa, providing a comprehensive view of organizational operations and
improving decision-making processes. The outlined methodology includes structured steps
such as process modeling and understanding, conceptual data modeling, defining analytical
requirements, developing an analytical data model, and designing effective visualizations. As
a contribution, the proposed steps are valuable for systematically transforming raw process
data into useful information and identifying inefficiencies and opportunities for improvement.</p>
      <p>The approach was evaluated through a dashboard that demonstrates its effectiveness in
supporting analytical tasks and achieving strategic goals. The dashboard offered insights into
several operational indicators, such as package and item quantities, out-of-stock rates, failed
deliveries, and processing times, uncovering bottlenecks and inefficiencies that, when addressed,
could significantly enhance operational performance and reduce processing costs.</p>
      <p>Our findings emphasize the importance of integrating business and operational data to
uncover hidden inefficiencies and support informed decision-making. The dashboard analysis
showcased how detailed operational data complements business process data, providing a
nuanced understanding of organizational dynamics. The proposed approach and resulting
visualizations illustrate how targeted improvements can be achieved without overhauling
existing process models, thus enhancing overall efficiency and effectiveness.</p>
      <p>For future work, simulation techniques will be used to generate additional data for the process.
This will allow, for instance, the analysis of the monetary cost of processes and their activities.
Moreover, current requirements can be further detailed to better highlight the relationships
between objects and activities. The proposed approach can also support process enhancement
with process mining or simulation, facilitating the quantification and evaluation of the benefits
of the proposed changes. Finally, although the proposed approach was tested using a classic
example, its underlying principles are adaptable to a wide range of domains. Testing the
approach in complex real-world contexts and diverse industries will further demonstrate and
evaluate its comprehensiveness and practical applicability.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the R&amp;D
Units Project Scope: UIDB/00319/2020, by the AETHER-UA project (PID2020-112540RB-C43),
funded by the Spanish Ministry of Science and Innovation, and by the BALLADEER
(PROMETEO/2021/088) project and the CIBEST/2022/122 grant, both funded by the Conselleria de
Innovación, Universidades, Ciencia y Sociedad Digital (Generalitat Valenciana). This paper uses
icons made available by www.flaticon.com.</p>
    </sec>
  </body>
  <back>
  </back>
</article>