<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Compliance Checking in Action for INGKA Group Inventory Management</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alessio Galassi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Barbara Re</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Reimann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Rossi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IKEA IT Germany GmbH, Capability Area Inventory and Logistics Operations</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Camerino, School of Science and Technology</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Inventory management is crucial to companies' competitiveness, especially in large-scale scenarios with many distributed departments. In this context, business intelligence solutions support companies through interactive dashboards, giving a quantitative view of their operational status. Process mining, particularly compliance checking, enhances the data analysis by aligning the resulting insights with actual process executions. This paper reports on the experience of applying compliance checking to inventory management in the INGKA Group stores. We addressed this real-world problem methodically by taking inspiration from the Process Mining Project Methodology to encourage iterative analysis and close collaboration between analysts and stakeholders. The company has systematically adopted our analysis. It provided process-oriented insights that helped make informed decisions and drove the implementation of process enhancements. The experience raised awareness of adopting compliance checking in practice and has been an opportunity to move the research in process mining alongside the business.</p>
      </abstract>
      <kwd-group>
        <kwd>Inventory management</kwd>
        <kwd>Compliance checking</kwd>
        <kwd>Decision support</kwd>
        <kwd>Insights to action</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Inventory management plays a crucial role in logistics and warehouse operations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], offering significant
potential to enhance efficiency and customer satisfaction. It is essential for maintaining clear visibility of
goods moving through warehouses and retail outlets [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This is particularly challenging in large-scale
scenarios, considering companies with many warehouses and stores spread across different locations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        In this regard, business intelligence solutions support companies in producing interactive dashboards,
giving a quantitative view of the operational status in inventory management [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. However, these
solutions provide analytics without the process-oriented perspective that process mining and, in particular,
compliance checking offer [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. Getting valuable process-oriented insights via compliance checking
allows companies to make informed decisions and implement actions for process enhancement [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        In this paper, we present an experience dealing with compliance checking applied in a real large-scale
inventory management process coming from INGKA Group (www.ingka.com), a leading multinational
home furnishing retailer controlling 379 stores worldwide. The experience lasted six months and
focused on answering four business challenges in the Stock Check (SC) process. SC is performed daily
in the INGKA Group stores and produces, on average, 450,000 inventory-related events per day. The
experience was inspired by the Process Mining Project Methodology (PM2) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for structuring the phases
of the working method since the methodology is designed to support process mining projects.
      </p>
      <p>As an outcome of the experience, the company advanced its operations by applying follow-up actions
guided by the analysis results, so much so that the company systematically adopted our solution into
their systems to continuously check the compliance of the SC process. In addition, the experience
allowed us to deepen our awareness of applying compliance checking in practice through the lessons
learned in each phase.</p>
      <p>The rest of the paper is organized as follows. Section 2 describes the process under analysis and the
problem addressed in this work. In Section 3, the research and methodological background is presented.
Section 4 reports on the experience carried out with the company. Key findings are illustrated in Section 5,
while the work’s relevance is discussed in Section 6. Finally, Section 7 summarizes concluding remarks.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem Statement</title>
      <p>This section defines the problem addressed by the study, outlining the SC process and the specific
compliance challenges encountered, set against the broader context of increasing regulatory demands
and operational complexity in industrial environments.</p>
      <p>
        Stock Check process. The process under analysis is the SC, a process adopted in all INGKA Group
stores. SC is devoted to ensuring stock accuracy, thus enabling the company to know how many articles
of each kind are present in a store. Articles may be located in different places in a store, and they are
counted periodically. As stores are organized differently, they implement the SC differently. Some
of them implement (full or partial) automation in their warehouse. In such cases, stock accuracy is
positively affected, especially in comparison to stores that work only manually, which are instead
more error-prone. For all the stores, it is common to spend a significant amount of person
hours, which involves a cost in the store budget. Two types of SC exist: Type 1 and Type 2. Type 1
is initiated on purpose when an inconsistency in article count is spotted. Type 2 is initiated daily on
hundreds of articles; it starts in the evening when the list of articles to check is prepared and terminates
before the store opens. Various automatic and manual triggers add articles to the list for a SC. The
involved employee receives the list of articles to be counted and the related locations that hold the
articles. Usually, locations are of two kinds: the sales location, which refers to the showroom accessible
by the customer, and the buffer location, which refers to the store’s warehouse. During the process, the
employee checks if the count of a specific article at a particular location equals the quantity registered
in the system. Otherwise, the quantity is updated with the observed value. In particular cases, the
article quantity is assumed to be correct without an actual count performed by the employee; these are
referred to as auto count activities. The information systems that support process execution register each
manual and automated activity. These data can be retrieved to construct the process’s event log.
Compliance checking in action. Compliance checking approaches check the adherence of processes
to laws, regulations, and standards and measure process effectiveness [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. They differ in the strategy used,
such as a priori, run-time, and post-mortem. We focused on the last one, which serves as an
after-execution analysis method to detect possible violations in past process executions (it is also known
as auditing [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]). We conducted it internally, i.e., with the process owner; it is instead external when
made by an independent party [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. The analysis of process executions is a crucial activity in several
application domains, and compliance checking techniques represent a powerful approach to addressing
the problem. The main objective, from the company’s point of view, was to answer highly specific
business challenges on the SC. To obtain valuable results, we acquired the SC process data from the
company’s data warehouse and preprocessed it to get the event logs. We organized dedicated workshops
with the company’s stakeholders to define rules by moving from the informal description of the business
challenges to structured compliance rules that could be checked against the event logs. To check SC
process compliance, we analyzed the event logs with ad-hoc scripts due to the lack of standardized
approaches and tools that support compliance checking in practice [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and the company’s need for
fast and incremental analysis.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Research/Methodological Background</title>
      <p>
        This section presents the research and methodological background underlying this work.
Related literature. Compliance checking is widely studied in the literature [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11, 12</xref>
        ] since it
permits obtaining precise insights and performing the right actions.
      </p>
      <p>[Figure 1: Overview of the working method. Business challenge definition, Data acquisition and preprocessing, and Rules definition are iterated until the rules are complete enough, followed by the Analysis; the phases connect the business challenge, the data warehouse, the event log, the rules, and the results.]</p>
      <p>
        Concerning its application in real scenarios, Becker et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] propose a generic approach consisting of a model query language and a
search algorithm implemented in a modeling tool for business process compliance checking. They
also provide an evaluation via an applicability check in the financial sector. Longo et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] outline a
methodology to improve clinical risk monitoring and workflow digitalization, emphasizing advanced
analytics to enhance patient safety and optimize healthcare processes. In contrast, Jans et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]
demonstrate the relevance of process mining techniques concerning conventional auditing procedures.
Finally, Dijkman et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] suggest incorporating unexpected exceptions into the process, making them
expected exceptions, so that they can be recognized and handled, allowing for continuous monitoring.
Methodology. The working method covered the phases depicted in Figure 1. Its structure is inspired
by the PM2 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], designed to support process mining projects by encouraging iterative analysis and close
work between analysts and business people, both pillars of our experience. The first phase was Business
challenge definition, to list the objectives of the analysis. Then, we performed Data acquisition and
preprocessing to refine the event data and to assess their reliability, followed by Rules definition to map
the business challenges into rules to be checked. These two phases were repeated until we agreed with
the process owner and stakeholders on the completeness of the rules. Finally, we performed Analysis on
the event logs, which consisted of checking the process execution against the specified compliance rules
and providing the most meaningful visualization to be presented to process owners and stakeholders.
Each phase of the working method involved weekly meetings to discuss the latest findings. Moreover,
on-purpose meetings were planned to solve urgent issues.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Compliance Checking for the INGKA Group</title>
      <p>In this section, we describe the experience and its phases as outlined by the methodology, discussing
the approach used and the decisions taken in developing the solution.</p>
      <p>Business challenge definition. The business challenges formed the foundation of the experience,
defining the problem to be solved throughout the subsequent phases. They have been discussed among
the participants and defined clearly to ensure a common understanding of the aspects to be investigated.
All the challenges reflected aspects of interest in the SC process. The four challenges are reported below.</p>
      <p>The first business challenge (BC-1) aimed at measuring the effectiveness of automated tasks. The company
wanted to discover if automated activities were repeated manually and how often this happened.
Moreover, they wanted to discover if the results of the automated activity were consistent with the
results of the manual task. The second challenge (BC-2) had the objective of checking the process termination
time with respect to the different SC triggers. The company asked what the most common trigger was for
late process termination. The third challenge (BC-3) aimed at measuring the planning efficiency of
the SC. In this case, the company asked how often articles removed from a SC were eventually counted
within two days. To conclude, the fourth business challenge (BC-4) had the objective of measuring the
effort employed in the process before canceling an article from a SC. The company wanted to know the
typical process behaviors leading to the cancellation of an article from the SC.</p>
      <p>Data acquisition and preprocessing. This phase was essential to obtain the right event logs to
address the business challenges. Indeed, defining the data to be used in the analysis was fundamental for
understanding what could be analyzed. Preprocessing was equally important in ensuring a consistent
dataset regarding field definitions, date formats, and noise filtering.</p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Structure of the extracted dataset.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Description</th>
              <th>Example value</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>the group the activity belongs to; it can be count (CNT), check (CHK), location (LOC), or article (ART)</td><td>CNT</td></tr>
            <tr><td>the unique identifier of a business unit</td><td>123</td></tr>
            <tr><td>the inventory date in the format “YYYYMMDD”</td><td>20030819</td></tr>
            <tr><td>the unique identifier of a SC</td><td>456</td></tr>
            <tr><td>the identifier of an article</td><td>789</td></tr>
            <tr><td>the timestamp of the event</td><td>19-08-2003 08:15:00</td></tr>
            <tr><td>the activity name</td><td>CHECK_CREATED</td></tr>
            <tr><td>the unique identifier of the physical location where the article is</td><td>BUFF:27</td></tr>
            <tr><td>the article quantity before the count activity</td><td>0.0</td></tr>
            <tr><td>the article quantity after the count activity</td><td>4.0</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>
        The data was acquired by accessing the INGKA Group data warehouse, where the information
systems dump event data. We did not encounter particular issues regarding data availability since
the warehouse offers a sufficiently wide spectrum of data regarding the SC process to reconstruct it
and address the business challenges. We extracted the dataset structured as described in Table 1. We
constructed the event log structure by identifying the main concepts, i.e., timestamp, activity name,
and case identifier. In more detail, the field event_dttm corresponds to the timestamp of the event,
the field EventName to the activity name, and the union of fields sc_id and art to the case identifier,
while the other fields are generic attributes associated with the event. Regarding the amount of data
used in the next phases, we extracted a sample dataset to comprehend the data structure and spot
anomalies, e.g., missing events and wrong data formats. Following the process owner’s suggestion, we
selected a representative set of stores and a time window sufficiently large to capture any possible SC
process implementation. In more detail, we performed the extraction for 32 stores that differ in stock
accuracy and level of automation, over a suitable period (which we cannot disclose), resulting in 3,174,689 events.
Another essential step was to check the quality of the acquired data to assess the presence of issues
to be solved with preprocessing [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. We performed the inspection of the data using process mining
tools such as Disco (fluxicon.com/disco) and PMTK (processintelligence.solutions/pmtk), observing the
discovered Directly-Follows Graph (DFG) and the variants of the process. In particular, the discovered
DFGs allowed the sharing of pictures of the actual process with the company.
      </p>
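The mapping from dataset fields to event log concepts can be sketched as follows. This is a minimal illustration, not the actual extraction code: only the field names event_dttm, EventName, sc_id, and art come from the paper, while all values and the activity labels are invented.

```python
from datetime import datetime

# Illustrative events following the dataset structure; values and activity
# labels are made up for this sketch.
events = [
    {"sc_id": 456, "art": 789, "event_dttm": "2003-08-19 08:20:00", "EventName": "COUNT_MANUAL"},
    {"sc_id": 456, "art": 789, "event_dttm": "2003-08-19 08:15:00", "EventName": "CHECK_CREATED"},
    {"sc_id": 456, "art": 790, "event_dttm": "2003-08-19 08:16:00", "EventName": "CHECK_CREATED"},
]

def build_traces(events):
    """Group events into traces, using sc_id plus art as the case identifier."""
    traces = {}
    for e in events:
        case_id = f"{e['sc_id']}:{e['art']}"
        traces.setdefault(case_id, []).append(e)
    # Order each trace chronologically by the event timestamp.
    for trace in traces.values():
        trace.sort(key=lambda e: datetime.strptime(e["event_dttm"], "%Y-%m-%d %H:%M:%S"))
    return traces

traces = build_traces(events)
print([e["EventName"] for e in traces["456:789"]])  # → ['CHECK_CREATED', 'COUNT_MANUAL']
```

Each resulting trace then corresponds to one check of one article within one SC, which is the granularity at which the compliance rules are evaluated.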
      <p>The main weaknesses we found regarded (i) the format of timestamps, (ii) event redundancy, and
(iii) event swaps. The event data exhibited heterogeneous timestamp formats due to the diversity of
information systems and global store locations. To standardize the data, timestamps were converted
to the format “YYYY-MM-DD hh:mm:ss.ssssss,” preserving the original timezone based on the store’s
country, identified via the business unit code. Regarding event redundancy, multiple events with
identical field values were observed, stemming from an information system that recorded certain
events multiple times. These duplicates, offering no additional information and potentially distorting
process variant interpretation, were removed. Regarding event swaps, several unexpected activity
orderings were identified in the DFG. It was determined that these anomalies resulted from events being
recorded with identical timestamps. With the process owner’s agreement, a millisecond was added to
the subsequent activity’s timestamp to resolve the issue, enhancing the alignment of event logs with
the actual process while maintaining data reliability. The DFGs discovered after the preprocessing were
the focus of many discussions with the company. The process owner and the stakeholders got closer
to the context of process mining by experiencing the data in a process-oriented manner, so that they could
discuss and define rules to be checked. Summing up, we obtained 342,806 cases from the data acquisition
and preprocessing, with a mean duration of 9.5 hours and a total of 39 distinct activities forming 24,294
execution variants.</p>
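The duplicate removal and millisecond tie-break described above can be sketched as follows. This is a minimal illustration assuming events already grouped per trace with parsed timestamps; the activity labels are invented, and duplicates are keyed here on timestamp and activity only, whereas the actual check covered all field values.

```python
from datetime import datetime, timedelta

def preprocess(trace):
    """Drop redundant events and break timestamp ties within one trace."""
    # Remove exact duplicates: events whose fields coincide add no
    # information and distort the interpretation of process variants.
    seen, deduped = set(), []
    for e in trace:
        key = (e["ts"], e["EventName"])
        if key not in seen:
            seen.add(key)
            deduped.append(e)
    # When two consecutive events share a timestamp, shift the later one by
    # a millisecond so the recorded ordering is preserved in the event log.
    for prev, curr in zip(deduped, deduped[1:]):
        if curr["ts"] <= prev["ts"]:
            curr["ts"] = prev["ts"] + timedelta(milliseconds=1)
    return deduped

raw_trace = [
    {"ts": datetime(2003, 8, 19, 8, 15), "EventName": "CHECK_CREATED"},
    {"ts": datetime(2003, 8, 19, 8, 15), "EventName": "CHECK_CREATED"},  # redundant copy
    {"ts": datetime(2003, 8, 19, 8, 15), "EventName": "COUNT_MANUAL"},   # timestamp collision
]
clean = preprocess(raw_trace)
print([(e["EventName"], e["ts"].isoformat()) for e in clean])
```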
      <p>Rules definition. The increased awareness of the SC process derived from the DFGs prompted a
discussion on transitioning from the specified business challenges to formulating compliance rules to be
checked through event log analysis. Figure 2 shows an excerpt of a DFG extracted from a month of data
of one store where the company detected an undesired behavior. During the discussion, process owners
generally defined and highlighted the rules over the model as non-compliant paths. Rules to be checked
were finalized after several iterations of Data acquisition and preprocessing and Rules definition activities
until the set of rules was deemed complete enough to spot and quantify non-compliant behaviors. These
phases were essential to pass from an informal description of what the company wanted to check to
structured compliance rules, linking the business needs to the analysis solution.</p>
      <p>Concerning the first business challenge ( BC-1), we check whether, in the same trace, an automated
count activity was eventually followed by a manual count activity. In this case, an additional check
determines if the two activities had the same value in the field qty_after to spot a possible discrepancy.
Regarding the second challenge (BC-2), we check whether the SC process terminates before the store
opening hour, and we extract the distribution of SC triggers and types (type 1 or 2). Then, for the
third business challenge (BC-3), we check whether the article is deleted from a SC. The challenge is
mapped into four rules representing different SC behaviors. The first rule regards compliant cases
where the deleted article was re-included in another SC within two days without being deleted again.
In contrast, the following behaviors are classified as non-compliant. The second rule regards articles
re-included after two days without being deleted again. The third rule regards articles deleted more
than once before being re-included. The last rule regards articles never being re-included in a SC. In the
last business challenge (BC-4), we check the cases with articles deleted from the SC. In particular, we
measured the manual counting activities performed before the deletion. We also focused on spotting process
executions whose counting results were more likely to lead to an article deletion.
Analysis. Once we had the compliance rules that needed to be checked, we realized that a traditional
filtering activity would not be suitable for achieving a complete result due to the complexity of the rules. Therefore,
we decided to move to a tailored solution implementing ad-hoc scripts. This decision was also motivated
by the lack of a compliance checking tool powerful enough to address our business challenges. Moreover, the
development of ad-hoc scripts allowed us to have more control over the execution of the analysis and
helped to speed up the process. Thus, the defined compliance rules were translated into ad-hoc scripts
to perform the specific analysis for each business challenge. The general operation of the solution is
described as follows: each script takes as input the event logs of the SC and iterates through every
event included in all the cases. When the condition for a rule is met, the script checks the constraints
expressed by the related rule and builds the output, recording any non-compliant executions
or expected behaviors.</p>
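As a sketch of this script pattern applied to BC-1, the following checks whether an automated count is eventually followed by a manual count in the same trace and compares their qty_after values. The activity labels COUNT_AUTO and COUNT_MANUAL are hypothetical placeholders; only the qty_after field comes from the dataset description.

```python
def check_bc1(trace):
    """Flag traces where an automated count is eventually followed by a
    manual count, and whether their qty_after results disagree."""
    findings = []
    for i, event in enumerate(trace):
        if event["EventName"] == "COUNT_AUTO":  # hypothetical label
            for later in trace[i + 1:]:
                if later["EventName"] == "COUNT_MANUAL":  # hypothetical label
                    findings.append({
                        "repeated_manually": True,
                        "discrepancy": later["qty_after"] != event["qty_after"],
                    })
                    break
    return findings

sc_trace = [
    {"EventName": "COUNT_AUTO", "qty_after": 4.0},
    {"EventName": "CHECK_CLOSED", "qty_after": 4.0},
    {"EventName": "COUNT_MANUAL", "qty_after": 3.0},
]
print(check_bc1(sc_trace))  # → [{'repeated_manually': True, 'discrepancy': True}]
```

Aggregating such findings per store yields the counts behind the visualizations discussed later.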
      <p>
        During the meetings, in a highly collaborative context, we tried to extract meaningful insights
from the results or to understand how to refine the analysis, which was repeated until the results
provided a complete answer to the business challenges. Alongside this, as the visual representation
of process mining results has been discussed as a critical point of knowledge and insight transfer to
stakeholders [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], we carefully explored how to visualize them [
        <xref ref-type="bibr" rid="ref17">17, 18</xref>
        ]. In our case, we decided to use
simple visualizations such as stacked bar charts or tornado diagrams to easily quantify the
deviations observed in the analysis or to provide an immediate overview of the motivations behind the
non-compliant process executions. In the following, we describe the ad-hoc compliance checking solutions,
alongside the results obtained and the visualizations provided.
      </p>
      <p>The purpose of BC-1 was to measure the effectiveness of automated tasks. Our compliance checking
solution revealed that automated activities were repeated manually at very different frequencies from
store to store. Figure 3 shows the amount of automated counts on the left side and how many times they are
repeated manually on the right. We also observed in the data that most of these recounts are ordered
by the system. A significant percentage of manual recounts after automated counts show a discrepancy
in the count result. Figure 4 illustrates the amounts of recounts, i.e., the brighter bars on the left of each
cluster, and how many of them present a different stock-level quantity, i.e., the darker bars. The
percentage of recounts with different stock levels is considered high enough to raise concerns about the
general reliability of automated counting. Concerning BC-2, the objective was to check the
process termination time with respect to the different SC triggers. In general, type 1 SCs were used less
frequently than type 2 SCs. As for the completion at a precise time, the first SC of the day, whether
of type 1 or 2, did not finish at a regular time. Figure 5 shows the percentage of late and on-time first
SCs of the day. Some stores manage to respect the time constraints, while others do so only partially.
BC-3 aimed to measure the SC’s planning efficiency. In the end, four behaviors were identified to
provide a complete answer to the question. (Due to the non-disclosure agreement between the University
of Camerino and IKEA IT Aktiebolag, some information is omitted or replaced by approximate values; in
particular, the graphs in the visualizations in these paragraphs are populated with illustrative data, and
references to store identifiers have been masked.) In particular, all the stores partially adhere to the rule
of re-including articles deleted within two days without being deleted again, the first segment of the
stacked bar from the bottom in Figure 6. In the most frequent situations, those articles were: recounted
after two days without being deleted again, the second segment; deleted several times and recounted
(independently from the day passed), the third segment; or not recounted at all, the uppermost segment.
The analysis highlighted a general trend of “bad" SC planning, easily recognizable by the height of the
bottom segment, which never reaches the majority of the stacked bar in each store. The last, BC-4,
aimed to measure the effort employed in the process before deleting an article from a SC. The analysis
identified three different behaviors unevenly distributed from store to store (Figure 7). In particular, a
substantial share of articles was not counted before the deletion, i.e., the first segment of the stacked
bar from the bottom. A relevant share of articles was counted at most once in each location where
the article should be found, the second segment. Finally, in a small portion, the article was counted
multiple times in at least one location where it should be found, the uppermost segment. In addition, a
different behavior in counting results differentiated the cases with deletion (Figure 8a) from the others
without deletion (Figure 8b). The distributions show a significant increase in counting results
with deviations for cases with deletion with respect to cases without. There is no common trend among
the stores, but it can be easily deduced which stores behave better in terms of wasted effort. For those
that do not behave well, it can be inspected whether the reason for such a high amount of
wasted time comes from suspicious counting results.</p>
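The four BC-3 behaviors can be expressed as a simple classifier over the events that follow an article's deletion. This is a sketch under the assumption of illustrative activity labels (ARTICLE_DELETED, COUNT_MANUAL), not the company's actual event names or rule implementation.

```python
from datetime import datetime, timedelta

def classify_bc3(deletion_ts, later_events):
    """Map an article deleted from a SC to one of the four BC-3 behaviors.

    later_events: chronologically ordered (timestamp, activity) pairs
    observed for that article after the deletion; labels are illustrative.
    """
    recounts = [ts for ts, act in later_events if act == "COUNT_MANUAL"]
    if not recounts:
        return "never re-included"                       # rule 4, non-compliant
    if any(act == "ARTICLE_DELETED" and ts < recounts[0]
           for ts, act in later_events):
        return "deleted again before being re-included"  # rule 3, non-compliant
    if recounts[0] - deletion_ts <= timedelta(days=2):
        return "re-included within two days"             # rule 1, compliant
    return "re-included after two days"                  # rule 2, non-compliant

t0 = datetime(2024, 1, 1, 20, 0)
print(classify_bc3(t0, [(t0 + timedelta(days=1), "COUNT_MANUAL")]))  # → re-included within two days
print(classify_bc3(t0, []))                                          # → never re-included
```

Counting the classifier's outcomes per store gives the four segments of the stacked bars discussed above.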
    </sec>
    <sec id="sec-5">
      <title>5. Key Findings</title>
      <p>
        This section presents the key findings of the study, including the enhancement actions implemented by
the company to improve its process and the lessons learned regarding the methodology of conducting
process mining projects, providing valuable insights for both practitioners and researchers.
Enhancement actions. According to PM2 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], the final stage of a project is process enhancement. At
this stage, the data analysts usually do not take part in the discussion, since their duties are fulfilled
with the final presentation of the results; in some cases, however, their presence can be useful to provide
further insights into the results and support for the stakeholders’ reflections.
      </p>
      <p>In our experience, the decision-making process for enhancement changes was entirely carried
out by the company. It was supported by the results obtained in the Analysis phase that had been
comprehensively presented beforehand. In particular, the main results that stood out from the analysis
and drove the decision-making process were the following. Several insights about all business challenges
of the SC process, in addition to those included in the visualizations, e.g., in BC-2, type 1 SCs contained
mainly manually added articles. In contrast, type 2 SCs contained a more homogeneous distribution of
the automated daily inventory triggers. A relevant share of automatically counted buffer stock levels
was inaccurate; from BC-1, we saw that the automated count activities are not highly reliable; thus,
the SC quality could be affected by this finding. A relevant share of the SCs were not finished before
the due time; BC-2 revealed that this business policy was not respected in the majority of the stores,
which means overlapping tasks for coworkers and an outdated stock level at the time of store opening.
A huge number of non-compliant article checks per year; BC-3 and BC-4 revealed that many SCs are
not carried out following planning rules or recommended working practices.</p>
      <p>These findings triggered a series of enhancement actions that the company decided to implement.
In detail, the company defined six new follow-ups regarding the conditions that led to encountering a
non-compliant execution, e.g., implausible counting results that are the cause for the article deletion
from a SC, or more attention on registering trigger reasons of articles included in the SC. Another action
involved synchronizing the software configuration settings of stores in different countries that use the same
system to support SC execution. This alignment allows stores that perform well to serve as examples
for others. Then, the company defined new ways of working, work recommendations, and standard operating
procedures, e.g., when to use SCs of type 1 or recommendations on default values for system parameters.
Moreover, two updates to the management software were implemented, mainly regarding new KPIs for
specific process executions. Finally, three additional areas of business requirements for future software
solutions have been detected.</p>
      <p>All of these enhancement actions have culminated in the integration of the SC compliance checking
analysis into the company’s business intelligence systems. Consequently, our solution has been
systematically adopted and made available to the company, including all stores and warehouses, for daily use
and reference, recording an average of 1,000 accesses per week.</p>
      <p>Lessons learned. The experience brought various reflections on the methodology used and its phases.
For each phase, we put forward recommendations that could help in conducting similar projects.</p>
      <p>The Business challenge definition was the main precondition for driving the analysis and obtaining
results useful for deriving concrete actions to improve the process. This phase must provide specific
and addressable challenges to data analysts. Clarifying the scope of the analysis and fostering strong
coordination between the business and analysis teams would ensure that the entire experience thrives
and becomes highly effective. In our experience, we arranged purposeful meetings in which the teams
reasoned together. In such a way, a continuous team interaction has been established to support each
other at any time, also through a direct communication channel, even restricted to a few people. The
close collaboration and synchronization enabled the setting of feasible and relevant objectives.</p>
      <p>Regarding Data acquisition and preprocessing, the precondition is to acquire the right data
according to the scope of the analysis. A dataset that is too small (with a short time window or few
stores) may exclude data crucial for the analysis and undermine the reliability of the results.
An overly wide dataset slows down the rest of the analysis, especially when relying on non-scalable
algorithms. Additionally, a larger volume of data does not necessarily translate into more valuable
insights. Nonetheless, a balanced dataset may still not be sufficient to ensure representativeness.
Here, domain knowledge can help analysts choose which data (or stores) to analyze.</p>
      <p>Concerning the Rules definition, we have experienced how DFGs offer stakeholders an immediate and
intuitive understanding of the process. This type of model provides an as-is representation
of the process executions on which people can reason and take notes [19]. Frequency and performance
annotations on DFGs help easily identify outlier relationships or time-consuming executions, which
can then be included in further analysis or inspire new business challenges to check for compliance.</p>
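      <p>To make the frequency-annotation idea above concrete, the following minimal Python sketch counts
directly-follows pairs in a toy event log. The activity names and the log structure are invented for
illustration only and do not reflect the company’s actual data or tooling.</p>

```python
from collections import Counter

def dfg_frequencies(event_log):
    """Count how often each activity directly follows another across all cases.

    event_log: dict mapping a case id to its chronologically ordered list of
    activity names (a simplified stand-in for a real event log).
    Returns a Counter of (activity_a, activity_b) -> frequency.
    """
    dfg = Counter()
    for trace in event_log.values():
        # zip a trace with itself shifted by one to enumerate consecutive pairs
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical, simplified traces for illustration only.
log = {
    "case-1": ["Count", "Adjust", "Approve"],
    "case-2": ["Count", "Adjust", "Approve"],
    "case-3": ["Count", "Approve"],  # skips Adjust: a candidate outlier edge
}
frequencies = dfg_frequencies(log)
# ("Count", "Adjust") occurs twice, while ("Count", "Approve") occurs once
# and would stand out as a low-frequency (outlier) edge on the annotated DFG.
```

      <p>Edges whose counts are unusually low relative to their neighbors, like the trace that skips an
activity here, are exactly the outlier relationships that stand out when such frequencies are drawn on
a DFG.</p>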
      <p>Finally, the Rules analysis was particularly effective since the INGKA Group exploited insights
in a process-oriented manner for the first time. These insights informed the process changes, enabling
the kind of informed decision-making that makes the difference between traditional business intelligence
solutions and compliance checking analysis. Moreover, presenting the results was also useful in introducing
process mining-specific concepts to non-practitioners. We aimed at a visualization that is easy
to understand while still accurately representing the insights behind the compliance checking results.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Significance and Relevance</title>
      <p>This section reflects on the case study’s significance through the stakeholders’ feedback and project
retrospective, highlighting its impact on industry practices and academic discourse on compliance
checking.</p>
      <p>Stakeholders’ feedback. After the enhancement actions, we submitted a questionnaire to all the
company’s stakeholders to obtain feedback and a deeper awareness of what had been done (available at
tinyurl.com/363kmwup). In the following, we report some excerpts from the questionnaire responses to
discuss the relevance of the experience. Process mining enhanced decision-making by providing clear
visualizations that uncovered new insights into processes once thought to be well-known by experts.
Although experienced, their understanding of the SC process was limited and subjective. Process mining
allowed the company to grasp the full process and focus on specific cases, and was described as "the
data-driven complement to years of practical experience". When comparing process mining to business
intelligence, challenges emerged regarding the target audience, as process mining insights were primarily
accessible to those with strong process expertise. It was concluded that "process mining and data
intelligence complement each other". Although the company had implemented numerous performance measures
over recent years, most failed to address the qualitative questions of "how" and "why". Crucially,
guidance on the changes necessary for performance improvement was lacking. It was concluded that "Process
mining delivers new kinds of insights that enable great optimization opportunities".</p>
      <p>Project retrospective. The final retrospective was carried out by the core of the working group
involved in the project. The questions addressed during this activity referred to situations that team
members experienced and focused on what slowed down or powered up the project, what the team was worried
about or wished to have, and, lastly, how to address those concerns. The primary cause of slowdown was
the unprecedented scale of the problem, which was particularly challenging for the university members,
whose experience is rooted in academic environments with limited exposure to real-world, large-scale
scenarios. Technically, the large amount of data and the substantial effort required for data extraction
and preprocessing significantly hindered the execution process. What boosted the project was the team’s
strong collaborative spirit, which fostered high levels of individual commitment. Additionally, structured
coordination and a rigorous meeting schedule promoted continuous interaction, contributing positively to
the project’s development. Worries varied depending on individual roles; however, there was a common sense
of responsibility to deliver results that met the company’s expectations. Additionally, concerns emerged
about whether scientific and business goals could be effectively aligned to achieve the desired outcomes.
The wish list contained various items, including additional time and personnel, greater involvement of
stakeholders, opportunities to employ advanced techniques, and improved skills in communicating insights.
The discussion had a notably positive impact, allowing participants to reflect on the work completed,
identify strengths to build upon, and recognize weaknesses to address moving forward. A key proposal
that emerged from the retrospective was the establishment of a process mining unit within the company
to generate and explore new challenges. This initiative aims to promote innovative thinking and identify
opportunities at the intersection of academic research and business interests.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This paper reports on an experience of compliance checking on the INGKA Group’s inventory
management. The analysis was conducted on 32 of the 425 stores, covering a total of 3,174,689 events
belonging to the SC process. The experience aimed to check the adherence of the SC process to compliance
rules derived from business challenges. The results are new process-oriented insights provided via ad-hoc
compliance checking solutions, together with a deeper awareness of how to apply compliance checking in
practice. The process-oriented insights were the result of a detailed analysis carried out following PM2,
in which the close collaboration between the analysis and business teams was the cornerstone that allowed
the project to succeed. The experience brought innovative technologies to the company that could become a
competitive advantage for market positioning [20]. Although the practical experience is neither replicable,
due to the confidentiality of the employed data, nor generalizable, due to the company-specific business
challenges, it has been an opportunity to move research alongside the business, bringing them together and
harnessing the motivation from both perspectives to add value to the research community.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used DeepL and Grammarly for grammar and spelling
checking, paraphrasing, and rewording. After using these tools, the authors reviewed and edited the
content as needed and take full responsibility for the publication’s content.</p>
      <p>[18] K. Kubrak, F. Milani, A. Nolte, A visual approach to support process analysts in working with
process improvement opportunities, BPM Journal 29 (2023) 101–132.
[19] W. M. P. van der Aalst, A practitioner’s guide to process mining: Limitations of the directly-follows
graph, in: ENTERprise Information Systems, volume 164 of PCS, Elsevier, 2019, pp. 321–328.
[20] A. Mamudu, W. Bandara, S. Leemans, M. Wynn, A process mining impacts framework, BPM
Journal 29 (2023) 690–709.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Granillo-Macías</surname>
          </string-name>
          ,
          <article-title>Inventory management and logistics optimization: a data mining practical approach</article-title>
          ,
          <source>LogForum</source>
          <volume>16</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Koumanakos</surname>
          </string-name>
          ,
          <article-title>The effect of inventory management on firm performance</article-title>
          ,
          <source>Productivity and Performance Management</source>
          <volume>57</volume>
          (
          <year>2008</year>
          )
          <fpage>355</fpage>
          -
          <lpage>369</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kannadhasan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nagarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Srividhya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Recent trends in logistics management: Past, present, and future</article-title>
          , in:
          <source>Utilizing Blockchain Technologies in Manufacturing and Logistics Management</source>
          , IGI Global,
          <year>2022</year>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>249</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Yiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yeung</surname>
          </string-name>
          , E. Cheng,
          <article-title>The impact of business intelligence systems on profitability and risks of firms</article-title>
          ,
          <source>International Journal of Production Research</source>
          <volume>59</volume>
          (
          <year>2021</year>
          )
          <fpage>3951</fpage>
          -
          <lpage>3974</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>W.</given-names>
            <surname>van der Aalst</surname>
          </string-name>
          , J. Carmona, Process Mining Handbook, volume
          <volume>448</volume>
          <source>of LNBIP</source>
          , Springer,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Grisold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mendling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Otto</surname>
          </string-name>
          , J. vom Brocke,
          <article-title>Adoption, use and management of process mining in practice</article-title>
          ,
          <source>BPM Journal 27</source>
          (
          <year>2021</year>
          )
          <fpage>369</fpage>
          -
          <lpage>387</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W. M. P.</given-names>
            <surname>van der Aalst</surname>
          </string-name>
          ,
          <article-title>Decision support based on process mining</article-title>
          ,
          <source>in: Handbook on Decision Support Systems 1</source>
          , Springer, Berlin,
          <year>2008</year>
          , pp.
          <fpage>637</fpage>
          -
          <lpage>657</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>van Eck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Leemans</surname>
          </string-name>
          , W. van der Aalst,
          <article-title>PM2: A Process Mining Project Methodology</article-title>
          ,
          <source>in: Advanced Information Systems Engineering</source>
          , volume
          <volume>9097</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>297</fpage>
          -
          <lpage>313</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hashmi</surname>
          </string-name>
          , G. Governatori,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lam</surname>
          </string-name>
          , M. T. Wynn,
          <article-title>Are we done with business process compliance: state of the art and challenges ahead</article-title>
          ,
          <source>Knowledge and Information Systems</source>
          <volume>57</volume>
          (
          <year>2018</year>
          )
          <fpage>79</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>van der Werf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Verbeek</surname>
          </string-name>
          ,
          <string-name>
            <surname>W. M. P. van der Aalst</surname>
          </string-name>
          ,
          <article-title>Context-aware compliance checking</article-title>
          ,
          <source>in: Business Process Management</source>
          , volume
          <volume>7481</volume>
          <source>of LNCS</source>
          , Springer, Berlin,
          <year>2012</year>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Eulerich</surname>
          </string-name>
          ,
          <article-title>Process mining for financial auditing</article-title>
          ,
          <source>in: Process Mining Handbook</source>
          , volume
          <volume>448</volume>
          <source>of LNBIP</source>
          , Springer, Berlin,
          <year>2022</year>
          , pp.
          <fpage>445</fpage>
          -
          <lpage>467</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vasarhelyi</surname>
          </string-name>
          ,
          <article-title>A field study on the use of process mining of event logs as an analytical procedure in auditing</article-title>
          ,
          <source>The Accounting Review</source>
          <volume>89</volume>
          (
          <year>2014</year>
          )
          <fpage>1751</fpage>
          -
          <lpage>1773</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Delfmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-A.</given-names>
            <surname>Dietrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Steinhorst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Eggert</surname>
          </string-name>
          ,
          <article-title>Business process compliance checking - applying and evaluating a generic pattern matching approach for conceptual models in the financial sector</article-title>
          ,
          <source>Information Systems Frontiers</source>
          <volume>18</volume>
          (
          <year>2016</year>
          )
          <fpage>359</fpage>
          -
          <lpage>405</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Tomarchio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Trapani</surname>
          </string-name>
          ,
          <article-title>A structured approach for enhancing clinical risk monitoring and workflow digitalization in healthcare</article-title>
          ,
          <source>Decision Analytics Journal</source>
          <volume>11</volume>
          (
          <year>2024</year>
          )
          <fpage>100462</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Dijkman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Türetken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>van IJzendoorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>de Vries</surname>
          </string-name>
          ,
          <article-title>Business processes exceptions in relation to operational performance</article-title>
          ,
          <source>BPM Journal 25</source>
          (
          <year>2019</year>
          )
          <fpage>908</fpage>
          -
          <lpage>922</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>T.</given-names>
            <surname>Gschwandtner</surname>
          </string-name>
          ,
          <article-title>Visual analytics meets process mining: Challenges and opportunities</article-title>
          ,
          <source>in: International Symposium on Data-Driven Process Discovery and Analysis</source>
          , volume
          <volume>244</volume>
          <source>of LNBIP</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>142</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rehse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pufahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grohs</surname>
          </string-name>
          , L. Klein,
          <article-title>Process mining meets visual analytics: the case of conformance checking</article-title>
          ,
          <source>in: Hawaii International Conference on System Sciences, ScholarSpace</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>5452</fpage>
          -
          <lpage>5461</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>