<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>XABPs: Towards eXplainable Autonomous Business Processes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Barbara Weber</string-name>
          <email>barbara.weber@unisg.ch</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Fettke</string-name>
          <email>peter.fettke@dfki.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabiana Fournier</string-name>
          <email>fabiana@il.ibm.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lior Limonad</string-name>
          <email>liorli@il.ibm.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Metzger</string-name>
          <email>andreas.metzger@paluno.uni-due.de</email>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefanie Rinderle-Ma</string-name>
          <email>stefanie.rinderle-ma@tum.de</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Business Process Management, Autonomous Business Processes, Explainability, Agentic AI</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>German Research Center for Artificial Intelligence (DFKI)</institution>
          ,
          <addr-line>Campus D3 2, 66123 Saarbrücken</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IBM Research</institution>
          ,
          <country country="IL">Israel</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Saarland University</institution>
          ,
          <addr-line>Campus D3 2, 66123 Saarbrücken</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Technical University of Munich, TUM School of Computation, Information and Technology</institution>
          ,
          <addr-line>Garching</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of St. Gallen</institution>
          ,
          <addr-line>Rosenbergstrasse 30, 9000 St. Gallen</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>paluno (The Ruhr Institute for Software Technology), University of Duisburg-Essen</institution>
          ,
          <addr-line>Essen</addr-line>
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Autonomous business processes (ABPs), i.e., self-executing workflows leveraging AI/ML, have the potential to improve operational efficiency, reduce errors, lower costs, improve response times, and free human workers for more strategic and creative work. However, ABPs may raise specific concerns including decreased stakeholder trust, difficulties in debugging, hindered accountability, risk of bias, and issues with regulatory compliance. We argue for eXplainable ABPs (XABPs) to address these concerns by enabling systems to articulate their rationale. The paper outlines a systematic approach to XABPs, characterizing their forms, structuring explainability, and identifying key BPM research challenges towards XABPs.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        An autonomous business process (ABP) is the next generation of AI-Augmented Business Process
Management System (ABPMS) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which is a self-executing ABPMS that leverages advanced technologies
such as Artificial Intelligence (AI) and Machine Learning (ML) to operate with minimal to no human
intervention. ABPs can sense and respond to various inputs, reason, make decisions, and adapt to
changing circumstances in real time, all without relying on manual triggers or continuous oversight.
Think of it like a self-driving car for your business operations. Instead of human workers controlling all
aspects, ABPM systems use sensors, data analysis, and intelligent algorithms to navigate and achieve
their objectives. ABPs offer the potential to improve operational efficiency, reduce errors, lower costs,
improve response times, and free human workers for more strategic and creative work.
      </p>
      <p>The notion of ABPs was elaborated during the 2025 AutoBiz Dagstuhl seminar. The main goal of
this seminar was to compile a research agenda toward the realization of ABP systems. Jointly with the
seminar participants, we discussed and developed core concepts, challenges, and research directions.
Specifically, after a series of stimulating talks by experts, participants split into working groups to further
discuss individual topics of the research agenda, including ”framed autonomy”, ”self-modification”,
”conversational actionability”, and ”explainability”. The results of these breakout groups were presented
to all seminar participants, and their feedback was used to improve the findings.</p>
      <p>PMAI’25 – CEUR Workshop Proceedings (ceur-ws.org). ∗Corresponding author (B. Weber).</p>
      <p>
        This paper reports on key findings concerning the topic ”explainability”, elaborating and sharpening
the notion of eXplainable ABPs (XABPs) [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. We argue that XABPs will help address important
concerns in the context of ABPs, including the following:
• ABPs may erode trust among stakeholders – including process owners, business analysts,
end-users, and customers – who may be hesitant to rely on or adopt AI-based process recommendations
or automated decisions if they cannot understand the rationale behind them.
• The opacity of ABPs may make it difficult to debug process models, identify potential
failures, or understand why a process might be under-performing.
• Using ABPs may hinder accountability; if an ABP leads to a failure or an unfair outcome, the
inability to explain its underlying decisions makes it challenging to assign responsibility or
implement corrective actions.
• ABPs may perpetuate hidden biases of their underlying AI and ML components. Such biases may
lead to discriminatory or unfair process outcomes, which can be difficult to detect and mitigate.
• Demonstrating the compliance of ABPs with regulatory frameworks, such as the EU’s GDPR and
AI Act, requires an increasing level of transparency, particularly in high-risk domains like finance,
healthcare, and human resources, which are common areas for BPM applications.
      </p>
      <p>
        XABPs are particularly relevant when ABPs are realized in the form of Agentic BPM systems. An
Agentic BPM system is an advanced approach to managing and automating complex business workflows
by integrating autonomous AI agents. Unlike traditional BPM or Robotic Process Automation (RPA)
systems that follow rigid, predefined rules and workflows, agentic BPM leverages AI to enable systems
to make independent decisions, adapt to changing conditions, and learn from experience with minimal
human intervention. Here, explainability offers a central mechanism through which agents can articulate
the rationale behind their behavior. As such, explainability becomes a first-class citizen in the realization
of Agentic BPM systems, supporting agent autonomy from two perspectives:
• Enabling agents to independently resolve misalignments in other agents’ behavior.
• Reducing human intervention by making agent behavior understandable and transparent.
Employing state-of-the-art explainable AI (XAI) techniques [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] for XABPs poses several limitations:
      </p>
      <sec id="sec-1-1">
        <p>1. Inability to express business process model constraints [4].
2. Failure to capture the richness of contextual situations that affect process outcomes [5].
3. Inability to reflect causal execution dependencies among activities in the business process [6].
4. Explanations are often nonsensical or not interpretable for human users [7].</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Characterization and Needs of XABPs</title>
      <p>We start with a generic conceptualization of explainability and then refine this to particular concerns in
the BPM setting.</p>
      <sec id="sec-2-1">
        <title>2.1. Fundamental Explainability Concepts</title>
        <p>
          In its simplest form, the explanantia produced by the explainer should provide information about the
causes of the explained phenomenon (explanandum) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The content of the explanation must align
with both the nature of the explanandum and the needs of the explainee.
        </p>
        <p>XABPs involve a range of human and system actors that either generate or consume explanations.
Some actors – especially autonomous systems or agents – may fulfill both roles, such as generating
explanations for others while also using explanations for self-reflection or system adaptation.
Interactions of the explainee with the explanation can follow different modes, ranging from one-shot
explanations to conversational or multi-round interactions.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Explanandum</title>
        <p>Figure 2 shows the key aspects of an explanandum that may be explained, as elaborated below.
Process Instance Explanation: ”Why did this specific process execution take the path and
produce the result it did?”
• Process Flow - Why a specific sequence of activities, decisions, and events was followed in the
business process.</p>
        <p>Example: “Why was the invoice approval issued only after the invoice had already been sent?”
• Decision Points – Why certain paths or outcomes were chosen during process execution.</p>
        <p>Example: “Why was a customer’s request escalated instead of being resolved at Tier 1?”
• Resource Assignment – Why specific tasks were assigned to certain roles or individuals.</p>
        <p>Example: “Why was this case handled by the senior team?”
• Outcome Justification – Why a specific result occurred.</p>
        <p>Example: “Why was this loan application rejected by the process?”
Process Model Explanation: ”Why is the process structured the way it is?”
• Model Structure: Why are certain activities, decisions, or flows included?</p>
        <p>E.g., “Why do we have a credit history check as a decision point?”</p>
        <p>[Figure 2: Explanandum – ”What is explained?” – Process Instance (Process Flow • Decision
Points • …: ”Why is a specific sequence followed?”); Process Model (Model Structure • Policy
Compliance: ”Why is the process structured this way?”); AI Component (Model Behavior • AI
Decisions: ”Why did the AI make this recommendation?”); Framed Autonomy Constraints (Design
Autonomy • Delegation Rules • AI Authority • …: ”Why is the system allowed to behave this way?”)]</p>
        <p>• Policy Compliance: Whether and how policies shaped the model or its execution</p>
        <p>E.g., “Was the data retention policy followed?”
AI Component Explanation: ”Why did an AI component make this recommendation or
decision?”</p>
        <p>
          Note that this largely corresponds to explainable AI (XAI). In more detail:
• AI Decision: ”Why did the AI component predict deviations or prescribe proactive adaptations?” [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]
        </p>
        <p>E.g., “Why was an alarm raised for process event   ?”
• AI Model Behavior: ”Why does the AI model have certain characteristics or properties?” E.g.,
“Why does the LSTM prediction model have a Mean Absolute Error (MAE) of only .35 for the
given process domain?”
Framed Autonomy Explanation: ”Why is the system or process allowed to behave as it does?”
• Design Autonomy: “Why can the process bypass manual review?”
• Delegation Rules: “Why do Tier 1 agents have approval authority?”
• AI Authority: “Why can the AI act without a human in the loop?”
• Escalation Thresholds: “Why is escalation triggered only after 3 attempts?”
• Compliance Limits: “Why is this exception allowed under the GDPR?”</p>
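        <p>Such framed-autonomy questions become answerable by design if the constraints themselves are
stored together with their justifications. The following minimal sketch (all rule identifiers, behaviors,
and rationales below are invented for illustration) shows a system answering ”Why is the system
allowed to behave this way?” by citing the governing rule:</p>

```python
# Sketch only: rule ids, behaviors, and rationales below are hypothetical.
# Framed-autonomy constraints are stored with a justification, so the system
# can answer "Why is the system allowed to behave this way?" by citing the rule.
FRAME_RULES = {
    "design_autonomy": ("the process may bypass manual review",
                        "applies to low-value, low-risk cases only"),
    "ai_authority": ("the AI may act without a human in the loop",
                     "restricted to decisions below a defined risk threshold"),
    "escalation_threshold": ("escalation is triggered only after 3 failed attempts",
                             "balances resolution speed against support cost"),
}

def explain_frame(rule_id: str) -> str:
    """Answer a framed-autonomy question by citing the governing rule."""
    behavior, rationale = FRAME_RULES[rule_id]
    return f"Allowed behavior: {behavior}. Rationale: {rationale}."

print(explain_frame("escalation_threshold"))
```

        <p>The same lookup could be exposed to other agents, so the frame itself becomes an explainable artifact rather than an opaque configuration.</p>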
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Explainer</title>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Explainee</title>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Explanans</title>
        <p>(a) Human Explainees: humans consuming explanations – example roles, their needs from
explanations, and example explanation styles:
• Customer applying for a loan – Understand decisions about them (e.g., rejections, delays) – Simple,
outcome-focused, natural language
• Agent handling loan verification – Know what task to do next and why – Step-by-step task rationale,
alerts, real-time updates
• Operations lead, shift manager – Monitor KPIs, react to anomalies, adapt resources – Dashboards,
alerts, summaries, what-if analysis
• Person modeling the process – Improve efficiency, detect bottlenecks, validate rule logic – Process
mining results, causal analysis, counterfactuals
• Internal or external auditors – Ensure traceability, legality, policy adherence – Audit trails, rule
execution logs, exception reports</p>
        <p>(b) System Explainees: self-reflective systems consuming explanations – actor types, their role in
the ecosystem, needs from explanations, and example techniques:
• The System Itself – Autonomous BPM or AI component – Self-monitoring, internal diagnosis,
reconfiguration – Logs, symbolic reasoning, anomaly detection
• Connected Systems – CRM, ERP, or DMS components – Data or process synchronization with
semantic clarity – API contracts, structured events, semantic metadata
• Agentic Systems – Autonomous BPM realized as AI agent – Proactive, collaborative, interactive –
Internal reasoning process, shared knowledge</p>
        <p>Explanation Mechanism:
• Feature Attribution: Assigns contribution (credit or blame) to input features. Examples: SHAP,
LIME, Saliency Maps
• Example-Based: Uses similar or contrasting examples to justify a decision. Examples: k-NN,
Prototypes, Counterfactuals
• Rule-Based: Derives symbolic or logical rules from data or models. Examples: Decision Trees, Rule
Lists, Association Rules
• Model Simplification: Approximates complex models with interpretable surrogates. Examples:
Surrogate Decision Trees, Linear Proxies
• Counterfactual: Explains what minimal input change would alter the outcome [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Example: “If
income were $5,000 higher, the outcome would have changed.”
• Visual Explanations: Uses visual indicators to represent decision logic or model behavior. Examples:
Heatmaps, Partial Dependence Plots</p>
        <p>Time of Explanation Generation: ”When is the explanation produced?”</p>
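        <p>As a toy illustration of the counterfactual mechanism above, consider a threshold-based loan
decision (the decision rule and the figures below are invented for illustration); a minimal search finds
the smallest income change that flips the outcome:</p>

```python
def loan_outcome(income: float, debt: float) -> str:
    # Hypothetical black-box decision rule, used only for illustration.
    return "approved" if income - 0.5 * debt >= 30000 else "rejected"

def minimal_income_change(income: float, debt: float,
                          step: int = 1000, limit: int = 100000):
    """Search for the smallest income increase that flips a rejection."""
    if loan_outcome(income, debt) == "approved":
        return 0
    for delta in range(step, limit + step, step):
        if loan_outcome(income + delta, debt) == "approved":
            return delta
    return None  # no counterfactual found within the search limit

# "If income were $5,000 higher, the outcome would have changed."
print(minimal_income_change(27000, 4000))  # → 5000
```

        <p>Real counterfactual explainers search over many features at once and prefer sparse, plausible changes, but the underlying idea is this simple inversion of the decision function.</p>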
        <p>The timing of an explanation determines its role in the lifecycle of decision-making systems.
Explanations may be generated before, during, or after system execution:</p>
        <p>[Figure: Explanans – ”How is the explanation provided?” – Explanation Mechanism (Feature
Attribution • Example-Based • Rule-Based • Counterfactual • Model Simplification • Visual
Explanations); Time of Generation (Ex-ante • Run-time • Post-hoc); Interaction Mode (One-shot •
Conversational); Presentation Format (Visual explanations • Verbal explanations); Explanation Quality
(Clarity • Accuracy • Completeness • Relevance • Usefulness • Actionability • …)]</p>
        <p>• Ex-ante Explanations (Before Execution): Provided before the system executes or makes a decision,
to validate models or justify decisions before deployment.
• Run-time Explanations (During Execution): Delivered while the process is running to support
human-in-the-loop oversight or adaptive user feedback.
• Post-hoc Explanations (After Execution): Generated after the process completes its actions or
decisions in order to audit, debug, or help users understand outcomes.</p>
        <p>Presentation Format of Explanation: ”How is the explanation presented to the user?”</p>
        <p>
          The chosen presentation method has a direct effect on user comprehension and, therefore, on the
success of the explanations [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>• Visual explanations: Heatmaps, charts, dashboards, saliency maps
• Verbal explanations: Natural-language output, written rules, factual/counterfactual statements
Interaction with Explainee: ”How does the user interact with the explanation?”</p>
        <p>
          Interaction of the explainee with the explanation refers to the mode and extent of user involvement
in the explanation process:
• One-shot explanations: Explanation provided once, passively
• Query-based explanations: Explanation provided on-demand, actively
• Multi-round / Conversational: Interactive, iterative, potentially adaptive dialogue [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]
Explanation Quality: ”How to assess the quality of explanations?”
        </p>
        <p>Explanation quality may be assessed along two complementary dimensions:
• Technical quality: This relates to the technical properties of the explanation method itself.
Examples include fidelity, also known as faithfulness or soundness (which measures how accurately the
explanation reflects the reasoning or behavior of the explanans), and stability (an explainer should
provide similar explanations for similar input or minor perturbations of the input).
• User-centric quality: This relates to how the explanation is perceived by humans (in the role of
explainees). Examples include usefulness (which quantifies how well it helps the explainee to
solve a problem, understand a concept better, or apply the knowledge in a new situation) and
meaningfulness (explanation is relevant to the specific explainee and the question or topic at hand
and avoids unnecessary tangents or irrelevant information that could confuse the explainee).</p>
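        <p>The technical-quality notion of fidelity can be made concrete with a small sketch (the black-box
scorer and the surrogate rule below are invented for illustration): fidelity is simply the fraction of
sampled inputs on which an interpretable surrogate agrees with the model it explains:</p>

```python
# Sketch only: the "black box" and the surrogate rule are invented examples.
def black_box(income: float, debt: float) -> bool:
    # Stand-in for an opaque learned model.
    return 0.7 * income - 0.9 * debt > 20

def surrogate_rule(income: float, debt: float) -> bool:
    # Hand-written interpretable proxy for the model above.
    return income - debt > 28

# Fidelity: fraction of sampled inputs on which the surrogate agrees with
# the black box. Stability could be probed analogously on perturbed inputs.
samples = [(i, d) for i in range(0, 100, 5) for d in range(0, 100, 5)]
fidelity = sum(black_box(i, d) == surrogate_rule(i, d)
               for i, d in samples) / len(samples)
print(f"fidelity = {fidelity:.2f}")
```

        <p>User-centric qualities such as usefulness and meaningfulness, by contrast, cannot be computed from the model alone and require studies with the explainees themselves.</p>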
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Realizing ABPs as Agentic BPM Systems – An illustrative Example</title>
      <p>We illustrate a concrete realization of ABPs and the proposed conceptualization of explainability in the
form of an Agentic BPM system, a specific type of agent-centric ABP utilizing LLM-based agents. While
ABPs operate independently to perform tasks, Agentic BPM systems go further by purposefully pursuing
goals, reasoning about their actions, and explaining or adapting their behavior in interaction with others.
This Agentic BPM system realizes the process of onboarding new vendors as part of a procurement
BPM system (see Figure 5). To this end, new Vendors (the explainee in this case) provide an application
(see Figure 6 for an example) to the Vendor Evaluator (the explainer). The Vendor Evaluator is realized
using the CrewAI framework as shown in Listing 1. CrewAI is an open-source Python framework
designed for orchestrating multi-agent AI systems. The Vendor Evaluator receives the Application
and provides a score along with a structured explanation such as the one depicted in Listing 2 back to
the Vendor. Our focus here is on showing how explainability can be embedded by design, while the
scoring capability follows an LLM-as-a-judge pattern, which, in more robust implementations, is likely
to be replaced by a dedicated agent tool. As shown, the explanation elaborates on the key concepts
introduced in this work – particularly the notions of explainer (the Vendor Evaluator agent), explainee
(the Vendor), explanandum, and explanans.</p>
      <p>[Listing 1 callouts: Explainee • Explainer • Explanandum • Evaluation • Explanans]</p>
      <p>Listing 1: The Vendor Evaluator agent realized via the CrewAI framework extended with explainability
capabilities</p>
      <p>Vendor Application: GreenBox Logistics
Vendor Name: GreenBox Logistics
Proposal for: Supply Chain Optimization Software Deployment
Pricing: $1.4M over 12 months.</p>
      <p>Timeline: Estimated deployment in 14 months.</p>
      <p>Technical Compliance: Core features described. No reference to GDPR compliance or ISO
certifications. Cloud hosting region not specified.</p>
      <p>Reputation: Moderate reviews on industry platforms; one major contract terminated early in
2022 due to delivery delays. References from two mid-sized clients included.</p>
      <p>Attachments: General feature overview, timeline Gantt chart, and two short testimonials.</p>
      <p>},
"feature_contributions": { "technical_compliance": -2.25, "delivery_timeline": -1.25 },
"contribution_summary": "Compliance and timeline account for 87% of score gap."
},
"recommendation": {
"summary": "Do not proceed without major revisions.",
"required_improvements": ["Add ISO certification", "Specify hosting", "Shorten timeline"]</p>
      <p>Listing 2: A JSON explanation example as generated by the Vendor Evaluator Agent</p>
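      <p>Although Listing 2 is reproduced here only as a fragment, its field names already suggest a
machine-readable explanans that the Vendor, as explainee, could act upon autonomously. The sketch
below (field names and values are copied from the fragment; the helper functions are hypothetical)
illustrates such programmatic consumption:</p>

```python
# Field names and values below are taken from the Listing 2 fragment;
# the helper functions are hypothetical additions for illustration.
explanation = {
    "feature_contributions": {"technical_compliance": -2.25,
                              "delivery_timeline": -1.25},
    "contribution_summary": "Compliance and timeline account for 87% of score gap.",
    "recommendation": {
        "summary": "Do not proceed without major revisions.",
        "required_improvements": ["Add ISO certification",
                                  "Specify hosting",
                                  "Shorten timeline"],
    },
}

def dominant_factor(expl: dict) -> str:
    """Feature with the most negative contribution to the score."""
    contributions = expl["feature_contributions"]
    return min(contributions, key=contributions.get)

def next_actions(expl: dict) -> list:
    """Corrective actions the explainee can take autonomously."""
    return expl["recommendation"]["required_improvements"]

print(dominant_factor(explanation))  # → technical_compliance
print(next_actions(explanation))
```

      <p>Because the explanans is structured rather than free text, a vendor-side agent can prioritize the dominant factor and schedule the listed improvements without human mediation.</p>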
    </sec>
    <sec id="sec-4">
      <title>4. Challenges for Explainable ABPs</title>
      <p>We structure the challenges along the four main explainability concepts as well as along overarching concerns.</p>
      <sec id="sec-4-1">
        <title>4.1. Challenges Related to Explainee</title>
        <p>Challenge 1: How to specify preferences regarding explanations? Specifying preferences for
explanations presents multifaceted challenges. First, input mechanisms must effectively capture
preferences through various channels, whether explicitly declared upfront, interactively elicited through
dialogue, or implicitly inferred from user behavior. Systems must accommodate both static preferences
that remain consistent and those that dynamically adapt to changing contexts, while supporting the
natural evolution of preferences as the explainee’s understanding develops. Second, inevitable
preference conflicts need to be navigated. This involves carefully balancing competing dimensions, such as
detail versus conciseness and speed versus accuracy. This requires finding trade-offs without sacrificing
critical explanatory qualities.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Challenges Related to Explanandum</title>
        <p>Challenge 2: What explanation subjects are needed for ABPMs? Explainability related to AI
components is rather well understood – not so for ABPMSs. From our understanding, explanation
subjects such as process instance, process models, and framed autonomy constraints are interesting
and relevant. However, a more mature taxonomy of explanation types might evolve.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Challenges Related to Explainer</title>
        <p>Challenge 3: Which techniques are needed for the explainer to generate explanations? As
mentioned in Section 1, employing state-of-the-art explainable AI (XAI) techniques for XABPs has
several limitations. We thus need to develop new and enhanced explainability techniques that specifically
target ABPs. Examples include what-if analyses and process outcome analyses. In particular, these
techniques should take causality into account. In the future, it should be clarified how existing BPM
techniques can be integrated, e.g., visualization, and be exploited for creating explanations.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Challenges Related to Explanans / Explanantia</title>
        <p>
          Challenge 4: How may one articulate actionable explanations (e.g., to other agents) to preserve
autonomy? While the explanans is constructed to make an explanation informative about the
circumstances that may have led to the situation being inquired about (i.e., the explanandum), in the context of
XABPs, the explanans may also adopt an actionable style—indicating to the explainee which corrective or
mitigating actions could be taken to alter the state of the explanandum, particularly without escalating
the situation to any external agent. In this way, the explainee may be able to autonomously act
upon the condition at hand. However, further work is needed to devise a systematic approach that
enables the explainer to determine the most efective content for the explainee, to elicit such corrective
action—taking into account both the explanandum and the behavioral intentions of the explainee.
Challenge 5: When to generate explanations (generation time) and how long to preserve them
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]? The question here is whether explanations should be generated upfront, whenever possible, or
whether we can or should be more conscious about the generation time of the explanation. Another
question is when to discard outdated explanations.
        </p>
        <p>Challenge 6: How can explanations automatically adapt their form to suit the identity of the
explainee? The question is how the explanation can be presented in a way that is easily understandable
for the explainee, e.g., leads to low cognitive load for human explainees, and answers the explanation
needs of the explainee. This could also be informed by organizational motivations and goals.
Challenge 7: How can we accommodate explanations that consider (why) certain behaviors
did not occur? Explaining non-occurring behavior is more challenging than explaining occurring
behavior and requires capturing or acquiring knowledge about non-occurring behavior. Causal analysis
might be helpful here.</p>
        <p>Challenge 8: How may we synthesize a variety of perspectives (e.g., data, contextual,
exogenous) into the explanation? The first challenge here is to collect and create data sets that cover
different perspectives and are of sufficient quality. It is essential to be able to link the synthesized data
to process instances. Moreover, providing explanations on synthesized data might also require selecting
and filtering the data appropriately in order to provide adequate explanations.</p>
        <p>Challenge 9: How to identify causal explanations? Causality vs. correlation: Not every correlation
between two variables has a causal explanation. It is therefore important to distinguish between
spurious correlations and causality. This classical distinction is well known, but must also be observed
in the context of explainable ABPs. The explainer can provide the explainee with information about
the degree of certainty of the explanation offered.</p>
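        <p>The distinction can be illustrated by simulating a hidden confounder (all variables and effect sizes
below are invented for illustration): two process variables correlate strongly under observation, yet
intervening on one leaves the other unchanged, so the correlation is spurious:</p>

```python
# Illustrative simulation: "season" is a hidden common cause of marketing
# spend and demand, so the two correlate without marketing causing demand.
import random
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

def observe(n, seed=1):
    """Observational regime: season drives both variables."""
    rng = random.Random(seed)
    marketing, demand = [], []
    for _ in range(n):
        season = rng.random()
        marketing.append(10 * season + rng.random())  # caused by season
        demand.append(50 * season + rng.random())     # also caused by season
    return marketing, demand

def intervene(n, seed=2):
    """Interventional regime: do(marketing) breaks the link to season."""
    rng = random.Random(seed)
    marketing, demand = [], []
    for _ in range(n):
        season = rng.random()
        marketing.append(rng.uniform(0, 10))          # set independently
        demand.append(50 * season + rng.random())
    return marketing, demand

obs_r = pearson(*observe(500))   # strong spurious correlation
do_r = pearson(*intervene(500))  # near zero: no causal effect
print(f"observational r = {obs_r:.2f}, interventional r = {do_r:.2f}")
```

        <p>An explainer that reports only the observational correlation would mislead the explainee; causal explanations require interventional or quasi-experimental evidence of this kind.</p>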
        <p>Challenge 10: How do explanations evolve over time based on feedback or changing context?
Explanations might have to be updated based on changing context and feedback, e.g., if sensor data
starts to deviate. The first question is how to detect that an explanation that (partly) takes into account
the sensor data has to be updated? Another question is when to present the updated explanation to the
user, i.e., directly after a changing context was detected or at another, possibly better fitting moment?
This question is related to the question of explanation update frequency. Here, the challenge is to find
the sweet spot between keeping explanations up to date and not confusing the explainee. Finally, we
have to think about when and how to provide full versus incremental explanation updates.
Challenge 11: How does the realization of the ‘frame’ in ABPMSs afect the one of
explainability? i.e., with autonomous agents, it may be the means for the agents to share with other agents
the rationale for their own behavior.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Overarching Challenges</title>
        <p>Challenge 12: How may one assess the quality of the explanations? Evaluating explanation
quality presents a fundamental challenge requiring both empirical and theoretical approaches. From an
empirical perspective, the question to be answered is how we can effectively measure explanation
quality when objective and subjective dimensions must both be considered. Objective measures
include, for example, factual accuracy and completeness, while user-centered aspects cover, for example,
comprehensibility, usefulness, effectiveness and efficiency (e.g., see [ 14]), as well as actionability.
Challenge 13: Which kind of datasets are needed to serve as explainability benchmarks?
Benchmarking is a typical approach to evaluating system performance. We expect that such an idea
can also promote the development of the field of accountability. However, benchmarking typically
relies on adequate benchmark data. In principle, such data can be generated in a laboratory setting. But
adequate data are also needed for benchmarking explainability systems in the field.
Challenge 14: How to ensure that explanations do not reveal information that may be
privacy-sensitive, reveal business-critical IPR, or make it easier to undermine the security of the
system? In essence, the challenge lies in providing enough information to satisfy the need for
explainability without compromising other crucial aspects of the business, such as data privacy, competitive
advantage, and system security.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Related Work</title>
      <sec id="sec-5-1">
        <title>Our work relates mainly to the following streams of research:</title>
        <p>
          Autonomous business processes: In general, the idea of automation can be traced back for centuries.
Also, the automation of business processes in particular is not new, e.g., (robotic) process automation
[15, 16]. ABPs particularly build on the idea of using AI for the augmentation of BPM systems [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] and
push them to the next level. Note that, in contrast to sequential decision processes, business processes
have important characteristics: a business process is distributed and non-sequential, and the activities
of a process are not usually fully ordered in time, but are only partially ordered by causality [17, 18].
These characteristics are not usually considered in general work on explainability, despite being of
major importance.
        </p>
        <p>
          Explainable AI: Even though explainable AI (XAI) is a relatively recent topic, its historical development
can be traced back to several roots, namely, expert systems, machine learning &amp; recommender systems,
and neuro-symbolic learning &amp; reasoning [19, 20]. The need for XAI in the domain of BPM
has also been identified and different proposals have been made (e.g., see recent literature surveys [
          <xref ref-type="bibr" rid="ref13 ref2">13, 2, 21, 22,
23</xref>
          ]). More specifically, explanation approaches exist for process outcome prediction, e.g. [24], process
monitoring [25], uncertainty quantification of processes [ 26], causal processes [
          <xref ref-type="bibr" rid="ref6">6, 27, 28, 29</xref>
          ], explanation
patterns [30], explainable user interfaces [31], explanation aware processes [
          <xref ref-type="bibr" rid="ref4 ref5">5, 4</xref>
          ], explainable decision
models [32], and GenAI for process explainability [
          <xref ref-type="bibr" rid="ref7">7, 33</xref>
          ]. Our paper aims to consolidate and integrate
these earlier ideas and directions and to leverage them for the next level of BPM, i.e., ABPs.
        </p>
        <p>Fairness, accountability, and transparency: Over the past few years, a growing community has emerged
around the topics of fairness, explainability, and transparency, as evidenced by the ACM Conference on
Fairness, Accountability, and Transparency (ACM FAccT). Our work presented here is certainly strongly
related to these topics. However, work in this research stream, e.g., [34, 35], does not currently focus on
the process perspective of designing BPM systems and does not consider the important characteristics
of processes mentioned before. We expect that these different research streams will become much better
integrated in the future.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>An autonomous business process (ABP) represents a paradigm shift towards self-executing workflows
driven by AI and ML. Yet ABPs introduce challenges related to trust, transparency, accountability, bias,
and regulatory compliance within BPM. To address these issues, this paper introduced the notion of
explainable ABPs (XABPs), which can articulate the rationale behind their actions and underlying
models. Current explainable AI (XAI) techniques fall short in capturing the complexities of the BPM
setting. We therefore introduced a set of challenges to stimulate further research on XABPs.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and Gemini for grammar and spelling checking, paraphrasing and rewording, and improving the writing style. Further, the authors used Claude to generate initial drafts of the diagrams in Figures 2–4. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.
</p>
      <p>[13] N. Mehdiyev, P. Fettke, Explainable artificial intelligence for process mining: A general overview and application of a novel local explanation approach for predictive process monitoring, in: Interpretable AI: A Perspective of Granular Computing, Springer, 2021, pp. 1–18.</p>
      <p>[14] A. Metzger, J. Laufer, F. Feit, K. Pohl, A user study on explainable online reinforcement learning for adaptive systems, ACM Trans. Auton. Adapt. Syst. 19 (2024) 15:1–15:44. doi:10.1145/3666005.</p>
      <p>[15] J. Mendling, G. Decker, R. Hull, H. A. Reijers, I. Weber, How do machine learning, robotic process automation, and blockchains affect the human factor in business process management?, Commun. Assoc. Inf. Syst. 43 (2018) 19. URL: https://aisel.aisnet.org/cais/vol43/iss1/19.</p>
      <p>[16] C. Czarnecki, P. Fettke (Eds.), Robotic Process Automation: Management, Technology, Applications, De Gruyter Oldenbourg, 2021.</p>
      <p>[17] H. Kourani, S. J. van Zelst, D. Schuster, W. M. P. van der Aalst, Discovering partially ordered workflow models, Inf. Syst. 128 (2025) 102493. doi:10.1016/J.IS.2024.102493.</p>
      <p>[18] P. Fettke, W. Reisig, Discrete models of continuous behavior of collective adaptive systems, in: Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning - 11th Int. Symp., ISoLA 2022, Proceedings, Part III, LNCS, Springer, 2022, pp. 65–81.</p>
      <p>[19] D.-H. Ruben, Explaining Explanation, 2nd ed., Routledge, 2012.</p>
      <p>[20] R. Confalonieri, L. Coba, B. Wagner, T. R. Besold, A historical perspective of explainable artificial intelligence, WIREs Data Mining Knowl. Discov. 11 (2021). doi:10.1002/WIDM.1391.</p>
      <p>[21] D. A. Neu, J. Lahann, P. Fettke, A systematic literature review on state-of-the-art deep learning methods for process prediction, Artif. Intell. Rev. 55 (2022) 801–827. doi:10.1007/S10462-021-09960-8.</p>
      <p>[22] S. Weinzierl, S. Zilker, S. Dunzer, M. Matzner, Machine learning in business process management: A systematic literature review, Expert Systems with Applications 253 (2024) 124181. doi:10.1016/J.ESWA.2024.124181.</p>
      <p>[23] M. Stierle, J. Brunk, S. Weinzierl, S. Zilker, M. Matzner, J. Becker, Bringing light into the darkness: A systematic literature review on explainable predictive business process monitoring techniques, in: ECIS 2021, 2021. URL: https://aisel.aisnet.org/ecis2021_rip/8.</p>
      <p>[24] A. Stevens, J. D. Smedt, Explainability in process outcome prediction: Guidelines to obtain interpretable and faithful models, Eur. J. Oper. Res. 317 (2024). doi:10.1016/J.EJOR.2023.09.010.</p>
      <p>[25] M. Harl, S. Weinzierl, M. Stierle, M. Matzner, Explainable predictive business process monitoring using gated graph neural networks, J. Decis. Syst. 29 (2020). doi:10.1080/12460125.2020.1780780.</p>
      <p>[26] N. Mehdiyev, M. Majlatow, P. Fettke, Augmenting post-hoc explanations for predictive process monitoring with uncertainty quantification via conformalized Monte Carlo dropout, Data Knowl. Eng. 156 (2025) 102402. doi:10.1016/J.DATAK.2024.102402.</p>
      <p>[27] T. Narendra, P. Agarwal, M. Gupta, S. Dechu, Counterfactual reasoning for process optimization using structural causal models, in: LNBIP, volume 360, 2019. URL: https://doi.org/10.1007/978-3-030-26643-1_6.</p>
      <p>[28] A. J. Alaee, M. Weidlich, A. Senderovich, Data-driven decision support for business processes: Causal reasoning and discovery, in: BPM Forum, Springer, 2024, pp. 90–106. doi:10.1007/978-3-031-70418-5_6.</p>
      <p>[29] F. Fournier, L. Limonad, I. Skarbovsky, Towards a benchmark for causal business process reasoning with LLMs, in: BPM-W, Springer, 2025, pp. 233–246. doi:10.1007/978-3-031-78666-2_18.</p>
      <p>[30] A. Buliga, M. Vazifehdoostirani, L. Genga, X. Lu, R. M. Dijkman, C. D. Francescomarino, C. Ghidini, H. A. Reijers, Uncovering patterns for local explanations in outcome-based predictive process monitoring, in: BPM 2024, volume 14940 of LNCS, Springer, 2024, pp. 363–380. doi:10.1007/978-3-031-70396-6_21.</p>
      <p>[31] A. Füßl, V. Nissen, S. H. Heringklee, An explanation user interface for a knowledge graph-based XAI approach to process analysis, in: CAiSE-W 2024, volume 521 of LNBIP, Springer, 2024, pp. 72–84. doi:10.1007/978-3-031-61003-5_7.</p>
      <p>[32] A. Goossens, U. Maes, Y. Timmermans, J. Vanthienen, Automated intelligent assistance with explainable decision models in knowledge-intensive processes, in: BPM-W 2022, Münster, volume 460 of LNBIP, Springer, 2022, pp. 25–36. doi:10.1007/978-3-031-25383-6_3.</p>
      <p>[33] L. Limonad, F. Fournier, H. Mulian, G. Manias, S. Borotis, D. Kyrkou, Selecting the right LLM for eGov explanations, 2025. arXiv:2504.21032.</p>
      <p>[34] T. Speith, A review of taxonomies of explainable artificial intelligence (XAI) methods, in: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’22, Association for Computing Machinery, 2022, pp. 2239–2250. doi:10.1145/3531146.3534639.</p>
      <p>[35] B. Kuehnert, R. Kim, J. Forlizzi, H. Heidari, The “who”, “what”, and “how” of responsible AI governance: A systematic review and meta-analysis of (actor, stage)-specific tools, in: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2025, pp. 2991–3005.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dumas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fournier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Limonad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Marrella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rehse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Accorsi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Giacomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fahland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Rosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Völzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <article-title>AI-augmented business process management systems: A research manifesto</article-title>
          ,
          <source>ACM Trans. Manag. Inf. Syst.</source>
          <volume>14</volume>
          (
          <year>2023</year>
          ) 11:1–11:19. doi:10.1145/3576047.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehdiyev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Majlatow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fettke</surname>
          </string-name>
          ,
          <article-title>Interpretable and explainable machine learning methods for predictive process monitoring: A systematic literature review</article-title>
          ,
          <year>2023</year>
          . arXiv:2312.17584.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berrada</surname>
          </string-name>
          ,
          <article-title>Peeking inside the black-box: A survey on explainable artificial intelligence (xai)</article-title>
          ,
          <source>IEEE Access</source>
          <volume>6</volume>
          (
          <year>2018</year>
          ). doi:10.1109/ACCESS.2018.2870052.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Amit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fournier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Limonad</surname>
          </string-name>
          ,
          <article-title>Model-informed LIME extension for business process explainability</article-title>
          , in:
          <source>PMAI@IJCAI'22</source>
          , CEUR,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Amit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fournier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Limonad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Skarbovsky</surname>
          </string-name>
          ,
          <article-title>Situation-aware explainability for business processes enabled by complex events</article-title>
          , in: BPM-W
          <year>2022</year>
          , volume
          <volume>460</volume>
          of
          <source>LNBIP</source>
          , Springer, 2022, pp.
          <fpage>45</fpage>
          -
          <lpage>57</lpage>
          . doi:10.1007/978-3-031-25383-6_5.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Fournier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Limonad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Skarbovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>David</surname>
          </string-name>
          ,
          <article-title>The why in business processes: Discovery of causal execution dependencies</article-title>
          ,
          <source>Künstliche Intelligenz</source>
          (
          <year>2025</year>
          ). doi:10.1007/s13218-024-00883-4.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Fahland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fournier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Limonad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Skarbovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J. E.</given-names>
            <surname>Swevels</surname>
          </string-name>
          ,
          <article-title>How well can large language models explain business processes as perceived by users?</article-title>
          ,
          <source>Data &amp; Knowledge Engineering</source>
          <volume>157</volume>
          (
          <year>2025</year>
          ) 102416. doi:10.1016/j.datak.2025.102416.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lipton</surname>
          </string-name>
          ,
          <source>Causation and Explanation</source>
          , Oxford University Press,
          <year>2010</year>
          , pp.
          <fpage>619</fpage>
          -
          <lpage>631</lpage>
          . doi:10.1093/oxfordhb/9780199279739.003.0030.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Metzger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rothweiler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pohl</surname>
          </string-name>
          ,
          <article-title>Automatically reconciling the trade-off between prediction accuracy and earliness in prescriptive business process monitoring</article-title>
          ,
          <source>Inf. Syst.</source>
          <volume>118</volume>
          (
          <year>2023</year>
          ) 102254. doi:10.1016/J.IS.2023.102254.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Metzger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pohl</surname>
          </string-name>
          ,
          <article-title>Counterfactual explanations for predictive business process monitoring</article-title>
          , in:
          <string-name>
            <given-names>M.</given-names>
            <surname>Themistocleous</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Papadaki</surname>
          </string-name>
          (Eds.),
          <source>EMCIS</source>
          <year>2021</year>
          , volume
          <volume>437</volume>
          of
          <source>LNBIP</source>
          , Springer, 2021, pp.
          <fpage>399</fpage>
          -
          <lpage>413</lpage>
          . doi:10.1007/978-3-030-95947-0_28.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Malandri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mercorio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mezzanzanica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nobani</surname>
          </string-name>
          ,
          <article-title>Convxai: a system for multimodal interaction with any black-box explainer</article-title>
          ,
          <source>Cogn. Comput</source>
          .
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>613</fpage>
          -
          <lpage>644</lpage>
          . doi:10.1007/S12559-022-10067-7.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Metzger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bartel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Laufer</surname>
          </string-name>
          ,
          <article-title>An AI chatbot for explaining deep reinforcement learning decisions of service-oriented systems</article-title>
          ,
          in:
          <source>ICSOC</source>
          <year>2023</year>
          , volume
          <volume>14419</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>323</fpage>
          -
          <lpage>338</lpage>
          . doi:10.1007/978-3-031-48421-6_22.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehdiyev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fettke</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence for process mining: A general overview and application of a novel local explanation approach for predictive process monitoring</article-title>
          , in:
          <source>Interpretable AI: A Perspective of Granular Computing</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>