<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Design Science Contributions to the LLM-based Augmentation of the BPM Lifecycle Using Metamodel-based Knowledge Graphs as Domain-specific Context Mediators</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Damaris-Naomi Dolha</string-name>
          <email>damaris.dolha@econ.ubbcluj.ro</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Babeş-Bolyai University</institution>
          ,
          <addr-line>Teodor Mihali 58-60, Cluj-Napoca 400591</addr-line>
          ,
          <country country="RO">Romania</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ER2025: Companion Proceedings of the 44th International Conference on Conceptual Modeling: Industrial Track, ER Forum</institution>
          ,
          <addr-line>8th SCME, Doctoral Consortium, Tutorials</addr-line>
          ,
          <institution>Project Exhibitions</institution>
          ,
          <addr-line>Posters and Demos</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>229</fpage>
      <lpage>244</lpage>
      <abstract>
        <p>Large Language Models (LLMs) have emerged as transformative tools across various domains, including Business Process Management (BPM). Recent research agendas suggest that LLMs offer significant potential to augment the phases of the BPM lifecycle. However, integrating LLMs into BPM tools presents technical choices and challenges, not only with respect to how the process logic is interpreted, but also to potential domain-specific contextualization that may be engineered on top of standard process descriptions. This PhD work, currently at the end of its first year, investigates the LLM-BPM lifecycle interaction, with a focus on design-time tasks such as process analysis and redesign. This requires the integration of a Business Process Model and Notation (BPMN) tool with public LLM services to enable the experimental probing of content exchanges between the two; moreover, our conceptualization scope will go beyond standard BPMN, involving layers of domain-specificity added to process descriptions - as found in numerous BPMN extensions, and in multi-perspective modeling methods that employ BPMN for the process perspective. The work will follow the Design Science methodology to implement an experimentation testbed that will enable conversations with LLMs through a BPMN modeling environment - not only about BPMN content, but also pertaining to domain-specific extensions added via metamodeling (DSMLs based on BPMN). Both the BPMN content and domain-specific contexts will be exposed as knowledge graph snippets through a GraphRAG pipeline leveraging diagram-to-RDF converters. In the empirically-focused stages of DSR this tool is intended to help with assessing (a) the performance benefits of the approach against process interpretation by AI visual inspection (computer vision) or by parsing standard XML serializations; (b) the LLM answer sensitivity to metamodeling design decisions (wording and metamodel structure). 
The ADOxx metamodeling platform will be used for tool and pipeline implementation, and the RAGAs framework will be used to assess the quality of generative content (process queries, modeling recommendations). To probe LLM sensitivity, experiments will vary factors such as prompting strategies, semantic subgraph extraction patterns, metamodeling patterns and terminology.</p>
      </abstract>
      <kwd-group>
        <kwd>Business Process Management</kwd>
        <kwd>Context engineering</kwd>
        <kwd>Domain-specific Modeling</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>Graph RAG</kwd>
        <kwd>Knowledge Graphs</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Organizations increasingly adopt AI approaches to analyze and refine their internal processes,
underscoring a need for comprehensive AI-driven frameworks to manage complexity in business process
architectures. In this context, [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] defines Business Process Management (BPM) as a holistic discipline
that, besides enabling the control, improvement and automation of process models, also aligns processes
with strategic objectives, thereby enhancing operational efficiency and organizational agility. However,
M. Rosemann [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] argues that BPM research must evolve beyond exploitative BPM, which has
traditionally focused on optimizing existing processes, towards explorative BPM, which seeks to innovate
processes through emerging technologies. In this regard, one wave of transformative advancements is
that of Large Language Models (LLMs) opening new avenues - not only for classic Natural Language
Processing (NLP) tasks in relation to BPM, but also for reimagining the BPM lifecycle phases; a recent
vision paper [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] applied this lens to revisit the lifecycle and proposed a research agenda that inspired
this thesis work.
      </p>
      <p>Besides empirically probing the ability of public LLMs to interpret process descriptions, the integration
of LLMs into BPM tools also raises artifact-oriented challenges that must be tackled through a Design
Science Research (DSR) approach - i.e. investigating design prescriptions for schemas, formats, protocols
and knowledge structures that should be used for streamlining content between an LLM and a BPMN
tool, for domain-specific context engineering and for assessing the sensitivity of LLM response quality
to various content strategies and context patterns.</p>
      <p>Considering this requirement for engineering-empirical duality, this PhD work aims to develop a
twofold Design Science contribution:
(a) Firstly, a testbed (modeling tool) to facilitate experimentation on the above aspects, through a
BPMN-based DSML (domain-specific modeling language) that captures not only standard BPMN
descriptions but also domain-specific extensions beyond the standard, editable by metamodeling
means; interoperation between this modeling tool and LLMs will be based on RDF graphs to enable
a model-driven Graph RAG (Graph Retrieval-Augmented Generation) pipeline that will support,
via RDF fragments, process-awareness and domain-awareness for improving LLM answer quality
and relevance in domain-specific terms;
(b) Secondly, to employ this testbed for experimentation with bidirectional conversations (i.e. process
queries and modeling suggestions). The planned experiments aim to evaluate the performance
benefits of this model-driven Graph RAG pipeline against a baseline of common process inspection
approaches - visual diagram inspection (via computer vision associated with some LLM services)
or ingestion of standard BPMN XML serializations annotated with unstructured domain-specific
context documents. To formulate prescriptive knowledge as expected in DSR, the experimentation
strategy aims to assess the sensitivity of LLM responses to various design decisions and
interoperability strategies involved in the Graph RAG pipeline: the metamodel terminology, metamodel
design patterns, prompting strategies, "relevant context" subgraph extraction strategies.</p>
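      <p>As an illustrative sketch only - with invented identifiers, plain Python tuples standing in for RDF triples, and a stubbed LLM call in place of an actual service - the content flow intended for contribution (a) can be outlined as follows:</p>
      <preformat>
```python
# Illustrative outline (not the project's actual code) of the model-driven
# Graph RAG flow: diagram -> RDF triples -> relevant subgraph -> prompt ->
# LLM. The ex:/bpmn:/esg: vocabulary and llm_answer() are hypothetical.

# A toy BPMN fragment as subject-predicate-object triples, standing in
# for the RDF export of a modeling tool.
TRIPLES = [
    ("ex:ReceiveOrder", "rdf:type", "bpmn:Task"),
    ("ex:CheckStock", "rdf:type", "bpmn:Task"),
    ("ex:ReceiveOrder", "bpmn:sequenceFlowTo", "ex:CheckStock"),
    ("ex:CheckStock", "esg:carbonImpact", "low"),  # domain-specific layer
]

def extract_subgraph(triples, focus):
    """Keep only triples mentioning the node of interest - a stand-in
    for the 'relevant context' subgraph extraction strategies."""
    return [t for t in triples if focus in (t[0], t[2])]

def build_prompt(subgraph, question):
    """Serialize the subgraph as pseudo-Turtle lines inside the prompt."""
    context = "\n".join(f"{s} {p} {o} ." for s, p, o in subgraph)
    return f"Context:\n{context}\n\nQuestion: {question}"

def llm_answer(prompt):
    # Placeholder for a call to an LLM service (e.g., OpenAI's API).
    return "(LLM response would appear here)"

prompt = build_prompt(extract_subgraph(TRIPLES, "ex:CheckStock"),
                      "Which task follows ReceiveOrder?")
```
      </preformat>
      <p>In the actual pipeline, the triples would come from a diagram-to-RDF converter and the answer from an LLM service; the sketch only fixes the shape of the exchange.</p>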
      <p>
        The intrinsic gap between the unstructured and probabilistic outputs of LLMs and the rigorously
structured demands of business process modeling standards necessitates reconciliation approaches and
a good understanding of their limits. Although LLMs can be instructed using different strategies to
generate and analyze conceptual models, their outputs often require some post-processing to ensure
adherence to formal modeling conventions [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. While prompt engineering alone can leverage pre-trained
LLMs without any fine-tuning, just by wording choices and prompting protocols - it has limitations
in fully communicating formal syntax and semantics of established BPM standards [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. To address
this, recent research agendas [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] highlighted the need for LLM integration with semantic structuring
mechanisms - KGs, upper ontologies or reasoning rulebases - to ensure that process interpretability
is not compromised by the probabilistic nature of LLMs. Therefore, the interplay between LLMs and
the BPM lifecycle calls for new integration patterns and hybridization architectures. As argued by
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a hybrid approach seems more promising than solely training chatbots with specialized process
repositories, preferably leveraging existing modeling knowledge.
      </p>
      <p>In this light, this doctoral research - currently at the end of its first year - aims to investigate the
following question: to what extent can LLM-powered augmentations support specific phases of the
BPM lifecycle by leveraging knowledge graphs as conceptual mediators? To delimit a realistic scope for the work,
we focus on the phases of: Process analysis - via process queries and simulation; and Process (re)design
via domain-specific modeling recommendations. This scope is further refined into research questions
in the next section.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Research Questions</title>
      <p>
        Considering the scoping established above, several research questions are formulated below based on
the DSR taxonomy of RQs proposed by [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]:
      </p>
      <p>
        RQ1 (artifact-focused). How can we implement LLM-powered process analysis in a domain-specific
BPMN tool, so that the LLM can answer both in terms of the standard BPMN conceptualization and the
non-standard domain specificity added to BPMN? This will go beyond the current investigation of LLMs'
ability to enable conversation with BPMN-compliant content, towards exploring whether that capability
extends to open-ended domain-specific contextualization of process descriptions (i.e., DSMLs based on
BPMN) in terms of various semantic patterns - new taxonomies, new attributes, new relationships to
new concepts. Many process-centric DSMLs are BPMN extensions realized by well-established means -
by enriching the taxonomies of tasks, flows, document objects, etc. (see some examples in [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ]),
or by adding entirely new model types that are meaningfully connected to legacy BPMN [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. It cannot be
expected that such metamodel extensions are available in the training phase of commercial or public
LLMs; therefore, domain-specificity must be added by hybridization mechanisms - such as RAG or
ontology-guided prompting [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. We aim to distance this work from the existing approach proposed
in the BPMN-LLM framework of [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and instead leverage as a technical ingredient semantic graphs
(obtained from diagram patterns [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]) and the Graph RAG integration patterns (https://www.ontotext.com/knowledgehub/fundamentals/what-is-graph-rag/), generally reported as
superior to traditional chunking-based RAG [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], since they employ semantic strategies for context
engineering and extraction.
      </p>
      <p>
        This technical decision is founded on the possibility of capturing the domain-specificity of a DSML
through KGs obtained from the RDF enrichment of standard BPMN process diagrams. Even if not yet
supported by a standard, such BPMN-to-RDF converters have been available in several experimental
or didactic modeling tools for some years now [
        <xref ref-type="bibr" rid="ref18">18, 19</xref>
        ]. Our demonstrator will employ the ADOxx
metamodeling platform (https://adoxx.org/) to extend the Bee-Up implementation of BPMN. Therefore, the engineering
effort will leverage semantic graphs as mediators between a BPMN-based DSML and LLM services,
thus providing a technology-specific operationalization of the "mediator role" of conceptual models
advocated by recent agendas in conceptual modeling research [20] and semantics-driven systems
engineering [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Besides acting as a demonstrator for the proposed DSR treatment, the outcome of this
RQ will further provide a testbed for the experimentation-oriented RQs:
      </p>
      <p>RQ2 (empirically-focused). How does this implementation based on Graph RAG perform in
comparison to more straightforward alternatives - i.e., an AI agent performing visual diagram inspection,
or ingesting standard XML process serializations? The first year of this PhD
project has already produced preliminary results, published in [21, 22], comparing LLM answer quality
between BPMN content provided as standard XML and similar content provided as tool-specific RDF
graphs, using the OpenAI integration of Ontotext GraphDB (https://graphdb.ontotext.com/) as interoperability channel. Results suggest
a benefit of open-endedness and relationship traceability in RDF graphs compared to the closed-world
assumption of standard XML schemas.</p>
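      <p>The contrast at stake in RQ2 can be miniaturized as follows - a hypothetical sketch in which invented element ids illustrate the cross-reference indirection of XML serializations versus the self-contained "sentences" of Turtle; real BPMN 2.0 XML and the tool-specific RDF exports are considerably richer:</p>
      <preformat>
```python
# Minimal, invented illustration of the indirection gap probed in RQ2.
import xml.etree.ElementTree as ET

# Standard-style XML serialization: the flow references tasks only via ids.
proc = ET.Element("process", id="p1")
ET.SubElement(proc, "task", id="t1", name="Receive Order")
ET.SubElement(proc, "task", id="t2", name="Check Stock")
ET.SubElement(proc, "sequenceFlow", id="f1", sourceRef="t1", targetRef="t2")

# RDF/Turtle-style payload: the same fact as one self-contained "sentence",
# requiring no cross-reference resolution.
turtle = ":Receive_Order :flowsTo :Check_Stock ."

# Answering "what follows Receive Order?" from the XML requires resolving
# the sourceRef/targetRef indirection, replicated here explicitly.
names = {t.get("id"): t.get("name") for t in proc.iter("task")}
flow = next(proc.iter("sequenceFlow"))
successor = names[flow.get("targetRef")]  # "Check Stock"
```
      </preformat>
      <p>The reported benefit of RDF is precisely that such navigation chains are explicit in the triples rather than reconstructed from id lookups.</p>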
      <p>RQ3 (empirically-focused). How do metamodeling design decisions - from terminological decisions
(i.e., wording of metamodel constructs) to knowledge modeling decisions - impact the LLM ability to
interpret and reason on business processes in the domain-specific context of choice?</p>
      <p>
        Regarding the domain-specificity involved in all the above RQs - i.e., the application domain for
the DSR demonstrator - there are numerous choices of BPMN extensions available in the literature, see
[
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. Since we require open source implementations that can be both edited (for RQ3) and extended
with interoperability components (for RQ2), we are leveraging the open resources made available
through the OMILAB Community of Practice [23], where a large catalog of domain-specific modeling
tools has been built over the years; these tools are editable on the ADOxx metamodeling environment
and include numerous domain-specific flavors of BPMN (as needed for our RQ1). Out of these
options, our current focus is on a BPMN extension for ESG (Environmental-Social-Governance)
policies [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] - a wider tooling project (presented in the Research Project Exhibition at CAiSE 2025: https://github.com/claudenirmf/caise-rpe-2025/blob/main/caiserpe-01.pdf) spanning several PhD projects, where this PhD also participates in
the domain analysis and tool development efforts. Our results are, however, expected to be generalizable
and will also be tested with alternative specificities, to cover a diversity of metamodel and diagram
patterns that have been catalogued in past works in both functional and formal terms [24, 25].
      </p>
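      <p>How metamodel-added domain-specificity may surface in the knowledge graph can be hinted at with an invented, simplified ESG-style example - the esg: terms below are illustrative placeholders, not the actual DSML of [11]:</p>
      <preformat>
```python
# Hypothetical sketch of layering metamodel-level domain specificity
# (an invented ESG-style taxonomy) over standard BPMN triples, so that
# a Graph RAG pipeline can expose both layers to the LLM.
standard_layer = [
    ("ex:ShipGoods", "rdf:type", "bpmn:Task"),
]
domain_layer = [
    # taxonomy enrichment: the task's type is specialized...
    ("esg:TransportTask", "rdfs:subClassOf", "bpmn:Task"),
    ("ex:ShipGoods", "rdf:type", "esg:TransportTask"),
    # ...and new domain attributes/relations are attached
    ("ex:ShipGoods", "esg:emissionsCategory", "esg:Scope3"),
]

def merged_context(*layers):
    """The LLM receives both layers as one knowledge-graph snippet."""
    return [t for layer in layers for t in layer]

context = merged_context(standard_layer, domain_layer)
```
      </preformat>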
    </sec>
    <sec id="sec-3">
      <title>3. Literature Survey on LLMs for BPM</title>
      <p>
        A recent manifesto [26] extends traditional Business Process Management Systems (BPMS) by
integrating AI into every phase of the lifecycle, resulting in AI-Augmented Business Process Management
Systems (ABPMSs). Unlike conventional BPM - where tasks are executed by humans or software
following predetermined logic - ABPMSs are supposed to manifest some level of autonomy in deciding
task execution. To mitigate potential trust concerns for users due to AI’s opaque decision-making
processes, the authors envision a lifecycle that distinguishes between basic (frame, enact, perceive,
reason) and advanced phases (explain, adapt, improve), aimed at making AI decisions more transparent
and understandable. This is in line with similar preoccupations in the systems engineering field, e.g.,
the hybridization in semantics-driven systems engineering [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The earlier agenda of [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] pinpoints how
LLMs impact BPM, from process identification and discovery to monitoring, and how they are expected
to be integrated into commercial products. The proposal of [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] narrows the focus on three phases of the
BPM lifecycle - namely, process discovery, analysis and redesign - examining how chatbots can support
conversational process modeling. Their results show that chatbots perform well in assisting with task
extraction, paraphrasing and refining process descriptions, although they still face challenges in fully
"understanding" process logic and precise meaning from process descriptions. Such developments
have led to a growing body of work exploring the use of LLMs in distinct areas of BPM practice,
notably process generation, process querying and process discovery. The following subsections indicate some
representative contributions and challenges in each of these segments, followed by a shift to our focus
on multi-modal knowledge streamlining - visual, knowledge graphs and prompt engineering.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Prominent BPM Use Cases Employing LLMs</title>
        <p>
          Process querying. An important use case of LLMs within the BPM paradigm is generating natural
language explanations in response to user queries about their business processes. This tends to replace
the tradition of process querying methods [27], in a similar sense to how various querying standards
(Cypher, SPARQL, SQL) are being considered as mediators for natural language inquiries. In this regard,
[
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] proposes and evaluates a RAG and LLM fine-tuning architecture, devoting a significant portion of
their study to the effect of different chunking strategies on the accuracy of LLM-generated outputs.
Recent work also explores how generative chatbots can support process querying by interactively
guiding users through BPMN models. For instance, [28] reports that LLM-powered chatbots can answer
questions about available tasks and process decisions, even though they struggle with complex control
flows and gateways. When it comes to trustworthiness and clarity of AI-generated explanations, [29]
demonstrates that adding process and causal knowledge to LLM prompts improves such
metrics. Explainability is also present from the very first prompt in [30], where ChatGPT
(https://openai.com/index/chatgpt/, using GPT-4o) is asked to describe uploaded BPMN diagrams - before any other steps are taken - noting that
ChatGPT has the ability to visually inspect process models and recognize the BPMN symbols. In our
work, we stay away from visual communication of process designs - first of all, because not all business
process details manifest visually; secondly, because BPMSs rely on model interchange for deterministic
communication of design decisions to inform use cases where stochastic extrapolation is not acceptable
(e.g., automation); finally, because arbitrary domain-specificity added to BPMN would not be recognized
visually, while having the same domain-specificity in semantic graphs makes it interpretable.
        </p>
        <p>
          Process generation. Beyond extracting process elements and relationships directly from textual
narratives [31], recent research also demonstrates that even early LLM versions (such as OpenAI's
GPT-3.5/4) have the potential to generate and even refine, in certain cases, business process diagrams
from textual descriptions [
          <xref ref-type="bibr" rid="ref4">4, 32, 33, 34, 35</xref>
          ] and voice inputs [36]. Nonetheless, some outputs require
external/human refinement to meet semantic correctness and adherence to modeling conventions, a
challenge also noticed in the framework introduced by [37]. Here, the authors further note the need
for more interactive feedback mechanisms to improve the generated BPMN models. [
          <xref ref-type="bibr" rid="ref19">38</xref>
          ] reinforces
the idea that LLMs alone are insufficient for process modeling in a context-agnostic setting. This
study highlights that an effective use of these technologies requires a governance structure, where
human process analysts and AI systems iteratively refine each other’s work. An LLM-powered chatbot
is integrated with an existing modeling environment, pointing to the need of setting up testbeds
with well-identified points for tweaking factors that potentially impact experimentation outcomes - a
mission toward which this thesis also aligns through its DSR approach. This thesis does not pursue
the goal of generating standards-compliant business process models, but rather aims to assist modelers
through recommendations of domain-specific elements to be attached to process models, considering
the adopted domain-specificity represented by a BPMN-centric DSML.
        </p>
        <p>
          Process discovery. At the base of data-driven BPM, [
          <xref ref-type="bibr" rid="ref20">39</xref>
          ] highlights that process mining is evolving
toward AI-augmented process execution. According to [
          <xref ref-type="bibr" rid="ref21">40</xref>
          ], Large Process Models (LPM) - essentially
neuro-symbolic, AI-augmented systems - reconceptualize process mining as an interactive,
human-in-the-loop dialogue mediated by natural language, rather than a static, tool-driven extraction activity.
Preliminary experiments with GPT-4 (where process models are created from operational data) show
that LLMs can interpret process data, answer queries, detect anomalies and suggest improvements
through natural language [
          <xref ref-type="bibr" rid="ref22">41</xref>
          ]. Within this phase, major concerns are detecting anomalies and fairness
assessments [
          <xref ref-type="bibr" rid="ref23">42</xref>
          ], with results supporting the idea that commercial LLMs are generally capable of
supporting analytics tasks involved in process discovery. This thesis does not currently aim to pursue a
process mining research stream, although we are open to the possibility that the modeling assistance
recommendation use case may branch out towards assessing the analytics capabilities of LLMs.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Prompt Engineering and Knowledge Graphs in BPM</title>
        <p>
          In order to get the best of LLMs within BPM, studies such as [
          <xref ref-type="bibr" rid="ref24 ref25 ref26">43, 44, 45</xref>
          ] confirm the critical role of
prompt engineering, particularly in complex tasks requiring structured responses. Prompt engineering
was investigated for BPM applications [
          <xref ref-type="bibr" rid="ref26">45</xref>
          ] to improve semantic completeness in process models - and in
the context of KGs [
          <xref ref-type="bibr" rid="ref27">46</xref>
          ] to enhance knowledge extraction and reasoning. Several challenges remain open
to investigation, especially with the fast evolution of LLM capabilities: (a) prompt sensitivity, where
minor variations in wording significantly alter outputs [
          <xref ref-type="bibr" rid="ref25 ref27">44, 46</xref>
          ] - an approach where this thesis will
compare the sensitivity to terminological choices and schemas governing graphical process models; (b)
domain adaptation difficulties, as general-purpose prompting struggles to capture nuance and requires
specialized strategies - an issue investigated in this work by a metamodeling approach to injecting
domain-specificity to process descriptions; and (c) prompting strategy complexity [
          <xref ref-type="bibr" rid="ref26">45</xref>
          ].
        </p>
        <p>
          These challenges underscore that instruction-based prompts alone are insufficient for ensuring reliable
LLM-generated outputs. Knowledge injection techniques have been proposed as a means to improve
assertions while decreasing hallucinated outputs [
          <xref ref-type="bibr" rid="ref28">47</xref>
          ]. Moreover, in [32], the Process Knowledge Graph
serves as a centralized, dynamic repository of process-related knowledge, for context-aware workflow
generation, ensuring that AI-driven process recommendations align with evolving organizational
requirements. The paper [
          <xref ref-type="bibr" rid="ref29">48</xref>
          ] further emphasizes the need for structured semantic grounding through
KGs. However, to maximize the effectiveness of KGs, it is essential to filter them by subgraph extraction
mechanisms to build relevant context patterns - the presence of noisy triples can mislead LLMs even
when effective prompting techniques are used [
          <xref ref-type="bibr" rid="ref30">49</xref>
          ]. A popular Graph RAG pattern employs metadata
filtering based on user queries, so that only pertinent information is retrieved [
          <xref ref-type="bibr" rid="ref31">50</xref>
          ]. The integration
of AI-driven functionalities in prominent BPM commercial tools like ADONIS (https://www.boc-group.com/en/blog/bpm/adonis-and-ai/) and SAP Signavio (https://www.signavio.com/process-ai/)
exemplifies industry-grade approaches that rely on fine-tuning, whereas we aim to investigate the inner
workings of LLM-BPM tool interactions with KGs as mediators, potentially enabling an open-endedness
to domain-specificity added via metamodeling.
        </p>
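        <p>A minimal sketch of such a subgraph extraction strategy - a k-hop neighborhood filter over invented triples, standing in for the SPARQL-based extraction that a GraphDB-backed pipeline would perform - is given below:</p>
        <preformat>
```python
# Sketch (with invented data) of neighborhood-based subgraph extraction:
# starting from a node of interest, collect triples up to k hops away,
# filtering out the rest so that noisy triples do not reach the prompt.
def k_hop_subgraph(triples, seed, k):
    frontier, selected = {seed}, []
    for _ in range(k):
        hop = [t for t in triples if (t[0] in frontier or t[2] in frontier)
               and t not in selected]
        selected.extend(hop)
        frontier |= {x for t in hop for x in (t[0], t[2])}
    return selected

TRIPLES = [
    ("ex:A", "bpmn:flowsTo", "ex:B"),
    ("ex:B", "bpmn:flowsTo", "ex:C"),
    ("ex:C", "bpmn:flowsTo", "ex:D"),
    ("ex:X", "bpmn:flowsTo", "ex:Y"),  # noise, unrelated to the seed
]

one_hop = k_hop_subgraph(TRIPLES, "ex:A", 1)  # only A's direct neighborhood
two_hop = k_hop_subgraph(TRIPLES, "ex:A", 2)  # reaches C via B, excludes X-Y
```
        </preformat>
        <p>Varying k and the filtering predicate is one of the "relevant context" design factors the planned experiments intend to probe.</p>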
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Research Methodology</title>
      <p>
        The DSR process [
        <xref ref-type="bibr" rid="ref32 ref33">51, 52</xref>
        ] is particularly suited for artifact-oriented research. A specific tuning of it will
manifest in the Design &amp; Development phase, which will be delegated to a traditional modeling tool
engineering methodology - AMME (Agile Modeling Method Engineering) [
        <xref ref-type="bibr" rid="ref34">53</xref>
        ]. This is due to the intended
use of the ADOxx metamodeling platform to add domain-specificity to the BPMN implementation
available in the Bee-Up tool (https://bee-up.omilab.org/activities/bee-up/) and of the ADOxx-to-RDF diagram converters available from previous
projects [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Figure 1 shows the architectural vision of the model-driven Graph RAG pipeline, with (a)
the Bee-Up extension as a domain-specific enrichment of BPMN (the ESG-flavored implementation in
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] being a primary candidate in the initial DSR iteration) and (b) Python orchestrating the Graph RAG
pipeline elements, with Ontotext GraphDB and OpenAI as KG store and as LLM service, respectively.
While a fast evolution of LLM services is expected, the domain-specificity of choice will always be a
non-standard conceptualization, whereas BPMN is already fairly well known to public LLMs.
      </p>
      <p>
        In the Evaluation phase, this work will contribute a protocol for evaluating the LLM response/content
quality relative to ground truths and in relation to several varying design factors - metamodel patterns
and terminology, prompting complexities and subgraph extraction strategies. Correctness criteria
will be based on the RAGAs [
        <xref ref-type="bibr" rid="ref35">54</xref>
        ] framework, which defines computable metrics such as Response
Relevancy (how well the answer aligns with the user’s question intent while penalizing irrelevant
information), Factual Correctness (factual overlap against a ground truth) and others. Performance
and cost of the pipeline relative to OpenAI or similar services will also be considered as part of the
evaluation. There is no plan to evaluate the actual DSML involved in experimentation, since the
thesis is not a language/notation development project. We are interested in building a domain-specific
context engineering flow between visual process modeling and LLM agents/services, and domains
will be switched for comparative experimentation of semantic patterns, to formulate a notion of "LLM
readiness" for BPMN-based languages.
      </p>
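      <p>As a deliberately simplified, stdlib-only stand-in for such ground-truth-based scoring - the actual evaluation will rely on the RAGAs library's metrics, while the F1-style token overlap below merely illustrates the shape of a Factual-Correctness-like comparison:</p>
      <preformat>
```python
# Illustrative proxy only: an F1-style token overlap between an LLM answer
# and a ground truth. This is NOT the RAGAs Factual Correctness metric,
# just a sketch of comparing generated content against a reference.
def token_f1(answer: str, ground_truth: str) -> float:
    a, g = set(answer.lower().split()), set(ground_truth.lower().split())
    overlap = len(a.intersection(g))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(a), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

score = token_f1("Check Stock follows Receive Order",
                 "the task Check Stock follows Receive Order")
```
      </preformat>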
    </sec>
    <sec id="sec-5">
      <title>5. Preliminary Efforts</title>
      <p>
        The first steps developed in the first year of the PhD program [21, 22] looked at how OpenAI's 2024
models (GPT-3/4) interpreted BPMN models ingested as RDF graphs exported from Bee-Up (via
Ontotext GraphDB) versus standard XML diagrams exported from the Signavio toolkit. Experimental
results suggest that semantic graphs are more amenable to interpretation and navigation along chains of
relationships, even when using a non-standard terminology - possibly due to the intricate network of
XML cross-references involved in answering questions that must combine navigation of connectors,
containments, data annotations or inter-model hyperlinks (e.g., subprocess links). RDF
representations as Turtle "sentences" more effectively preserve all properties, whereas with XML the generative
content is more prone to hallucinate, even extrapolating business narratives from labels present in
diagrams. Experiments were, however, limited in design conditions and imprecise in concern separation,
as well as in evaluating the LLM response quality, while LLMs are quickly evolving and improving
on all evaluated aspects. During such experimentation efforts we arrived at the metrics offered by the
RAGAs framework (and Python library for computing them) - to be further involved in measuring
the quality of generative outputs. A more systematic assessment protocol will evolve from these early
steps, also incorporating the variation factors we are interested in - i.e., sensitivity of LLM responses
to metamodeling design decisions (wording/terminology as well as metamodel patterns) and
communication strategies (prompting structure, the subgraph patterns around a graph node of interest as a
replacement for "chunks" in traditional RAG). Prompting strategies are being designed along the TELeR
prompt taxonomy [
        <xref ref-type="bibr" rid="ref24">43</xref>
        ] to probe the capabilities of reasoning and traceability, first limited to the BPMN
standard, then extended with non-standard domain-specific taxonomies, looking towards managing
BPMN repositories as knowledge graphs, a recent proposition of [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
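      <p>The idea of varying prompt detail can be sketched as follows - loosely inspired by the TELeR taxonomy's notion of increasing levels (the exact level definitions are those of [43]; the level wording below is an invented illustration for a process-query task):</p>
      <preformat>
```python
# Invented illustration of increasing prompt detail levels for a
# process-query task; not the actual TELeR level definitions of [43].
def prompt_at_level(level: int, context: str, question: str) -> str:
    parts = [f"Context:\n{context}", f"Question: {question}"]
    if level >= 1:
        parts.insert(0, "You are an assistant for BPMN process analysis.")
    if level >= 2:
        parts.append("Answer only from the given context; "
                     "say 'unknown' if the context is insufficient.")
    if level >= 3:
        parts.append("Explain your reasoning step by step, citing the "
                     "triples you used.")
    return "\n\n".join(parts)

p0 = prompt_at_level(0, ":A :flowsTo :B .", "What follows A?")
p3 = prompt_at_level(3, ":A :flowsTo :B .", "What follows A?")
```
      </preformat>
      <p>Answer quality at each level can then be scored against ground truths, so that prompt detail becomes one controlled factor among the metamodeling and subgraph-extraction factors discussed above.</p>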
      <p>
        In terms of content subjected to the GraphRAG pipeline, current work focused only on basic BPMN,
for which we defined a collection of minimalist workflow patterns involving a diversity of relationships
present in BPMN, considering the most common BPMN usage trends reported in [
        <xref ref-type="bibr" rid="ref36">55</xref>
        ]. In parallel, the ESG extension
to BPMN reported in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] has started development as a distinct DSML engineering project, to be adopted as
an application case in future work (as also suggested by the report in the Research Project Exhibition
of CAiSE 2025: https://github.com/claudenirmf/caise-rpe-2025/blob/main/caise-rpe-01.pdf).
      </p>
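      <p>A minimal sketch of how such basic BPMN content can be flattened into graph triples for the pipeline, assuming a reduced BPMN fragment with only tasks and sequence flows (the element names follow the BPMN 2.0 namespace; the predicate names are illustrative, not the project's actual vocabulary):</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal BPMN fragment (illustrative, not a complete BPMN 2.0 file).
BPMN = """
<process xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <task id="t1" name="Check Order"/>
  <task id="t2" name="Approve Order"/>
  <sequenceFlow id="f1" sourceRef="t1" targetRef="t2"/>
</process>
"""

NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

def bpmn_to_triples(xml_text):
    """Flatten tasks and sequence flows into (subject, predicate, object)
    triples - the shape later loaded into a knowledge graph."""
    root = ET.fromstring(xml_text)
    triples = []
    for task in root.iter(NS + "task"):
        triples.append((task.get("id"), "rdf:type", "bpmn:Task"))
        triples.append((task.get("id"), "rdfs:label", task.get("name")))
    for flow in root.iter(NS + "sequenceFlow"):
        triples.append((flow.get("sourceRef"), "bpmn:flowsTo", flow.get("targetRef")))
    return triples

triples = bpmn_to_triples(BPMN)
```

      <p>Each minimalist workflow pattern in the collection can be converted this way, yielding comparable graph snippets regardless of which BPMN relationship the pattern exercises.</p>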
    </sec>
    <sec id="sec-6">
      <title>6. Future Work</title>
      <p>This work follows a DSR process to build a knowledge pipeline between BPMN environments and
LLM services, with a focus on how domain-specific process contextualizations can contribute to that
interaction. Instead of relying on AI capabilities for visual inspection or for the interpretation of
standard XML serializations, we are moving to semantic graphs as a mediator between metamodel-enriched
BPMN and LLMs interpreting those enrichments. To this end, the development of a GraphRAG pipeline
ensuring streamlined interoperability between a metamodeling environment and an LLM is the priority of
the next phase of the project. This will leave the second half of the PhD program for experimenting with
and benchmarking the quality of LLMs’ generative content in relation to various design decisions of BPMN
extensions, and to how these are exposed to LLMs in use cases based on querying or modeling assistance.</p>
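      <p>As a minimal sketch of the intended mediation step (an assumed structure, since the actual pipeline is still future work), a retrieved graph snippet can be verbalized into the grounded context of a prompt, so that the LLM is asked to answer from the supplied graph rather than from its parametric memory. The fact syntax and instruction wording below are illustrative:</p>

```python
def graph_prompt(triples, question):
    """Serialize a subgraph snippet into a grounded prompt, constraining
    the LLM to answer from the supplied graph context."""
    facts = "\n".join(f"{s} {p} {o} ." for s, p, o in sorted(triples))
    return (
        "You are given a BPMN process as subject-predicate-object facts:\n"
        + facts
        + "\nAnswer using only these facts.\nQuestion: " + question
    )

# A snippet as would be retrieved around a node of interest.
snippet = {("CheckOrder", "flowsTo", "ApproveOrder"),
           ("CheckOrder", "performedBy", "Clerk")}
prompt = graph_prompt(snippet, "Who performs the task before approval?")
```

      <p>Varying the fact wording and the instruction framing in such templates is precisely what the planned benchmarking of metamodeling and communication design decisions would measure.</p>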
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This PhD work is supervised by Prof. Dr. Robert Andrei Buchmann at Babeş-Bolyai University,
Romania, in the Information Systems domain.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>[19] OMiLAB NPO, The Bee-Up Modeling Tool, n.d. URL: https://bee-up.omilab.org/activities/bee-up/.
[20] J. Recker, R. Lukyanenko, M. Jabbari, B. M. Samuel, A. Castellanos, From Representation to Mediation: A New Agenda for Conceptual Modeling Research in a Digital World, MIS Quarterly 45 (2021) 269–300. doi:10.25300/MISQ/2021/16027.
[21] D. N. Dolha, R. A. Buchmann, Experiments with natural language queries on RDF vs. XML-serialized BPMN diagrams, in: Proceedings of KES 2024, volume 246, 2024, pp. 3246–3255. doi:10.1016/j.procs.2024.09.315.
[22] D. N. Dolha, R. A. Buchmann, Generative AI for BPMN Process Analysis: Experiments with Multi-modal Process Representations, in: Proceedings of BIR 2024, volume 529 of LNBIP, Springer, 2024, pp. 19–35. doi:10.1007/978-3-031-71333-0_2.
[23] The OMiLAB Community, Development of Conceptual Models and Realization of Modelling Tools Within the ADOxx Meta-Modelling Environment: A Living Paper, in: Domain-Specific Conceptual Modeling, Springer, 2022, pp. 23–40. doi:10.1007/978-3-030-93547-4_2.
[24] R. A. Buchmann, D. Karagiannis, Pattern-based Transformation of Diagrammatic Conceptual Models for Semantic Enrichment in the Web of Data, in: Proceedings of KES 2015, volume 60, 2015, pp. 150–159. doi:10.1016/j.procs.2015.08.114.
[25] H.-G. Fill, T. Redmond, D. Karagiannis, Formalizing Meta Models with FDMM: The ADOxx Case, in: Proceedings of ICEIS 2012, volume 141 of LNBIP, Springer, 2013, pp. 429–451. doi:10.1007/978-3-642-40654-6_26.
[26] M. Dumas, et al., AI-augmented Business Process Management Systems: A Research Manifesto, ACM Trans. Manage. Inf. Syst. 14 (2023). doi:10.1145/3576047.
[27] A. Polyvyanyy, Process Querying: Methods, Techniques, and Applications, in: Process Querying Methods, Springer, 2022, pp. 511–524. doi:10.1007/978-3-030-92875-9_18.
[28] L. F. Lins, N. Nascimento, P. Alencar, T. Oliveira, D. Cowan, Comparing Generative Chatbots Based on Process Requirements: A Case Study, in: Proceedings of IEEE Big Data 2023, IEEE, 2023, pp. 4664–4673. doi:10.1109/BigData59044.2023.10386251.
[29] D. Fahland, F. Fournier, L. Limonad, I. Skarbovsky, A. J. E. Swevels, How well can a large language model explain business processes as perceived by users?, Data &amp; Knowledge Engineering 157 (2025) 102416. doi:10.1016/j.datak.2025.102416.
[30] V. Niculescu, M.-C. Chisăliţă-Creţu, C.-C. Osman, A. Sterca, Model-Driven Development Using LLMs: The Case of ChatGPT, in: Proceedings of ENASE 2025, SciTePress, 2025, pp. 328–339. doi:10.5220/0013484400003928.
[31] P. Bellan, M. Dragoni, C. Ghidini, Extracting Business Process Entities and Relations from Text Using Pre-trained Language Models and In-Context Learning, in: Proceedings of EDOC 2022, volume 13585 of LNCS, Springer, 2022, pp. 182–199. doi:10.1007/978-3-031-17604-3_11.
[32] A. Beheshti, J. Yang, Q. Z. Sheng, B. Benatallah, F. Casati, S. Dustdar, H. R. M. Nezhad, X. Zhang, S. Xue, ProcessGPT: Transforming Business Process Management with Generative Artificial Intelligence, in: Proceedings of ICWS 2023, IEEE CS, 2023, pp. 731–739. doi:10.1109/ICWS60048.2023.00099.
[33] H. Kourani, A. Berti, D. Schuster, W. M. P. van der Aalst, ProMoAI: Process Modeling with Generative AI, in: Proceedings of IJCAI 2024, ACM Press, 2024, pp. 8708–8712. doi:10.24963/ijcai.2024/1014.
[34] J. Silva, Q. Ma, J. Cabot, P. Kelsen, H. A. Proper, Application of the Tree-of-Thoughts Framework to LLM-Enabled Domain Modeling, in: Proceedings of ER 2024, volume 15238 of LNCS, Springer, 2025, pp. 94–111. doi:10.1007/978-3-031-75872-0_6.
[35] M. Grohs, L. Abb, N. Elsayed, J. R. Rehse, Large Language Models Can Accomplish Business Process Management Tasks, in: Proceedings of BPM 2023 Workshops, volume 492 of LNBIP, Springer, 2024, pp. 453–465. doi:10.1007/978-3-031-50974-2_34.
[36] J. Köpke, A. Safan, Introducing the BPMN-Chatbot for Efficient LLM-Based Process Modeling, in: Proceedings of BPM 2024 co-located events, CEUR-WS, 2024. URL: https://ceur-ws.org/Vol-3758/paper-15.pdf.
[37] H. Kourani, A. Berti, D. Schuster, W. M. P. van der Aalst, Process Modeling with Large Language</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] M. Dumas, M. La Rosa, J. Mendling, H. A. Reijers, Fundamentals of Business Process Management, 2nd ed., Springer, 2018. doi:10.1007/978-3-662-56509-4.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] T. Kohlborn, O. Mueller, J. Poeppelbuss, M. Roeglinger, Interview with Michael Rosemann on ambidextrous business process management, Bus. Proc. Manage. J. 20 (2014) 634–638. doi:10.1108/BPMJ-02-2014-0012.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. Vidgof, S. Bachhofner, J. Mendling, Large Language Models for Business Process Management: Opportunities and Challenges, in: Proceedings of BPM 2023 Forum, volume 490 of LNBIP, Springer, 2023, pp. 107–123. doi:10.1007/978-3-031-41623-1_7.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] H.-G. Fill, P. Fettke, J. Köpke, Conceptual Modeling and Large Language Models: Impressions From First Experiments With ChatGPT, EMISAJ 18 (2023). doi:10.18417/emisa.18.3.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] K. Busch, A. Rochlitzer, D. Sola, H. Leopold, Just Tell Me: Prompt Engineering in Business Process Management, in: Proceedings of BPMDS EMMSAD 2023, volume 479 of LNBIP, Springer, 2023, pp. 3–11. doi:10.1007/978-3-031-34241-7_1.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] R. Buchmann, J. Eder, H.-G. Fill, U. Frank, D. Karagiannis, E. Laurenzi, J. Mylopoulos, D. Plexousakis, M. Y. Santos, Large language models: Expectations for semantics-driven systems engineering, Data &amp; Knowledge Engineering 152 (2024) 102324. doi:10.1016/j.datak.2024.102324.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] N. Klievtsova, J. V. Benzin, T. Kampik, J. Mangler, S. Rinderle-Ma, Conversational Process Modelling: State of the Art, Applications, and Implications in Practice, in: Proceedings of BPM 2023 Forum, volume 490 of LNBIP, Springer, 2023, pp. 319–336. doi:10.1007/978-3-031-41623-1_19.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] N. H. Thuan, A. Drechsler, P. Antunes, Construction of Design Science Research Questions, Communications of the AIS 44 (2019). doi:10.17705/1CAIS.04420.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] F. Corradini, F. Fornari, S. Pettinari, B. Re, L. Rossi, F. Tiezzi, A BPMN-Based Approach for IoT Systems Engineering, in: Fluidware, Internet of Things, Springer, 2024, pp. 85–105. doi:10.1007/978-3-031-62146-8_5.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] A. Chiş, A.-M. Ghiran, BPMN Extension for Multi-Protocol Data Orchestration, in: Domain-Specific Conceptual Modeling, Springer, 2022, pp. 639–656. doi:10.1007/978-3-030-93547-4_28.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] C.-C. Osman, A.-M. Ghiran, R. A. Buchmann, Towards a Knowledge Management Capability for ESG Accounting with the Help of Enterprise Modeling and Knowledge Graphs, in: Proceedings of PoEM 2024 co-located events, CEUR-WS, 2024. URL: https://ceur-ws.org/Vol-3855/forum12.pdf.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Ş. Uifălean, R. A. Buchmann, Low-Code Browser Front-End Automation Using RDF Graphs and a Domain-Specific Language for UX Representation, in: Proceedings of RCIS 2025, volume 547 of LNBIP, Springer, 2025, pp. 140–155. doi:10.1007/978-3-031-92474-3_9.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Chiş, A Modeling Method for Work Systems Knowledge Capture and Traceability, in: Proceedings of CAiSE 2025, volume 557 of LNBIP, Springer, 2025, pp. 239–246. doi:10.1007/978-3-031-94590-8_29.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] V. I. Iga, G. C. Silaghi, Ontology-Based Dialogue System for Domain-Specific Knowledge Acquisition, in: Proceedings of ISD 2023, AIS, 2023. doi:10.62036/ISD.2023.46.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] M. L. Bernardi, A. Casciani, M. Cimitile, A. Marrella, Conversing with Business Process-Aware Large Language Models: The BPLLM Framework, J. Intell. Inf. Syst. 62 (2024) 1607–1629. doi:10.1007/s10844-024-00898-1.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] M. Cinpoeru, A.-M. Ghiran, A. Harkai, R. A. Buchmann, D. Karagiannis, Model-Driven Context Configuration in Business Process Management Systems: An Approach Based on Knowledge Graphs, in: Proceedings of BIR 2019, volume 365 of LNBIP, Springer, 2019, pp. 189–203. doi:10.1007/978-3-030-31143-8_14.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] T. C. Nemţoc, A.-M. Ghiran, Natural Language Querying of Invoice Data Using RAG and GraphRAG: Leveraging LLMs for Financial Document Insights, in: Proceedings of CAiSE 2025 Workshops, volume 556 of LNBIP, Springer, 2025, pp. 69–80. doi:10.1007/978-3-031-94931-9_6.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] S. Bachhofner, E. Kiesling, K. Revoredo, P. Waibel, A. Polleres, Automated process knowledge graph construction from BPMN models, in: Proceedings of DEXA 2022, volume 13426 of LNCS, Springer, 2022, pp. 32–47. doi:10.1007/978-3-031-12423-5_3.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[38] C. Ziche, G. Apruzzese, LLM4PM: A Case Study on Using Large Language Models for Process Modeling in Enterprise Organizations, in: Proceedings of BPM 2024 Forum, volume 527 of LNBIP, Springer, 2024, pp. 472–483. doi:10.1007/978-3-031-70445-1_35.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[39] D. Chapela-Campa, M. Dumas, From process mining to augmented process execution, Software and Systems Modeling 22 (2023) 1977–1986. doi:10.1007/s10270-023-01132-2.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[40] T. Kampik, et al., Large Process Models: A Vision for Business Process Management in the Age of Generative AI, Künstliche Intelligenz 39 (2025) 81–95. doi:10.1007/s13218-024-00863-8.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[41] A. Berti, D. Schuster, W. M. P. van der Aalst, Abstractions, Scenarios, and Prompt Definitions for Process Mining with LLMs: A Case Study, in: Proceedings of BPM 2023 Workshops, volume 492 of LNBIP, Springer, 2024, pp. 427–439. doi:10.1007/978-3-031-50974-2_32.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[42] A. Berti, H. Kourani, W. M. P. van der Aalst, PM-LLM-Benchmark: Evaluating Large Language Models on Process Mining Tasks, in: Proceedings of ICPM 2024 Workshops, volume 533 of LNBIP, Springer, 2025, pp. 610–623. doi:10.1007/978-3-031-82225-4_45.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[43] S. K. Karmaker, D. Feng, TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks, in: Proceedings of EMNLP 2023, ACL, 2023, pp. 14197–14203. URL: https://aclanthology.org/2023.findings-emnlp.946.pdf.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[44] S. Schulhoff, et al., The Prompt Report: A Systematic Survey of Prompt Engineering Techniques, 2024. arXiv:2406.06608.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[45] S. Ayad, F. Alsayoud, Prompt engineering techniques for semantic enhancement in business process models, Bus. Proc. Manage. J. 30 (2024) 2611–2641. doi:10.1108/BPMJ-02-2024-0108.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>V. I. R.</given-names>
            <surname>Iga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. C.</given-names>
            <surname>Silaghi</surname>
          </string-name>
          ,
          <article-title>LLMs for Knowledge-Graphs Enhanced Task-Oriented Dialogue Systems: Challenges and Opportunities</article-title>
          ,
          <source>in: Proceedings of CAiSE 2024 Workshops</source>
          , Springer,
          <year>2024</year>
          , pp.
          <fpage>168</fpage>
          -
          <lpage>179</lpage>
          . doi: 10.1007/978-3-031-61003-5_15.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>A.</given-names>
            <surname>Martino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Iannelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Truong</surname>
          </string-name>
          ,
          <article-title>Knowledge Injection to Counter Large Language Model (LLM) Hallucination</article-title>
          ,
          <source>in: Proceedings of ESWC 2023 Satellite Events</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>182</fpage>
          -
          <lpage>185</lpage>
          . doi: 10.1007/978-3-031-43458-7_34.
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>B.</given-names>
            <surname>Reitemeyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-G.</given-names>
            <surname>Fill</surname>
          </string-name>
          ,
          <article-title>Applying Large Language Models in Knowledge Graph-based Enterprise Modeling: Challenges and Opportunities</article-title>
          (
          <year>2025</year>
          ). arXiv:2501.03566.
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>KnowGPT: Knowledge Graph based Prompting for Large Language Models</article-title>
          ,
          <source>in: Proceedings of NeurIPS 2024</source>
          , volume
          <volume>37</volume>
          , Curran Associates Inc.,
          <year>2025</year>
          , pp.
          <fpage>6052</fpage>
          -
          <lpage>6080</lpage>
          . URL: https://neurips.cc/virtual/2024/poster/95299.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [50] GraphRAG, GraphRAG Metadata Filtering,
          <year>2024</year>
          . URL: https://graphrag.com/reference/graphrag/metadata-filtering/.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Wieringa</surname>
          </string-name>
          ,
          <source>Design Science Methodology for Information Systems and Software Engineering</source>
          , Springer,
          <year>2014</year>
          . doi: 10.1007/978-3-662-43839-8.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>K.</given-names>
            <surname>Peffers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tuunanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Rothenberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <article-title>A Design Science Research Methodology for Information Systems Research</article-title>
          ,
          <source>J. Manage. Inf. Syst.</source>
          <volume>24</volume>
          (
          <year>2007</year>
          )
          <fpage>45</fpage>
          -
          <lpage>77</lpage>
          . doi: 10.2753/MIS0742-1222240302.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>D.</given-names>
            <surname>Karagiannis</surname>
          </string-name>
          ,
          <article-title>Agile modeling method engineering</article-title>
          ,
          <source>in: Proceedings of PCI 15</source>
          , ACM Press,
          <year>2015</year>
          , pp.
          <fpage>5</fpage>
          -
          <lpage>10</lpage>
          . doi: 10.1145/2801948.2802040.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>S.</given-names>
            <surname>Es</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>James</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Espinosa Anke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schockaert</surname>
          </string-name>
          ,
          <article-title>RAGAs: Automated evaluation of retrieval augmented generation</article-title>
          ,
          <source>in: Proceedings of EACL 2024</source>
          , ACL,
          <year>2024</year>
          , pp.
          <fpage>150</fpage>
          -
          <lpage>158</lpage>
          . doi: 10.18653/v1/2024.eacl-demo.16.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>I.</given-names>
            <surname>Compagnucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Corradini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fornari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Re</surname>
          </string-name>
          ,
          <article-title>A Study on the Usage of the BPMN Notation for Designing Process Collaboration, Choreography, and Conversation Models</article-title>
          ,
          <source>Bus. Inf. Syst. Eng.</source>
          <volume>66</volume>
          (
          <year>2024</year>
          )
          <fpage>43</fpage>
          -
          <lpage>66</lpage>
          . doi: 10.1007/s12599-023-00818-7.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>