<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>October</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Proposal For Handling Query Ambiguity For Process Mining Tasks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lucas Fortunato Das Neves</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chrysoula Zerva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Gianola</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INESC-ID/Instituto Superior Técnico, Universidade de Lisboa</institution>
          ,
          <addr-line>Lisbon</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Instituto de Telecomunicações/Instituto Superior Técnico, Universidade de Lisboa</institution>
          ,
          <addr-line>Lisbon</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>26</volume>
      <issue>2025</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
<p>Process mining plays a critical role in extracting insights from organizational processes by analyzing event log data. However, it requires the engagement of domain experts and process analysts. While Large Language Models have enabled conversational agents for process mining tasks, reducing dependence on analysts, their adoption introduces new challenges. Users without an understanding of process mining often formulate ambiguous or ill-defined queries due to limited specific knowledge, hindering accurate analysis. To address this, we propose multiple approaches that can be further studied to reduce ambiguity by relying on AI techniques like Retrieval-Augmented Generation and Chain-of-Thought. By reducing ambiguity, human interaction with conversational agents becomes more intuitive, further bridging the existing gap. We also introduce the datasets Query-PM-LLM and AmbQuery-PM-LLM, which can be used as benchmarks for future conversational agents capable of solving process mining tasks.</p>
      </abstract>
      <kwd-group>
<kwd>Business Process Management</kwd>
        <kwd>Process Mining</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>Conversational Agents</kwd>
        <kwd>Ambiguity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Business processes are structured “chains of events, activities and decisions” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], where activities
are performed by some actors, and decisions can involve these actors and possible object artifacts.
They are the backbone of how organizations deliver value, whether it be producing goods, offering
services, or achieving internal objectives. Business Process Management (BPM) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is a well-established
research field whose main objective is to help organizations achieve their organizational goals. BPM
provides techniques, formal methods, and tools to analyze, design, implement, monitor, and optimize
business processes for greater efficiency and alignment with business objectives. To effectively map
and communicate processes, organizations leverage process modeling languages, such as the BPMN
standard. Despite the advantages associated with modeling business processes, there exists a disconnect
between how processes are documented and how they are actually executed, since the modeling
may fail to capture inherent deviations of real-world workflows. Consequently, organizations also rely
on data to uncover the actual process execution, leveraging automated techniques from process mining.
      </p>
      <p>
        Process mining [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] combines data science and process science to automatically extract information from event
logs for the identification of inefficiencies, bottlenecks, and deviations. It can be both
backward-looking, such as uncovering the root cause of a problem, and forward-looking, like predicting
processing times or suggesting improvements [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. One can exploit process mining tools and reasoners
to infer process behaviors given the event log. Reasoners are tools that perform automated logical
reasoning in order to solve process mining tasks. These include, but are not limited to, process discovery
(discovering a process model from the event log) and conformance checking (comparing a given model and
an event log to determine whether there are any discrepancies between them). For both tasks, various
approaches exploiting symbolic reasoning have been proposed, such as using SAT [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] solving, Satisfiability Modulo
Theories [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and planning [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
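      <p>To make the two tasks concrete, the following minimal Python sketch (our own illustration, not one of the symbolic SAT/SMT/planning approaches cited above) treats the "model" as the set of directly-follows pairs mined from a toy log and checks whether a trace conforms to it:</p>
      <p>
```python
# Toy illustration of process discovery and conformance checking.
# The "discovered model" here is simply the set of allowed
# directly-follows pairs mined from the event log.

def directly_follows(log):
    """Mine the set of directly-follows pairs (a, b) from a list of traces."""
    pairs = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            pairs.add((a, b))
    return pairs

def conforms(trace, model_pairs):
    """A trace conforms if every consecutive step is allowed by the model."""
    return all(pair in model_pairs for pair in zip(trace, trace[1:]))

log = [["register", "triage", "treat"], ["register", "treat"]]
model = directly_follows(log)  # simple "discovered" model
print(conforms(["register", "triage", "treat"], model))  # True
print(conforms(["triage", "register"], model))           # False
```
      </p>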
      <p>
        Large Language Models (LLMs) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] have revolutionized AI, enabling advanced understanding and
generation of human language, and facilitating interaction with humans in a seamless, more
user-friendly manner. Hence, to allow domain experts to make better use of business processes without the
assistance of process analysts and to bridge the gap among their different competencies, conversational
agents that can solve specific process mining tasks have already been developed [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        The reliance on conversational agents leads to another set of challenges, since the inherent ambiguity
of natural language can result not only in misinterpretation of user intents, but also in hallucinations or
biased responses, further complicating the analysis [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. For example, if a user prompts a model with
"Could you enumerate the activities that occur in the Sepsis Cases - Event Log?", the model knows it has
to provide a list of activities. However, if the user prompts the same model with "Tell me what happens
in the sepsis data.", the model might be uncertain about the type of the expected answer (i.e., the user
might desire the list of activities that happen in the Sepsis Cases - Event Log or a list of conformant
traces that exemplify typical behavior). This parallel between the inherent flexibility and potential
for multiple interpretations of natural language and formal systems has garnered interest even from
philosophers, who have always prioritized logical representation and precise meaning (see https://plato.stanford.edu/entries/ambiguity/).
      </p>
      <p>Aimed at disambiguating queries provided by non-expert users who may lack the technical knowledge,
we propose a set of methodologies that can increase the robustness of conversational agents that assist
in process mining tasks. This robustness will come from the deployment of AI techniques that can
help disambiguate user queries, either by relying on additional information (Retrieval-Augmented
Generation Large Language Models, RAG-LLM) or by relying on reasoning (Chain-of-Thought, CoT).
In Section 3, we outline different pathways that generally start by identifying ambiguity once the user
query is received, proceed to disambiguating the query if necessary, and finish by providing an answer.
Given the multitude of ways LLMs are being applied in process mining and considering the multiple
approaches we outline to disambiguate questions provided by users, the expected contributions are:
- EC1: A way to disambiguate the user query by relying on additional information through the use of RAG-LLM.</p>
      <p>
- EC2: Using CoT to reason from ambiguous query identification to desired answer.
- EC3: New datasets that can serve as a benchmark for future work involving tailored NLP tasks in the
context of process mining.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>This Section presents a comprehensive overview of related work, beginning with contemporary issues
surrounding ambiguities in conversational scenarios. These are followed by relevant research on
techniques like CoT and RAG-LLM capabilities. This Section ends by highlighting other work related to
LLMs that can solve process mining tasks.</p>
      <p>
Ambiguity in LLMs. Prior to the advent of LLMs, multiple works addressing the challenge
of ambiguity in natural language requirements for software development had already been developed
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. With the introduction of LLMs, the issue of ambiguity gained even more importance,
considering that they have to deal with the ambiguities that arise in conversation scenarios. Aimed at solving
ambiguities in conversations based on questions and answers, Abg-CoQA [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] focuses on identifying
ambiguities (Ambiguity Detection), following up with clarification questions (Clarification Question
Generation) - questions whose answers will eliminate ambiguity and allow a given model to answer
the initial question that was previously ambiguous - and returning a final answer (Clarification-based
Question Answering). While that paper presents an interesting approach to dealing with ambiguity in
conversational scenarios, and therefore inspires the present work to make use of additional questions
to eliminate ambiguity, it does not focus on domain-specific contexts such as process mining. In our
methodology, we suggest a system component that, instead of asking clarification questions, can provide
refined questions and let the user select the one that better represents their initial intent.
RAG &amp; CoT. RAG allows LLMs to account for updated information and deal with hallucinations by
integrating external knowledge sources, such as document repositories (e.g., Wikipedia) and search
engines (e.g., Bing), achieving state-of-the-art results in open-domain question answering, as
highlighted in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. However, challenges with indiscriminate context use have led to advancements
like RQ-RAG [
        <xref ref-type="bibr" rid="ref17">17</xref>
], which refines queries for more effective retrieval through rewriting, decomposition,
and ambiguity clarification. Instead of disambiguating a query that will later be fed to the retriever
component, as is done in RQ-RAG [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], we suggest using the retrieved context itself to disambiguate
the query, providing non-ambiguous options to the user. In parallel, CoT mimics human reasoning by
breaking problems down into multiple sequential steps in order to solve them, mirroring the cognitive
strategy of decomposing complex tasks into manageable, intermediate thoughts
(https://www.ibm.com/think/topics/chain-of-thoughts). This is useful
in mathematical contexts. For instance, the application of CoT reasoning with bounded-depth
autoregressive transformers to solve mathematical and dynamic programming problems is explored in
[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. The work of [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] explores enhancing the reasoning capabilities of LLMs using CoT prompting, a
method that generates natural language rationales to guide problem-solving and leverages few-shot
learning. Empirical evaluations of this study demonstrate CoT's effectiveness in benchmarks like GSM8K
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], CSQA [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], and symbolic tasks, with gains as the model size increases. It can be hypothesized
that CoT can effectively deal with ambiguity, where the reasoning steps guide the transition from an
ambiguous input to a refined and actionable response.
      </p>
      <p>
        LLMs &amp; Process Mining. Significant efforts have been dedicated to generating process models
from textual descriptions, such as ProMoAI [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] and BPMN-ChatBot [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. In the field of process
mining, conversational agents like C-4PM [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] have been developed. When C-4PM [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] receives a
user query, it identifies the user’s intent and then transforms natural language input into a Linear
Temporal Logic on Finite Traces (LTLf) formula [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] compatible with Declare4py with the help of a GPT
model. Declare4py, grounded in the Declare language [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], a declarative process modeling language,
is a Python library that uses constraint-based specifications to analyze a process. There is also work
[
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] focused on assessing LLMs’ understanding of process behavior and meaning of activities. [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]
is oriented towards accomplishing three process mining tasks that leverage LLMs’ general process
knowledge and understanding of activity semantics: classifying process traces as anomalous or not,
evaluating and classifying pairwise activity relations within traces, and predicting the next activity,
incorporating semantic understanding. Both [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] still do not address the inherent ambiguity
of natural language nor the ambiguity that can arise from limited knowledge within a given domain,
which is extremely relevant if the goal is to give non-experts access to agents that solve process mining
tasks.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>This section outlines the methodology we suggest for further research to address ambiguity in process
mining queries. In Section 3.1, we specify the scope of ambiguity on which we focus and describe
the creation of two datasets, Query-PM-LLM and AmbQuery-PM-LLM, based on the Sepsis Cases
- Event Log, to support NLP tasks in process mining. In Section 3.2, we present a RAG component
designed to retrieve relevant information and disambiguate queries. In Section 3.3, we propose a CoT
component to facilitate query disambiguation through structured reasoning.</p>
      <sec id="sec-3-1">
        <sec id="sec-3-1-1">
          <title>3.1. Datasets for NLP tasks in Process Mining</title>
          <p>
            Considering the lack of tailored NLP tasks and benchmarking datasets in the field of process mining, as
highlighted by [
            <xref ref-type="bibr" rid="ref26">26</xref>
            ], we prepared a dataset (Query-PM-LLM) based on the Sepsis Cases - Event Log.
This event log has over 1,000 cases with a total of 15,000 events, which required us to use a compact
abstraction of it. The dataset consists of pairs of the type &lt;input, answer&gt;, where the input comprises
the event log abstraction and a non-ambiguous query. This abstraction, whose use was inspired by
[
            <xref ref-type="bibr" rid="ref27">27</xref>
            ], corresponds to process flow (an abstraction of the directly-follows graph) and process variants (different
sequences of activities within a process, each representing a set of traces that follow the same pattern),
and both can be obtained using pm4py, a process mining library.
          </p>
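          <p>The two abstractions can be sketched in plain Python as follows (a simplified stand-in for what pm4py computes; the toy log below is illustrative, not taken from the Sepsis Cases - Event Log):</p>
          <p>
```python
from collections import Counter

# Minimal stdlib sketch of the two abstractions described above; the paper
# obtains them with pm4py, which we do not depend on here.

def process_flow(log):
    """Count directly-follows transitions: an abstraction of the DFG."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

def process_variants(log):
    """Group traces by their activity sequence (one variant per pattern)."""
    return Counter(tuple(trace) for trace in log)

log = [["ER Registration", "ER Triage", "ER Sepsis Triage"],
       ["ER Registration", "ER Triage", "ER Sepsis Triage"],
       ["ER Registration", "ER Sepsis Triage"]]
print(process_flow(log)[("ER Registration", "ER Triage")])  # 2
print(len(process_variants(log)))                           # 2 distinct variants
```
          </p>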
          <p>
            From the non-ambiguous queries present in that dataset, we used ChatGPT and Grok to generate
ambiguous queries. Although multiple types of ambiguity can be considered, those that are most
relevant in the context of conversational agents and specific domains, such as process mining, relate to:
- Ambiguities directly associated with natural language. In the work by [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ], four types of ambiguity
are highlighted, with two particularly relevant in this context: coreference resolution, which involves
identifying what a pronoun refers to in a sentence; and answer types, which pertain to the uncertainty
regarding the nature of the desired answer (e.g., when asked about a book, it may be unclear whether
the question concerns its title or its genre).
- Ambiguities associated with domain knowledge, as highlighted in the context of Natural Language
Requirements by [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ], which aims to detect pragmatic ambiguity (ambiguity that arises from the
reader’s background knowledge).
          </p>
          <p>We produced a second dataset (AmbQuery-PM-LLM) of the type &lt;input, answer&gt;, where the input
comprises the abstraction and a generated ambiguous query, and the answer corresponds to the answer
of the respective non-ambiguous query. In Section 3.2 and Section 3.3, we motivate future work by
outlining methods incorporating RAG and CoT to solve the ambiguities present in AmbQuery-PM-LLM
and lead to non-ambiguous questions in this field, similar to those of Query-PM-LLM.</p>
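          <p>A minimal sketch of the record shapes of the two datasets (the field names are our own illustration, not prescribed by the datasets themselves):</p>
          <p>
```python
from dataclasses import dataclass

# Hypothetical record shapes for the two datasets; field names are
# illustrative assumptions.

@dataclass
class QueryPMRecord:
    """Query-PM-LLM: event-log abstraction plus a non-ambiguous query."""
    abstraction: str
    query: str
    answer: str

@dataclass
class AmbQueryPMRecord:
    """AmbQuery-PM-LLM: same abstraction, an ambiguous query, same answer."""
    abstraction: str
    ambiguous_query: str
    answer: str

clear = QueryPMRecord(
    abstraction="flow: ...; variants: ...",
    query="Could you enumerate the activities that occur in the Sepsis Cases - Event Log?",
    answer="ER Registration, ER Triage, ...")
# The ambiguous counterpart keeps the answer of the non-ambiguous query.
ambiguous = AmbQueryPMRecord(
    abstraction=clear.abstraction,
    ambiguous_query="Tell me what happens in the sepsis data.",
    answer=clear.answer)
print(ambiguous.answer == clear.answer)  # True
```
          </p>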
        </sec>
        <sec id="sec-3-1-2">
          <title>3.2. RAG Component</title>
<p>Building upon previous efforts mentioned in Section 2, it would be beneficial to enhance the user
query handling process by first retrieving relevant external information. This could include event logs
pertinent to the question or domain-specific information, such as industry standards or descriptions of
concepts from the perspective of the domain, which helps reduce ambiguity. Following this retrieval, and
as presented in Figure 1, the system could classify the user query as either ambiguous or non-ambiguous.
For queries deemed non-ambiguous, the system could directly provide an answer. Conversely, for
ambiguous queries, the system could leverage the retrieved information to map the query to potential
non-ambiguous alternatives, prompting the user to select the one most closely related to their initial
intent (i.e., asking the user "Did you mean to ask Question X or Question Y?"). As the model receives
the selected non-ambiguous query, it can directly provide an answer. Here, the space for research
and exploration would arise by evaluating which kind of information retrieved could yield better
disambiguations.</p>
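          <p>The flow described above can be sketched as follows, with stub functions standing in for the retriever, the ambiguity classifier, and the answering model (all hypothetical placeholders, not a real implementation):</p>
          <p>
```python
# Sketch of the proposed RAG flow: retrieve, classify, map ambiguous queries
# to non-ambiguous candidates, let the user choose, then answer.

def retrieve(query, corpus):
    """Stub retriever: return documents sharing a word with the query."""
    words = set(query.lower().split())
    return [doc for doc in corpus if words.intersection(doc.lower().split())]

def is_ambiguous(query):
    """Stub classifier: flag queries lacking a concrete answer type."""
    concrete = ("enumerate", "list", "count", "which")
    return not any(word in query.lower() for word in concrete)

def handle_query(query, corpus, choose):
    context = retrieve(query, corpus)
    if not is_ambiguous(query):
        return "answer for: " + query
    # Map the ambiguous query to candidate non-ambiguous alternatives and
    # ask the user ("Did you mean Question X or Question Y?").
    candidates = ["List the activities mentioned in: " + c for c in context]
    selected = choose(candidates)
    return "answer for: " + selected

corpus = ["sepsis activities: ER Registration, ER Triage"]
result = handle_query("Tell me what happens in the sepsis data.",
                      corpus, choose=lambda options: options[0])
print(result)
```
          </p>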
        </sec>
        <sec id="sec-3-1-3">
          <title>3.3. CoT Component</title>
          <p>We further suggest experimenting with CoT in this field, considering that process mining tasks often
involve multi-step logical reasoning, and query disambiguation can benefit from CoT. If the user query
is already identified as ambiguous, which can be explored following the RAG methodology previously
suggested, one can go from an ambiguous query to a non-ambiguous query and final answer.
Dataset. This approach requires a dataset with reasoning steps that lead from the query to the answer
(&lt;input, non-ambiguous query, answer&gt;, where the input corresponds to the process abstraction and the
ambiguous query). Such a dataset can be created from AmbQuery-PM-LLM.</p>
          <p>Another option, which would not require the integration of other techniques like RAG as presented in
Figure 1, corresponds to including ambiguity identification in the reasoning steps, leading to a dataset of
the type: &lt;input, CoT Steps, answer&gt;, where CoT Steps = &lt;ambiguity identification, ambiguity association,
non-ambiguous query&gt;, ambiguity identification confirms the existence of uncertainty and ambiguity
association localizes the source of ambiguity within the question. The original datasets could also be
used here in the same manner that was suggested for the other CoT approach.</p>
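          <p>A possible shape for such a CoT record, with the three reasoning steps named above (the helper function and its step texts are illustrative assumptions):</p>
          <p>
```python
# Hypothetical shape of a CoT training record with the reasoning steps
# named in the text: ambiguity identification, ambiguity association,
# and the resulting non-ambiguous query.

def build_cot_record(abstraction, ambiguous_query, non_ambiguous_query, answer):
    steps = {
        "ambiguity_identification": "The query does not state the expected answer type.",
        "ambiguity_association": "Ambiguity stems from 'what happens' in the query.",
        "non_ambiguous_query": non_ambiguous_query,
    }
    return {"input": (abstraction, ambiguous_query),
            "cot_steps": steps,
            "answer": answer}

record = build_cot_record(
    "flow: ...; variants: ...",
    "Tell me what happens in the sepsis data.",
    "Enumerate the activities in the Sepsis Cases - Event Log.",
    "ER Registration, ER Triage, ...")
print(sorted(record["cot_steps"]))
```
          </p>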
        </sec>
        <sec id="sec-3-1-4">
          <title>3.4. Summary of Approaches and Their Integration</title>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <p>In summary, three research pathways can be explored:</p>
        <p>- RAG: singularly explore the development of a RAG component as described in Section 3.2, using
RAG for ambiguity detection, disambiguation, suggestion of non-ambiguous queries, and answer
generation.
- RAG + CoT: use RAG for ambiguity detection, disambiguation, suggestion of non-ambiguous queries,
and add CoT to use ambiguous and non-ambiguous queries for answer generation.
- CoT: use CoT to go from ambiguous query to answer generation, where the intermediate reasoning
steps correspond to ambiguity identification, ambiguity association, and non-ambiguous query.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Evaluation</title>
      <sec id="sec-4-1">
        <title>4.1. Metrics</title>
        <p>In this Section, we present the results of one of the approaches previously outlined in Section 3.4.
We implemented the approach corresponding to the exclusive use of CoT reasoning that goes from
ambiguous query to answer generation.</p>
        <p>
          Semantic Entropy. To quantitatively assess the level of ambiguity in user queries and the effectiveness
of using CoT for disambiguation, we employ semantic entropy as a key metric. Semantic entropy,
introduced by [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], provides a measure of uncertainty in natural language generation that accounts for
linguistic invariances, focusing on the diversity of meanings. Semantic entropy is computed as follows:
For a given query, multiple generations are sampled from a given LLM model. These generations are
then clustered into semantic equivalence classes using a bidirectional entailment classifier (the DeBERTa
large model [29] fine-tuned on the MNLI task, https://huggingface.co/microsoft/deberta-large-mnli), which groups outputs that mutually entail each other
(i.e., one can be inferred from the other, capturing shared meanings). The entropy is then estimated
over the probability distribution of these clusters. Formally, if C represents the set of semantic clusters,
s ranges over sampled generations, x is the input, and p(c | x) is the semantic likelihood for cluster c ∈ C
given input x, as defined in Equation 1, then semantic entropy is given by Equation 2.
        </p>
        <p>p(c | x) = Σ_{s ∈ c} p(s | x)   (1)</p>
        <p>SE(x) = − Σ_{c ∈ C} p(c | x) log p(c | x)   (2)</p>
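        <p>The computation of Equations 1 and 2 can be sketched as follows; the bidirectional entailment clustering is replaced here by a stub that merges exact duplicates, and the generation probabilities are assumed given:</p>
        <p>
```python
import math
from collections import defaultdict

# Sketch of semantic entropy over sampled generations. The entailment-based
# clustering (DeBERTa-large-MNLI bidirectional entailment) is replaced by a
# stub that clusters exact duplicates; probabilities are assumed given.

def semantic_entropy(generations, cluster_of):
    """generations: list of (text, probability); cluster_of maps text to a cluster id."""
    cluster_prob = defaultdict(float)
    for text, prob in generations:
        cluster_prob[cluster_of(text)] += prob  # Eq. 1: p(c|x) = sum of p(s|x)
    # Eq. 2: entropy over the cluster distribution
    return -sum(p * math.log(p) for p in cluster_prob.values())

samples = [("ER Registration, ER Triage", 0.5),
           ("ER Registration, ER Triage", 0.25),
           ("The log describes sepsis cases", 0.25)]
# One semantic cluster per distinct string in this stub.
entropy = semantic_entropy(samples, cluster_of=lambda t: t)
print(round(entropy, 3))  # 0.562
```
        </p>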
        <p>In this specific experiment, the model that we used for generation to compute semantic entropy
corresponds to Meta-Llama-3-8B-Instruct (https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Results</title>
        <p>The use of CoT drastically reduced the entropy on AmbQuery-PM-LLM, as can be seen in Table 1,
indicating that the reasoning steps helped properly guide the model to generate answers based
on the abstraction of the event log and disambiguate the user query when necessary.</p>
        <p>Interestingly, the Average Semantic Entropy on Query-PM-LLM, while lower than the respective
value for AmbQuery-PM-LLM, is still relatively high (0.996), suggesting that not only ambiguous
queries, but also non-ambiguous ones can benefit from reasoning steps.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>[Table 1: Average Semantic Entropy and Number of Questions for Query-PM-LLM,
AmbQuery-PM-LLM, and AmbQuery-PM-LLM + CoT.]</p>
      <p>
This paper tackles the significant challenge of query ambiguity in process mining tasks facilitated by
LLMs. By suggesting the use of RAG and CoT techniques, we provide a pathway to resolve ambiguity
and enhance the quality of interactions with conversational agents. Furthermore, we show that CoT
positively contributes to the reduction of uncertainty.</p>
      <p>The introduction of the Query-PM-LLM and AmbQuery-PM-LLM datasets further establishes
a foundation for benchmarking and improving conversational agents. As the field evolves, future
research can leverage our datasets and methodologies to create even more robust conversational agents,
ultimately transforming how organizations understand and improve their processes.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the ‘OptiGov’ project, with ref. n. 2024.07385.IACDC (DOI:
10.54499/2024.07385.IACDC), fully funded by the ‘Plano de Recuperação e Resiliência’ (PRR) under the
investment ‘RE-C05-i08 - Ciência Mais Digital’ (measure ‘RE-C05-i08.m04’), framed within the financing
agreement signed between the ‘Estrutura de Missão Recuperar Portugal’ (EMRP) and Fundação para a
Ciência e a Tecnologia, I.P. (FCT) as an intermediary beneficiary. This work was also partly supported
by Portuguese national funds through Fundação para a Ciência e a Tecnologia, I.P. (FCT), under projects
UID/50021/2025 and UID/PRR/50021/2025.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used Grammarly for grammar and spelling
checking. After using these tool(s)/service(s), the author(s) reviewed and edited the content as needed and
take(s) full responsibility for the publication's content.</p>
      <p>[28] L. Kuhn, Y. Gal, S. Farquhar, Semantic uncertainty: Linguistic invariances for uncertainty estimation
in natural language generation, 2023. URL: https://arxiv.org/abs/2302.09664. arXiv:2302.09664.
[29] P. He, X. Liu, J. Gao, W. Chen, Deberta: Decoding-enhanced bert with disentangled attention, 2021.
URL: https://arxiv.org/abs/2006.03654. arXiv:2006.03654.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] M. Dumas, M. L. Rosa, J. Mendling, H. A. Reijers, Fundamentals of Business Process Management, Second Edition, Springer, 2018. URL: https://doi.org/10.1007/978-3-662-56509-4. doi:10.1007/978-3-662-56509-4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] M. Weske, Business Process Management - Concepts, Languages, Architectures, 2nd Edition, Springer, 2012. URL: https://doi.org/10.1007/978-3-642-28616-2. doi:10.1007/978-3-642-28616-2.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] W. M. P. van der Aalst, Process Mining - Data Science in Action, Second Edition, Springer, 2016. URL: https://doi.org/10.1007/978-3-662-49851-4. doi:10.1007/978-3-662-49851-4.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] W. M. P. van der Aalst, J. Carmona (Eds.), Process Mining Handbook, volume 448 of Lecture Notes in Business Information Processing, Springer, 2022. URL: https://doi.org/10.1007/978-3-031-08848-3. doi:10.1007/978-3-031-08848-3.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] C. D. Francescomarino, C. Ghidini, Predictive process monitoring, in: W. M. P. van der Aalst, J. Carmona (Eds.), Process Mining Handbook, volume 448 of Lecture Notes in Business Information Processing, Springer, 2022, pp. 320-346. URL: https://doi.org/10.1007/978-3-031-08848-3_10. doi:10.1007/978-3-031-08848-3_10.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] M. Boltenhagen, T. Chatain, J. Carmona, Optimized SAT encoding of conformance checking artefacts, Computing 103 (2021) 29-50. URL: https://doi.org/10.1007/s00607-020-00831-8. doi:10.1007/s00607-020-00831-8.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Felli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rivkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Winkler</surname>
          </string-name>
          ,
          <article-title>Data-aware conformance checking with SMT</article-title>
          ,
          <source>Inf. Syst.</source>
          <volume>117</volume>
          (
          <year>2023</year>
          )
          <fpage>102230</fpage>
          . URL: https://doi.org/10.1016/j.is.2023.102230. doi: 10.1016/j.is.2023.102230.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>de Leoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lanciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Marrella</surname>
          </string-name>
          ,
          <article-title>Aligning partially-ordered process-execution traces and models using automated planning</article-title>
          ,
          <source>in: Proceedings of the Twenty-Eighth International Conference on Automated Planning and Scheduling, ICAPS 2018, Delft, The Netherlands, June 24-29, 2018</source>
          , AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>321</fpage>
          -
          <lpage>329</lpage>
          . URL: https://aaai.org/ocs/index.php/ICAPS/ICAPS18/paper/view/17739.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>CoRR abs/1706.03762</source>
          (
          <year>2017</year>
          ). URL: http://arxiv.org/abs/1706.03762. arXiv:1706.03762.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fontenla-Seco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Winkler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Penín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J. B.</given-names>
            <surname>Diz</surname>
          </string-name>
          ,
          <article-title>The Droid You're Looking For: C-4PM, a Conversational Agent for Declarative Process Mining</article-title>
          ,
          <source>in: BPM (Demos / Resources Forum)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>116</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>U.</given-names>
            <surname>Jessen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sroka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fahland</surname>
          </string-name>
          ,
          <article-title>Chit-chat or deep talk: Prompt engineering for process mining</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2307.09909. arXiv:2307.09909.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Keluskar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhattacharjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Do LLMs understand ambiguity in text? A case study in open-world question answering</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2411.12395. arXiv:2411.12395.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lipari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gnesi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. O.</given-names>
            <surname>Spagnolo</surname>
          </string-name>
          ,
          <article-title>Pragmatic ambiguity detection in natural language requirements</article-title>
          ,
          <source>in: 2014 IEEE 1st International Workshop on Artificial Intelligence for Requirements Engineering (AIRE)</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi: 10.1109/AIRE.2014.6894849.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esuli</surname>
          </string-name>
          ,
          <article-title>A NLP approach for cross-domain ambiguity detection in requirements engineering</article-title>
          ,
          <source>Automated Software Engineering</source>
          <volume>26</volume>
          (
          <year>2019</year>
          )
          <fpage>559</fpage>
          -
          <lpage>598</lpage>
          . URL: https://link.springer.com/content/pdf/10.1007/s10515-019-00261-7.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alikhani</surname>
          </string-name>
          ,
          <article-title>Abg-CoQA: Clarifying ambiguity in conversational question answering</article-title>
          ,
          <source>in: 3rd Conference on Automated Knowledge Base Construction</source>
          ,
          <year>2021</year>
          . URL: https://openreview.net/forum?id=SlDZ1o8FsJU.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piktus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Petroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Karpukhin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Küttler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rocktäschel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Riedel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kiela</surname>
          </string-name>
          ,
          <article-title>Retrieval-augmented generation for knowledge-intensive NLP tasks</article-title>
          ,
          <source>CoRR abs/2005.11401</source>
          (
          <year>2020</year>
          ). URL: https://arxiv.org/abs/2005.11401. arXiv:2005.11401.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C.-M.</given-names>
            <surname>Chan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <article-title>RQ-RAG: Learning to refine queries for retrieval augmented generation</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2404.00610. arXiv:2404.00610.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>G.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Towards revealing the mystery behind chain of thought: A theoretical perspective</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Naumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Globerson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Saenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Levine</surname>
          </string-name>
          (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>36</volume>
          ,
          , Curran Associates, Inc.,
          <year>2023</year>
          , pp.
          <fpage>70757</fpage>
          -
          <lpage>70798</lpage>
          . URL: https://proceedings.neurips.cc/paper_files/paper/2023/file/dfc310e81992d2e4cedc09ac47ef13e-Paper-Conference.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuurmans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bosma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ichter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Chain-of-thought prompting elicits reasoning in large language models</article-title>
          , in:
          <string-name>
            <given-names>S.</given-names>
            <surname>Koyejo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Belgrave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Oh</surname>
          </string-name>
          (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>35</volume>
          ,
          , Curran Associates, Inc.,
          <year>2022</year>
          , pp.
          <fpage>24824</fpage>
          -
          <lpage>24837</lpage>
          . URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>K.</given-names>
            <surname>Cobbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kosaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bavarian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Plappert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tworek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nakano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hesse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schulman</surname>
          </string-name>
          ,
          <article-title>Training verifiers to solve math word problems</article-title>
          ,
          <source>CoRR abs/2110.14168</source>
          (
          <year>2021</year>
          ). URL: https://arxiv.org/abs/2110.14168. arXiv:2110.14168.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Talmor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Herzig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lourie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Berant</surname>
          </string-name>
          ,
          <article-title>CommonsenseQA: A question answering challenge targeting commonsense knowledge</article-title>
          , in:
          <string-name>
            <given-names>J.</given-names>
            <surname>Burstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Doran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Solorio</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers),
          Association for Computational Linguistics, Minneapolis, Minnesota,
          <year>2019</year>
          , pp.
          <fpage>4149</fpage>
          -
          <lpage>4158</lpage>
          . URL: https://aclanthology.org/N19-1421/. doi: 10.18653/v1/N19-1421.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kourani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Berti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. M.</given-names>
            <surname>van der Aalst</surname>
          </string-name>
          ,
          <article-title>ProMoAI: Process modeling with generative AI</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Larson</surname>
          </string-name>
          (Ed.),
          <source>Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, International Joint Conferences on Artificial Intelligence Organization</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>8708</fpage>
          -
          <lpage>8712</lpage>
          . URL: https://doi.org/10.24963/ijcai.2024/1014. doi: 10.24963/ijcai.2024/1014. Demo Track.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kopke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Safan</surname>
          </string-name>
          ,
          <article-title>Efficient LLM-based conversational process modeling</article-title>
          (
          <year>2024</year>
          ). URL: https://drive.google.com/file/d/1EVl0SVKeyTnsw6pb59WgvL1K8Y88weRk/view.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <article-title>Linear temporal logic and linear dynamic logic on finite traces</article-title>
          (
          <year>2013</year>
          )
          <fpage>854</fpage>
          -
          <lpage>860</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>C.</given-names>
            <surname>Di Ciccio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <article-title>Declarative process specifications: Reasoning, discovery, monitoring</article-title>
          , in:
          <source>Process Mining Handbook</source>
          , volume
          <volume>448</volume>
          <source>of Lecture Notes in Business Information Processing</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>152</lpage>
          . URL: https://doi.org/10.1007/978-3-031-08848-3_4. doi: 10.1007/978-3-031-08848-3_4.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rebmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Glavaš</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>van der Aa</surname>
          </string-name>
          ,
          <article-title>Evaluating the ability of LLMs to solve semantics-aware process mining tasks</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2407.02310. arXiv:2407.02310.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Berti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. M. P.</given-names>
            <surname>van der Aalst</surname>
          </string-name>
          ,
          <article-title>Abstractions, scenarios, and prompt definitions for process mining with LLMs: A case study</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2307.02194. arXiv:2307.02194.
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kuhn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Farquhar</surname>
          </string-name>
          ,
          <article-title>Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation</article-title>
          ,
          <source>in: The Eleventh International Conference on Learning Representations, ICLR</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>