<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Introducing the BPMN-Chatbot for Efficient LLM-Based Process Modeling</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Julius</forename><surname>Köpke</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics Systems</orgName>
								<orgName type="institution">University of Klagenfurt</orgName>
								<address>
									<addrLine>Universitätsstraße 65-67</addrLine>
									<postCode>9020</postCode>
									<settlement>Klagenfurt am Wörthersee</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Aya</forename><surname>Safan</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics Systems</orgName>
								<orgName type="institution">University of Klagenfurt</orgName>
								<address>
									<addrLine>Universitätsstraße 65-67</addrLine>
									<postCode>9020</postCode>
									<settlement>Klagenfurt am Wörthersee</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Introducing the BPMN-Chatbot for Efficient LLM-Based Process Modeling</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">82592CA79F6593775CC1FAF8E6A5B47E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:03+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Large Language Models</term>
					<term>LLM</term>
					<term>Conversational Process Modeling</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Generative AI and Large Language Models (LLMs) have recently gained enormous interest in the BPM domain. Various research groups have experimented with extracting process information from text using LLMs. In this paper, we introduce a publicly available web-based tool for the automatic and interactive generation of BPMN process models using text or voice input. In contrast to existing tools, it is heavily optimized for generating high-quality models while keeping the costs (number of tokens) low. In our experiments, the tool achieved higher average correctness while using up to 94% fewer tokens compared to an alternative tool.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Recently, the BPM community has started experimenting with LLMs to extract process information and to generate process models from natural text with approaches such as <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>. These works indicate promising capabilities of LLMs for this task.</p><p>With ProMoAI <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>, there is also an online tool 1 available in the literature. The tool not only supports the initial generation of a process model from text; it also offers a feedback loop for refining the generated model. ProMoAI indicates a high potential of LLMs for this task. However, the applied prompting strategy and intermediate formats incur significant usage fees. When we first tried the tool, we generated a 3-step process with two iterations of the feedback loop. This small experiment cost 0.8 USD in OpenAI API fees using GPT-4. While the current GPT-4o model is more cost-effective, we argue that such systems should use LLM resources efficiently to reduce costs while maintaining high-quality output. An optimized system could potentially democratize process modeling, or at least make it accessible to a broader audience. We have therefore developed our own approach <ref type="bibr" target="#b7">[8]</ref>, which focuses on the modeling costs in terms of the required number of tokens.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Introducing the BPMN-Chatbot</head><p>The BPMN-Chatbot implements our efficient approach for LLM-based process modeling in <ref type="bibr" target="#b7">[8]</ref>. It is publicly available on the tool's homepage 2 , which also includes a video demonstration and further resources.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Usage Scenario</head><p>The BPMN-Chatbot mimics the look and feel of a classical messenger application. However, the output is rendered in the form of BPMN process diagrams. A screenshot of the tool is shown in Fig. <ref type="figure" target="#fig_1">1</ref>. First, the user provides a process description using text or voice input (see Fig. <ref type="figure" target="#fig_1">1 [A]</ref>). The tool then generates a business process model and shows it graphically. In an optional feedback loop, the user can provide feedback on the model and obtain an updated version. During the feedback loop, the user can navigate between different model versions using arrow symbols to provide feedback on a specific version (see Fig. <ref type="figure" target="#fig_1">1 [B]</ref>).</p><p>Finally, the generated model can be downloaded as a BPMN XML file (see Fig. <ref type="figure" target="#fig_1">1</ref> [C]). For evaluation purposes, the tool also includes a survey component for eliciting user feedback after the modeling session.</p><p>Example: Fig. <ref type="figure" target="#fig_1">1</ref> shows a screenshot of the tool after one iteration of the feedback loop. The user first asked the BPMN-Chatbot to create a process model for the following scenario: When an order is received, we first check it, then we collect the items, and finally, we send the package to the customer. The system then generated a purely sequential model. In the next iteration, the user asked the tool to refine the model with the comment: If the items are not in stock, we reject the order. The resulting process shows the correct usage of a gateway to address the comment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Architecture</head><p>The tool is implemented as a React single-page web application. An architectural sketch is shown in Fig. <ref type="figure" target="#fig_2">2</ref>. We focus here on the core components Prompt Generation and Model2Model Translation.</p><p>The Prompt Generation component creates the messages that are sent to the LLM API (currently OpenAI). The Model2Model Translation component transforms the intermediate process format returned by the LLM into standard BPMN, which is then visualized by a rendering component based on the bpmn.js<ref type="foot" target="#foot_0">3</ref> library. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.1.">Prompt Generation</head><p>This is the core component of the tool and is responsible for the low number of required tokens. We aim for prompts with minimal overhead on top of the user input. To optimize the feedback loop costs, we only send the feedback comment and the referenced process model to the LLM. This approach is further enhanced by an efficient intermediate JSON representation <ref type="bibr" target="#b7">[8]</ref> for processes returned from (and sent to) the LLM. It covers the core set of BPMN elements (also used by other approaches <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>), providing block-structured process models. We have iteratively optimized our prompt in preliminary experiments on process descriptions disjoint from our evaluation datasets. This process led to the prompt shown in Listing 1.</p><p>You are a business process modeling expert. I will provide you with a textual description of a business process. Generate a JSON model for the process. Analyze and identify key elements: 1. Start and end events. 2. Tasks and their sequence. 3. Gateways (xor or parallel) and an array of "branches" containing tasks. For xor gateways, there is a condition for the decision point and each branch has a condition label. 4. Loops: involve repeating tasks until a specific condition is met. Nested structure: The schema uses nested structures within gateways to represent branching paths. Order matters: The order of elements in the "process" array defines the execution sequence. When analyzing the process description, identify opportunities to model tasks as parallel whenever possible for optimization (if it does not contradict the user intended sequence). Use clear names for labels and conditions. Aim for granular detail (e.g., instead of "Task 1: Action 1 and Action 2", use "Task 1: Action 1" and "Task 2: Action 2"). Sometimes you will be given a previous JSON solution with user instructions to edit.</p><p>Listing 1: Instructions prompt for process generation. <ref type="bibr" target="#b7">[8]</ref> </p></div>
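The intermediate JSON representation itself is specified in [8]. As a rough illustration, the sketch below shows what the refined order process from Section 2.1 could look like, together with the token-saving feedback call that sends only the instructions, the referenced model version, and the new comment. The field names ("process", "task", "gateway", "branches", "condition") are assumptions inferred from the prompt in Listing 1, not the tool's verbatim schema.

```python
import json

# Hypothetical instance of the nested intermediate process format.
# Field names are inferred from Listing 1, not the schema of [8].
order_process = {
    "process": [
        {"task": "Check order"},
        {
            "gateway": "xor",
            "condition": "Items in stock?",
            "branches": [
                {"condition": "yes",
                 "tasks": [{"task": "Collect items"},
                           {"task": "Send package to customer"}]},
                {"condition": "no",
                 "tasks": [{"task": "Reject order"}]},
            ],
        },
    ]
}

def build_feedback_messages(system_prompt: str, model: dict, comment: str) -> list:
    """Sketch of the feedback-loop call: only the instructions prompt,
    the referenced model version, and the new comment are sent."""
    return [
        {"role": "system", "content": system_prompt},
        # Compact serialization avoids spending tokens on whitespace.
        {"role": "user", "content": json.dumps(model, separators=(",", ":"))},
        {"role": "user", "content": comment},
    ]

messages = build_feedback_messages(
    "You are a business process modeling expert. ...",  # Listing 1, abbreviated
    order_process,
    "If the items are not in stock, we reject the order.",
)
```

Because the message list contains no conversation history beyond the one referenced model version, the cost of each feedback iteration stays roughly proportional to the size of the current model rather than growing with the length of the chat.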
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2.">Model Transformation: Intermediate Model to BPMN</head><p>This component is responsible for transforming the intermediate JSON process models generated by the LLM into standard BPMN XML models. At its core, this is a straightforward mapping of the elements of our nested intermediate process format to the graph-structured BPMN format. However, since the intermediate format does not contain any graphical representation information, the coordinates and sizes of all BPMN shapes and edges are computed and inserted by this module. This step is highly relevant for providing a good user experience in the feedback loop: a small change in the process model should also only lead to a small change in the graphical representation. Thanks to the block-structured input processes, we can deterministically compute a planar layout that fixes the coordinates of all elements.</p></div>
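A minimal sketch of this translation step, under an assumed intermediate format whose field names are inferred from the prompt in Listing 1 (not the verbatim schema of [8]): the nested, block-structured JSON is flattened into BPMN-like nodes and sequence-flow edges while deterministic x-coordinates are assigned left to right. The actual module additionally serializes BPMN XML with DI shapes and computes y-coordinates for branches, which are omitted here.

```python
X_STEP = 150  # assumed horizontal distance between consecutive shapes

def translate(elements, nodes, edges, prev, x):
    """Recursively walk a "process"/"tasks" array; return (last node id, next free x)."""
    for el in elements:
        if "task" in el:
            nid = f"task_{len(nodes)}"
            nodes.append({"id": nid, "type": "task", "name": el["task"], "x": x})
            edges.append((prev, nid))
            prev, x = nid, x + X_STEP
        elif "gateway" in el:
            # Split gateway, then one recursive call per branch, then a join.
            split = f"gw_{len(nodes)}"
            nodes.append({"id": split, "type": el["gateway"] + "_split", "x": x})
            edges.append((prev, split))
            branch_x = x + X_STEP
            ends, max_x = [], branch_x
            for branch in el["branches"]:
                last, bx = translate(branch["tasks"], nodes, edges, split, branch_x)
                ends.append(last)
                max_x = max(max_x, bx)
            join = f"gw_{len(nodes)}"
            nodes.append({"id": join, "type": el["gateway"] + "_join", "x": max_x})
            for e in ends:
                edges.append((e, join))
            prev, x = join, max_x + X_STEP
    return prev, x

def to_graph(model):
    """Flatten a block-structured intermediate model into (nodes, edges)."""
    nodes = [{"id": "start", "type": "startEvent", "x": 0}]
    edges = []
    last, x = translate(model["process"], nodes, edges, "start", X_STEP)
    nodes.append({"id": "end", "type": "endEvent", "x": x})
    edges.append((last, "end"))
    return nodes, edges

demo = {"process": [
    {"task": "Check order"},
    {"gateway": "xor", "condition": "Items in stock?", "branches": [
        {"condition": "yes", "tasks": [{"task": "Collect items"}]},
        {"condition": "no", "tasks": [{"task": "Reject order"}]},
    ]},
]}
nodes, edges = to_graph(demo)
```

Because the layout is a pure function of the block structure, parallel branches land in the same column and an edit to one branch perturbs the coordinates only locally, which is what makes successive feedback-loop versions visually stable.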
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Maturity of the Tool</head><p>The tool has been evaluated in <ref type="bibr" target="#b7">[8]</ref>. The evaluation data is available online <ref type="foot" target="#foot_1">4</ref> . Two experiments were conducted:</p><p>1. A comparison against ProMoAI <ref type="bibr" target="#b6">[7]</ref> and a compatible prompting pattern of <ref type="bibr" target="#b3">[4]</ref>, assessing the correctness and the required number of tokens. 2. An evaluation with visitors of the Austrian national science fair "Lange Nacht der Forschung" (Klagenfurt, May 24, 2024).</p><p>Experiment 1 was based on a subset of 7 processes of the PET dataset <ref type="bibr" target="#b8">[9]</ref>. In this experiment, the tool reduced token usage by up to 94% compared to ProMoAI and achieved an average correctness of 95%, compared to 86% for the best competitor.</p><p>In the second experiment at the science fair, visitors used the tool to model processes of their own choice; 76 process models were created, and 40 visitors participated in a preliminary technology acceptance test. The experiments showed strong indications of the tool's overall usefulness and the high quality of the models generated in the feedback loop.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion and Future Work</head><p>With the BPMN-Chatbot, we have presented a highly efficient tool for LLM-based conversational process modeling. It allows users to interactively design processes using text or voice input. It provides substantial cost reductions (up to 94%) while achieving even higher correctness than an alternative tool and a prompting strategy from the literature. We argue that such tools may drastically change the way processes are created in the near future.</p><p>For future work, we intend to publish the tool as open-source software that allows users to easily integrate their own prompting strategies and intermediate formats via plugins, enabling them to evaluate their own approaches with users. We argue that user tests are strongly needed to evaluate the feedback loop's capabilities. We would like to take the opportunity to gather feedback from experts at the demo session. Furthermore, we plan to extend the tool to support more elements of the BPMN meta-model, such as pools, lanes, and data objects.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Proceedings of the Best BPM Dissertation Award, Doctoral Consortium, and Demonstrations &amp; Resources Forum co-located with 22nd International Conference on Business Process Management (BPM 2024), Krakow, Poland, September 1st to 6th, 2024. * Corresponding author.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Screenshot of the tool after one process model refinement.</figDesc><graphic coords="2,79.96,324.52,435.37,227.11" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: BPMN Chatbot prototype architecture.</figDesc><graphic coords="3,72.00,65.61,451.29,194.23" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">https://bpmn.io/toolkit/bpmn-js/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">https://github.com/BPMN-Chatbot/bpmn-chatbot-archive</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Conceptual modeling and large language models: Impressions from first experiments with ChatGPT</title>
		<author>
			<persName><forename type="first">H</forename><surname>Fill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fettke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Köpke</surname></persName>
		</author>
		<idno type="DOI">10.18417/EMISA.18.3</idno>
	</analytic>
	<monogr>
		<title level="j">Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page">3</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Large language models can accomplish business process management tasks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Grohs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Abb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Elsayed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-R</forename><surname>Rehse</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-50974-2_34</idno>
	</analytic>
	<monogr>
		<title level="m">Business Process Management Workshops</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">De</forename><surname>Weerdt</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Pufahl</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="453" to="465" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Conversational process modelling: state of the art, applications, and implications in practice</title>
		<author>
			<persName><forename type="first">N</forename><surname>Klievtsova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-V</forename><surname>Benzin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kampik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mangler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rinderle-Ma</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-41623-1_19</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Business Process Management</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="319" to="336" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Conversational process modeling: Can generative ai empower domain experts in creating and redesigning process models?</title>
		<author>
			<persName><forename type="first">N</forename><surname>Klievtsova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-V</forename><surname>Benzin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kampik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mangler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rinderle-Ma</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2304.11065" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Modeling meets large language models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Forell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schüler</surname></persName>
		</author>
		<idno type="DOI">10.18420/modellierung2024-ws-003</idno>
	</analytic>
	<monogr>
		<title level="m">Modellierung 2024 Satellite Events</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Process modeling with large language models</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kourani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Berti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M P</forename><surname>Van Der Aalst</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2403.07541" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Kourani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Berti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M P</forename><surname>Van Der Aalst</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2403.04327</idno>
		<title level="m">ProMoAI: Process modeling with generative AI</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Efficient LLM-based conversational process modeling</title>
		<author>
			<persName><forename type="first">J</forename><surname>Köpke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Safan</surname></persName>
		</author>
		<ptr target="https://sites.google.com/view/nlp4bpm2024" />
	</analytic>
	<monogr>
		<title level="m">NLP4BPM Workshop at BPM 2024 (to appear in Business Process Management Workshops 2024)</title>
				<meeting><address><addrLine>Krakow, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">PET: An annotated dataset for process extraction from natural language text tasks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bellan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Van Der Aa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dragoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ghidini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">P</forename><surname>Ponzetto</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-25383-6_23</idno>
	</analytic>
	<monogr>
		<title level="m">Business Process Management Workshops</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Cabanillas</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><forename type="middle">F</forename><surname>Garmann-Johnsen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Koschmider</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="315" to="321" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
