<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>BPMN-Chatbot++: LLM-Based Modeling of Collaboration Diagrams with Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aya Safan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Julius Köpke</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Klagenfurt, Department of Informatics Systems</institution>
          ,
          <addr-line>Universitätsstraße 65-67, 9020 Klagenfurt am Wörthersee, Austria</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Generative AI and large language models (LLMs) have shown promising capabilities in generating business process models from textual descriptions and interactive user feedback. In this paper, we present an extended version of the BPMN-Chatbot, a conversational modeling tool that now supports BPMN Collaboration diagrams with multiple pools, lanes, message flows, and data objects. The tool combines LLM-based generation with symbolic AI in the form of classical model checking to detect and explain modeling errors. A preliminary evaluation demonstrates strong user acceptance and consistently high quality of the generated models.</p>
      </abstract>
      <kwd-group>
        <kwd>Large Language Models</kwd>
        <kwd>LLM</kwd>
        <kwd>Conversational Process Modeling</kwd>
        <kwd>Model Checking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Extending the BPMN-Chatbot</title>
      <p>The BPMN-Chatbot has been extended to support BPMN collaborations with multiple pools and lanes.
The tool now also supports event-based gateways for passive decisions, timer events, and data flow
through data objects and data stores. The extended tool is publicly available on our homepage1, along
with a video demonstration and additional resources.</p>
      <sec id="sec-2-1">
        <title>2.1. Architecture</title>
        <p>The architecture of the extended BPMN-Chatbot tool is shown in Figure 1. The tool is implemented
as a React single-page web application. We focus here on three core components: Prompt Generation,
Model2Model Translation, and Model Checkers.</p>
        <p>[Figure 1: Architecture of the extended BPMN-Chatbot. The React app (UI Components, Prompt Generator, Model2Model Translator) sends user input as a POST request to the LLM API and receives a JSON response, which is translated to BPMN. The app retrieves the list of available checkers from the Checkers Service Registry, where each Checker Service Instance registers itself, and sends checker requests to the instances via POST, receiving checker responses.]</p>
        <sec id="sec-2-1-1">
          <title>2.1.1. Prompt Generation</title>
          <p>This component is responsible for generating structured prompts based on user input and system
context, which are then sent as API calls to the LLM. We still use an intermediate JSON format to
represent the process model. The extended JSON schema is provided to the LLM through a function
definition included in the tool specification. While our earlier approach successfully employed zero-shot prompting,
it was insufficient to address the complexity of BPMN collaboration modeling, leading us to adopt
few-shot prompting. Our prompt includes a brief role description and minimal examples illustrating
correct usage of specific BPMN elements in our JSON format.</p>
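          <p>The following sketch illustrates how such a request could be assembled. The schema, prompt text, function name, and model identifier are simplified assumptions for illustration, not the tool's actual format:</p>

```python
# Illustrative sketch of prompt assembly (schema, prompt text, and function
# name are assumptions, not the tool's actual format): the JSON schema for
# the intermediate model format is passed to the LLM as a function ("tool")
# definition, alongside a brief role description and a minimal few-shot example.
PROCESS_SCHEMA = {  # hypothetical, heavily simplified schema
    "type": "object",
    "properties": {
        "pools": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "lanes": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name"],
            },
        },
        "messageFlows": {"type": "array"},
        "dataObjects": {"type": "array"},
    },
    "required": ["pools"],
}

FEW_SHOT = (
    'Example -- User: "A customer sends an order; the shop confirms it."\n'
    'Model: {"pools": [{"name": "Customer"}, {"name": "Shop"}],'
    ' "messageFlows": [{"from": "Customer", "to": "Shop", "name": "order"}]}'
)

def build_request(user_input: str) -> dict:
    """Assemble the payload of an OpenAI-style chat API call."""
    return {
        "model": "gpt-4o",  # configurable in the settings panel
        "messages": [
            {"role": "system",
             "content": "You are a BPMN collaboration modeling expert.\n" + FEW_SHOT},
            {"role": "user", "content": user_input},
        ],
        "tools": [{
            "type": "function",
            "function": {"name": "emit_process_model",
                         "parameters": PROCESS_SCHEMA},
        }],
    }
```

          <p>Because the model's answer is a structured function call constrained by the schema, the response can be parsed directly into the intermediate JSON format rather than scraped from free text.</p>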
        </sec>
        <sec id="sec-2-1-2">
          <title>2.1.2. BPMN-XML Generation</title>
          <p>The Model2Model Translator is our custom-built component that converts the intermediate
blockstructured process model, represented in a JSON format, into a BPMN XML representation for rendering,
exporting, and model checking. It deterministically assigns graphical coordinates to flow elements based
on the block-structured control flow within each pool. Nodes are positioned using relative coordinates,
which are then adjusted by lane-specific vertical offsets. Sequence and message flows are added based
on node positions. To reduce visual clutter, data objects and associations are included in the XML but
edges are not rendered; instead, they are indicated by annotations. Figure 4 shows an example diagram
rendered by the tool.</p>
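          <p>The deterministic placement can be sketched as follows. The constants and the data shape are assumptions for illustration; the actual translator handles block structures such as gateways rather than a flat sequence:</p>

```python
# Illustrative layout sketch: nodes are placed left-to-right in their
# block-structured control-flow order within a pool, then shifted vertically
# by a lane-specific offset. Constants and input shape are assumptions.
NODE_WIDTH, H_GAP, LANE_HEIGHT, LANE_PADDING = 100, 50, 120, 40

def layout(nodes):
    """nodes: list of (node_id, lane_index) pairs in control-flow order."""
    positions = {}
    for i, (node_id, lane) in enumerate(nodes):
        x = i * (NODE_WIDTH + H_GAP)           # relative control-flow position
        y = lane * LANE_HEIGHT + LANE_PADDING  # lane-specific vertical offset
        positions[node_id] = (x, y)
    return positions

pool_nodes = [("start", 0), ("review", 1), ("end", 0)]
layout(pool_nodes)  # {'start': (0, 40), 'review': (150, 160), 'end': (300, 40)}
```

          <p>Since every coordinate is derived from the control-flow order and the lane index alone, regenerating the model after a feedback round yields a stable, reproducible layout.</p>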
        </sec>
        <sec id="sec-2-1-3">
          <title>2.1.3. Model Checkers</title>
          <p>Model checkers are implemented as independent services that analyze BPMN models and detect
potential issues. Each registers itself in a central service registry at startup and deregisters at shutdown,
enabling dynamic discovery. When a checker analyzes the model, it returns issues labeled as errors
or warnings, along with optional metadata about relevant elements. Each issue includes two types of
explanations: one for the user and one tailored for the LLM. The LLM-specific explanation includes the
issue description along with system instructions that define the LLM’s expert role on the specific error
type. When the user selects the "auto-fix" option, this explanation is forwarded to the LLM, enabling it
to automatically resolve the issue. The current setup includes two checkers: one for assessing safeness
and soundness of BPMN Collaborations (based on the S³ checker [? ]), and one for validating data flow
(based on the viadee Process Application Validator, vPAV [? ]). Both are implemented as Spring Boot
services, wrapping the underlying checkers and generating tailored user and LLM explanations.</p>
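          <p>A checker response might take the following shape. The field names and example texts are illustrative assumptions, not the services' actual wire format:</p>

```python
# Hypothetical shape of a checker-reported issue (field names are
# illustrative): each issue carries a severity, optional element metadata,
# and two explanations -- one for the user, one tailored for the LLM.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Issue:
    severity: str          # "error" or "warning"
    user_explanation: str  # shown as a chat message
    llm_explanation: str   # issue description plus expert-role instructions
    elements: List[str] = field(default_factory=list)  # affected element ids

def autofix_prompt(issue: Issue) -> str:
    """On "auto-fix", only the LLM-tailored explanation is forwarded."""
    return issue.llm_explanation

issue = Issue(
    severity="error",
    user_explanation=("Soundness error: a message is sent conditionally in "
                      "one pool but received unconditionally in another "
                      "(dead token)."),
    llm_explanation=("You are an expert in BPMN collaboration soundness. "
                     "Place the message catch event after an event-based "
                     "gateway so that the receive becomes conditional."),
    elements=["MessageCatchEvent_1"],
)
```

          <p>Keeping the two explanations separate lets the user-facing text stay concise while the LLM-facing text carries the system instructions needed for a reliable automatic fix.</p>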
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Usage Scenario</title>
        <p>The tool offers a configurable settings panel where users can adjust parameters such as the underlying
LLM model, temperature, instruction prompt, and the set of supported BPMN elements. Users can also
browse and select from available model checkers. These options allow the tool to be tailored to specific
use cases or used with default settings.</p>
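        <p>A default configuration could look like the following. The keys and values are assumptions about the settings panel, not the tool's actual format:</p>

```python
# Illustrative default configuration for the settings panel
# (all keys and values are assumptions, not the tool's actual format).
DEFAULT_SETTINGS = {
    "llm_model": "gpt-4o",
    "temperature": 0.2,
    "instruction_prompt": "You are a BPMN collaboration modeling expert.",
    "supported_elements": ["pool", "lane", "messageFlow", "dataObject",
                           "dataStore", "eventBasedGateway", "timerEvent"],
    "checkers": ["collaboration-soundness", "data-flow"],
}
```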
        <p>To start the modeling process, the user begins by providing a textual or voice description of a process,
which is processed by an LLM to generate and visualize an initial BPMN process model. Users can
then provide direct feedback or invite model checkers into the chat. These checkers run in parallel and
return any identified errors or warnings as chat messages. When no issues are reported, the generated
model is confirmed to be correct, as shown in Figure 2. Otherwise, users can highlight relevant model
elements in the diagram and either write their own feedback or use the auto-fix functionality.</p>
        <p>Figure 3 shows an example with three messages from two model checkers. The first two messages,
both generated by the collaboration checker, report a soundness error due to dead tokens. The second
message elaborates on this issue by explaining that a message is conditionally sent in one pool but
unconditionally received in another pool. The checker recommends restructuring the model by placing
the message catch event after an event-based gateway. The third message, issued by the data flow
checker, presents a warning that a data object is written to but never read, and suggests removing it.</p>
        <p>In this scenario, the user chooses to apply the auto-fix proposed in the second message, which
addresses the collaboration error. The revised model shown in Figure 4 resolves the identified issue by
modifying the gateway structure as suggested. The final model can be exported as a BPMN-XML file,
and the complete session state can be saved and reloaded for continued modeling at a later stage.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Maturity of the Tool</title>
      <p>In [? ], the original version of the BPMN-Chatbot was compared against ProMoAI; in addition, a
technology-acceptance experiment conducted with a broad audience at a science fair demonstrated the
high quality of the generated models and the overall usefulness of the tool.</p>
      <p>For the extended tool, an evaluation is in progress. Preliminary data already indicate strong user
acceptance and superior syntactic correctness and model quality compared to a baseline prompt
producing XML directly. To establish the baseline, we adapted our original prompt, which generates
models in our intermediate JSON format, to instead request direct BPMN XML from the LLM.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and Future Works</title>
      <p>To the best of our knowledge, this extension of the BPMN-Chatbot presents the first LLM-based tool
for BPMN collaboration modeling with multiple pools, lanes, message flows, and data objects. The
tool supports a feedback loop, customizable prompts, adjustable LLM parameters, and the selection of
supported BPMN constructs. Therefore, it provides a publicly available infrastructure for conducting
user experiments to evaluate not only the capabilities of LLMs to generate process models but also to
examine the impact of different usage patterns, prompting strategies, and user groups on the quality of
the generated models.</p>
      <p>Additionally, the integration of model checkers shifts the modeler’s focus from domain-independent
issues to the semantics of the model. This integration also enables a deeper analysis of recurring
patterns and typical modeling errors generated by LLMs. Moreover, the tool opens opportunities to
investigate user interaction patterns and how LLMs can leverage implicit knowledge to offer more
effective and context-aware support throughout the modeling process.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>Generative AI was not used for preparing the text of this paper. The presented tool, as described in
Sect. 2.1, uses generative AI for generating process models in a JSON format. However, the graphical
representation of the generated JSON in the form of BPMN Collaboration diagrams, as shown in Figures
2, 3, and 4, is not based on generative AI.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>