<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Stakeholder-specific Jargon-based Representation of Multimodal Data within Business Process</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Aleksandar</forename><surname>Gavric</surname></persName>
							<email>aleksandar.gavric@tuwien.ac.at</email>
							<affiliation key="aff0">
								<orgName type="department">Business Informatics</orgName>
								<orgName type="institution">TU Wien</orgName>
								<address>
									<addrLine>Favoritenstrasse 9-11/194-3</addrLine>
									<postCode>1040</postCode>
									<settlement>Vienna</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dominik</forename><surname>Bork</surname></persName>
							<email>dominik.bork@tuwien.ac.at</email>
							<affiliation key="aff0">
								<orgName type="department">Business Informatics</orgName>
								<orgName type="institution">TU Wien</orgName>
								<address>
									<addrLine>Favoritenstrasse 9-11/194-3</addrLine>
									<postCode>1040</postCode>
									<settlement>Vienna</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Henderik</forename><forename type="middle">A</forename><surname>Proper</surname></persName>
							<email>henderik.proper@tuwien.ac.at</email>
							<affiliation key="aff0">
								<orgName type="department">Business Informatics</orgName>
								<orgName type="institution">TU Wien</orgName>
								<address>
									<addrLine>Favoritenstrasse 9-11/194-3</addrLine>
									<postCode>1040</postCode>
									<settlement>Vienna</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Stakeholder-specific Jargon-based Representation of Multimodal Data within Business Process</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A81D619517BAA539FD8B1D131AF77501</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:24+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Process Models</term>
					<term>Transformer models</term>
					<term>Multimodal Evidence</term>
					<term>Process Representation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Stakeholders can struggle to understand and engage with process models due to a mismatch between the technical language used and their own domain-specific jargon and personal communication styles. This paper explores the application of transformer-based architectures to enhance the representation of process models and additional multimodal process data by tailoring them to the language of stakeholders. We present an approach that personalizes process model representations through two types of paraphrasers: one that aligns with domain-specific jargon and another that adapts to individual stakeholder styles. To train and validate our approach, we developed a golden dataset from simulated process model-stakeholder interactions and a silver dataset generated with large language models. Initial findings suggest that these methods could enhance stakeholder engagement and contribute to better teaching of process mining and procedural thinking.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Process mining focuses on extracting insights from event logs to discover, monitor, and improve actual processes by analyzing the flow of activities within an organization <ref type="bibr" target="#b0">[1]</ref>. Beerepoot et al. <ref type="bibr" target="#b1">[2]</ref> have highlighted that the struggle between human involvement and task automation in managing work processes points to the significant impact that resolving these challenges will have on knowledge-intensive work. Despite its potential, one of the significant challenges in process mining is effectively communicating the insights gained from these analyses to stakeholders <ref type="bibr" target="#b2">[3]</ref>, who often come from diverse backgrounds with varying levels of familiarity with the technical and domain-specific language. The precision required in process models leads to the use of jargon, which, while transparent to domain experts, can be confusing or opaque to others. This communication barrier can hinder the adoption of process mining insights, limiting their impact on decision-making and process improvement.</p><p>For instance, let's imagine a project manager, Sarah, who oversees the implementation of a new digital healthcare system in a large hospital. Sarah has extensive experience in project management, but her familiarity with healthcare-specific jargon is limited. She works closely with a team of doctors, nurses, and IT specialists, each fluent in their domain language. During meetings, the medical professionals often discuss processes in terms that are second nature to them, such as "EHR integration," "clinical workflows," and "patient pathway optimization." To them, these phrases precisely capture the complexity of the processes involved in patient care. 
However, to Sarah, who lacks a clinical background, these discussions often feel like a foreign language, making it difficult for her to grasp the important details of the system she's managing. Now, let's imagine another scenario where Sarah is working on a similar project, but this time the communication has been tailored to her level of understanding. Instead of using healthcare-specific jargon, the process models are described in more general project management terms. For instance, rather than discussing "EHR integration," the conversation revolves around "aligning the digital system with existing hospital processes." Instead of "clinical workflows," they talk about "task sequences in patient care." In this scenario, Sarah could feel more confident and engaged in the project because the information is presented in a way that resonates with her background and expertise. This contrast between jargon-heavy communication and language tailored to the listener's experience is not just a hypothetical situation; it is an identified challenge in many industries <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>, especially those that rely on complex processes and specialized knowledge, such as healthcare, finance, manufacturing, or education. The problem of jargon-laden communication is intensified in process mining, where the interpretation and representation of process models are essential for understanding and improving organizational workflows. Process mining involves extracting knowledge from event logs to visualize and analyze processes. However, when these process models are presented in technical or domain-specific language that stakeholders may not understand, the benefits of process mining can be significantly diminished. 
Existing methods have made progress in addressing similar challenges by processing language <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref> or using visual aids <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref> to make process models more accessible. Natural language processing (NLP) technologies, particularly those based on transformer architectures, have also shown promise in generating more understandable text by leveraging vast amounts of contextual information. These approaches <ref type="bibr" target="#b9">[10]</ref>, however, often remain one-size-fits-all solutions, lacking the personalization needed to engage stakeholders who may have varying levels of familiarity with the subject matter.</p><p>This paper proposes an approach that leverages transformer-based architectures to create personalized representations of process models. Our goal is to bridge the communication gap by developing two types of paraphrasing: one that aligns with the domain-specific jargon used by experts and another that adapts to the individual communication styles of different stakeholders. But language is only part of the equation. In complex fields like healthcare, education, and industrial operations, process models can benefit from integrating multimodal evidence, combining text, images, data visualizations, and even video, to provide a comprehensive understanding of the processes involved. Integrating this multimodal data into personalized process models adds another layer of complexity and offers an opportunity to enhance the discovery of learning patterns within process mining <ref type="bibr" target="#b10">[11]</ref>. 
By understanding how different stakeholders interact with these multimodal representations, we can gain insights into their learning processes, which can improve how we teach and implement process mining techniques <ref type="bibr" target="#b9">[10]</ref>.</p><p>In the sections that follow, we discuss related work (Section 2), detail our methodology for developing the personalized paraphrasers (Section 3), explore the implications of our findings for the future of process mining and stakeholder communication (Section 4), and conclude with closing remarks (Section 5). Through this work, we hope to contribute to the ongoing efforts to make complex processes more accessible, understandable, and actionable for all stakeholders involved.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>This section provides an overview of the most relevant research in process modeling and machine learning techniques, particularly in the domain of paraphrasing and semantic transformation of process-related representations, contextualizing our work within the broader landscape of NLP-based process model management.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Paraphrasing and Semantic Transformation in Process Models</head><p>One key challenge in business process model management is ensuring that models are interpretable and usable by various stakeholders, each with varying expertise and domain-specific knowledge. Early work by Leopold et al. <ref type="bibr" target="#b11">[12]</ref> addressed this issue by introducing automated techniques for transforming business process models into natural language descriptions. Their approach laid the groundwork for subsequent research by demonstrating that computerized tools could effectively bridge the gap between formal process models and natural language, albeit with limited adaptability to different domains or stakeholder needs. Recent advancements in transformer-based models, such as BERT <ref type="bibr" target="#b12">[13]</ref> and GPT <ref type="bibr" target="#b13">[14]</ref>, have opened new avenues for paraphrasing and semantic transformation tasks. These models have been applied to various domains, including text summarization, translation, and paraphrasing, but their application to process models is still an emerging field.</p><p>In a recent work <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b14">Kourani et al. (2024)</ref> leverage the capabilities of Large Language Models (LLMs) to represent process models in the context of Business Process Management (BPM). This study introduces a novel framework that harnesses LLMs to enhance the interpretability of complex process models, addressing challenges that arise as organizations scale and processes become increasingly intricate.</p><p>Our work builds on these advancements by applying transformer models specifically designed to paraphrase process models across multiple domains and stakeholder profiles, addressing the limitations of earlier approaches in handling domain-specific jargon and personalized communication.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Domain-Specific Language Models</head><p>The use of domain-specific language models has been explored in several contexts, particularly in medical <ref type="bibr" target="#b15">[16]</ref> and legal <ref type="bibr" target="#b16">[17]</ref> domains, where the accuracy of language processing is critical due to the specialized terminology involved. These studies highlight the importance of tailoring language models to specific domains to improve performance. For instance, Lee et al. <ref type="bibr" target="#b15">[16]</ref> developed BioBERT, a variant of BERT pre-trained on biomedical text, which significantly outperformed general-purpose models on tasks like named entity recognition and relation extraction in the medical domain. Our work draws inspiration from these works by training custom transformers on process model representations specific to different domains. This approach allows our models to effectively generate paraphrases that are accurate and contextually relevant, aligning with each domain's specialized terminology and communication styles.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Hybrid Approaches to Process Model Translation</head><p>The literature has also explored hybrid approaches that combine rule-based methods with machine learning techniques. For example, Friedrich et al. <ref type="bibr" target="#b17">[18]</ref> developed a hybrid approach to automatically generate textual descriptions of process models by combining rule-based transformations with statistical methods. While effective, these approaches often require extensive domain knowledge to implement and are less adaptable to new or evolving domains. Our work diverges from these traditional hybrid methods by leveraging entirely data-driven transformer models, which learn the nuances of process model paraphrasing directly from training data. Zerbato et al. (2023) develop methodological guidance <ref type="bibr" target="#b18">[19]</ref> to assist novice analysts during their analysis and build an empirical basis for process mining, laying the foundation for the development of user-centered support. Our work aims to contribute to question development in process mining and interactive modeling, addressing areas where support is still lacking.</p><p>Overall, our work extends the existing literature on process model paraphrasing by introducing transformer-based models explicitly tailored to domain-specific jargon and stakeholder communication styles.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Adaptive Transformer-Based Framework for Process Model Communication</head><p>We propose a transformer-based architecture to address the challenges of effectively communicating process models to stakeholders with varying levels of domain expertise (see Fig. <ref type="figure" target="#fig_0">1</ref>). This architecture consists of two core components: a jargon-specific paraphraser (proc2jargon) and a personalized paraphraser (proc2ownw); both are designed to translate process models and multimodal data into text that is accessible and meaningful to different audiences. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Jargon-Specific Paraphraser (proc2jargon)</head><p>proc2jargon generates text that aligns with the specialized domain jargon. This model leverages pre-trained knowledge combined with the specifics of the process model and any associated multimodal data to produce outputs that maintain the technical rigor and precision expected by domain experts.</p><p>Let 𝐷 𝑗 represent the domain-specific jargon dictionary, and X 𝑝𝑚 denote the input process model, which includes both the textual and multimodal data features. The model's task is to generate a sequence Y 𝑗 = (𝑦 𝑗1 , 𝑦 𝑗2 , … , 𝑦 𝑗𝑛 ) where each 𝑦 𝑗𝑖 ∈ 𝐷 𝑗 . The architecture of proc2jargon can be represented as</p><formula xml:id="formula_0">Y 𝑗 = Transformer jargon (E 𝑝𝑡𝑘 , E 𝑝𝑚 )</formula><p>where E 𝑝𝑡𝑘 is the embedding of pre-trained knowledge, and E 𝑝𝑚 is the embedding of the process model and multimodal data.</p><p>This model aims to maximize the conditional probability 𝑃(Y 𝑗 | X 𝑝𝑚 , E 𝑝𝑡𝑘 ), such that:</p><formula xml:id="formula_1">𝑃(Y 𝑗 | X 𝑝𝑚 , E 𝑝𝑡𝑘 ) = 𝑛 ∏ 𝑖=1 𝑃(𝑦 𝑗𝑖 | X 𝑝𝑚 , E 𝑝𝑡𝑘 , 𝑦 𝑗1 , … , 𝑦 𝑗(𝑖−1) )</formula><p>The conditional probability 𝑃(Y 𝑗 | X 𝑝𝑚 , E 𝑝𝑡𝑘 ) represents the likelihood of generating the sequence of jargon terms Y 𝑗 given the input process model X 𝑝𝑚 and the embedding of pre-trained knowledge E 𝑝𝑡𝑘 . In essence, the model generates each jargon term one by one, ensuring that each term is not only contextually appropriate based on the process model and pre-trained knowledge but also coherent with the previously generated terms in the sequence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Personalized Paraphraser (proc2ownw)</head><p>proc2ownw produces text personalized to individual stakeholders' communication styles and language preferences. This model balances integrating human-model interaction data with pre-trained knowledge and process model inputs to generate outputs that resonate with non-expert stakeholders.</p><p>Let 𝐻 represent the set of human-model interaction embeddings, which encode personalized communication preferences, and let 𝑤 be the weight that determines the influence of these interactions on the model's output. The text sequence Y 𝑝 = (𝑦 𝑝1 , 𝑦 𝑝2 , … , 𝑦 𝑝𝑛 ) generated by proc2ownw is computed as:</p><formula xml:id="formula_2">Y 𝑝 = Transformer personal (𝑤 ⋅ E 𝐻 + (1 − 𝑤) ⋅ E 𝑝𝑡𝑘 , E 𝑝𝑚 )</formula><p>where E 𝐻 is the embedding of human-model interactions, and E 𝑝𝑡𝑘 and E 𝑝𝑚 are as defined previously. The model optimizes the conditional probability 𝑃(Y 𝑝 | 𝐻 , X 𝑝𝑚 , E 𝑝𝑡𝑘 ), expressed as:</p><formula xml:id="formula_3">𝑃(Y 𝑝 | 𝐻 , X 𝑝𝑚 , E 𝑝𝑡𝑘 ) = 𝑛 ∏ 𝑖=1 𝑃(𝑦 𝑝𝑖 | 𝐻 , X 𝑝𝑚 , E 𝑝𝑡𝑘 , 𝑦 𝑝1 , … , 𝑦 𝑝(𝑖−1) )</formula><p>The weights 𝑤 and 1 − 𝑤 are determined based on historical interaction data, allowing the model to adapt over time and refine its outputs for each stakeholder. This formulation allows for dynamic adaptation to the stakeholder's language preferences, producing outputs that are accurate in content and tailored in style. </p></div>
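The input mixing 𝑤 ⋅ E 𝐻 + (1 − 𝑤) ⋅ E 𝑝𝑡𝑘 can be sketched element-wise as below. Vectors are plain Python lists purely for illustration; the actual models operate on tensors, and the function name is ours.

```python
# Sketch of the personalization weighting in proc2ownw: w interpolates
# between the human-interaction embedding E_H and the pre-trained-knowledge
# embedding E_ptk before the transformer consumes the result.

def mix_embeddings(e_h, e_ptk, w):
    """Return w * E_H + (1 - w) * E_ptk, element-wise."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must lie in [0, 1]")
    return [w * h + (1.0 - w) * p for h, p in zip(e_h, e_ptk)]

# w = 0 falls back to pre-trained knowledge only; w = 1 relies entirely on
# the stakeholder's interaction history.
mixed = mix_embeddings([1.0, 0.0], [0.0, 1.0], 0.75)  # [0.75, 0.25]
```

Because 𝑤 is refit from historical interaction data, the same mechanism lets the balance shift toward E 𝐻 as more interactions with a given stakeholder accumulate.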
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Multimodal Data Handling</head><p>To incorporate multimodal data, including textual descriptions, images, videos, and structured data (e.g., event logs), our models process input by embedding these different data types into a unified representation space. The multimodal embeddings are combined with the process model embeddings through a fusion function <ref type="bibr" target="#b19">[20]</ref> that integrates these various data modalities. We use the unified representation space to input pre-trained multimodal embeddings as tokens.</p></div>
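As a rough illustration of feeding pre-trained multimodal embeddings as tokens in a unified space, the sketch below concatenates per-modality embeddings into one input sequence. The modality marker tokens are our assumption; the concrete fusion function the paper uses follows its cited reference and is not reproduced here.

```python
# Sketch of unified-space multimodal input: each modality's pre-computed
# embeddings become tokens in a single sequence the transformer attends over,
# with a marker token announcing each modality.

def fuse_modalities(encoded):
    """Concatenate per-modality token embeddings into one input sequence."""
    fused = []
    for modality, tokens in encoded.items():
        fused.append(("<%s>" % modality, None))  # modality marker token
        fused.extend((modality, vec) for vec in tokens)
    return fused

seq = fuse_modalities({
    "text": [[0.1, 0.2]],                # e.g. a textual process description
    "image": [[0.3, 0.4], [0.5, 0.6]],   # e.g. patches of a sensor snapshot
})
# seq is one flat token sequence mixing both modalities
```

A concatenation scheme like this keeps the transformer architecture unchanged: cross-modal relationships are handled by self-attention over the fused sequence rather than by a separate fusion network.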
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Training proc2jargon and proc2ownw models</head><p>The training process begins with the preparation of the dataset. In our case, the dataset includes detailed process instances related to asset disbursement, Solve4X <ref type="bibr" target="#b20">[21]</ref>, where each instance provides a multimodal source of information for the model to learn from. The process model outlines several key activities: the IT staff using an asset management system to manage the issuance of items, performing quality checks, and the eventual handover of assets to clients. Alongside these textual descriptions, the dataset includes multimodal data such as sensor readings. We take the event log and multimodal evidence from Solve4X and create an oc-DFG (object-centric Directly-Follows Graph), an oc-Petri net, and an oc-BPMN (Business Process Model and Notation) model using the OC-PM process mining tool <ref type="bibr" target="#b21">[22]</ref>. The data is illustrated in Fig. <ref type="figure" target="#fig_1">2</ref>.</p><p>We utilized a supervised learning approach to train the two transformer-based models. The transformer model's attention mechanism plays a crucial role in this process. Multi-head self-attention allows the model to focus on different parts of the input sequence, capturing complex relationships between the elements of the process description. The attention mechanism calculates the weighted sum of values based on the similarity between queries and keys, enabling the model to attend to relevant information from different subspaces. This is implemented using PyTorch and NanoGPT<ref type="foot" target="#foot_0">1</ref>, simplifying the creation of these attention layers. The overall architecture, which includes both encoder and decoder components, is trained on a single NVIDIA A40 GPU, optimizing the model using the Adam optimizer. 
We pre-trained our model on the openwebtext <ref type="bibr" target="#b22">[23]</ref> dataset and trained on custom Solve4X-based data with a micro-batch size of 12, gradient accumulation over 40 steps, a block size of 1024, 12 layers with 12 attention heads and an embedding size of 768, no dropout during pretraining, no bias in LayerNorm and Linear layers, an AdamW optimizer with a maximum learning rate of 6e-4, weight decay of 0.1, beta values of 0.9 and 0.95, gradient clipping at 1.0, and learning rate decay over 600,000 iterations, including 2,000 warmup steps, down to a minimum learning rate of 6e-5.</p></div>
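The reported schedule (maximum learning rate 6e-4, 2,000 warmup steps, decay over 600,000 iterations to a floor of 6e-5) matches NanoGPT's default warmup-plus-cosine-decay configuration; the cosine shape below is our assumption, since the text only states that learning rate decay is enabled.

```python
import math

def get_lr(step, max_lr=6e-4, min_lr=6e-5, warmup=2000, decay_iters=600_000):
    """NanoGPT-style schedule: linear warmup, cosine decay, then a flat floor."""
    if step < warmup:                 # linear warmup from 0 to max_lr
        return max_lr * step / warmup
    if step > decay_iters:            # after the decay horizon: hold the floor
        return min_lr
    # cosine decay from max_lr down to min_lr over the remaining iterations
    ratio = (step - warmup) / (decay_iters - warmup)
    coeff = 0.5 * (1.0 + math.cos(math.pi * ratio))
    return min_lr + coeff * (max_lr - min_lr)
```

The schedule peaks at exactly the maximum learning rate when warmup ends and never drops below the 6e-5 floor, which keeps late-stage fine-tuning on the small Solve4X-based dataset from stalling entirely.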
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head><p>To guide our exploration of personalized process model representation, we formulate our research question (RQ) as: how can transformer-based architectures be effectively utilized to generate paraphrased process data that adapt to the diverse linguistic preferences of stakeholders? This question addresses the challenge of adapting technical language to improve stakeholder engagement without losing the precision required for accurate decision-making.</p><p>In this section, we provide details on the evaluation of the custom transformer models proc2ownw and proc2jargon by creating simulated stakeholder profiles with distinct personalities and generating domain-specific jargon corpora for five different fields. The evaluation aims to assess the ability of proc2ownw to tailor communication to specific stakeholders and proc2jargon to generate domain-specific language accurately. The experiments use simulated conversations parameterized to cover realistic scenarios.</p><p>We developed three distinct stakeholder personality profiles to test the personalization capabilities of proc2ownw. Each profile has unique communication preferences and background knowledge, which influence how they interact with process models. The profiles were designed to reflect common stakeholders in various professional environments, as detailed in Table <ref type="table" target="#tab_0">1</ref>. These profiles were used to generate simulated conversations, which were then fed into the model to evaluate its ability to adapt its output to different communication styles.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Interviewing</head><p>We interviewed each profile using a set of evaluation questions; examples are provided in the following. Questions follow the discovered steps taken during an exploratory data analysis in a process-oriented experiment <ref type="bibr" target="#b23">[24]</ref> conducted by <ref type="bibr" target="#b23">Daele and Janssenswillen (2023)</ref>. For the test scenarios, we created questions both from the same domain as the training data <ref type="bibr" target="#b20">[21]</ref> and from other domains to assess cross-domain generalization relative to the domain of the process being taught. For the other domain, we chose data originating from a large multinational company operating from The Netherlands in the area of coatings and paints <ref type="bibr" target="#b24">[25]</ref>.</p><p>Profile A (Analytical): Human: "Can you explain the process of asset disbursement in detail?" Model: "Certainly. The process begins with IT staff utilizing an asset management system to track each asset. Every step is logged, from retrieval to quality control, ensuring compliance with technical standards. Detailed logs and sensor data verify the accuracy at each stage."</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Profile B (Collaborative):</head><p>Human: "I think the team needs a clear outline of the steps involved." Model: "Sure! The process is straightforward: IT staff check out the item in the system, perform a quality check, and hand it over to the client. Everyone on the team can see these steps and collaborate to ensure nothing is missed."</p><p>Profile C (Strategic): Human: "Give me the high-level overview of asset disbursement." Model: "The key steps involve tracking inventory in the asset management system, verifying quality, and issuing assets to clients. It's crucial for maintaining operational efficiency and reducing errors."</p><p>To evaluate proc2jargon, we generated domain-specific jargon corpora for the medical, police, software development, logistics, and accounting domains. These jargon corpora were created using a large language model (LLM) <ref type="bibr" target="#b6">[7]</ref> to simulate the typical language used in these domains. This evaluation aimed to test how well proc2jargon could adapt its output to match the specialized language of different fields. We used the LLM to generate jargon-heavy content for each domain by crafting specific prompts. These prompts were designed to evoke responses filled with domain-specific language, which was then used to build the jargon corpus for proc2jargon.</p></div>
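The prompt-crafting step for the five jargon corpora can be sketched as simple template instantiation. The template wording below is illustrative only; the paper does not publish its actual prompts.

```python
# Sketch of building the domain-specific prompts used to elicit jargon-heavy
# corpora from an LLM, one prompt per evaluated domain.

DOMAINS = ["medical", "police", "software development", "logistics", "accounting"]

def build_jargon_prompt(domain, process_summary):
    """Instantiate an (illustrative) prompt template for one domain."""
    return (
        "Rewrite the following process description using heavy %s jargon, "
        "as a domain insider would phrase it:\n%s" % (domain, process_summary)
    )

prompts = [build_jargon_prompt(d,
                               "IT staff check out assets, verify quality, "
                               "and hand them over to clients.")
           for d in DOMAINS]
```

The LLM responses to such prompts then form the per-domain corpus that proc2jargon draws on when constraining its output vocabulary.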
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Examples</head><p>In the following, we provide examples of a tailored explanation for the processes of asset management systems for different professionals.</p><p>For a Medical Doctor (MD): The IT team manages new equipment by logging its details into a management system, conducting thorough quality inspections, tagging each item, and storing it in designated areas. This process ensures accurate tracking and integration into the inventory, like how patient records are carefully maintained and monitored for precise medical management.</p><p>For a Police Officer: IT staff handle new equipment by documenting its details in a system, inspecting it for any issues, tagging it with a barcode, and storing it in secure locations. Every action is logged, akin to how evidence is handled and recorded to ensure proper tracking and security.</p><p>For a Software Engineer: New IT assets are processed by entering their specifications into a management system, verifying their condition through quality checks, tagging them with barcodes, and updating their storage locations. This ensures that all inventory data is accurately synchronized and traceable, much like managing code versions and deployments.</p><p>For a Logistics Representative: IT staff process new assets by recording their details in a digital system, performing quality inspections, labeling the items, and placing them in specified warehouse locations. This organized approach to inventory management ensures precise tracking and efficient storage, similar to coordinating shipments and warehouse operations.</p><p>For an Accountant: The IT department handles new assets by entering their details into an asset management system, verifying their condition, tagging them for identification, and logging their storage locations. 
This careful documentation and tracking are crucial for maintaining accurate financial records and ensuring proper asset management, similar to tracking expenses and inventory in accounting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Results</head><p>The evaluation reports the effectiveness of both proc2ownw and proc2jargon in adapting to stakeholder preferences and domain-specific language.</p><p>The results demonstrate that both proc2ownw and proc2jargon can generate customized outputs that meet the specific needs of stakeholders and domains. The findings also suggest that while pre-existing social network data can provide a useful foundation, exposure to relevant process models is essential for achieving the highest levels of personalization and accuracy.</p><p>Table <ref type="table" target="#tab_1">2</ref> summarizes the results of training custom models (proc2jargon and proc2ownw) to paraphrase various types of process models. The evaluation involved multiple process model representations, including Object-Centric Event Logs (OCEL), multimodal evidence (mmevd), Directly-Follows Graphs (DFG), Petri nets, and BPMN models. Regarding the representation of process models, Table <ref type="table" target="#tab_1">2</ref> shows the percentage accuracy in predicting the next token in the test conversation sets across different process model types, domains, and stakeholder personalities, indicating that the proc2ownw approach generally achieves higher accuracy than proc2jargon, particularly when representing Petri net models, with an average accuracy of 90.55%. Accuracy refers to the percentage of correct predictions made by the model when forecasting the next token (sub-word unit) in the sequences of the test conversation sets. This accuracy was measured by comparing the predicted process-relevant token against the actual token that appeared next in the sequence. The higher the percentage, the more frequently the model correctly predicted the next token in the conversation. 
The relevance of each token is manually annotated based on common sense and the process data description.</p><p>Beyond process models, for the representation of sub-model process data (event logs and multimodal evidence), Table <ref type="table" target="#tab_1">2</ref> shows that models trained with multimodal evidence data (mmevd2{jargon,ownw}) generally perform better, achieving a higher average accuracy than those trained with OCEL data (ocel2{jargon,ownw}). This suggests that multimodal data, which includes various forms of input such as images and visualizations, enhances the model's ability to adapt to stakeholder preferences and predict the next token more effectively than using OCEL alone.</p><p>Overall, the models demonstrate a robust ability to generate domain-specific, personalized outputs, with an average accuracy of 88.49% across all categories.</p></div>
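The accuracy figures in Table 2 reduce to a simple computation: the share of process-relevant positions where the predicted next token matches the token that actually followed. The sketch below makes that explicit; the boolean relevance mask stands in for the paper's manual annotation, and the function name is ours.

```python
# Sketch of the Table 2 metric: next-token prediction accuracy restricted to
# positions annotated as process-relevant.

def next_token_accuracy(predicted, actual, relevant):
    """Percentage of relevant positions where predicted token == actual token."""
    hits = total = 0
    for pred, act, rel in zip(predicted, actual, relevant):
        if not rel:
            continue  # irrelevant tokens are excluded from the score
        total += 1
        hits += pred == act
    return 100.0 * hits / total if total else 0.0

acc = next_token_accuracy(
    predicted=["quality", "check", "the", "item"],
    actual=["quality", "check", "an", "asset"],
    relevant=[True, True, False, True],
)  # 2 of 3 relevant positions match
```

Restricting the score to annotated tokens keeps function words from inflating the metric, so the reported percentages reflect how well the models reproduce process-bearing vocabulary.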
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this study, we explored how transformer-based architectures can be leveraged to generate paraphrased process data that adapt to the diverse linguistic preferences of stakeholders across various domains. We focused on two custom transformer models, proc2ownw and proc2jargon, to assess their ability to tailor communication effectively to different stakeholder profiles and domain-specific languages. The paper also proposes an approach that extends beyond linguistic personalization to integrate multimodal evidence, combining text and images, into process models.</p><p>The evaluation results demonstrate that proc2ownw is particularly effective in adapting to stakeholder communication preferences, especially when tested across distinct personality profiles. On the other hand, proc2jargon successfully generated domain-specific language, as evaluated in fields such as medical, police, software development, logistics, and accounting.</p><p>Overall, our findings suggest that transformer-based models can improve communication in process management by tailoring outputs to both the stakeholder's linguistic preferences and the specific jargon of their domain. Additionally, integrating multimodal data into personalized process models adds another layer of complexity and opens up new opportunities for enhancing stakeholder understanding and engagement. 
Future work will involve further refining these models to handle more complex communication scenarios, expanding the scope of evaluation to additional domains and stakeholder profiles, and exploring the full potential of multimodal evidence in process mining, including studies and experiments conducted with stakeholders.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Illustration of our solution that includes (left) a jargon-specific paraphraser (proc2jargon) for domain-specific language and (right) a personalized paraphraser (proc2ownw) tailored to individual stakeholder preferences.</figDesc><graphic coords="4,72.00,65.61,451.27,101.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The Solve4X <ref type="bibr" target="#b20">[21]</ref> process data used for training of our framework. We explore different inputs to our framework, in particular: (A) an Object-Centric Event Log (OCEL), (B) multimodal evidence, and (C) a process model example. (The labels in the model example are illustrative and should be read according to <ref type="bibr" target="#b21">[22]</ref>)</figDesc><graphic coords="5,72.00,65.60,451.28,299.71" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Summary of Profiles</figDesc><table><row><cell>Profile</cell><cell>Background Knowledge</cell></row><row><cell>Profile A</cell><cell>Advanced technical background in engineering</cell></row><row><cell>Profile B</cell><cell>Background in project management with moderate technical knowledge</cell></row><row><cell>Profile C</cell><cell>Executive-level knowledge, low technical detail required</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Results of Model Training Across Different Process Model Types, Domains, and Personalities.</figDesc><table><row><cell>Process</cell><cell>Inter. Type</cell><cell>Prof.</cell><cell>Medical</cell><cell>Police</cell><cell>Soft. Dev.</cell><cell>Logistics</cell><cell>Account.</cell><cell>Average</cell></row><row><cell>proc2jargon</cell><cell>ocel2jargon</cell><cell>ABC</cell><cell>86.92%</cell><cell>91.77%</cell><cell>85.37%</cell><cell>87.83%</cell><cell>92.80%</cell><cell>88.94%</cell></row><row><cell></cell><cell>mmevd2jargon</cell><cell>ABC</cell><cell>90.55%</cell><cell>87.06%</cell><cell>84.01%</cell><cell>90.30%</cell><cell>87.17%</cell><cell>87.82%</cell></row><row><cell></cell><cell>dfg2jargon</cell><cell>ABC</cell><cell>87.55%</cell><cell>85.01%</cell><cell>85.16%</cell><cell>89.10%</cell><cell>87.88%</cell><cell>86.94%</cell></row><row><cell></cell><cell>petri2jargon</cell><cell>ABC</cell><cell>91.02%</cell><cell>85.86%</cell><cell>87.59%</cell><cell>87.57%</cell><cell>89.85%</cell><cell>88.38%</cell></row><row><cell></cell><cell>bpmn2jargon</cell><cell>ABC</cell><cell>91.04%</cell><cell>85.47%</cell><cell>90.60%</cell><cell>90.78%</cell><cell>85.22%</cell><cell>88.62%</cell></row><row><cell>proc2ownw</cell><cell>ocel2ownw</cell><cell>A</cell><cell>84.95%</cell><cell>90.88%</cell><cell>88.01%</cell><cell>91.08%</cell><cell>88.70%</cell><cell></cell></row><row><cell></cell><cell></cell><cell>B</cell><cell>90.00%</cell><cell>91.16%</cell><cell>88.08%</cell><cell>91.65%</cell><cell>86.11%</cell><cell>88.67%</cell></row><row><cell></cell><cell></cell><cell>C</cell><cell>89.21%</cell><cell>87.99%</cell><cell>86.56%</cell><cell>90.32%</cell><cell>85.40%</cell><cell></cell></row><row><cell></cell><cell>mmevd2ownw</cell><cell>A</cell><cell>88.81%</cell><cell>89.94%</cell><cell>87.00%</cell><cell>86.32%</cell><cell>84.68%</cell><cell></cell></row><row><cell></cell><cell></cell><cell>B</cell><cell>91.10%</cell><cell>86.97%</cell><cell>92.25%</cell><cell>89.38%</cell><cell>87.81%</cell><cell>89.12%</cell></row><row><cell></cell><cell></cell><cell>C</cell><cell>86.20%</cell><cell>91.73%</cell><cell>91.86%</cell><cell>91.64%</cell><cell>91.07%</cell><cell></cell></row><row><cell></cell><cell>dfg2ownw</cell><cell>A</cell><cell>88.24%</cell><cell>89.60%</cell><cell>91.79%</cell><cell>88.10%</cell><cell>84.54%</cell><cell></cell></row><row><cell></cell><cell></cell><cell>B</cell><cell>85.06%</cell><cell>84.24%</cell><cell>85.47%</cell><cell>89.46%</cell><cell>88.40%</cell><cell>87.42%</cell></row><row><cell></cell><cell></cell><cell>C</cell><cell>86.23%</cell><cell>84.47%</cell><cell>92.45%</cell><cell>87.46%</cell><cell>85.84%</cell><cell></cell></row><row><cell></cell><cell>petri2ownw</cell><cell>A</cell><cell>92.73%</cell><cell>91.41%</cell><cell>89.73%</cell><cell>92.46%</cell><cell>90.05%</cell><cell></cell></row><row><cell></cell><cell></cell><cell>B</cell><cell>92.05%</cell><cell>90.09%</cell><cell>91.92%</cell><cell>90.13%</cell><cell>84.23%</cell><cell>90.55%</cell></row><row><cell></cell><cell></cell><cell>C</cell><cell>89.65%</cell><cell>90.16%</cell><cell>91.07%</cell><cell>91.18%</cell><cell>91.37%</cell><cell></cell></row><row><cell></cell><cell>bpmn2ownw</cell><cell>A</cell><cell>88.95%</cell><cell>84.52%</cell><cell>85.28%</cell><cell>92.26%</cell><cell>88.15%</cell><cell></cell></row><row><cell></cell><cell></cell><cell>B</cell><cell>88.03%</cell><cell>85.94%</cell><cell>84.67%</cell><cell>92.77%</cell><cell>90.13%</cell><cell>88.42%</cell></row><row><cell></cell><cell></cell><cell>C</cell><cell>92.60%</cell><cell>87.24%</cell><cell>84.86%</cell><cell>92.25%</cell><cell>88.60%</cell><cell></cell></row><row><cell></cell><cell>Average</cell><cell></cell><cell>89.04%</cell><cell>88.08%</cell><cell>88.19%</cell><cell>90.10%</cell><cell>87.90%</cell><cell>88.49%</cell></row></table><note>Process model types include Directly-Follows Graphs (DFG), Petri nets, and Business Process Model and Notation (BPMN). The models were trained and tested across five domains (medical, police, software development, logistics, and accounting) and three distinct stakeholder personalities, as described earlier.</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/karpathy/nanoGPT</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Process Mining Handbook</title>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Business Information Processing</title>
		<editor>W. M. P. van der Aalst, J. Carmona</editor>
		<imprint>
			<biblScope unit="volume">448</biblScope>
			<date type="published" when="2022">2022</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The biggest business process management problems to solve before we die</title>
		<author>
			<persName><forename type="first">I</forename><surname>Beerepoot</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compind.2022.103837</idno>
	</analytic>
	<monogr>
		<title level="j">Computers in Industry</title>
		<imprint>
			<biblScope unit="volume">146</biblScope>
			<biblScope unit="page">103837</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Process-aware information systems: Lessons to be learned from process mining</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M</forename><surname>Van Der Aalst</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Transactions on Petri Nets and Other Models of Concurrency II</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1" to="26" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Conflicts and communication gaps in the intensive care unit</title>
		<author>
			<persName><forename type="first">T</forename><surname>Fassier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Azoulay</surname></persName>
		</author>
		<idno type="DOI">10.1097/MCC.0b013e32834044f0</idno>
	</analytic>
	<monogr>
		<title level="j">Current Opinion in Critical Care</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="654" to="665" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The gaps of which communication is made</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Peters</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Critical Studies in Media Communication</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="117" to="140" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Just tell me: Prompt engineering in business process management</title>
		<author>
			<persName><forename type="first">K</forename><surname>Busch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rochlitzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Leopold</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Enterprise, Business-Process and Information Systems Modeling</title>
				<meeting><address><addrLine>Cham, Switzerland</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="3" to="11" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">ChatGPT</title>
		<author>
			<orgName>OpenAI</orgName>
		</author>
		<ptr target="https://chat.openai.com" />
		<imprint>
			<date type="published" when="2024-08-01">2024. 2024-08-01</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Tiramisù: Making sense of multi-faceted process information through time and space</title>
		<author>
			<persName><forename type="first">A</forename><surname>Alman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Arleo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Beerepoot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Burattin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Di Ciccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Resinas</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10844-024-00875-8</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent Information Systems</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Towards on-the-fly creation of modeling language jargons</title>
		<author>
			<persName><forename type="first">I</forename><surname>Bider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Perjons</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bork</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">17th International Conference on ICT in Education, Research and Industrial Applications</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="142" to="157" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Chen</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2306.13549</idno>
		<title level="m">A survey on multimodal large language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Multimodal process mining</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gavric</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bork</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Proper</surname></persName>
		</author>
		<ptr target="https://model-engineering.info/publications/papers/CBI-MultiModalProcessMining.pdf" />
	</analytic>
	<monogr>
		<title level="m">26th International Conference on Business Informatics</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>in press</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Generating natural language texts from business process models</title>
		<author>
			<persName><forename type="first">H</forename><surname>Leopold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Polyvyanyy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Information Systems Engineering -24th International Conference</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="64" to="79" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">BERT: Pre-training of deep bidirectional transformers for language understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference of the North American Chapter of the Association for Computational Linguistics</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Narasimhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Salimans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<ptr target="https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf" />
		<title level="m">Improving language understanding by generative pre-training</title>
				<imprint>
			<publisher>OpenAI</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Leveraging large language models for enhanced process model comprehension</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kourani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Berti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Henrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Kratsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Weidlich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Arslan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M P</forename><surname>Van Der Aalst</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2408.08892</idno>
		<ptr target="https://arxiv.org/abs/2408.08892" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">BioBERT: a pre-trained biomedical language representation model for biomedical text mining</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yoon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>So</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bioinformatics</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="1234" to="1240" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Chalkidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fergadiotis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Malakasiotis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Aletras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Androutsopoulos</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2010.02559</idno>
		<title level="m">LEGAL-BERT: The muppets straight out of law school</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Process model generation from natural language text</title>
		<author>
			<persName><forename type="first">F</forename><surname>Friedrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Puhlmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Information Systems Engineering: 23rd International Conference, CAiSE 2011</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="482" to="496" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Promise: Process mining support for end-users</title>
		<author>
			<persName><forename type="first">F</forename><surname>Zerbato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zimmermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Völzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Weber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CAiSE Research Projects Exhibition</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="60" to="67" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">ImageBind: One embedding space to bind them all</title>
		<author>
			<persName><forename type="first">R</forename><surname>Girdhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>El-Nouby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">V</forename><surname>Alwala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Misra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Fehrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Egger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chvirova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wittmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Wördehoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Kratsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Röglinger</surname></persName>
		</author>
		<idno type="DOI">10.6084/m9.figshare.25246291</idno>
		<title level="m">Business Processes in IT Asset Management Multimedia Event Log</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">OC-PM: analyzing object-centric event logs and process models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Berti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M P</forename><surname>Van Der Aalst</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10009-022-00668-w</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal on Software Tools for Technology Transfer</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Gokaslan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Cohen</surname></persName>
		</author>
		<ptr target="http://Skylion007.github.io/OpenWebTextCorpus" />
		<title level="m">OpenWebText corpus</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Identifying the steps in an exploratory data analysis: A process-oriented approach</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Daele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Janssenswillen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Process Mining Workshops</title>
				<meeting>Process Mining Workshops</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="526" to="538" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Multi-dimensional event data in graph databases</title>
		<author>
			<persName><forename type="first">S</forename><surname>Esser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Fahland</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2005.14552</idno>
		<ptr target="https://arxiv.org/abs/2005.14552" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
