<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Business Process Improvement with Performance-Based Sequential Experiments</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Suhrid</forename><surname>Satyal</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of New South Wales</orgName>
								<address>
									<country key="AU">Australia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Business Process Improvement with Performance-Based Sequential Experiments</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">EEF71CB9B4D615876DBF25328FEEE50C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T09:07+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Business process management</term>
					<term>AB Testing</term>
					<term>Shadow Testing</term>
					<term>Trace simulation</term>
					<term>Process performance indicators</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Various life-cycle approaches to Business Process Management (BPM) have a common assumption that a process is incrementally improved in the redesign phase. While this assumption is hardly questioned in BPM research, there is evidence from the field of AB testing that improvement concepts often do not lead to actual improvements. In this thesis, we propose a methodology named AB-BPM and a set of supporting techniques that facilitate rapid validation of business processes by conducting sequential experiments. We evaluate our methodology and techniques with real-world and synthetic case studies.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Business process improvement ideas often do not lead to actual improvements. Studies of business improvement ideas found that only about a third of the ideas observed had a positive impact <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. This is also illustrated by an anecdote from a European bank. The bank improved its loan approval process by cutting the turnaround time from one week down to a few hours as a means to boost its business. The result, however, was a steep decline in customer satisfaction: customers with a negative notice complained that their application might have been declined unjustifiably; customers with a positive notice inquired whether their application had been checked with due diligence. These observations emphasize the need to carefully test improvement hypotheses in practice, because customers and process participants might not act as the process analyst anticipates. Contemporary BPM research does not provide techniques and guidelines for quickly and fairly testing and validating supposed improvements. There are two major challenges for such immediate validation. The first is methodological. Classical BPM lifecycle approaches build on a labour-intensive analysis of the current process, which leads to the deployment of a redesigned version <ref type="bibr" target="#b1">[2]</ref>. This new version is monitored in production, and if it does not meet performance objectives, it is made subject to analysis again. All this takes time. The second challenge is architectural. Contemporary Business Process Management Systems (BPMSs) enable quick deployment of process improvements, but they do not provide support for validating improvement assumptions. In this research, we address these challenges by integrating business process execution concepts with ideas from modern software engineering practices. 
We propose an iterative and incremental process improvement methodology named AB-BPM that integrates business process execution concepts with the idea of AB testing. AB-BPM supports the design of AB tests with simulation and shadow tests. First, we develop a simulation technique that estimates the performance of a new process version using historical data of the old version. Since simulation results can be speculative, we propose shadow testing as the next step. Our shadow testing technique partially executes the new version in production alongside the old version in such a way that the new version does not throttle the old one. Finally, we develop techniques for AB testing of redesigned processes with immediate feedback at runtime. AB testing compares two deployed process versions by observing users' responses to versions A and B, and determines which one performs better. We propose two algorithms, LTAvgR and ProcessBandit, that dynamically adjust the allocation of requests to the versions during the test based on their performance. We evaluate these techniques with real-world and synthetic case studies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Problem Statements</head><p>Despite the prevalence of live testing and the documented successes of this idea, contemporary business process management has not embraced it. There are three problems with rolling out new process versions in contemporary business process management practice: P1: New versions of business processes are not tested in production. The traditional BPM lifecycle encourages replacing old versions with new ones. Replacing the old version implicitly assumes that the redesigned version improves the business process. User interactions are not observed before the replacement because the new version was never executed in production, so this improvement assumption is not validated before committing to the new version. Live testing approaches have been shown to address such issues in the context of websites (e.g. <ref type="bibr" target="#b3">[4]</ref>). However, these existing approaches do not address complexities of BPM scenarios such as lengthy execution times and the involvement of human workers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>P2: Evaluating new versions of a process in production is risky.</head><p>There is an inherent risk in executing new versions that have not been validated in production. We henceforth refer to this as the risk of exposure. Careful offline analysis, as suggested by the BPM lifecycle, can reduce the possibility of producing bad redesigns, but the risk of exposure remains because a redesigned process completely replaces the old version. Alternatively, live testing approaches follow the motto "test fairly, fail fast", which means that there is minimal upfront analysis before testing <ref type="bibr" target="#b0">[1]</ref>. Typically, in techniques like AB testing, the old and the new versions are deployed simultaneously and half of the user requests are directed to the new version. As such, the risk of exposing inferior versions is high.</p><p>P3: Process executions have to be evaluated with incomplete observations. To reduce the risk of exposure, advanced AB testing techniques for websites can observe metrics like click-through rate and dynamically adjust user allocation during the test. This ensures that the better version receives more traffic over time. However, such approaches cannot be used as-is in BPM because of the complexities of its performance indicators. Process instances have unique identifiers, and each execution can have multiple process performance indicators such as duration and cost. The quality of a process version can be assessed by observing the performance indicators of the corresponding process instances. First, performance indicators may be tied to the duration of the process instance. The delay with which these indicators are observed influences when and how the allocation of users to the versions is adjusted. Second, not all of the required performance indicators may be observable at the same time. For instance, duration and user satisfaction scores may be obtained at different times, with delays of varying length. 
Furthermore, indicators like user satisfaction may not be observed at all. Finally, the performance of a process can also be influenced by factors such as resource constraints, the environment, and market fluctuations. To adopt advanced AB testing techniques in BPM, we require monitoring support, mechanisms for evaluating process versions in the presence of these complications with performance indicators, and request allocation algorithms for the BPM scenario.</p></div>
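The interplay of delayed indicators and dynamic allocation described in P3 can be sketched in code. The following is a minimal Python illustration, not taken from the thesis: a plain epsilon-greedy allocator (the class and method names are our own) that routes each process instance immediately, but incorporates a reward only once the corresponding PPI actually arrives, however late.

```python
import random
from collections import defaultdict

class DelayedRewardBandit:
    """Toy epsilon-greedy allocation between two process versions where
    PPI-derived rewards arrive with (possibly long) delays."""

    def __init__(self, versions=("A", "B"), epsilon=0.1, seed=42):
        self.versions = versions
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.rewards = defaultdict(list)  # version -> rewards observed so far
        self.pending = {}                 # instance_id -> version it ran on

    def allocate(self, instance_id):
        """Route a new process instance: explore with probability epsilon
        (or while a version has no observations yet), otherwise exploit
        the version with the best mean reward so far."""
        if self.rng.random() < self.epsilon or not all(
            self.rewards[v] for v in self.versions
        ):
            version = self.rng.choice(self.versions)
        else:
            version = max(
                self.versions,
                key=lambda v: sum(self.rewards[v]) / len(self.rewards[v]),
            )
        self.pending[instance_id] = version
        return version

    def observe(self, instance_id, reward):
        """Called whenever a reward for an instance becomes available --
        possibly long after the instance was allocated."""
        version = self.pending.pop(instance_id)
        self.rewards[version].append(reward)
```

Because allocation and observation are decoupled, the allocator keeps routing new instances between the two observation events, which is exactly the situation P3 describes.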
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Contributions</head><p>This research makes original contributions in the areas of business process improvement, business process simulation, and live testing. We have proposed a business process improvement methodology named AB-BPM <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b7">8]</ref>. It extends the business process lifecycle to provide support for rapid and fair validation of process improvement hypotheses. The AB-BPM methodology is supported by three classes of techniques: simulation, shadow testing, and AB testing. Fig. <ref type="figure" target="#fig_0">1</ref> illustrates how an old version (A) is compared with a new version (B) using these techniques.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Simulation</head><p>Our simulation technique takes as input the historical data (event logs) of a process version, the process model of the new version, and estimates of the metrics of some activities in the new version. It produces event logs for the new version, which are then used to estimate the performance of the new version. We construct a Transition Simulation Tree (TST), a rooted tree data structure that summarizes the decisions and metrics available in an event log of the existing process version. The TST created from the old version is used in conjunction with a BPMS to derive the execution log of the new version <ref type="bibr" target="#b7">[8]</ref>. We demonstrate the efficacy of this approach with case studies from the domains of helicopter pilot licensing and banking. The risk of exposure is low in simulation because this technique is performed offline. However, the results can be speculative because of the implicit assumptions about customer behavior and the explicit assumptions about the execution of some tasks in the new version.</p></div>
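As a rough illustration of the data structure (our own sketch, not the thesis implementation), a TST can be built by folding each logged trace into a rooted tree whose nodes collect the metric samples observed for an activity; synthetic traces can then be drawn by walking the tree with branch probabilities proportional to observed frequencies:

```python
import random

class TSTNode:
    """Node of a toy Transition Simulation Tree: one activity plus the
    metric samples (here: durations) observed for it in the event log."""
    def __init__(self, activity):
        self.activity = activity
        self.durations = []   # observed metric samples for this node
        self.children = {}    # next activity name -> TSTNode

def build_tst(traces):
    """Summarize an event log (each trace is a list of
    (activity, duration) pairs) into a rooted tree of transitions."""
    root = TSTNode(None)
    for trace in traces:
        node = root
        for activity, duration in trace:
            node = node.children.setdefault(activity, TSTNode(activity))
            node.durations.append(duration)
    return root

def sample_trace(root, rng=None):
    """Replay one synthetic trace by walking the tree, choosing branches
    in proportion to how often they occurred in the log."""
    rng = rng or random.Random(0)
    trace, node = [], root
    while node.children:
        branches = list(node.children.values())
        weights = [len(c.durations) for c in branches]
        node = rng.choices(branches, weights=weights)[0]
        trace.append((node.activity, rng.choice(node.durations)))
    return trace
```

In the actual technique the tree is combined with the new version's process model and activity estimates inside a BPMS; this sketch only shows how a rooted tree can summarize decisions and metrics from a log.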
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Shadow Testing</head><p>Shadow testing executes the new version in production but hides the execution from customers. User requests from the production environment are duplicated and executed in both versions. Execution of the new version can affect the performance of the old version during the test; therefore, the risk of exposure is higher than in simulation. However, the new version is executed online and the results are less speculative. In our approach to shadow testing, we reduce the risk of exposure further by partially simulating the new version and instantiating it only when the performance overhead of testing is acceptable. As a result, there is a trade-off between speculativeness and risk of exposure. This is achieved through a new modular BPMS architecture, an overhead management algorithm, and a decision tree for partial simulation <ref type="bibr" target="#b4">[5]</ref>. We evaluate this approach with synthetic and realistic processes from the literature.</p></div>
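The overhead-gating idea behind partial simulation can be sketched as follows. This is a minimal Python illustration under our own assumptions, not the overhead management algorithm from the thesis: handlers are modeled as functions that return a (result, cost) pair so the example stays deterministic, and the gate executes the shadow version only while its accumulated cost stays within a budget expressed as a fraction of production cost.

```python
class ShadowGate:
    """Toy per-request decision: execute the new (shadow) version, or
    fall back to a cheaper simulated outcome, so that measured shadow
    overhead stays below a budget fraction of production cost."""

    def __init__(self, budget=0.2):
        self.budget = budget      # allowed shadow cost as fraction of prod cost
        self.prod_cost = 0.0
        self.shadow_cost = 0.0

    def handle(self, request, run_old, run_new, simulate_new):
        # Customers only ever see the old version's result.
        result, cost = run_old(request)
        self.prod_cost += cost

        # Shadow-execute only while overhead is within budget; otherwise
        # record a cheaper but more speculative simulated outcome.
        if self.shadow_cost <= self.budget * self.prod_cost:
            shadow_result, shadow_cost = run_new(request)
            self.shadow_cost += shadow_cost
            mode = "executed"
        else:
            shadow_result = simulate_new(request)
            mode = "simulated"
        return result, shadow_result, mode
```

The gate makes the speculativeness/exposure trade-off explicit: a larger budget yields more real shadow executions (less speculative, more overhead), a smaller one yields more simulated outcomes.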
<div xmlns="http://www.tei-c.org/ns/1.0"><head>AB Testing</head><p>AB testing executes the two versions in production such that each version serves a portion of the customers. This approach has a high risk of exposure, but the results of the tests are fair and reliable. To mitigate the risk of exposure, we use multi-armed bandit algorithms that dynamically allocate user requests to the process versions based on their performance. These algorithms observe the performance indicators of each version and shift user requests towards the version that performs better in the given context. Effectively, the best version can be identified by the end of the test. We propose a modular architecture that supports the execution of such algorithms. We design two algorithms, LTAvgR and ProcessBandit, for the BPMS case and propose reward designs for these algorithms <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Our AB testing solution observes the performance of process executions in the form of Process Performance Indicators (PPIs), converts the PPIs into rewards, updates the rewards in the case of incomplete observations (as explained in P3), and dynamically allocates more requests to the version with better performance. We evaluate this solution with real-world case studies from the domains of building permit approval, banking, and helicopter pilot licensing. We also demonstrate the efficacy of the algorithms with experiments on convergence to the optimal solution and on scalability, using synthetic cases.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1.</head><label>1</label><figDesc>Fig. 1. Process improvement approaches of AB-BPM</figDesc></figure>
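The step of converting several PPIs into a single reward can be illustrated with a toy scoring rule. This is our own simplification, not the reward designs of LTAvgR or ProcessBandit (those are given in [6, 7]): each lower-is-better PPI is scored against a target value and the scores are combined as a weighted average in [0, 1].

```python
def ppi_reward(ppis, targets, weights):
    """Toy rule folding multiple lower-is-better PPIs (e.g. duration,
    cost) into one scalar reward in [0, 1]. A PPI scores 1.0 at or
    below its target and decays toward 0 as it exceeds the target."""
    total = 0.0
    for name, weight in weights.items():
        value, target = ppis[name], targets[name]
        score = min(1.0, target / value) if value > 0 else 1.0
        total += weight * score
    return total / sum(weights.values())
```

A bandit algorithm would feed such a scalar back as the reward for the version that served the instance, so that faster or cheaper executions pull more traffic toward that version.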
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Bass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhu</surname></persName>
		</author>
		<title level="m">DevOps -A Software Architect&apos;s Perspective</title>
				<imprint>
			<publisher>Addison-Wesley</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note>SEI series in software engineering</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Fundamentals of Business Process Management</title>
		<author>
			<persName><forename type="first">M</forename><surname>Dumas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Reijers</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
	<note>Second Edition</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Breakthrough Business Results With MVT: A Fast, Cost-Free &quot;Secret Weapon&quot; for Boosting Sales, Cutting Expenses, and Improving Any Business Process</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">W</forename><surname>Holland</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Characterizing experimentation in continuous deployment: A case study on Bing</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kevic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Murphy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Beckmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Software Engineering: Software Engineering in Practice Track</title>
				<imprint>
			<publisher>IEEE Press</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="123" to="132" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Shadow testing for business process improvement</title>
		<author>
			<persName><forename type="first">S</forename><surname>Satyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Paik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Ciccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Confederated International Conferences: CoopIS</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="153" to="171" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">AB-BPM: performance-driven instance routing for business process improvement</title>
		<author>
			<persName><forename type="first">S</forename><surname>Satyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Paik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Di</forename><surname>Ciccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Business Process Management, BPM</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="113" to="129" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">AB Testing for process versions with contextual multi-armed bandit algorithms</title>
		<author>
			<persName><forename type="first">S</forename><surname>Satyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Paik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Di</forename><surname>Ciccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Advanced Information Systems Engineering (CAiSE)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="19" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Business process improvement with the AB-BPM methodology</title>
		<author>
			<persName><forename type="first">S</forename><surname>Satyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Paik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Di</forename><surname>Ciccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mendling</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Information Systems</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
