<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Why in Event Logs for Robotic Process Automation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Antonio Martínez-Rojas</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Seville. Computer Languages and Systems Department. E.T.S. Ingeniería Informática</institution>
          ,
          <addr-line>Avenida Reina Mercedes s/n, 41012, Seville</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <fpage>42</fpage>
      <lpage>50</lpage>
      <abstract>
        <p>The concept of Robotic Process Automation (RPA) has gained relevant attention in both industry and academia. RPA provides a way of automating mundane and repetitive human tasks with less intrusiveness into the IT infrastructure. Besides traditional user interviews and process document analysis, a common practice starts by observing the behavior of humans with the information systems while they perform the process to be automated. This sequence of human interactions with the user interface (i.e., mouse clicks and keystrokes) is stored in logs for later analysis. Analyzing these interactions brings significant benefits when conducting RPA projects. Nonetheless, some decision-based behaviors of humans require additional information to be explained. For example, a human may reject an invoice because some field is missing on a form; however, there is no interaction with that field, so such information is not stored in the log. Therefore, this Ph.D. elaborates a method to obtain additional information based on screenshots collected during the process execution. Features are extracted from the screenshots to enrich the log, which is later used for classifying human decisions in a machine-and-human-readable form. The proposed method can be applied to generate advanced support in RPA projects, e.g., producing an enhanced process analysis, supporting robot development, or generating predictions and simulations. The approach has been validated using synthetic data, where promising results were obtained.</p>
      </abstract>
      <kwd-group>
        <kwd>Robotic Process Automation</kwd>
        <kwd>Process Discovery</kwd>
        <kwd>Task mining</kwd>
        <kwd>Decision Model Discovery</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Research problem and motivation</title>
      <p>In the last decade, industry has embraced Robotic Process Automation (RPA) as a
new level of process automation that focuses on tackling structured and repetitive tasks
quickly and efficiently. Thus, a digital workforce is enabled to mimic the behavior of
human employees. This approach sharply contrasts with other process automation
paradigms, which consist of orchestrating the application programming interfaces (APIs)
of the software [1]. In turn, RPA implies a lower level of intrusiveness since this type of
software sits on top of the information technology infrastructure of a company instead
of being part of such infrastructure [2, 3]. It is acknowledged that a successful RPA
adoption goes beyond simple cost savings and also contributes to improvements in terms
of agility and quality [4, 5, 6].</p>
      <p>CEUR Workshop Proceedings (http://ceur-ws.org), ISSN 1613-0073, CC BY 4.0.</p>
      <p>Most RPA projects start by observing human workers performing the process that
is later automated. More precisely, terms like Robotic Process Mining [7], Task Mining
[8], and Desktop Activity Mining [9] have been coined by the RPA community to exploit
UI logs, i.e., series of timestamped events (e.g., mouse clicks and keystrokes)
obtained by monitoring user interfaces. These methods are very convenient for helping
analysts identify candidate processes to robotize, their different variants, and their
decision points efficiently [10]. However, a traditional user interface log is limited in its
ability to explain all human behavior, e.g., a decision may be motivated by a form field
even though that field is never directly interacted with. Therefore, such human behaviors
(i.e., decision points) remain unexplained by current proposals.</p>
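For illustration, such a UI log can be sketched as a simple table of timestamped interaction events. The field names and values below are invented for the example (they are not a standard schema); the screenshot column anticipates the extension pursued in this thesis:

```python
from datetime import datetime

# A minimal, hypothetical UI log: one row per user interaction.
# Note that only elements the user interacts with appear here; a form
# field the user merely looks at before deciding leaves no trace.
ui_log = [
    {"case_id": "invoice-001", "timestamp": datetime(2022, 5, 2, 9, 0, 12),
     "event": "click", "target": "button_open_invoice", "screenshot": "cap_0001.png"},
    {"case_id": "invoice-001", "timestamp": datetime(2022, 5, 2, 9, 0, 25),
     "event": "keystroke", "target": "field_amount", "value": "120.50",
     "screenshot": "cap_0002.png"},
    {"case_id": "invoice-001", "timestamp": datetime(2022, 5, 2, 9, 0, 31),
     "event": "click", "target": "button_reject", "screenshot": "cap_0003.png"},
]

# Group events into cases (process instances) for later discovery.
cases = {}
for event in ui_log:
    cases.setdefault(event["case_id"], []).append(event["event"])

print(cases)  # {'invoice-001': ['click', 'keystroke', 'click']}
```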
      <p>This problem is accentuated in the context of Business Process Outsourcing (BPO),
where the processes being executed are hosted on external systems. Connections to these
systems are typically made via secure connections through virtualized environments (e.g.,
Citrix or TeamViewer). These types of connections only allow collecting raw images of
the monitored screen, i.e., screenshots, rather than the structure of the information being
processed (e.g., the DOM tree of a website). Managing screenshots throughout the
lifecycle thus requires support that existing proposals do not provide.</p>
      <p>
        Therefore, this Ph.D. intends to address these challenges based on the following
premises: (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) it is possible to discover processes from a UI log, (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) it is possible to extract
useful features from screenshots, and (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) it is possible to extract the reasons why process decisions
are made. In this context, we rely on the following research question (RQ) to give rise to
this research: How does image analysis improve RPA support?
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Research plan and methodology</title>
      <p>As the research question is very generic, we refine it into four sub-questions:
RQ1: Are the images displayed on the screen relevant to the analysis of processes with
Robotic Process Mining (RPM)?
RQ2: What alternatives exist to incorporate screen information into RPM?
RQ3: How can screen information be exploited in the early stages of the RPA life cycle?
RQ4: What effects would it have on the analysis and further stages of the RPA life cycle?</p>
      <p>In answering these questions, this project proposal is planned following the Design
Science methodology [11] and is organized into five main phases (P) subdivided into tasks
(T). The methodology proposes that 3 of the 5 phases should be covered extensively (i.e.,
in-depth).</p>
      <p>• P1. Explicate Problem: A validation is proposed to answer RQ1, verifying
that the problem is significant and, therefore, an interesting contribution considering
the needs of the scientific community and the industry. A Delphi study [12] is
proposed for this phase.
• P2. Define Requirements (in-depth): Definition of the solution requirements, related
to RQ3. (T2.1) Requirements of a solution that encompasses process discovery in
image-based environments (i.e., processing, cleaning, and feature extraction based on
a set of screenshots obtained from human monitoring). (T2.2) Requirements of a
solution that receives as input the output of the previous subtask and is able to
explain the decision points in a machine-and-human-readable form.
• P3. Design and Develop Artefact (in-depth): Design and development of what
is defined in P2, which is related to RQ2 and RQ3. (T3.1) The architecture,
algorithms, and technologies to be used to address the phases of the proposed
method (cf. Section 3) will be defined. This involves studying tools and algorithms
such as ProM and Disco (for process discovery); Canny, Sobel, Scikit-Image, Keras,
Keras-OCR (https://github.com/faustomorales/keras-ocr), Scikit-learn, and PyTorch
(for image processing); and RPA-Logger (https://gitlab.com/ajramirez/rpa-logger),
Spyrix, or Spytech (for user behavior monitoring). (T3.2) Implementation of the
solution designed in T3.1.
• P4. Demonstrate Artefact (in-depth): Demonstration of the artifact developed
in P3, taking as reference the protocol defined in [13], widely applied in software
engineering. Two experiments are planned: (T4.1) using synthetic data, which
allows refining the artifact, and (T4.2) using real data, which allows us to bring the
proposal as close as possible to a final prototype. This supports answering RQ3
and RQ4.
• P5. Evaluate Artefact: Validation of the proposal deployed in a real industrial
context, analyzing the feedback from the users. The use case will be designed to be
compatible with a BPO environment, which is the clearest example of the use of
virtualized systems. This finally completes the answer to RQ3 and RQ4.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Approach</title>
      <p>In this section, a method to enable advanced RPA support is described (cf. Fig. 1).
This method proposes an image-based decision model discovery system for virtualized
environments that offers RPA support. At a glance, the most representative phases of
the approach are:
1. Behavior monitoring to obtain a UI log. This UI log should include a screenshot for
each event, e.g., using a tool such as [14]. This phase is already extensively covered
in previous investigations [14, 15]. However, the current research requires an
adaptation to capture more sources of information, e.g., images.
2. Discover processes from the UI log to build the process model that best represents
the captured human behavior, e.g., using [10]. This phase makes the decision points
explicit but lacks further information regarding how a decision is made. Similar
to the previous phase, the current state of the art already provides suitable
mechanisms to conduct the discovery phase [7, 10]. However, they need to be
adapted according to the extensions performed in phase 1.</p>
      <p>[Figure 1. Overview of the approach: Behaviour Monitoring produces a UI Log with
screen captures; Process Discovery makes decision points explicit; feature extraction
operates on the captures.]</p>
      <p>
3. Feature extraction to transform the screenshots into objective and actionable
knowledge, e.g., the presence of specific buttons or text. These features are
automatically included as attributes (i.e., columns) in the events of the UI log. For
this extraction, several proposals exist, such as screen-scraping algorithms [16] or
AI techniques, e.g., Keras-OCR. Primarily, this proposal will focus on applying
neural networks following the approach of [17, 18]. We assume that this feature
extraction will greatly increase the horizontal size of the UI log since a large number
of additional columns will be added. Therefore, a noise reduction technique is
expected to be necessary to discern relevant information on the screen from
superfluous information, for example, by analyzing the UI designs (i.e., how the UI
is constructed), the user attention (i.e., which parts of the UI are relevant for the
user), or the user behavior (i.e., how the user interacts and navigates through the
UIs).
4. Discovering decision models from the log enriched with the extracted features, in
a machine-and-human-readable form. The discovery process is addressed for each
decision point of the process model. Herein, the extended UI log is transformed
into a dataset, which is prepared to train an explainable classifier such as decision
trees. Motivated by the work of [19], which is applied to traditional logs, the UI log
will be converted to a dataset. To do this, each case in the UI log will generate
a line in the dataset that will be labeled with the decision that is made at the
decision point.
5. Enhance the discovered process by incorporating the new information into the process
model. This requires the development of a new process modeling language for RPA,
or the extension of an existing one, with two main objectives: (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) to offer a better
understandability of the process model for the human, and (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) to use the formality
of the language to add technical information to be able to automate or systematize
the RPA support tasks.
6. Provide RPA support using the new modeling language, similarly to SmartRPA
[20], but covering those image-based contexts where SmartRPA does not offer
support. This support can be reflected in the following applications. First, the
automatic development of robots, based on the extracted process model. Second,
generating predictions about the decisions that robots should make before they
are made, since richer information about the process is available. Third, offering
simulation scenarios, extending the possibility of RPA testing automation outlined
in [21]. And lastly, offering graphical support to visually represent the features on
which the decisions of the process are based.
      </p>
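As a toy illustration of the feature-extraction phase above, the following sketch enriches UI-log events with one boolean column per candidate screen element and then discards constant columns as a naive form of noise reduction. The OCR results are stubbed by hand; in a real pipeline they would come from a tool such as Keras-OCR, and all names here are assumptions:

```python
# Hypothetical OCR results per screenshot: texts detected on screen.
# In practice these would come from an OCR/vision model (e.g., Keras-OCR).
ocr_results = {
    "cap_0001.png": ["Invoice", "Amount", "Accept", "Reject"],
    "cap_0002.png": ["Invoice", "Accept", "Reject"],  # "Amount" field missing
}

# Screen elements whose presence we encode as features (assumed relevant).
candidate_features = ["Amount", "Accept", "Reject", "Signature"]

def enrich(event, ocr_results, candidate_features):
    """Append one boolean column per candidate feature to a UI-log event."""
    texts = ocr_results.get(event["screenshot"], [])
    enriched = dict(event)
    for feature in candidate_features:
        enriched[f"has_{feature}"] = feature in texts
    return enriched

ui_log = [
    {"case_id": "c1", "event": "click", "screenshot": "cap_0001.png"},
    {"case_id": "c2", "event": "click", "screenshot": "cap_0002.png"},
]
enriched_log = [enrich(e, ocr_results, candidate_features) for e in ui_log]

# Naive noise reduction: drop columns that never vary across the log,
# since constant features cannot explain any decision.
varying = [f for f in candidate_features
           if len({e[f"has_{f}"] for e in enriched_log}) > 1]
print(varying)  # ['Amount']
```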
      <p>Although the proposed method and the joint application of the different techniques
represent a novelty at the research level, there are existing works related to each specific
phase of this proposal. In the case of behavioral monitoring, there are several industrial
keylogger solutions to monitor human behavior [22, 23, 24, 25]. However, they only
store keystrokes and mouse clicks, in contrast to the keylogger of [14], which additionally
extracts screenshots. In the field of image feature extraction, some existing proposals
make it possible to identify and classify GUI (Graphical User Interface) components
within a screenshot [26, 17]. GUI components are atomic graphical elements with
predefined functionality, which are displayed within the GUI of a software application
[17]. In this Ph.D., specific knowledge of these areas is applied to obtain enriched logs
from the processes to be automated.</p>
      <p>Focusing on process discovery proposals related to this work, Agostinelli et al. [20]
and Leno et al. [7] cover the complete RPA lifecycle from event capture to the automatic
generation of scripts for process automation and monitoring. Their way of capturing data
is based on an Action Logger, which captures only parts of the activity on the system
through plugins. Thus, although they focus on keyboard and mouse events, they also
capture the DOM tree for events captured through the web browser. Unlike these
approaches, the present work focuses on virtualized environments, where screenshots
are the main source of information and there is no access to deeper elements such as
the DOM tree. Furthermore, it focuses on the early stages of the RPA lifecycle since
it is hypothesized that the more effort is put into those stages, the better the results
obtained in subsequent ones.</p>
      <p>Considering decision model discovery proposals, Rozinat and van der Aalst [19] use
decision trees to analyze the choices made in terms of data dependencies affecting the
routing of a case. However, this approach does not offer the possibility of showing
graphically to a non-expert user why a decision has been made. Moreover, this solution
has not been validated in RPA contexts. Furthermore, Leno et al. [27] present an
algorithm that generates "association rules" between the events that occurred and the
results or decisions obtained. Nonetheless, their method of capturing information is based
on a plugin, similarly to the aforementioned Action Logger, which does not capture the
information that the user generates outside the context of the plugin. In contrast, the
present work relies on capturing the complete activity in the user interface. Thus, all
interactions performed by the user are recorded to support the process discovery phase.</p>
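To make the decision-discovery step concrete, the following deliberately minimal sketch turns labeled cases of an enriched UI log into a one-level, human-readable decision rule by choosing the boolean feature that best separates the decisions. It is not the actual method of this thesis, which would train full decision trees (e.g., with Scikit-learn); the data and feature names are invented:

```python
from collections import Counter

# One row per case of the enriched UI log, labeled with the decision taken
# at the decision point (invented data for illustration).
dataset = [
    {"has_Amount": True,  "has_Signature": True,  "decision": "accept"},
    {"has_Amount": True,  "has_Signature": False, "decision": "accept"},
    {"has_Amount": False, "has_Signature": True,  "decision": "reject"},
    {"has_Amount": False, "has_Signature": False, "decision": "reject"},
]

def majority(labels):
    """Most frequent label in a list."""
    return Counter(labels).most_common(1)[0][0]

def best_split(rows, features, label="decision"):
    """Pick the boolean feature whose split yields the fewest label
    mismatches, and return a human-readable one-level decision rule."""
    def errors(feature):
        err = 0
        for value in (True, False):
            group = [r[label] for r in rows if r[feature] == value]
            if group:
                err += len(group) - Counter(group).most_common(1)[0][1]
        return err
    feature = min(features, key=errors)
    then_branch = majority([r[label] for r in rows if r[feature]])
    else_branch = majority([r[label] for r in rows if not r[feature]])
    return f"if {feature} then {then_branch} else {else_branch}"

rule = best_split(dataset, ["has_Amount", "has_Signature"])
print(rule)  # if has_Amount then accept else reject
```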
    </sec>
    <sec id="sec-4">
      <title>4. Contributions to BPM Research</title>
      <p>This research contributes to BPM research by providing an entirely image-based
approach, which provides a new source of information for the study of business processes.
It enables a more effective and comprehensive discovery of human behavior based on the
extraction of features from screenshots to enrich the UI log to be analyzed. Previously,
there were decision points that were not discovered, or whose reasons were incorrectly
discovered, resulting in erroneous implementations. The latter is mitigated by applying
this approach, which increases the capabilities of the analysis phase and, thus, of the
subsequent phases of the RPA lifecycle. Besides that, it also contributes to areas such as
process mining and decision model discovery, where its application is immediate.
Consequently, this approach increases the current degree of automation, so that
automatable processes that were previously not automatically discoverable now are.</p>
      <p>
        In addition, some other areas benefit from this approach, such as (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) testing of robots, (
        <xref ref-type="bibr" rid="ref2">2</xref>
        )
checking conformance of the process models to be replicated, (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) tracking and monitoring
the execution of robots in production environments for the same purpose, or (
        <xref ref-type="bibr" rid="ref4">4</xref>
        ) ensuring
that service level agreements are met.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Project status and challenges</title>
      <p>
        Existing results related to this approach acknowledge its suitability for supporting the
RPA lifecycle. Specifically, in [10] a method is proposed to support the analysis of human
behavior in scenarios that highly depend on screen captures. Herein, an algorithm is
proposed to (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) efficiently identify similar activities in a UI log based on the fingerprints
of the screen captures and (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) discover the underlying process model based on process
mining and noise filtering techniques. Later on, [14] formalizes a cross-platform keylogger
with a distributed architecture that can be used to generate and manage the UI logs of
several workers working on the same processes. This logger addresses the needs of the
first phase of the suggested method (cf. Fig. 1), while the image analysis proposal covers
the second one. Different algorithms for image recognition are being evaluated for the
third phase (i.e., feature extraction). Although these algorithms belong to the Machine
Learning area, our initial results indicate that they are appropriate for carrying out this
task. However, their suitability depends on the information in the screen capture, e.g.,
the layout (single or double columns), the source (a web form or a PDF document),
etc. In addition, we conduct the fourth phase based on previous results. More precisely,
we build upon previous work in the area of Configurable Business Process Models [28],
which generates decision trees for each decision point and, afterward, a questionnaire to
help make the decision. Currently, our research is based on a first version of the
framework that supports this method, focused on the feature extraction and decision
model discovery phases [29]. Promising results are being obtained there, and they seem
to be appropriate for RPA as well.
      </p>
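As a rough illustration of the fingerprint idea mentioned above, the sketch below computes a difference-hash-style fingerprint over tiny invented grayscale matrices and treats screenshots with nearly identical fingerprints as the same activity. The exact algorithm of [10] may differ; everything here, including the similarity threshold, is an assumption for the example:

```python
def fingerprint(pixels):
    """Difference-hash-style fingerprint: one bit per horizontally adjacent
    pixel pair, set when the left pixel is brighter than the right one."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return tuple(bits)

def hamming(a, b):
    """Number of positions in which two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

def similar(a, b, tol=2):
    """Screens are similar when their fingerprints differ in at most tol bits."""
    return not hamming(a, b) > tol

# Tiny invented 3x4 grayscale "screenshots": two nearly identical screens
# (same form with slight rendering noise) and one clearly different screen.
screen_a = [[200, 50, 50, 200], [199, 49, 51, 200], [12, 11, 11, 12]]
screen_b = [[198, 52, 52, 201], [199, 49, 51, 200], [12, 11, 11, 12]]
screen_c = [[10, 200, 10, 200], [10, 200, 10, 200], [200, 10, 200, 10]]

fps = [fingerprint(s) for s in (screen_a, screen_b, screen_c)]
# Screens whose fingerprints are close are treated as the same activity.
print(similar(fps[0], fps[1]), similar(fps[0], fps[2]))  # True False
```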
      <p>
        The next identified challenges are: (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) generate synthesized data respecting a given
process model in order to validate the proposal, (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) study the user’s attention (e.g. gaze
analysis) for noise reduction in the UI log, to select the relevant information from all the
features extracted from screenshots, and (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) perform tests with explainable algorithms
other than decision trees to compare the results of decision model discovery.
      </p>
      <p>Lastly, these challenges aim at fully automating the RPA lifecycle using new sources
of information such as images [30]. This final goal is ambitious and will require a gradual
increase in the organization's digital maturity; until the point of total automation is
reached, it will be necessary to consider the human-in-the-loop paradigm [31] so that
automatic techniques and human intervention coexist.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research is part of the project PID2019-105455GB-C31, funded by MCIN/AEI/
10.13039/501100011033. The author of this work is currently supported by the FPU
scholarship program, granted by the Spanish Ministry of Education and Vocational
Training (FPU20/05984), and by his Ph.D. supervisors, Andrés Jiménez Ramírez and
José González Enríquez.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[10] A. Jiménez-Ramírez, H. A. Reijers, I. Barba, C. Del Valle, A method to improve the early stages of the robotic process automation lifecycle, in: International Conference on Advanced Information Systems Engineering, Springer, 2019, pp. 446–461.
[11] P. Johannesson, E. Perjons, An Introduction to Design Science, Springer, 2014.
[12] N. C. Dalkey, The Delphi method: An experimental study of group opinion, Technical Report, RAND Corp., Santa Monica, CA, 1969.
[13] P. Brereton, B. Kitchenham, D. Budgen, Z. Li, Using a protocol template for case study planning, in: 12th International Conference on Evaluation and Assessment in Software Engineering (EASE), 2008, pp. 1–8.
[14] J. M. López-Carnicer, C. del Valle, J. G. Enríquez, Towards an opensource logger for the analysis of RPA projects, in: International Conference on Business Process Management, Springer, 2020, pp. 176–184.
[15] V. Leno, A. Polyvyanyy, M. La Rosa, M. Dumas, F. M. Maggi, Action logger: enabling process mining for robotic process automation, in: Proceedings of the Dissertation Award, Doctoral Consortium, and Demonstration Track at the 17th International Conference on Business Process Management (BPM 2019), Vienna, Austria, 2019, pp. 124–128.
[16] J. Bisbal, D. Lawless, B. Wu, J. Grimson, Legacy information systems: Issues and directions, IEEE Software 16 (1999) 103–111.
[17] K. Moran, C. Bernal-Cárdenas, M. Curcio, R. Bonett, D. Poshyvanyk, Machine learning-based prototyping of graphical user interfaces for mobile apps, IEEE Transactions on Software Engineering 46 (2018) 196–221.
[18] Z. Feng, J. Fang, B. Cai, Y. Zhang, GUIS2Code: A computer vision tool to generate code automatically from graphical user interface sketches, in: International Conference on Artificial Neural Networks, Springer, 2021, pp. 53–65.
[19] A. Rozinat, W. M. van der Aalst, Decision mining in ProM, in: International Conference on Business Process Management, Springer, 2006, pp. 420–425.
[20] S. Agostinelli, M. Lupia, A. Marrella, M. Mecella, Automated generation of executable RPA scripts from user interface logs, in: International Conference on Business Process Management, Springer, 2020, pp. 116–131.
[21] A. Jiménez-Ramírez, J. Chacón-Montero, T. Wojdynsky, J. González Enríquez, Automated testing in robotic process automation projects, Journal of Software: Evolution and Process (2020) e2259.
[22] Spyrix Inc., Spyrix: parental &amp; employee monitoring software. Available at www.spyrix.com, last accessed May 2022.
[23] BestXSoftware, Best free keylogger. Available at bestxsoftware.com/es, last accessed May 2022.
[24] Spytech Software and Design, Inc., Spytech: providing computer monitoring solutions since 1998. Available at www.spytech-web.com/spyagent.shtml, last accessed May 2022.
[25] A. Randhawa, Blackcat keylogger. Available at https://github.com/jayrandhawa/Keylogger, last accessed May 2022.
[26] Z. Xu, X. Baojie, W. Guoxin, Canny edge detection based on OpenCV, in: 2017 13th IEEE International Conference on Electronic Measurement &amp; Instruments (ICEMI), IEEE, 2017, pp. 53–56.
[27] V. Leno, A. Augusto, M. Dumas, M. La Rosa, F. M. Maggi, A. Polyvyanyy, Identifying candidate routines for robotic process automation from unsegmented UI logs, in: 2020 2nd International Conference on Process Mining (ICPM), IEEE, 2020, pp. 153–160.
[28] A. Jiménez-Ramírez, I. Barba, B. Weber, C. Del Valle, Automatic generation of questionnaires for supporting users during the execution of declarative business process models, in: Business Information Systems, Springer International Publishing, Cham, 2014, pp. 146–158.
[29] A. Martínez-Rojas, A. Jiménez-Ramírez, J. González Enríquez, H. Reijers, Analysing variable human actions for robotic process automation, in: International Conference on Business Process Management (BPM 2022), 2022. In press.
[30] A. Jiménez-Ramírez, Humans, processes and robots: a journey to hyperautomation, in: International Conference on Business Process Management, Springer, 2021, pp. 3–6.
[31] R. C. Ruiz, A. J. Ramírez, M. J. E. Cuaresma, J. G. Enríquez, Hybridizing humans and robots: An RPA horizon envisaged from the trenches, Computers in Industry 138 (2022) 103615.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>W. M. P. van der Aalst</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Bichler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Heinzl</surname>
          </string-name>
          , Robotic Process Automation,
          <source>Business &amp; Information Systems Engineering</source>
          <volume>60</volume>
          (
          <year>2018</year>
          )
          <fpage>269</fpage>
          -
          <lpage>272</lpage>
          . doi:10.1007/s12599-018-0542-4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Frank</surname>
          </string-name>
          , Introduction To Robotic Process Automation,
          <article-title>Institute for Robotic Process and Automation (</article-title>
          <year>2015</year>
          )
          <fpage>35</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Willcocks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lacity</surname>
          </string-name>
          , A New Approach to Automating Services,
          <source>MIT Sloan Management Review</source>
          <volume>58</volume>
          (
          <year>2016</year>
          )
          <fpage>40</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Asatiani</surname>
          </string-name>
          , E. Penttinen,
          <article-title>Turning robotic process automation into commercial success - Case OpusCapita</article-title>
          ,
          <source>Journal of Information Technology Teaching Cases</source>
          <volume>6</volume>
          (
          <year>2016</year>
          )
          <fpage>67</fpage>
          -
          <lpage>74</lpage>
          . doi:10.1057/jittc.2016.5.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Capgemini</surname>
          </string-name>
          ,
          <string-name>
            <surname>Robotic Process</surname>
          </string-name>
          Automation -
          <article-title>Robots conquer business processes in back offices (</article-title>
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lacity</surname>
          </string-name>
          , L. Willcocks,
          <article-title>What Knowledge Workers Stand to Gain from Automation, Harvard Business Review (</article-title>
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Leno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Polyvyanyy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dumas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. La</given-names>
            <surname>Rosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <article-title>Robotic Process Mining Vision</article-title>
          and Challenges,
          <source>Business &amp; Information Systems Engineering</source>
          (
          <year>2020</year>
          ).
          doi:10.1007/s12599-020-00641-4.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>L. Reinkemeyer.</surname>
          </string-name>
          , Process Mining in Action. Principles, Use Cases and Outlook, Springer,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Linn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Werth</surname>
          </string-name>
          ,
          <article-title>Desktop activity mining-a new level of detail in mining business processes</article-title>
          , in: Workshops der INFORMATIK 2018-Architekturen, Prozesse, Sicherheit und Nachhaltigkeit, Köllen Druck+ Verlag GmbH,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>