How Data Models Can Contribute to Linking Real-Life Assets with
their Digital Twin – A Case Study in Predictive Maintenance
Moritz von Stietencrona, Karl Hribernika and Klaus-Dieter Thobena,b
a
  BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen, Hochschulring 20,
  28359 Bremen, Germany
b
  University of Bremen, Faculty of Production Engineering, Badgasteiner Straße 1, 28359 Bremen, Germany

                Abstract
                The basic concept of a digital twin, as widely used, mandates the existence of an original
                counterpart – most commonly dubbed “real” – which is represented by the digital instance.
                Consequently, this co-existence of the real and digital twins poses a continuous interoperability
                issue between the physical and the digital world. While in theory the mapping of, e.g., physical
                to digital properties is trivial, in practice it usually is not. This paper presents a case study on
                how this interoperability problem can be addressed by the use of a unified data model for
                predictive maintenance applications, which has been developed in the EU-funded innovation
                action UPTIME.

                Keywords
                Digital Twin, Data Model, Interoperability, Predictive Maintenance

1. Problem Statement
   Digital Twins are commonly understood as “digital replications of living as well as non-living
entities” [1] and sometimes also as a means to represent intangible entities like processes [2]. While
the definitions are as diverse as the domains to which digital twin concepts are applied nowadays [3],
all concepts rely on the interaction between the real and digital counterparts. While technology has
made rapid progress over the last decade in the availability and integration of sensors and in asset
connectivity, these developments have also created a plethora of closed or semi-closed frameworks
through which digital twins are fed. This near-inevitable vendor lock-in presents a major hurdle to the
widespread adoption of digital twin technology, as it discourages the integration of complementary
systems side by side. Furthermore, the prevailing systems are often built for a specific digital twin
application (e.g. simulation or visualisation), which hinders exploiting economies of scale and
transferring applications between domains.

2. Digital Twins and Data Models
    While in public perception digital twins are most commonly associated with the applications which
are realised based on them (e.g. 3D visualisations), the main component of a digital twin is really the
data it holds and, subsequently, the architecture and structure it utilises to do so [3], [4]. The different
angles from which digital twins are designed and implemented naturally introduce different
architectures. However, the vast majority are built around a more or less simple structure which
identifies the represented asset and corresponds to the respective counterparts in the real twin’s
structure.
    The asset breakdown structure can be detailed to very different depths, depending on the needs of
the application. The properties and metadata which the digital twin holds about its real counterpart are
then usually mapped onto this core structure. Many implementations of digital twins have a concrete
physical object in their focus [5], which defines the asset breakdown structure used for the digital twin.
While this provides clear connections between the digital and physical twins, their components and
states, modelling and including external systems becomes increasingly difficult. Thus, some
architectures foresee the possibility of linking different digital twins and joining their algorithms, e.g.
for co-simulation [6].
    Like the focus of the implementation, the approach to data storage and modelling also varies
significantly. For applications in which the data to be managed is known precisely, and/or which are
tailor-made for a specific purpose, a rigid data model can be applied for maximum efficiency and ease
of use [7]. In cases where interaction between digital twins or the integration of external systems is
needed, however, a simple tailor-made data model usually does not provide enough flexibility in itself.
Here, the use of standardised modelling languages has become established [8]. This also ensures some
extensibility for future integrations.
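    The following Python sketch contrasts the two approaches in a simplified way: a rigid, tailor-made
record versus a generic, property-based element in the spirit of standardised modelling languages such
as AutomationML [8]. All class and attribute names are assumptions chosen for illustration, not part of
any actual standard or product.

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class RigidSpindleTwin:
    """Tailor-made model: efficient and easy to use, but every new
    attribute or external system requires a schema change."""
    spindle_speed_rpm: float
    motor_temperature_c: float
    total_runtime_h: float


@dataclass
class FlexibleTwinElement:
    """Generic element in the spirit of standardised modelling languages:
    new properties and links to external systems or other twins can be
    added without changing the class itself."""
    element_id: str
    role: str                        # semantic role, e.g. "Spindle" or "Sensor"
    attributes: Dict[str, Any]       # open set of typed properties
    external_refs: Dict[str, str]    # links to external systems or other twins


spindle = FlexibleTwinElement(
    element_id="spindle-07",
    role="Spindle",
    attributes={"speed_rpm": 12000, "temperature_c": 61.5},
    external_refs={"cmms": "urn:cmms:asset:4711"},   # hypothetical external identifier
)
```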
    As the complexity of the asset and data structure increases, the complexity of the connection
between the two twins grows as well, since increasing complexity is often accompanied by increasing
data volumes. For especially complex assets, or when the digital twin is not only concerned with a single
component but with a whole assembly or fleet of products, this growing complexity can easily lead to a
categorisation as a big data application. In these cases, the architecture cannot sensibly be assessed for
the isolated digital twin but only as a co-existence of both physical and digital twins. With the physical
and digital twins moving closer together, the paradigm of edge vs. cloud processing becomes one of
the core issues [9]. The gap between physical and digital twins does not derive its importance
exclusively from the complexity of the twins and their constraints, but also from the aspired versatility
of the system and its applicability to diverse application scenarios.
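    As a simple illustration of the edge-versus-cloud trade-off, the sketch below condenses a window
of high-frequency sensor readings at the edge and forwards only a few features to the cloud-side twin.
The function names, the chosen features and the transport placeholder are assumptions made for
illustration, not elements of a specific architecture.

```python
import statistics
from typing import Dict, List


def aggregate_at_edge(samples: List[float]) -> Dict[str, float]:
    """Condense a window of raw sensor samples into a few features so that
    only a fraction of the data volume has to cross the gap between the
    physical and the digital twin."""
    return {
        "mean": statistics.fmean(samples),
        "max": max(samples),
        "stdev": statistics.pstdev(samples),
        "n_samples": float(len(samples)),
    }


def push_to_cloud_twin(asset_id: str, features: Dict[str, float]) -> None:
    """Placeholder for the transport to the cloud-side twin (e.g. via MQTT or HTTP)."""
    print(f"update {asset_id}: {features}")


# Illustrative use: one second of vibration data sampled at 1 kHz, reduced to four numbers
window = [0.01 * (i % 50) for i in range(1000)]
push_to_cloud_twin("pump-01/bearing-front", aggregate_at_edge(window))
```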

3. Bridging the Gap between Real-life and Digital Twins
    While the different shapes that current implementations of digital twins take are arguably fit for
purpose for their respective applications and sectors, a challenge for progressing the current state of
the art is creating a universally applicable structure for digital twins / representations which is not
limited to the peculiarities of a few application / sector combinations.
    The “UPTIME – Unified Predictive Maintenance” project is creating a predictive maintenance
system aimed at manufacturing and logistics applications utilising the digital twin paradigm. Due to the
objectives of the action, the system needs to be applicable to an extremely wide range of application
scenarios. At the same time, this puts the gap between physical and digital twins into the spotlight, as
the universal applicability raises the aforementioned issues. In the system definition process, the
question of a unified data model which is applicable to a diverse set of digital twins has been identified
as one of the main topics.
    The UPTIME project has initiated work on a unified data model with a focus on the scope of
information managed in a predictive maintenance solution and on how the different pieces of
information are related. The resulting business data model allows both end users and technical partners
to understand the logic behind the solution and its associated processes, and how the different required
inputs work together to provide the expected service. The UPTIME Business Data Model is not a
technical data model, but rather the logical view which provides a framework for that technical data
model and a visual and simplified way to explain it. For this purpose, six main data areas and their main
interactions have been identified. Figure 1 gives an overview of these data themes.
Figure 1: High-level UPTIME Business Data Model

   The arrows in the figure represent the key relationships between the distinct groups of data and form
the link to the functional model of the project. The Semantic/Context Model information serves as the
transverse structure for all other information. At the same time, the FMECA analysis serves as the
knowledge base for UPTIME and drives the key functionalities of Diagnosis, Prognosis, and Advisory
Generation. Finally, the follow-up of Action Implementations can be used to update the models for
more relevant results. The following table lists the individual data themes which are used in the model.

Table 1: UPTIME Business Data Model Data Themes

  Data Theme                  Description
  Semantic / Context Model    Model logically describing the system that is to be monitored through
                              predictive maintenance, down to the different sensors and maintainable
                              items and their relationships
  FMECA Analysis              Model describing in detail the different failure modes of the system
  Data Inputs                 Inputs collected from the monitored system and relevant information
                              systems, to be used in the Diagnosis and Prognosis algorithms
  Diagnosis & Prognosis       Information used to define the Diagnosis & Prognosis algorithms in
                              accordance with the monitored failure modes
  Advisory Generation         Information used to define the decision algorithms for the advisory
                              generation
  Action Implementation       Information managed in the context of the implementation of a
                              maintenance action (if managed within the solution)
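
    To illustrate how such a logical view can frame a later technical data model, the following Python
sketch encodes the six data themes and the relationships described above as a small directed graph. It
mirrors Figure 1 and Table 1 only at the level given in the text; the exact set and labels of the arrows
are assumptions, and this is not the actual UPTIME technical data model.

```python
from enum import Enum
from typing import List, Tuple


class DataTheme(Enum):
    SEMANTIC_CONTEXT_MODEL = "Semantic / Context Model"
    FMECA_ANALYSIS = "FMECA Analysis"
    DATA_INPUTS = "Data Inputs"
    DIAGNOSIS_PROGNOSIS = "Diagnosis & Prognosis"
    ADVISORY_GENERATION = "Advisory Generation"
    ACTION_IMPLEMENTATION = "Action Implementation"


# Relationships stated in the text; Figure 1 may contain further arrows.
RELATIONSHIPS: List[Tuple[DataTheme, str, DataTheme]] = [
    (DataTheme.FMECA_ANALYSIS, "drives", DataTheme.DIAGNOSIS_PROGNOSIS),
    (DataTheme.FMECA_ANALYSIS, "drives", DataTheme.ADVISORY_GENERATION),
    (DataTheme.DATA_INPUTS, "feeds", DataTheme.DIAGNOSIS_PROGNOSIS),
    (DataTheme.ACTION_IMPLEMENTATION, "updates", DataTheme.FMECA_ANALYSIS),
]

# The Semantic / Context Model is the transverse structure for all other themes.
for theme in DataTheme:
    if theme is not DataTheme.SEMANTIC_CONTEXT_MODEL:
        RELATIONSHIPS.append((DataTheme.SEMANTIC_CONTEXT_MODEL, "structures", theme))

for source, label, target in RELATIONSHIPS:
    print(f"{source.value} --{label}--> {target.value}")
```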

   Based on this high-level business data model, the individual processes have been mapped into a
detailed data model listing and linking the data classes. While this can partially be translated into a
technical data model, it is an important step in order to be able to link the different data classes to
business processes and merits. Figure 2 gives an overview of the detailed business data model.
Figure 2: Detailed UPTIME Business Data Model


4. Conclusion
    This paper has given an insight into a business-oriented approach to modelling the relevant data of
a predictive maintenance system. In the UPTIME project, this has been deployed to create a unified
technical data model for a wide variety of use cases while retaining a high level of flexibility and
component integration. Together with the modelling of the business processes and customer journeys,
this business data model has helped significantly to bring together the business and technical
perspectives of the project.
    While not within the scope of the UPTIME project, future research should be directed at transferring
this approach to dynamic data environments and models. On the technical level, representing such
constructs in ontologies or as graph collections is well established, but the business data modelling
perspective is often lacking.

Acknowledgements
   This project has received funding from the European Union's Horizon 2020 research and innovation
programme under grant agreement No 768634. The contents of this paper reflect only the authors’ view
and the Commission is not responsible for any use that may be made of the information it contains.

References
[1] A. El Saddik, “Digital Twins: The Convergence of Multimedia Technologies”, IEEE MultiMedia,
    vol. 25, no. 2, pp. 87–92, Apr. 2018, doi: 10.1109/MMUL.2018.023121167.
[2] E. Andaluz, “The Process Digital Twin: A step toward operational excellence”, Microsoft
    Industry Blogs, Oct. 23, 2017. https://cloudblogs.microsoft.com/industry-blog/manufacturing/2017/10/23/the-process-digital-twin-a-step-toward-operational-excellence/
    (accessed Feb. 13, 2020).
[3] W. Kritzinger, M. Karner, G. Traar, J. Henjes, and W. Sihn, “Digital Twin in manufacturing: A
    categorical literature review and classification”, IFAC-PapersOnLine, vol. 51, no. 11, pp. 1016–1022,
    Jan. 2018, doi: 10.1016/j.ifacol.2018.08.474.
[4] M. Grieves and J. Vickers, “Digital twin: Mitigating unpredictable, undesirable emergent behavior
    in complex systems”, in Transdisciplinary Perspectives on Complex Systems, Springer, 2017,
    pp. 85–113.
[5] A. Redelinghuys, A. Basson, and K. Kruger, “A six-layer digital twin architecture for a
    manufacturing cell”, in International Workshop on Service Orientation in Holonic and Multi-Agent
    Manufacturing, 2018, pp. 412–423.
[6] B. A. Talkhestani et al., “An architecture of an intelligent digital twin in a cyber-physical production
    system”, at - Automatisierungstechnik, vol. 67, no. 9, pp. 762–782, 2019.
[7] N. Kousi, C. Gkournelos, S. Aivaliotis, C. Giannoulis, G. Michalos, and S. Makris, “Digital twin
    for adaptation of robots’ behavior in flexible robotic assembly lines”, Procedia Manufacturing,
    vol. 28, pp. 121–126, 2019.
[8] G. N. Schroeder, C. Steinmetz, C. E. Pereira, and D. B. Espindola, “Digital Twin Data Modeling
    with AutomationML and a Communication Methodology for Data Exchange”, IFAC-PapersOnLine,
    vol. 49, no. 30, pp. 12–17, Jan. 2016, doi: 10.1016/j.ifacol.2016.11.115.
[9] K. M. Alam and A. El Saddik, “C2PS: A digital twin architecture reference model for the cloud-
    based cyber-physical systems”, IEEE Access, vol. 5, pp. 2050–2062, 2017.