<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The TTC 2017 Outage System Case for Incremental Model Views</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Georg Hinkel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>FZI Research Center of Information Technologies</institution>
          ,
          <addr-line>Haid-und-Neu-Straße 10-14, 76131 Karlsruhe</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>To cope with their increasing complexity, physical systems are more and more supported by software systems consisting of multiple subsystems. Usually, each subsystem uses the standards relevant to it for interoperability with other tools. Thus, one faces the problem that information about the system as a whole is distributed across multiple models. To solve this problem, model views can be introduced that combine these models and extract application-specific knowledge. As an example, the smart grid is a cyber-physical system in which one is interested in detecting, managing and preventing system outages. The information necessary to do this is split among the standards IEC 61970/61968, IEC 61850 and IEC 62056. This paper presents a benchmark case and evaluation framework for joining information spread across multiple models into a single view, based on a model-based outage management system for smart grids. Because cyber-physical systems often require very fast response times to changes of the underlying models, the benchmark focuses especially on the incremental computation of model views.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The complexity of today's systems, for example cyber-physical systems, makes it inevitable to divide such systems
into multiple subsystems that operate in different domains. In many of these domains, standards exist that the
respective subsystem has to comply with or for which many tools can be reused.</p>
      <p>
        For example, the smart grid is a cyber-physical system that spans the physical structures of the electricity
network and the system of software systems that monitor, control, and repair the system in case of outages [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Currently, many heterogeneous systems and standards have to interoperate to achieve the desired reliability,
stability, and efficiency of the electricity network.
      </p>
      <p>Because each of these standards describes different aspects of the system, models conforming to these standards
have to be combined if multiple aspects are required to gain insights about the system. In
model-driven engineering, model views are a tool to extract information from multiple models without confronting the
user with information that is unnecessary for a particular purpose, for example an analysis.</p>
      <p>In the area of smart grids, an additional challenge is the size of the models and the frequency of changes. In
combination, this means that very large amounts of data have to be processed in a very short amount of time.
However, the changes usually only affect small parts of the model, which is why an incremental view computation
appears beneficial.</p>
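      <p>To illustrate why incrementality pays off, the following sketch (a hypothetical, simplified reachability view in Python, not part of the case) contrasts a batch computation that scans all elements with an incremental update that only touches the changed element:</p>

```python
# Hypothetical, simplified sketch: a view of unreachable meters that is
# computed in batch once and then maintained incrementally on changes.

class ReachabilityView:
    def __init__(self, meters):
        # Batch computation: one full scan over all model elements.
        self.unreachable = {m["id"] for m in meters if not m["reachable"]}

    def on_reachability_changed(self, meter):
        # Incremental update: only the changed element is touched,
        # independent of the overall model size.
        if meter["reachable"]:
            self.unreachable.discard(meter["id"])
        else:
            self.unreachable.add(meter["id"])
```

      <p>After a change, the incremental update costs constant time, whereas recomputing the view from scratch costs time linear in the model size.</p>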
      <p>
        In this paper, we propose a benchmark based on selected model views created by Mittelbach and Burger [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] for a model-based outage management system for smart grids. Section 2 introduces the case in more
detail, in particular the involved standards. Section 3 defines the two tasks of the benchmark. Section 4 introduces
the benchmark framework. Section 5 explains how solutions to the benchmark are to be evaluated.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Case Description</title>
      <p>
        In the area of smart grids, the relevant standards are IEC 61970/61968, IEC 61850 and IEC 62056. These
standards are briefly described below (taken from [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]):
      </p>
      <p>
        IEC 61970/61968: The IEC 61970 standard defines the Common Information Model (CIM), which is used to
describe the physical components, measurement data, control and protection elements, and the SCADA
system. It is defined in UML notation. The IEC 61968 standard is an extension of the CIM for the
distribution network [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. It is also called the distributed CIM (DCIM).
      </p>
      <p>
        IEC 61850: This is a series of standards for substations with the purpose of supporting the interoperability of
intelligent electronic devices (IED) in substation automation systems. It defines the Abstract Communication
Service Interface with a mapping to concrete communication protocols, the XML-based Substation
Configuration Description Language (SCL), and the Logical Node (LN) model, which describes power system
functions [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        IEC 62056 COSEM (Companion Specification for Energy Metering) is the international standard for data
exchange for meter reading, tariff and load control in the domain of electricity metering. It works together
with the Device Language Message Specification (DLMS). Together, they provide a communication
profile to transport data from metering equipment to the metering system and to define a data model and
communication protocols for data exchange [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        While these standards are useful in their respective domains, one has to combine the information represented by these
standards to detect and prevent outage situations. Mittelbach and Burger presented a model-based outage management
system that synchronizes models of these standards and consists of a set of 15 views to help operators manage
outage situations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>Tasks</title>
      <p>In the scope of the proposed benchmark, we focus on two model views contained in the model-based outage
management system. A rather simple view is created to detect outages, while a second, slightly more complex
view supports the prevention of outages.</p>
      <p>
        For both tasks, we present the original implementation of the view in ModelJoin [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a language inspired by SQL
joins that specifies both the view type and the view definition in a single specification. Currently, an idiomatic QVT-O model transformation is generated from this specification. Due to space
limitations, we do not show the generated transformation, but it is available in the benchmark resources for
reference.
      </p>
      <sec id="sec-3-1">
        <title>Task 1: A view to detect outages</title>
        <p>To detect an outage, we use the fact that a smart meter cannot send any data when it is cut off from the power supply.
If this happens, the system can try to reach the meter but will receive a connection failure notification. This is
used to detect outages without relying on customer feedback.</p>
        <p>The information that the connection to a smart meter is lost is represented in the CIM model. The relevant
excerpt for this task is depicted in Figure 1b. It has to be matched with the corresponding physical device in
the COSEM model, where its location is stored. The latter is depicted in Figure 1c.</p>
        <sec id="sec-3-1-1">
          <title>View definition</title>
          <p>An implementation in ModelJoin is depicted in Listing 1.</p>
          <p>(Figure 1: (a) the viewtype for the outage detection task; (b) excerpt from the CIM metamodel relevant for task 1; (c) excerpt from the COSEM metamodel relevant for task 1.)</p>
          <p>A correct solution should match the meter assets in the CIM model with the physical devices in the COSEM
model. For each of these matches, the result model should contain a model element that conforms to the
generated view type class EnergyConsumer. For this element, a range of (partially derived) attributes shall be
copied, and the references to the location of the meter asset and physical device should be saved.</p>
          <p>These references to location and position point shall respect referential integrity. This means that if two meter assets
in the CIM model reference the same location, their joins in the view should also reference the same Location
element in the view.</p>
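          <p>The matching described above can be sketched as follows, using plain Python dicts as stand-ins for the CIM meter assets and COSEM physical devices (the actual benchmark operates on EMF models; all field names in this sketch are illustrative):</p>

```python
# Minimal sketch of the Task 1 join: match CIM meter assets with COSEM
# physical devices by ID and reuse Location view elements so that
# referential integrity is preserved. Field names are illustrative.

def build_outage_view(meter_assets, physical_devices):
    devices_by_id = {d["ID"]: d for d in physical_devices}
    location_cache = {}  # source location -> Location element in the view
    view = []
    for asset in meter_assets:
        device = devices_by_id.get(asset["mRID"])
        if device is None:
            continue  # no join partner, no view element
        loc = asset["location"]
        # Referential integrity: assets sharing a source location must
        # share the same Location element in the view.
        if id(loc) not in location_cache:
            location_cache[id(loc)] = {"position": dict(loc["position"])}
        view.append({
            "mRID": asset["mRID"],
            "connection": device["connection"],
            "location": location_cache[id(loc)],
        })
    return view
```

          <p>The location cache is what distinguishes a join with referential integrity from a plain copy: two view elements joined from assets with the same source location end up referencing the identical Location object.</p>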
          <p>
            The analysis algorithms to detect system disturbances proposed in [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ] work on phasor measurement data: their
basic concept is to compare the current phasor data of the traveling voltage wave with a historic set of normal
phasor data and to calculate an equality indicator such as a correlation coefficient. This indicator is compared with a certain
benchmark value; if it lies above, a failure is indicated. To enable this, the following information is necessary: a historic
set of normal phasor data of that section, a matrix of the current phasor data, and a calculation mechanism to
compare the two, followed by a comparison mechanism to decide whether it is a failure or not [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ].
          </p>
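          <p>The comparison concept can be sketched as follows; the root-mean-square deviation used here is only an illustrative stand-in for the equality indicator (the cited work mentions correlation-based indicators), and the function names and the threshold are assumptions of this sketch:</p>

```python
import math

# Sketch of the disturbance check: compare current phasor data against a
# historic normal profile via an indicator and a benchmark threshold.

def deviation(current, historic):
    # Root-mean-square deviation between current and historic phasor values.
    n = len(current)
    return math.sqrt(sum((c - h) ** 2 for c, h in zip(current, historic)) / n)

def is_disturbance(current, historic, threshold):
    # If the indicator lies above the benchmark value, a failure is indicated.
    return deviation(current, historic) > threshold
```
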
          <p>Task 2 requires matching elements from all three domain standards. For the COSEM standard, the
metamodel excerpt used is very similar to that of Task 1. The relevant metamodel excerpts for CIM and the substation
standard are depicted in Figure 2a and Figure 2b, respectively.</p>
          <p>
            The analysis viewtypes will not provide the analysis result but only the matrix of phasor data for the
comparison. Six queries were defined in [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ] that all have the same structure and provide the three-phase measurements of
voltage, frequency, current, active power, reactive power and apparent power. An implementation of this matrix
in ModelJoin is depicted in Listing 2.
          </p>
          <p>theta join CIM.IEC61968.Metering.MeterAsset with substationStandard.LNNodes.LNGroupM.MMXU
    where "CIM.IEC61968.Metering.MeterAsset.mRID = substationStandard.LNNodes.LNGroupM.MMXU.NamePlt.IdNs"
    as jointarget.PMUVoltageMeter {
  keep attributes CIM.IEC61970.Core.IdentifiedObject.mRID
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.phsA.cVal.mag.f" as PMUVoltageMeter.VoltageAMag:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.phsA.cVal.ang.f" as PMUVoltageMeter.VoltageAAng:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.phsB.cVal.mag.f" as PMUVoltageMeter.VoltageBMag:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.phsB.cVal.ang.f" as PMUVoltageMeter.VoltageBAng:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.phsC.cVal.mag.f" as PMUVoltageMeter.VoltageCMag:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.phsC.cVal.ang.f" as PMUVoltageMeter.VoltageCAng:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.neut.cVal.mag.f" as PMUVoltageMeter.VoltageNeutMag:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.neut.cVal.ang.f" as PMUVoltageMeter.VoltageNeutAng:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.net.cVal.mag.f" as PMUVoltageMeter.VoltageNetMag:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.net.cVal.ang.f" as PMUVoltageMeter.VoltageNetAng:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.res.cVal.mag.f" as PMUVoltageMeter.VoltageResMag:Double
  keep calculated attribute "substationStandard.LNNodes.LNGroupM.MMXU.PhV.res.cVal.ang.f" as PMUVoltageMeter.VoltageResAng:Double
  keep supertype CIM.IEC61968.Assets.Asset as type jointarget.Asset {
    keep outgoing CIM.IEC61968.Assets.Asset.Location as type jointarget.Location {
      keep outgoing CIM.IEC61968.Common.Location.Position as type jointarget.PositionPoint {
        keep attributes CIM.IEC61968.Common.PositionPoint.xPosition,
          CIM.IEC61968.Common.PositionPoint.yPosition,
          CIM.IEC61968.Common.PositionPoint.zPosition
      }
      keep outgoing CIM.IEC61968.Common.Location.PowerSystemResources as type jointarget.PowerSystemResource {
        keep subtype CIM.IEC61970.Core.ConductingEquipment as type jointarget.ConductingEquipment {
          keep outgoing CIM.IEC61970.Core.ConductingEquipment.Terminals as type jointarget.Terminal {
            keep outgoing CIM.IEC61970.Core.Terminal.TieFlow as type jointarget.TieFlow {
              keep outgoing CIM.IEC61970.ControlArea.TieFlow.ControlArea as type jointarget.ControlArea {</p>
          <p>In addition, the view computation has to distinguish
subtypes of some model elements. If an energy consumer in a service delivery point is a ConformLoad, then the
view computation should be different to the case when the energy consumer is a NonConformLoad.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Benchmark Framework</title>
      <p>
        The benchmark framework is based on the benchmark framework of the TTC 2015 Train Benchmark case [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and
supports a generator of change sequences, the automated build and execution of solutions, as well as the visualization
of the results using R. The source code and documentation of the benchmark as well as metamodels, reference
solutions in ModelJoin and QVT-O, example models and example change sequences are publicly available
online at http://github.com/georghinkel/ttc2017SmartGrids.
      </p>
      <sec id="sec-4-1">
        <title>The benchmark consists of the following phases:</title>
        <p>1. Initialization: In this phase, solutions may load metamodels and other infrastructure independent of the
used models as required. Because times are hard to measure reliably for this phase, the time
measurement is optional.</p>
      </sec>
      <sec id="sec-4-2">
        <title>2. Loading: The initial model instances are loaded.</title>
      </sec>
      <sec id="sec-4-3">
        <title>3. Initial: An initial view is created.</title>
        <p>(Figure 2a: excerpt from the CIM metamodel relevant for task 2, showing the classes LoadGroup, SubLoadArea, ConformLoadGroup, NonConformLoadGroup, ConformLoad, NonConformLoad, LoadArea and ControlArea.)</p>
        <p>4. Updates: A sequence of change sequences is applied to the model. Each change sequence consists of several
atomic change operations. After each change sequence, the view must be consistent with the changed source
models, either by creating the view entirely from scratch or by propagating the changes to the view result.</p>
        <p>In the following subsections, the change sequences, solution requirements and the benchmark configuration
are explained in more detail.</p>
        <sec id="sec-4-3-1">
          <title>Change Sequences</title>
          <p>To measure the incremental performance of solutions, the benchmark uses generated change sequences. These
change sequences are in the changes directory of the benchmark resources. Additional change sequences can be
generated using a generator contained in the benchmark resources; however, the generator is implemented using
NMF [9] and thus requires .NET Framework 4.5.1 to be installed.</p>
        </sec>
      </sec>
      <sec id="sec-4-4">
        <title>Change Representation and Reported Metrics</title>
        <p>The changes are available in the form of models. An excerpt of the change metamodel is depicted in Figure 4: there
are classes for each elementary change operation that distinguish between simple assignments and collection
interactions, such as adding or removing single elements or resetting, i.e. erasing the collection contents.
The actual metamodel contains concrete classes that further distinguish the type of the changed feature, i.e. whether it is
an attribute, association or composition change. In these concrete classes, the added, deleted or assigned items
are included (in a composite insertion, the added element is contained; otherwise, it is only referenced). The change
metamodel also supports change transactions, where a source change implies some
other changes, for example setting opposite references.</p>
        <p>Unfortunately, the implementation to apply these changes is only available in NMF. Incremental tools are
therefore asked to transform the change sequences into their own change representation. To ease the specification
of batch solutions, the generator also outputs the models after each change sequence step.</p>
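        <p>As a sketch of what such a transformation amounts to, the following hypothetical dispatcher replays elementary changes on plain Python dicts and lists standing in for model elements; the kind strings mirror the change classes of the metamodel, while the attribute names are simplified assumptions of this sketch:</p>

```python
# Replay elementary change operations on a dict/list stand-in model.
# "kind" mirrors the change classes of the change metamodel; the
# attribute names used here are simplified and illustrative.

def apply_change(change):
    element = change["affectedElement"]
    feature = change["feature"]
    kind = change["kind"]
    if kind == "PropertyChange":
        element[feature] = change["newValue"]
    elif kind == "CollectionInsertion":
        element[feature].append(change["addedElement"])
    elif kind == "CollectionDeletion":
        element[feature].remove(change["deletedElement"])
    elif kind == "CollectionReset":
        element[feature].clear()  # erase the collection contents
    elif kind == "ListInsertion":
        element[feature].insert(change["index"], change["addedElement"])
    elif kind == "ListDeletion":
        del element[feature][change["index"]]
    else:
        raise ValueError("unknown change kind: " + kind)
```
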
        <p>(Figure 4: excerpt from the change metamodel, showing ModelChangeSet, ElementaryChange, ElementaryChangeTransaction, PropertyChange, and the collection change classes ListInsertion, ListDeletion, CollectionInsertion, CollectionDeletion and CollectionReset, each referencing the affected element and the changed feature.)</p>
        <p>The solutions are required to perform the steps of the benchmark and report the following metrics after each
step, in case of the update phase after every change sequence.</p>
        <p>Tool: The name of the tool.</p>
        <p>View: The view that is currently computed.</p>
        <p>ChangeSet: The name of the change set that is currently run.</p>
        <p>RunIndex: The run index in case the benchmark is repeated.</p>
        <p>Iteration: The iteration (only required for the Update phase).</p>
        <p>PhaseName: The phase of the benchmark.</p>
        <p>MetricName: The name of the reported metric.</p>
        <p>MetricValue: The value of the reported metric.</p>
      </sec>
      <sec id="sec-4-12">
        <title>Reporting and Solution Configuration</title>
        <p>Solutions should report the runtime of the respective phase in integer nanoseconds (Time), the working
set in bytes (Memory) and the root element count of the created view (Elements). The memory measurement
is optional; if it is done, it should report the used memory after the given phase (or iteration of the update
phase) is completed. Solutions are allowed to perform a garbage collection before the memory measurement,
which does not have to be included in the reported times. In the update phase, we are not interested in the time to
load models or changes or in a possibly required transformation of changes, but only in the pure view update, i.e.
either the recomputation of the view or the propagation of the changes.</p>
        <p>To enable automatic execution by the benchmark framework, solutions should add a subdirectory to the
solutions folder of the benchmark with a solution.ini file stating how the solution should be built and how
it should be run. An example configuration for the ModelJoin solution is depicted in Listing 3. Because the
solution contains the already compiled Jar archive, no action is required for the build. However, solutions may want
to run build tools like Maven in this step to ensure the benchmark runs with the latest version.
[build]
default=echo ModelJoin solution is already compiled
skipTests=echo The ModelJoin solution is already compiled

[run]
OutageDetection=java -Xmx8G -jar solution.jar -view OutageDetection
OutagePrevention=java -Xmx8G -jar solution.jar -view OutagePrevention</p>
        <p>Listing 3: An example solution.ini file</p>
        <p>{
  "Views": [
    "OutageDetection",
    "OutageAvoidance"
  ],
  "Tools": ["ModelJoin"],
  "ChangeSets": [
    "changeSequence1"
  ],
  "Sequences": 100,
  "SequenceLength": 10,
  "Runs": 5
}</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Evaluation</title>
      <p>The repetition of executions as defined in the benchmark configuration is done by the benchmark framework. This means
that for 5 runs, the specified command line for a particular view will be called 5 times. These runs should all have the
same prerequisites. In particular, solutions must not save intermediate data between different runs. Meanwhile,
all iterations of the Update phase are executed in the same process, and solutions are allowed (and encouraged) to
save any intermediate computation results they like, as long as the results are correct after each change sequence.</p>
      <p>The root path of the input models and changes, the run index, the number of iterations and the name of the
change sequence are passed using the environment variables ChangePath, RunIndex, Sequences and ChangeSet. To
demonstrate the usage of these environment variables, the benchmark framework also contains a demo solution
which does nothing but print a Time CSV entry using the provided environment variables.</p>
      <p>The benchmark framework only requires Python 2.7 or above and R to be installed. R is required to create
diagrams for the benchmark results. Furthermore, solutions may require additional frameworks. We
ask solution authors to explicitly note dependencies on additional frameworks necessary to run their solutions.</p>
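      <p>A minimal entry point in the spirit of the demo solution could look as follows; the CSV layout is an assumption based on the metrics listed in Section 4, and a real solution would load its models from ChangePath and measure actual update times instead of reporting 0:</p>

```python
import os

# Hypothetical minimal solution entry point: read the documented
# environment variables and emit one Time entry per iteration.

def demo_entry():
    change_path = os.environ.get("ChangePath", ".")  # model/change root (unused here)
    run_index = os.environ.get("RunIndex", "0")
    sequences = int(os.environ.get("Sequences", "1"))
    change_set = os.environ.get("ChangeSet", "changeSequence1")
    rows = []
    for iteration in range(sequences):
        rows.append(",".join(["Demo", "OutageDetection", change_set,
                              run_index, str(iteration), "Updates",
                              "Time", "0"]))
    return rows
```
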
      <p>If all prerequisites are fulfilled, the benchmark can be run using Python with the command
python scripts/run.py. Additional command-line options can be queried using the option --help.</p>
      <sec id="sec-5-1">
        <title>Listing 4: The default benchmark configuration</title>
        <p>The benchmark framework can be configured using JSON configuration files. The default configuration is
depicted in Listing 4. In this configuration, both views are computed, using only the reference solution in
ModelJoin, running the change sequence changeSequence1 contained in the changes directory 5 times each.
The exact commands created by the benchmark framework are determined using the solution configuration files
described below.</p>
        <p>To execute the chosen configuration, the benchmark can be run using the command line depicted in Listing 5.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Listing 5: Running the benchmark</title>
        <p>Additional command-line parameters are available to only update the measurements, create visualizations or
generate new change sequences.</p>
        <p>Solutions of the proposed benchmark should be evaluated by their completeness, correctness, conciseness,
understandability, batch performance and incremental performance.</p>
        <p>For each evaluation criterion, a solution can earn 5 points for Task 1 and 5 points for Task 2. In the following, we explain
how the points are awarded for Task 1. The points for Task 2 are awarded equivalently.</p>
        <sec id="sec-5-2-1">
          <title>Completeness &amp; Correctness</title>
          <p>Assessing the completeness and correctness of model transformations is a difficult task. In the scope of this
benchmark, besides manual assessment by opponents, solutions are checked for the correct number of model
elements in the result after each change.</p>
          <p>Points are awarded according to the following rules:</p>
          <p>0 points: The task is not solved.</p>
          <p>1-4 points: The task is solved, but the number of elements in the result is either too high or too low.</p>
          <p>5 points: The task is completely and correctly solved.</p>
        </sec>
      </sec>
      <sec id="sec-5-3">
        <title>Conciseness</title>
        <p>Detecting and especially forecasting an outage in a smart grid heavily relies on heuristics. Therefore, it is
important to specify views in a concise manner. Points are awarded according to the following rules:</p>
        <p>0 points: The task is not solved.</p>
        <p>1 point: The solution is the least concise.</p>
        <p>5 points: The solution is the most concise.</p>
        <p>1-5 points: All solutions in between are classified relative to the most and least concise solution.</p>
        <p>To evaluate the conciseness, we ask every solution to note the lines of code of the solution. This shall
include the model views and the glue code to actually run the benchmark. Code to convert the change sequences can
be excluded. For any graphical part of the specification, we ask to count the lines of code in a HUTN notation
(http://www.omg.org/spec/HUTN/) of the underlying model.</p>
      </sec>
      <sec id="sec-5-8">
        <sec id="sec-5-8-1">
          <title>Understandability 5.4</title>
        </sec>
        <sec id="sec-5-8-2">
          <title>Performance</title>
          <p>Similarly to conciseness, it is important for maintainance tasks that the solution is understandable. However, as
there is no appropriate metric for understandability available, the assessment of the understandability is done
manually. For solutions participating in the contest, this score is collected using questionnaires at the workshop.
For the performance, we consider two scenarios: batch performance and incremental performance. For the
batch performance, we measure the time the solution requires to create the view for existing models. For the
incremental solution, we measure the time for the solution to propagate a given set of changes. Points are
awarded according to the following rules:
0 points The task is not solved.</p>
        </sec>
      </sec>
      <sec id="sec-5-9">
        <title>1 point The solution is the slowest.</title>
      </sec>
      <sec id="sec-5-10">
        <title>5 points The solution is the fastest.</title>
      </sec>
      <sec id="sec-5-11">
        <title>1-5 points All solutions in between are classified according to their speed.</title>
        <p>The measurements for batch performance and for incremental performance are done separately. This means,
solutions are allowed to run in a different configuration when competing for the batch performance than they
use in the incremental setting. This is because in an application, usually only one of these aspects is particularly
important.
5.5</p>
        <sec id="sec-5-11-1">
          <title>Overall Evaluation</title>
          <p>Due to their importance, the points awarded in completeness &amp; correctness and understandability are doubled
in the overall evaluation. Furthermore, due to the importance of incremental updates, we give the incremental
performance a double weight. Thus, each solution may earn up to 80 points in total (40 for each task).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>We would like to thank Victoria Mittelbach for the permission to use her master's thesis results for this case.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mittelbach</surname>
          </string-name>
          , “
          <article-title>Model-driven Consistency Preservation in Cyber-Physical Systems</article-title>
          ,” Master's thesis, Karlsruhe Institute of Technology (KIT), Germany.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Burger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mittelbach</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Koziolek</surname>
          </string-name>
          , “
          <article-title>Model-driven consistency preservation in cyber-physical systems</article-title>
          ,” in
          <source>Proceedings of the 11th Workshop on Models@run.time co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016), Saint Malo, France</source>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <article-title>[3] “IEC 61970: Energy management system application program interface (EMS-API) - Part 301: Common information model (CIM) base</article-title>
          ,”
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <article-title>[4] “IEC 61850: Communication networks and systems for power utility automation</article-title>
          ,”
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <collab>DLMS User Association</collab>
          , “
          <article-title>Excerpt from companion specification for energy metering cosem interface classes and obis identification system</article-title>
          ,”
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Burger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Henß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Küster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kruse</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Happe</surname>
          </string-name>
          , “
          <article-title>View-Based Model-Driven Software Development with ModelJoin</article-title>
          ,”
          <source>Software &amp; Systems Modeling</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>472</fpage>
          -
          <lpage>496</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Qianqian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xue</surname>
          </string-name>
          , and L. Xiang, “
          <article-title>A new smart distribution grid fault self-healing system based on traveling-wave</article-title>
          ,” in Industry Applications Society Annual Meeting,
          <year>2013</year>
          IEEE,
          <year>2013</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Szárnyas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Semeráth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ráth</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Varró</surname>
          </string-name>
          , “
          <article-title>The TTC 2015 train benchmark case for incremental model validation</article-title>
          ,” in
          <source>Proceedings of the 8th Transformation Tool Contest, part of the Software Technologies: Applications and Foundations (STAF 2015) federation of conferences, L'Aquila, Italy, July 24, 2015</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>129</fpage>
          -
          <lpage>141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hinkel</surname>
          </string-name>
          , “
          <article-title>NMF: A Modeling Framework for the .NET Platform</article-title>
          ,” Karlsruhe Institute of Technology,
          <source>Tech. Rep.</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>