<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Model-driven Automated Deployment of Large-scale CPS Co-simulations in the Cloud</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yogesh</forename><forename type="middle">D</forename><surname>Barve</surname></persName>
							<email>yogesh.d.barve@vanderbilt.edu</email>
						</author>
						<author>
							<persName><forename type="first">Himanshu</forename><surname>Neema</surname></persName>
							<email>himanshu.neema@vanderbilt.edu</email>
						</author>
						<author>
							<persName><forename type="first">Aniruddha</forename><surname>Gokhale</surname></persName>
							<email>a.gokhale@vanderbilt.edu</email>
						</author>
						<author>
							<persName><forename type="first">Janos</forename><surname>Sztipanovits</surname></persName>
							<email>janos.sztipanovits@vanderbilt.edu</email>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Institute for Software-Integrated Systems</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Dept. of EECS</orgName>
								<orgName type="institution">Vanderbilt University</orgName>
								<address>
									<postCode>37212</postCode>
									<settlement>Nashville</settlement>
									<region>TN</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Model-driven Automated Deployment of Large-scale CPS Co-simulations in the Cloud</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">0EBE1142FAC38F6391D7B6A862823035</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T01:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>co-simulations</term>
					<term>verification</term>
					<term>model driven</term>
					<term>cloud</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With increasing advances in Internet-enabled devices, large cyber-physical systems (CPS) are being realized by integrating several sub-systems together. Analyzing and reasoning about different properties of such CPS requires co-simulations that compose individual, heterogeneous simulators, each of which addresses only certain aspects of the CPS. Often these co-simulations are realized as point solutions or composed in an ad hoc manner, which makes them hard to reuse, maintain, and evolve. Although our prior work on a model-based framework called the Command and Control Wind Tunnel (C2WT) supports distributed co-simulations, many challenges remain unresolved. For instance, evaluating these complex CPSs requires large amounts of computational and I/O resources, for which the cloud is an attractive option, yet there is a general lack of scientific approaches to deploying co-simulations in the cloud. In this context, the key challenges include (i) rapid provisioning and de-provisioning of experimental resources in the cloud for different co-simulation workloads, (ii) handling simulation incompatibility and resource violations, (iii) reliable execution of co-simulation experiments, and (iv) reproducible experiments. Our solution builds upon the C2WT heterogeneous simulation integration technology and leverages the Docker container technology to provide a model-driven integrated tool-suite for specifying experiment and resource requirements and for deploying repeatable cloud-scale experiments. In this work, we present the core concepts and architecture of our framework, and provide a summary of our current work in addressing these challenges.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION AND PROBLEM STATEMENT</head><p>Large-scale cyber-physical systems (CPS) experiments are increasingly being deployed for real-world scenarios in domains such as building automation and control, smart power grid, health care, and industrial processes. For example, power grid CPSs are composed of many multi-domain subsystems with different assets and technologies, such as the electric grid, sensors, networking, and physical control systems. Thus, designing and analyzing such complex systems requires extensive simulation and prototyping tools that span multiple domains.</p><p>While recent advances in simulation tools have enabled modeling and simulation of system characteristics, a single simulator tool is not sufficient to model and experiment with CPS. This is because no single simulator can simulate all aspects of a CPS; moreover, CPS require heterogeneous resources and execution environments. Thus, co-simulation environments have emerged as an approach for modeling and simulating CPS. Co-simulation, or coupled simulation, is a methodology that evaluates the behavior of a system by integrating simulations of its components. Each specialized simulation tool can process and communicate events among the participating simulation engines to model a large-scale CPS. Realizing such a co-simulation platform requires proper time synchronization and coordination of message flows among the participating simulation engines.</p><p>C2WT <ref type="bibr" target="#b0">[1]</ref> is a heterogeneous simulation integration framework that we have previously developed at Vanderbilt University. It enables model-based rapid synthesis of heterogeneous and distributed CPS co-simulations. C2WT relies on the IEEE High-Level Architecture (HLA) standard. 
Domain-specific tools have been built on top of C2WT, such as C2WT-TE <ref type="bibr" target="#b1">[2]</ref>, which targets the transactive smart grid domain, and the SURE testbed <ref type="bibr" target="#b2">[3]</ref>, which targets security and resilience in CPS.</p><p>Despite these advances, many challenges still remain unresolved. For instance, large-scale simulations exhibit compute- and/or I/O-intensive workloads and may need large amounts of such resources. Cloud computing can provide access to such a large pool of resources elastically and on demand. However, existing cloud platforms lack tools for the effective deployment of large-scale CPS simulations. Migrating existing simulation tools to the cloud is also a challenging task, which hinders the widespread adoption of cloud computing for CPS co-simulation. This problem is further exacerbated because the CPS domain experts conducting the simulations often lack a proper understanding of cloud resource provisioning and utilization, thereby resulting in ad hoc and sub-optimal deployment of CPS simulations in the cloud.</p><p>In this research, we focus primarily on cloud-based provisioning of large-scale CPS experiments, and outline the key challenges associated with deploying and experimenting with CPS co-simulations in the cloud.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. CHALLENGES IN REALIZING CLOUD-HOSTED CPS CO-SIMULATIONS</head><p>The following challenges must be resolved to support reusable and extensible cloud-based CPS co-simulations.</p><p>1. Integrated tool to rapidly deploy experiments on cloud resources: To run experiments in the cloud, the framework should be able to acquire the required resources, instantiate the deployment and execution of the co-simulation, and tear down the acquired resources when the experiment completes. The run-time infrastructure should require minimal startup and shutdown time to ensure a quick experiment start and a prompt release of resources, without incurring additional resource utilization cost. The simulations also impose different resource requirements, such as CPU cores, GPUs, RAM, and disk space. Moreover, the simulations could be CPU- and/or I/O-intensive. These resource requirements must be configured in the tool, and the cloud resources should be allocated accordingly. A dynamic cloud resource management strategy can be highly effective for better cloud resource utilization.</p><p>2. Handling simulation incompatibility and resource violations: For faithful experimental outcomes, different simulators impose co-simulation-specific data-exchange requirements and QoS constraints, such as communication latencies, computation execution deadlines, and hardware resource availability (CPUs, memory, etc.). For instance, if one of the simulators in the co-simulation requires high I/O bandwidth to stream large videos, the receiving simulator needs to consume the streamed data within a given time period. Thus, if these simulators are not co-located in the cloud, a violation or incompatibility warning should be raised so that the user can make the necessary modifications to satisfy the QoS constraints.</p><p>3. Proactive fault tolerance for simulation execution: The cloud-based co-simulation framework must be resilient to the system faults and failures that can occur within cloud platforms. Our solution, called co-simulation checkpointing, leverages the save and restore functions of Linux containers and enables, in the event of a failure, an effective recovery of systems to their previously checkpointed states. Implementing checkpointing for distributed co-simulations, which have intertwined dependencies, is even more challenging: the checkpointing must also be coordinated and synchronized across all simulators. This ensures reliable recovery and correct execution from snapshot images during system restoration.</p><p>4. Reproducible experiments: Deterministic execution and reproducible experiments are needed for many CPS co-simulations. The co-simulation integration methods and run-time execution tools must be designed for these requirements from the start. In addition, for repeatable experiments, the cloud experimentation platform should provide the same runtime execution environment and configuration for the same experiments.</p></div>
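The coordinated checkpointing described in challenge 3 can be sketched in miniature as follows. This is an illustrative model only: the `Simulator` class and its pause/checkpoint/resume methods are hypothetical stand-ins for containerized simulators, not part of C2WT. The point it demonstrates is the ordering constraint: every simulator is quiesced before any one of them is snapshotted, so no snapshot captures a state that depends on unrecorded in-flight messages.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Simulator:
    """Hypothetical stand-in for one containerized simulator."""
    name: str
    state: int = 0
    paused: bool = False
    snapshots: Dict[int, int] = field(default_factory=dict)

    def step(self) -> None:
        assert not self.paused, "cannot advance while paused"
        self.state += 1

    def pause(self) -> None:
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def checkpoint(self, epoch: int) -> None:
        self.snapshots[epoch] = self.state

    def restore(self, epoch: int) -> None:
        self.state = self.snapshots[epoch]


class CheckpointCoordinator:
    """Takes a globally consistent snapshot: pause everyone,
    snapshot everyone at the same epoch, then resume everyone."""

    def __init__(self, sims: List[Simulator]):
        self.sims = sims
        self.epoch = 0

    def checkpoint_all(self) -> int:
        for sim in self.sims:      # phase 1: quiesce all simulators
            sim.pause()
        self.epoch += 1
        for sim in self.sims:      # phase 2: snapshot at a common epoch
            sim.checkpoint(self.epoch)
        for sim in self.sims:      # phase 3: resume execution
            sim.resume()
        return self.epoch

    def restore_all(self, epoch: int) -> None:
        """On failure, roll every simulator back to the same epoch."""
        for sim in self.sims:
            sim.restore(epoch)
```

In the actual framework, each in-memory `checkpoint` call would roughly correspond to a CRIU-backed container checkpoint of that simulator's container, but the coordination pattern is the same.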
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. PROPOSED SOLUTION AND CURRENT STATUS</head><p>We are developing a framework that enables effective CPS co-simulation in the cloud. Figure <ref type="figure">1</ref> (Architecture Overview of CPS Co-simulation Deployment in the Cloud) shows the functional architecture of our framework. Our framework uses Docker containers for deploying simulations in an OpenStack cloud. Each simulation is built from its pre-packaged simulator inside a Docker container, which provides a repeatable runtime environment. We are also developing a domain-specific modeling language to capture the experiment resource requirements of individual simulators.</p><p>In the future, we plan to integrate an SMT solver for the optimal placement of simulators in the cloud environment while still satisfying individual resource requirements. We are also building a cloud resource monitoring framework, utilizing collectd and other tools, to enable real-time monitoring of cloud resources; the monitoring data can then be fed to the SMT solver to make effective decisions. To enable fault-tolerant co-simulations, we are developing a co-simulation checkpointing technique using the save and restore functions of the CRIU library for Docker containers. It is also critical that the checkpointing be synchronized and coordinated, and that it support distributed simulator deployments.</p></div>		</body>
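The acquire/execute/tear-down lifecycle from challenge 1 that the framework automates can be sketched as follows. This is a minimal illustration, not the framework's implementation: `CloudClient` is a hypothetical stand-in for a container or cloud API (e.g. a Docker engine on OpenStack), and the resource-requirement dictionaries model what the domain-specific modeling language would capture. The key property shown is that the context manager guarantees teardown, so resources are promptly released even when an experiment fails.

```python
import contextlib
from typing import Dict, Iterator, List


class CloudClient:
    """Hypothetical stand-in for a container/cloud API; tracks allocations."""

    def __init__(self) -> None:
        self.running: List[str] = []

    def launch(self, image: str, resources: Dict[str, int]) -> str:
        # A real client would pass the CPU/memory limits through to the
        # engine when starting the container; here we only record the id.
        container_id = f"{image}-{len(self.running)}"
        self.running.append(container_id)
        return container_id

    def destroy(self, container_id: str) -> None:
        self.running.remove(container_id)


@contextlib.contextmanager
def experiment(client: CloudClient,
               requirements: Dict[str, Dict[str, int]]) -> Iterator[List[str]]:
    """Provision one container per simulator image, yield the container ids,
    and always tear everything down afterwards, even if the run raises."""
    ids = [client.launch(image, res) for image, res in requirements.items()]
    try:
        yield ids
    finally:
        for cid in ids:
            client.destroy(cid)
```

A run then looks like `with experiment(client, {"gridsim": {"cpus": 4}, "netsim": {"cpus": 2}}) as ids: ...`, with de-provisioning handled automatically on exit.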
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>This work is supported in part by NIST contract number 70NANB15H312, NSF CPS VO contract number CNS-1521617 and NSF US Ignite CNS 1531079. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Rapid synthesis of high-level architecture-based heterogeneous simulation: a model-based integration approach</title>
		<author>
			<persName><forename type="first">G</forename><surname>Hemingway</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Neema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Nine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sztipanovits</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Karsai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Simulation</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="217" to="232" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">C2WT-TE: A model-based open platform for integrated simulations of transactive smart grids</title>
		<author>
			<persName><forename type="first">H</forename><surname>Neema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sztipanovits</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Burns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Griffor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2016 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES). IEEE</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">SURE: An Experimentation and Evaluation Testbed for CPS Security and Resilience: Demo Abstract</title>
		<author>
			<persName><forename type="first">H</forename><surname>Neema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Volgyesi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Potteiger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Emfinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Koutsoukos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Karsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Vorobeychik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sztipanovits</surname></persName>
		</author>
		<ptr target="http://dl.acm.org/citation.cfm?id=2984464.2984491" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Cyber-Physical Systems, ser. ICCPS &apos;16</title>
				<meeting>the 7th International Conference on Cyber-Physical Systems, ser. ICCPS &apos;16<address><addrLine>Piscataway, NJ, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Press</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
