<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">StreamConnect: Ingesting Historic and Real-Time Data into Unified Streaming Architectures</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Philipp</forename><surname>Zehnder</surname></persName>
							<email>zehnder@fzi.de</email>
							<affiliation key="aff0">
								<orgName type="department">FZI Research Center for Information Technology</orgName>
								<address>
									<addrLine>Haid-und-Neu-Str. 10-14</addrLine>
									<postCode>76131</postCode>
									<settlement>Karlsruhe</settlement>
									<country>Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dominik</forename><surname>Riemer</surname></persName>
							<email>riemer@fzi.de</email>
							<affiliation key="aff0">
								<orgName type="department">FZI Research Center for Information Technology</orgName>
								<address>
									<addrLine>Haid-und-Neu-Str. 10-14</addrLine>
									<postCode>76131</postCode>
									<settlement>Karlsruhe</settlement>
									<country>Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">StreamConnect: Ingesting Historic and Real-Time Data into Unified Streaming Architectures</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">78C1065A026CD7EC5BF8CB549F3BA4E3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:42+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Stream Processing</term>
					<term>Data Ingestion</term>
					<term>Semantic Web</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The web of things provides a steadily increasing number of both real-time and historic data sources. Yet widespread standards are missing, and the heterogeneity of data formats and communication protocols makes the integration of such sources a challenging task, often requiring manual programming effort. This paper presents a novel, lightweight semantics-based approach to quickly connect heterogeneous data sources to stream processing systems. Our main contributions are i) a new model to represent characteristics of data streams and data sets, such as schema and quality, independently of the actual run-time format, ii) generic data adapters and methods to automatically discover these characteristics at runtime and iii) a distributed architecture to pre-process (e.g. clean and filter) raw data coming from these adapters directly on the edge, before the data is processed by a stream processing engine. Our contribution eases the ingestion of batch and real-time data into unified streaming architectures.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>In recent years, emerging trends such as the web of things have led to enormous data growth. For instance, manufacturing companies increasingly gather, in addition to existing data sources such as master and customer data, massive amounts of real-time data coming directly from the shop floor. At the same time, the web benefits from many data sources that are made publicly available through open APIs.</p><p>One major benefit of this trend is the ability to integrate and process such sources in real-time as a basis for advanced analytic operations, enabling companies to find correlations such as incident patterns early or even ahead of time.</p><p>From an information management perspective, architectural patterns such as publish/subscribe systems have gained popularity, enabling enterprises to establish so-called data backbones that collect data in a single, yet distributed messaging system such as Apache Kafka <ref type="bibr" target="#b0">[1]</ref>. At the same time, modern distributed streaming engines such as Apache Flink <ref type="bibr" target="#b1">[2]</ref> are able to process both real-time and historical data in a unified streaming architecture. Such architectures, also known as the Kappa architecture <ref type="bibr" target="#b2">[3]</ref>, avoid the effort of deploying and maintaining two different code bases, one for batch processing of historical data and one for stream processing to quickly compute real-time views, as required by other Big Data architectures such as the Lambda architecture <ref type="bibr" target="#b2">[3]</ref>.</p><p>However, a still open problem is the accessibility and ingestion of data sources into such architectures for further processing. The development of adapters for individual data sources remains a highly manual task <ref type="bibr" target="#b3">[4]</ref> due to the heterogeneity of protocols and the diversity of data formats. 
This usually requires both technological expertise and domain knowledge to understand the meaning of the gathered data. The main objective of this paper is to reduce the technical effort for integrating new data sources into a big data streaming-only infrastructure by introducing a semantics-based adapter concept. This objective poses a number of technical challenges and requirements that need to be considered:</p><p>-Temporal Aspect: Adapters need to be able to handle both real-time and historical data. -Adapter Configuration: A solution must provide a higher level of abstraction so that domain experts can configure adapters, a task that currently requires a lot of technical understanding. -Data Cleaning: Since adapters are often long-running processes, they should ensure that the quality of the data does not change over time. -(Edge) Pre-Processing: Simple pre-processing steps such as filtering, transforming or aggregating data should be executed locally, close to the sensor, to avoid sending noisy data to the messaging system.</p><p>This paper is structured as follows: In section 2, we present a motivating scenario that illustrates the need for and the general approach of our contribution. Section 3 introduces an event model we developed for both raw data and semantically described virtual sensors. After defining the model, section 4 describes the adapter architecture and illustrates how a new adapter can be modeled. Finally, section 5 presents related work, followed by the conclusion and outlook in section 6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Motivating Scenario</head><p>This section provides an illustrative scenario showing the challenges that must be solved when integrating multiple heterogeneous data sources with a single adapter concept. As an example, we present how an adapter for a new temperature sensor on an oil rig is created. The events are stored in a message broker, such as Apache Kafka, and can then be used by other systems like StreamPipes<ref type="foot" target="#foot_0">1</ref>  <ref type="bibr" target="#b4">[5]</ref> to build and execute processing pipelines, as shown in figure <ref type="figure" target="#fig_0">1</ref>. To create a new adapter, the user makes use of a model editor, which realizes a guided process in which the user provides mandatory information about the data in a graphical interface. As a first step in the modeling process, the kind of data to be processed must be defined: real-time data or historic data (e.g. a CSV file). In our example, we assume that the temperature sensor is a real-time data source. Afterwards, the communication protocol for accessing the data source is selected from several available options. In our example, the temperature sensor provides a REST interface that must be polled every second. The modeling process is semi-automatic: the system tries to guess as much as possible, such as the data format and semantic content, based on the available metadata or on sample data extracted from the source. Once the adapter has been initialized, raw data is processed in the pipeline according to rules inferred from the model and is transformed into the virtual sensor representation. In our example, the pipeline filters out all events with a temperature higher than 350. This rule is created automatically by the framework from the RDF description of the virtual sensor, which specifies a measurement range below 350. 
Furthermore, the raw data event is transformed into JSON. Finally, events are emitted by the adapter and put onto a message broker, where they can be consumed by processing engines such as StreamPipes or Apache Spark <ref type="bibr" target="#b5">[6]</ref>.</p></div>
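The inferred filter-and-serialize step of this scenario can be sketched as follows. This is an illustrative Python sketch, not the framework's actual implementation; the names SENSOR_SPEC, within_spec and to_broker_message are hypothetical:

```python
import json

# Hypothetical value specification, as it might be inferred from the
# virtual sensor's RDF description (measurement range below 350).
SENSOR_SPEC = {"runtimeName": "temperature", "minValue": 0.0, "maxValue": 350.0}

def within_spec(event, spec=SENSOR_SPEC):
    """True if the event's value lies inside the modeled measurement range."""
    value = event[spec["runtimeName"]]
    return spec["minValue"] <= value <= spec["maxValue"]

def to_broker_message(event):
    """Drop out-of-range readings; serialize the rest as JSON for the broker."""
    return json.dumps(event) if within_spec(event) else None

print(to_broker_message({"temperature": 347.2}))  # emitted as a JSON string
print(to_broker_message({"temperature": 351.0}))  # None: filtered out
```

A real adapter would publish the serialized event to a topic on the message broker instead of printing it.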
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Event Model</head><p>This section introduces an RDFS-based event model, which builds on the Semantic Event Producer model of [7, section 7.3] and further re-uses parts of the Semantic Sensor Network Ontology <ref type="bibr" target="#b7">[8]</ref> and the QUDT Ontology <ref type="bibr" target="#b8">[9]</ref> for representing measurement units. We present two variations of the model, the raw data model and the virtual sensor model. The raw data model is a subset of the virtual sensor model and contains only basic information about the data. Both models are shown in figure <ref type="figure" target="#fig_1">2</ref>, with the raw data model highlighted in bold. The event model is instantiated in a design phase, once a new adapter is created by a domain expert.</p><p>The raw data model has two types of data sequences, a Data Set and a Data Stream; each of them has a grounding consisting of the protocol and a format. Further, it has a simple event schema that consists of a runtimeName, for example the column name of a CSV table, and the corresponding runtimeType, the type of data stored in the table column (e.g. String or Integer). Besides primitive types, an event schema can also describe nested structures and lists. Additionally, a measurement unit can be provided for properties (e.g. temperature measured in degrees Celsius or Fahrenheit). The virtual sensor has a more detailed semantic description, as shown in figure <ref type="figure" target="#fig_1">2</ref>. The Data Sequence is produced by a Data Producer. Further qualities, such as the frequency of a Data Sequence, can be described, and it has one event schema consisting of multiple EventProperties. These can have different PropertyQualities (e.g. accuracy or precision of a sensor measurement), a runtime name (often the same as the runtime name of the raw data model), and a domain property, for example 'http://schema.org/location'. 
Properties can also be modeled as lists or nested structures. The EventPropertyPrimitives have a runtime type and a unit. The Functionality Enumeration can be used to mark a property, for example, as a timestamp. Furthermore, the ValueSpecification can be used to restrict the possible values of a property.</p></div>
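The relation between the two model variants can be sketched with plain data classes. Attribute names loosely follow figure 2 but are illustrative, not the actual RDFS vocabulary:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class EventProperty:
    """Raw data model: only a runtime name and runtime type."""
    runtime_name: str        # e.g. the column name of a CSV table
    runtime_type: str        # e.g. "String", "Integer", "Float"

@dataclass
class VirtualSensorProperty(EventProperty):
    """Virtual sensor model: adds semantics on top of the raw property."""
    domain_property: Optional[str] = None          # e.g. "http://schema.org/location"
    unit: Optional[str] = None                     # e.g. a QUDT unit identifier
    value_range: Optional[Tuple[float, float]] = None  # ValueSpecification

@dataclass
class DataStream:
    protocol: str            # grounding: e.g. "REST polling"
    data_format: str         # grounding: e.g. "JSON", "CSV"
    schema: List[EventProperty] = field(default_factory=list)

# Raw data model: a subset of the virtual sensor description.
raw = DataStream("REST", "JSON", [EventProperty("temperature", "Float")])
# Virtual sensor: same property, enriched with unit and value range.
virtual = DataStream("REST", "JSON",
                     [VirtualSensorProperty("temperature", "Float",
                                            unit="unit:DEG_C",
                                            value_range=(0.0, 350.0))])
```

The subset relation shows up directly: a VirtualSensorProperty with all optional fields unset carries exactly the raw-model information.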
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Adapter Architecture and Modeling</head><p>This section describes the runtime architecture of the adapters. Adapters are modeled via a graphical user interface by a domain expert and are instantiated automatically.</p><p>Figure <ref type="figure" target="#fig_2">3</ref> shows the different components of the adapter architecture. At the beginning, the main task of a data converter is to establish a connection to the data source, collect data and transform it into the internal raw data format. Once data is available in the internal format, it is transformed into the virtual sensor data representation via the raw data pipeline, which is explained in more detail later in this section. The last component of the adapter is a broker, to which virtual sensor events are sent in order to be consumed by other applications and tools. All adapters have two interfaces, one for accessing real-time data and one for providing the schema description. Data set adapters have an additional interface to start a data replay and to serve data from different time-slots to multiple consumers. This is not needed for real-time data, since all events are emitted immediately as they are produced. Within a pipeline, multiple data transformations can take place. In general, our idea is to use the defined semantics to transform data automatically at runtime. Such transformations include enriching events with context information, filtering out unreasonable values, and ensuring that the resulting events always have the same schema and quality. The first component in a pipeline is always the source providing raw data, while the last one is always a sink that writes the virtual sensor values to the resulting broker. Currently, five kinds of agents are supported in the pipeline: a structural transformer, a unit transformer, a filter, a frequency reducer and a schema agent. 
Agents are designed so that more can be added at a later point in time. First, we describe the structural transformer agent:</p><p>Structural Transformer Agent The task of the Structural Transformer Agent is to transform the internal raw data representation into the structure required by the virtual sensor. This is done via mappings between the two model schemas. One example would be to transform a flat data structure into a nested structure or vice versa. Another example could be to flatten a property list into primitive properties. All operations are performed on the internal format and are inferred from the models based on the provided runtime name. At runtime, each data point is transformed individually in a stateless manner.</p></div>
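One direction of such a stateless, per-event structural mapping can be sketched as follows. This is a minimal illustration of nesting-to-flat transformation, assuming dict-shaped events; it is not the agent's actual code:

```python
def flatten(event, prefix=""):
    """Flatten nested structures, joining keys with '.' to form the
    runtime name of the resulting primitive property."""
    flat = {}
    for key, value in event.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))  # recurse into nesting
        else:
            flat[name] = value
    return flat

raw_event = {"sensor": {"id": "t-17", "temperature": 341.5}, "ts": 1}
print(flatten(raw_event))
# {'sensor.id': 't-17', 'sensor.temperature': 341.5, 'ts': 1}
```

The inverse mapping (flat to nested) would split the dotted runtime names again; since each event is handled independently, no state is kept between data points.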
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Unit Transform Agent</head><p>The Unit Transformer Agent uses the unit information of the EventPropertyPrimitive to automatically provide the correct measurement unit. Users only need to model the required unit of the event property, and the system automatically transforms the data. This is accomplished using the QUDT Ontology <ref type="bibr" target="#b8">[9]</ref>, which provides information about different units and also contains conversion formulas for individual conversions. First, the measurement values are transformed into a standard metric; this standard metric is then transformed into the goal unit.</p><p>Filter Agent The Filter Agent filters out data values that are not compatible with the virtual sensor description. Filter rules are extracted from the semantic model, for example from the PropertyQualities and the ValueSpecification described in section 3. For instance, if a quantitative value has a modeled range from 0 to 10 and the measured value is 11, the system infers that this is a false value and can automatically remove it from the output stream. This agent ensures that data consumers can expect only semantically correct data, which reduces the probability of run-time errors.</p><p>Frequency Reducer Agent This agent changes the frequency of the data according to the actual values of the properties. When the user activates this agent during the design phase, all values of the events are monitored at runtime. If the agent detects that the values of the events do not change over a period of time, the frequency for emitting new events is reduced. With this agent, it is possible to reduce the amount of data sent over the network without losing information.</p><p>Schema Agent Some information about the stream need not be modeled by the user but is inferred automatically at run-time. In this case, the schema description is adapted accordingly. 
This agent does not transform data; instead, it monitors runtime data and changes the schema description. Two such examples are the frequency and latency of the produced events. These values are measured at runtime and are constantly updated in the schema description.</p></div>
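The Unit Transformer Agent's two-step conversion (into a standard metric, then into the goal unit) can be sketched with a small conversion table. The (multiplier, offset) pairs mimic the kind of conversion data QUDT provides; the table and function names here are illustrative, not QUDT's actual API:

```python
# unit -> (multiplier, offset) relative to the base unit Kelvin,
# so that base = value * multiplier + offset.
UNITS = {
    "K":     (1.0, 0.0),
    "DEG_C": (1.0, 273.15),
    "DEG_F": (5.0 / 9.0, 459.67 * 5.0 / 9.0),
}

def convert(value, source, target):
    """Step 1: map the value into the standard metric (Kelvin).
    Step 2: invert the target unit's formula to reach the goal unit."""
    m_src, o_src = UNITS[source]
    m_tgt, o_tgt = UNITS[target]
    base = value * m_src + o_src
    return (base - o_tgt) / m_tgt

print(round(convert(100.0, "DEG_C", "DEG_F"), 2))  # 212.0
```

Routing every conversion through one base unit keeps the table linear in the number of units, instead of needing a formula for every unit pair.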
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Related Work</head><p>Integrating semantic web technologies and big data streaming architectures is becoming more and more relevant. One example is Strider <ref type="bibr" target="#b9">[10]</ref>, which consumes data from Apache Kafka and uses Apache Spark to optimize query planning. Our approach is complementary and could be used to easily integrate new data sources into Kafka and process them with this framework.</p><p>There are also other solutions leveraging semantic data models for streaming sensor data, for example XGSN <ref type="bibr" target="#b10">[11]</ref>, the sensor middleware for OpenIoT <ref type="bibr" target="#b11">[12]</ref>. The authors use the SSN ontology and also have the concept of virtual sensors. One difference is that we mainly use semantics during the design process to automatically transform data later at runtime, but we do not focus on processing RDF data at runtime.</p><p>A programming model related to our overall architecture is foglets <ref type="bibr" target="#b12">[13]</ref>. This architecture consists of a central cloud computing instance and several edge nodes located closer to the sensors in the networking stack. With foglets, it is possible to distribute programs across all computing instances and perform some processing steps on edge nodes and some in the cloud. Our approach is similar, but our programming model is more lightweight, as we clean data directly on the edge nodes without requiring further programming.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion and Outlook</head><p>In this paper, we presented a framework for data adapters that are capable of ingesting real-time and historic batch data into unified stream processing engines. We introduced a lightweight, RDFS-based model for raw data sources and an extended model to represent virtual sensors. These models can be instantiated by domain experts with little technical knowledge using a graphical user interface.</p><p>Based on these models, our contribution consists of a generic adapter architecture to automatically consume, pre-process and harmonize data. Our approach bridges the gap between a large variety of data sources and the processing engine that performs the actual algorithms.</p><p>In our future work, we plan to further extend our framework by supporting more protocols and formats and also extending the (semi-) automatic transformation capabilities of our raw-data pipeline.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: Example: Functionality of the adapter</figDesc><graphic coords="3,160.70,115.83,293.96,177.46" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 :</head><label>2</label><figDesc>Fig. 2: Model of virtual sensor</figDesc><graphic coords="4,134.77,314.06,345.85,327.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 :</head><label>3</label><figDesc>Fig. 3: Architecture of the adapter</figDesc><graphic coords="5,169.35,522.22,276.66,94.87" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://streampipes.fzi.de</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Kafka: A distributed messaging system for log processing</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kreps</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Narkhede</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the NetDB</title>
				<meeting>the NetDB</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Apache flink: Stream and batch processing in a single engine</title>
		<author>
			<persName><forename type="first">P</forename><surname>Carbone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Katsifodimos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of the IEEE Computer Society Technical Committee on Data Engineering</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Liquid: Unifying nearline and offline big data integration</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Fernandez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Pietzuch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Seventh Biennial Conference on Innovative Data Systems Research</title>
				<meeting><address><addrLine>Asilomar, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note>CIDR 2015</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Bischof</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Polleres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schneider</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-25010-6_4</idno>
		<ptr target="http://dx.doi.org/10.1007/978-3-319-25010-6_4" />
		<title level="m">Collecting, Integrating, Enriching and Republishing Open City Data as Linked Data</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="57" to="75" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Streampipes: Solving the challenge with semantic stream processing pipelines</title>
		<author>
			<persName><forename type="first">D</forename><surname>Riemer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Kaulfersch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hutmacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Stojanovic</surname></persName>
		</author>
		<idno type="DOI">10.1145/2675743.2776765</idno>
		<ptr target="http://doi.acm.org/10.1145/2675743.2776765" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th ACM International Conference on Distributed Event-Based Systems, ser. DEBS &apos;15</title>
				<meeting>the 9th ACM International Conference on Distributed Event-Based Systems, ser. DEBS &apos;15<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="330" to="331" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Spark: Cluster computing with working sets</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zaharia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chowdhury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Franklin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shenker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Stoica</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">HotCloud</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">95</biblScope>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Methods and tools for management of distributed event processing applications</title>
		<author>
			<persName><forename type="first">D</forename><surname>Riemer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<pubPlace>Karlsruhe</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Karlsruher Institut für Technologie (KIT)</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Dissertation</note>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Janowicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Phuoc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Haller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lefrançois</surname></persName>
		</author>
		<ptr target="https://www.w3.org/TR/2017/CR-vocab-ssn-20170711/" />
		<title level="m">Semantic sensor network ontology</title>
				<imprint>
			<date type="published" when="2017-07">Jul. 2017</date>
		</imprint>
	</monogr>
	<note>W3C, Candidate Recommendation</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Hodgson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Keller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hodges</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Spivak</surname></persName>
		</author>
		<ptr target="http://www.qudt.org/" />
		<title level="m">Qudt -quantities, units, dimensions and data types ontologies</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">Tech. Rep</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Strider: A Hybrid Adaptive Distributed RDF Stream Processing Engine</title>
		<author>
			<persName><forename type="first">X</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Curé</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1705.05688" />
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Xgsn: An open-source semantic sensing middleware for the web of things</title>
		<author>
			<persName><forename type="first">J.-P</forename><surname>Calbimonte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sarni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Eberle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Aberer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">TC/SSN@ ISWC</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="51" to="66" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Openiot: Open source internet-of-things in the cloud</title>
		<author>
			<persName><forename type="first">J</forename><surname>Soldatos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kefalakis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Interoperability and open-source solutions for the internet of things</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="13" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Incremental deployment and migration of geo-distributed situation awareness applications in the fog</title>
		<author>
			<persName><forename type="first">E</forename><surname>Saurez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lillethun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Ramachandran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ottenwälder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems</title>
				<meeting>the 10th ACM International Conference on Distributed and Event-based Systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="258" to="269" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
