                                       Time Series Databases
                                           © Dmitry Namiot
                                Lomonosov Moscow State University, Moscow

                                                dnamiot@gmail.com


Proceedings of the XVII International Conference «Data Analytics and Management in Data Intensive Domains» (DAMDID/RCDL’2015), Obninsk, Russia, October 13-16, 2015

                        Abstract

   Data persistence for time series is an old and, in many cases, traditional task for databases. In general, a time series is just a sequence of data elements. The typical use case is a set of measurements made over a time interval. Much of the data generated by sensors, in machine-to-machine communication, and in the Internet of Things area can be collected as time series. Time series are used in statistics, mathematics, and finance. In this paper, we provide a survey of data persistence solutions for time series data. The paper covers traditional relational databases as well as NoSQL-based solutions for time series data.

1 Introduction

   According to the classical definition, a time series is simply a sequence of numbers collected at regular intervals over a period of time. More generally, a time series is a sequence of data points (not necessarily numbers). Typically, a time series consists of successive measurements made over a time interval.
   So, time series exist in any domain of applied science and engineering which involves temporal measurements. For this reason, data persistence mechanisms for time series are among the oldest tasks for databases.
   Let us start with relational databases. At first glance, the table design looks simple. We can create a table with a timestamp as a key column. Each new measurement simply adds a new row, and the columns describe our measurements (attributes). For different time series we can add a series ID column too:

   CREATE TABLE TS
   (
     ts_time TIMESTAMP NOT NULL PRIMARY KEY,
     ts_id INT,
     ts_value FLOAT
   )

   In a real machine-to-machine (M2M) or Internet of Things (IoT) application, we will have more than one sensor. So, more likely, our application should support many time series simultaneously. Of course, in any practical system the whole set of attributes is limited, but each individual measurement could (potentially) carry any subset of this limited list. This is the typical use case for M2M and IoT applications. Devices in our system will provide data asynchronously (the most practical use case). So, each row in our table will have many empty (null-valued) columns. This decision (one row per measurement) leads to a very inefficient use of disk space. It also complicates future processing.
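   As a rough illustration of this sparsity problem, consider a sketch of a "wide" table where each attribute from the limited list gets its own column (the sensor attribute names here are hypothetical):

   -- One row per measurement; every column that a given
   -- device does not report stays NULL.
   CREATE TABLE TS_WIDE
   (
     ts_time     TIMESTAMP NOT NULL,
     device_id   INT NOT NULL,
     temperature FLOAT,     -- reported only by some devices
     humidity    FLOAT,     -- reported only by some devices
     pressure    FLOAT,     -- reported only by some devices
     PRIMARY KEY (ts_time, device_id)
   )

   A device that reports only temperature fills one value column and leaves the rest NULL, which is exactly the wasted space described above.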
   Now let us discuss the possible operations. For time series (measurements), the main operation in the data space is adding new data (the INSERT statement in SQL). Updating or deleting data is quite uncommon for time series. Reading also has some special aspects. Obviously, the reading of data is closely related to processing methods. The challenge in a database of evolving time series is to provide efficient algorithms and access methods for query processing, taking into consideration the fact that the database changes continuously as new data become available [1].
   Many (most) algorithms for time series data mining actually work with only part of the data, and for streamed data the sliding window is the most natural choice. Let us discuss some well-known techniques. Random sampling lets us sample the stream at periodic intervals. In this approach, we maintain a sample called the "reservoir," from which a random sample can be generated. As the data stream flows, every new element has a certain probability of replacing an old element in the reservoir [2]. Instead of sampling the data stream randomly, we can use the sliding window model to analyze stream data [3]. The basic idea is that rather than running computations on all of the data seen so far, or on some sample (as in the above-mentioned random sampling), we can make decisions based only on recent data. So, any element that arrives for analysis at some time t will be declared expired at time t+w, where w is the window "size". In many practical use cases (e.g., sensing in IoT applications), we can assume that only recent events may be important.
   A histogram approach partitions the data into a set of contiguous buckets.
   In a tilted time frame model, we use different granularities for time frames. The most recent time is registered at the finest granularity; the most distant time is registered at a coarser granularity, and so on.
   We present this short survey to highlight the fact that time series data mining algorithms almost always work with only part of the data. It is a common use case for time series processing. In terms of SQL, a SELECT statement with some complex condition is uncommon for time series processing. What we need to read, for most of the algorithms, is some limited portion (window) of recent, constantly updated (appended) data. At the same time, we need the full log of data (e.g., for verification, audit, billing, etc.). It is probably the perfect example of the so-called lambda architecture [4], illustrated in Figure 1.
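   Returning to the windowed read mentioned above: over the naive TS table it can be expressed as a plain range predicate on the timestamp. This is only a sketch (interval syntax varies between SQL dialects):

   -- Read only the most recent 5-minute window for one series
   SELECT ts_time, ts_value
     FROM TS
    WHERE ts_id = 1
      AND ts_time >= CURRENT_TIMESTAMP - INTERVAL '5' MINUTE
    ORDER BY ts_time;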




              Fig. 1. Lambda architecture [5]

   In this picture we have a data source with constantly updated data. Most of the data should be processed in real time. This is especially true for Internet of Things and M2M systems, where time series processing is the main source behind control actions and conclusions (alerts). At the same time, we could still have some processing without strong limitations on time-to-decision. And the database could save processed data for queries from users and applications. It would be convenient to have such processing as a part of a database system.
   The rest of the paper is organized as follows. Section 2 is devoted to time series support in relational databases. In Section 3 we describe NoSQL solutions for time series.

2 Time Series and relational databases

   In this section, we would like to discuss time series persistence (and processing, of course) in "traditional" databases. As the first example, we have tested TokuDB as an engine for time series data [6]. This engine uses a Fractal Tree index (instead of the better-known B-tree). A Fractal Tree index is a tree data structure in which any node can have more than two sub-nodes (children) [7]. Like a B-tree, it keeps data always sorted and allows fast searches and sequential access. But unlike a B-tree, a Fractal Tree index has buffers at each node. Buffers allow changes to be stored in intermediate locations. Buffers let us schedule disk writes so that each writing operation deals with a large block of data [8]. A simple database-related analogue is a transaction monitor. This optimization enables fast data writing (the INSERT operations we are mostly interested in for time series). Also, these "local" buffers can be used during replication. Figure 2 illustrates a benchmark for INSERT operations (provided by TokuDB).

              Fig. 2. TokuDB vs. InnoDB [9]
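   From the application's point of view, switching the storage engine does not change the table design. A minimal sketch, assuming TokuDB is installed as a MariaDB/MySQL storage engine (the table is a variant of the naive schema above, with a composite key so several series can share it):

   -- Same kind of time series table, stored in the write-optimized engine
   CREATE TABLE TS_TOKU
   (
     ts_time  TIMESTAMP NOT NULL,
     ts_id    INT,
     ts_value FLOAT,
     PRIMARY KEY (ts_time, ts_id)
   ) ENGINE=TokuDB;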
   Of course, "traditional" relational systems are easy to maintain, they could be cheaper to host, and it could be easier (cheaper) to find developers. So, maintenance is the biggest advantage.
   As the second item in this section, we would like to mention the time series-related SQL extension in the Vertica database [10]. Vertica provides so-called time series analytics as an extension to SQL. As we have mentioned above, input records for time series data usually appear at non-uniform intervals. It means they might have gaps (missing values). Vertica provides so-called gap-filling functionality. This option fills in missing data points using an interpolation scheme. Secondly, Vertica allows developers to use event-based windows to break time series data into windows that border on significant events within the data. The SQL extension proposed by Vertica can be used as a useful example of language-based support for time series in SQL databases. Here is a typical example:

   SELECT item, slice_time, ts_first_value(price, 'const') price
     FROM ts_test
    WHERE price_time BETWEEN timestamp '2015-04-14 09:00'
                         AND timestamp '2015-04-14 09:25'
    TIMESERIES slice_time AS '1 minute'
         OVER (PARTITION BY item ORDER BY price_time)
    ORDER BY item, slice_time, price;

   This request should fill in missing data from 09:00 till 09:25 with a 1-minute step in the returned snapshot.
   Vertica provides additional support for time series analytics with the following SQL extensions:
   The SELECT … TIMESERIES clause supports gap-filling and interpolation computation.
   TS_FIRST_VALUE and TS_LAST_VALUE are time series aggregate functions that return the value at the start or end of a time slice, respectively, which is determined by the interpolation scheme.
   TIME_SLICE is a (SQL extension) date/time function that aggregates data by different fixed-time intervals and returns an input TIMESTAMP value rounded up to a value that corresponds to the start or end of the time slice interval.
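   As an illustration (a sketch only; the exact argument list should be checked against the Vertica documentation), TIME_SLICE can be combined with an ordinary GROUP BY to aggregate raw measurements into fixed buckets:

   -- Average price per 5-minute slice (hypothetical table ts_test)
   SELECT TIME_SLICE(price_time, 5, 'MINUTE') AS slice_start,
          AVG(price) AS avg_price
     FROM ts_test
    GROUP BY TIME_SLICE(price_time, 5, 'MINUTE')
    ORDER BY slice_start;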
   Similar examples are the so-called window functions in PostgreSQL [11]. A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row; the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result. A typical example (a moving average over three rows) is illustrated below:

   SELECT id_sensor, name_sensor, temperature,
          avg(temperature) OVER (ORDER BY id_sensor
               ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
     FROM temperature_table;
3 Time Series in NoSQL systems

   One of the basic principles proclaimed by NoSQL is the rejection of a universal data model. The data model must meet the required processing methods. The second basic principle is the lack of dedicated programming access tools (layers). The data access API is a part of a NoSQL system, and it presents one of the important elements for the final selection of a data model.
   In our opinion, NoSQL solutions for time series could be described as "best practices" in using NoSQL stores for time series. Let us see, for example, the architecture of OpenTSDB [12]. It is one of the popular NoSQL solutions for time series.

             Fig. 3. OpenTSDB architecture [13]

   OpenTSDB is a set of so-called Time Series Daemons (TSDs) and command line utilities. Each TSD is independent and is essentially a wrapper for access to the HBase database [14]. Each TSD uses HBase to store and retrieve time series data. The TSD itself supports a set of protocols for access to the data.
   The second dimension of this architectural solution is the optimized schema for data. In this case, the schema is highly optimized for fast aggregations of similar time series. This schema is actually almost a de-facto standard for presenting time series in a so-called BigTable data model. In OpenTSDB, a time series data point consists of a metric name, a timestamp, a value and a set of tags (key-value pairs). So, for example, suppose we have a metric (name) data.test, a key name host, and a key value host1. In this case, each row in the master table looks like this:

   data.test Time Value host host1

   Here Time is a timestamp and Value is a measured value. All APIs use a similar format for data writing – there is no schema definition. The set of keys lets us present so-called multivariate time series.
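   For instance, the same data point can be sent to a TSD through OpenTSDB's telnet-style put command (a sketch; the timestamp and value here are invented):

   put data.test 1428994800 42.5 host=host1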
   OpenTSDB handles things a bit differently by introducing the idea of 'tags'. Each time series still has a 'metric' name, but it is much more generic, something that can be shared by many unique time series. Instead, the uniqueness comes from a combination of tag key/value pairs that allows for flexible queries with very fast aggregations. Every time series in OpenTSDB must have at least one tag. The underlying data schema will store all of a tag's time series next to each other so that aggregating the individual values is very fast and efficient. OpenTSDB was designed to make these aggregate queries as fast as possible.
   OpenTSDB follows one of the commonly used patterns for time series data persistence in column-oriented databases like HBase. The basic data storage unit in HBase is a cell. Each cell is identified by the row ID, column-family name, column name and the version. Each cell can have multiple versions of data. At the physical level, each column family is stored contiguously on disk and the data are physically sorted by row ID, column name and version [15]. The version dimension is used by HBase for time-to-live (TTL) calculations. Column families may be associated with a TTL value (length), so HBase will automatically delete rows once the expiration time is reached. For time series data, this feature lets us automatically delete old (obsolete) measurements, for example. The possible schemes for time series data are:
   a) The row key is constructed as a combination of a timestamp and a sensor ID. Each column is the offset of the time from the timestamp in the row key. E.g., the timestamp covers one hour and the column offset is two minutes. Each cell contains the values of all measurements of the sensor (defined by the sensor ID) at the moment timestamp + offset. As the format for the cell's data, we can use JSON or even comma-separated values. As a variation, we can combine all data in a row into a binary object (blob).
   b) The row key is constructed as a combination of a timestamp and a sensor ID. Each column corresponds to one measurement (metric) and contains the values for all time offsets.
   KairosDB [16] is a rewrite of the original OpenTSDB and uses Cassandra as a data store.
   There are several patterns for storing time series data in Cassandra. When writing data to Cassandra, data is sorted and written sequentially to disk. When retrieving data by row key and then by range, you get a fast and efficient access pattern due to minimal disk seeks.
   The simplest model for storing time series data is creating a wide row of data per data source. E.g.:

   SensorID, {timestamp1, value1}, {timestamp2, value2} … {timestampN, valueN}

   Cassandra can store up to 2 billion columns per row. For high-frequency measured data, we can add a shard interval to the row key. The solution is to use a pattern called row partitioning: adding data to the row key to limit the number of columns you get per device. E.g., instead of some generic name like smart_meter1 we can use smart_meter1_day (e.g. smart_meter1_20150414).
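   In CQL terms, this partitioning pattern corresponds to putting the shard (e.g., the day) into the partition key and the timestamp into the clustering key. A minimal sketch (hypothetical table and column names):

   -- One partition per sensor per day; rows within the partition
   -- are stored sorted by timestamp, so range reads are sequential.
   CREATE TABLE measurements (
     sensor_id text,
     day       text,        -- e.g. '20150414'
     ts        timestamp,
     value     double,
     PRIMARY KEY ((sensor_id, day), ts)
   );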
   Another common pattern for time series data is so-called rolling storage. Imagine we are using this data for a dashboard application, and we only want to show the last 10 temperature readings. Older data is no longer useful, so it can eventually be purged. With many other databases, we would have to set up a background job to clean out old data. With Cassandra, we can take advantage of a feature called expiring columns to have our data quietly disappear after a set number of seconds [17].
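   Expiring columns are requested per write. A sketch against the hypothetical table above:

   -- The inserted values disappear automatically after 24 hours
   INSERT INTO measurements (sensor_id, day, ts, value)
   VALUES ('smart_meter1', '20150414', '2015-04-14 09:00:00', 21.5)
   USING TTL 86400;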
   What is really interesting and new for NoSQL solutions is the growing support for SenML [18]. SenML defines a data model for measurements and simple meta-data about measurements and devices. The data in SenML is structured as a single object with attributes. The object contains an array of entries (measurements). Each entry is an object that has attributes such as a unique identifier for the sensor, the time the measurement was made, and the current value (Figure 4). Serializations for this data model are defined for JSON and XML.
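   As a rough illustration of that structure (the field names follow the SenML draft [18]; the base name and values are invented), a JSON-serialized batch of measurements might look like:

   {"bn": "urn:dev:mac:0024befffe804ff1/",
    "e": [
      {"n": "temperature", "u": "Cel", "v": 23.1, "t": 0},
      {"n": "humidity",    "u": "%RH", "v": 41.0, "t": 0}
    ]}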




   Geras DB [19] uses SenML as its data format. Another important feature for any time series database is MQTT support. MQTT is a popular connectivity protocol for Machine-to-Machine and Internet of Things communications [20]. It was designed as an extremely lightweight publish/subscribe messaging transport. Sensors as data sources may use MQTT, so a time series database should be able to acquire data right from MQTT [21, 22].

      Fig. 4. SenML example

   Druid is an open-source analytics data store designed for OLAP queries on time series data (trillions of events, petabytes of data). Druid provides cost-effective and always-on real-time data ingestion, arbitrary data exploration, and fast data aggregation [23]. Druid is a system built to allow fast ("real-time") access to large sets of seldom-changing data. It provides:
   a column-based storage format for partially nested data structures;
   hierarchical query distribution with intermediate pruning;
   indexing for quick filtering;
   real-time ingestion (ingested data is immediately available for querying);
   a fault-tolerant distributed architecture that does not lose data.
   Data is ingested by Druid directly through its real-time nodes, or batch-loaded into historical nodes from a deep storage facility. Real-time nodes accept JSON-formatted data from a streaming data source. Batch-loaded data formats can be JSON, CSV, or TSV. Real-time nodes temporarily store and serve data in real time, but eventually push the data to the deep storage facility, from which it is loaded into historical nodes. Historical nodes hold the bulk of data in the cluster.
   Real-time nodes chunk data into segments, and they are designed to frequently move these segments out to deep storage. To maintain cluster awareness of the location of data, these nodes must interact with MySQL to update metadata about the segments, and with Apache ZooKeeper to monitor their transfer.
   Figure 5 illustrates the Druid architecture.
               Fig. 5. Druid architecture [24]

   SciDB's [25] native multi-dimensional array data model is designed for ordered, highly dimensional, multifaceted data. SciDB's data is never overwritten, allowing you to record and access data corrections and updates over time. SciDB is designed to efficiently handle both dense and sparse arrays, providing dramatic storage efficiencies as the number of dimensions and attributes grows. Math operations run directly on the native data format. Partitioning data in each coordinate of an array facilitates fast joins and access along any dimension, thereby speeding up clustering, array operations and population selection.
   BlinkDB [26] supports a slightly constrained set of SQL-style declarative queries and provides approximate results for standard SQL aggregate queries, specifically queries involving COUNT, AVG, SUM and PERCENTILE, and is being extended to support any User-Defined Functions (UDFs). Queries involving these operations can be annotated with either an error bound or a time constraint, based on which the system selects an appropriate sample to operate on. For example:

   SELECT avg(Temperature) FROM Table WHERE SensorID=1 WITHIN 2 seconds

   SAP HANA's [27] column-oriented in-memory structures have been extended to provide efficient processing of series data. SAP HANA provides:
   a series property aspect of tables;
   built-in special SQL functions for working with series data;
   analytic functions: special SQL functions for analyzing series data;
   storage support: advanced techniques for storing equidistant data using dictionary encoding.
   By adding series data descriptors to column tables, users can identify which columns contain series data, period information, hints on how to handle missing timestamps, and so on. By explicitly telling HANA about time series data, it can more efficiently store and manage this data to increase performance and decrease the memory footprint through improved compression.
   TABLESAMPLE allows ad-hoc random samples over column tables, so it is easy, for example, to calculate a result from a defined percentage of the data in a table.
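   For instance (a sketch assuming the SQL-standard TABLESAMPLE clause; HANA's exact options may differ), an approximate average over roughly 10% of the rows could be written as:

   SELECT AVG(ts_value)
     FROM TS TABLESAMPLE SYSTEM (10);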
   Examples of the built-in series functions are:
   SERIES_GENERATE – generate a complete series;
   SERIES_DISAGGREGATE – move from coarse units (e.g., day) to finer ones (e.g., hour);
   SERIES_ROUND – convert a single value to a coarser resolution;
   SERIES_PERIOD_TO_ELEMENT – convert a timestamp in a series to its offset from the start;
   SERIES_ELEMENT_TO_PERIOD – convert an integer to the associated period.
   Analytical functions are:
   CORR – Pearson product-moment correlation coefficient;
   CORR_SPEARMAN – Spearman rank correlation;
   LINEAR_APPROX – replace NULL values by interpolating adjacent non-NULL values;
   MEDIAN – compute the median value.
   InfluxDB [28] is an open-source, distributed time series database with no external dependencies. InfluxDB is targeted at use cases for DevOps, metrics, sensor data, and real-time analytics. The key points behind InfluxDB are:
   an SQL-like query language;
   an HTTP-based API;
   database-managed retention policies for data;
   a built-in management interface;
   on-the-fly aggregation.
   An SQL-like query with aggregation by time looks like this:

   SELECT mean(value) FROM T GROUP BY time(5m)
   The lack of external dependencies makes InfluxDB very attractive from the practical point of view. The opposite approach is the above-mentioned Druid, which relies on almost the full Apache stack (ZooKeeper, etc.).
   Among cloud-based solutions for time series data, we can mention Blueflood [29]. Blueflood uses Cassandra as the data store.
   As per the classical definition, Big Data can be described via the so-called 3Vs: Variety, Velocity and Volume. In our opinion, for time series databases the key factor is Velocity. A NoSQL solution for time series should be selected in the case of high-frequency measurements.
   And among the NoSQL solutions for time series, Cassandra is the preferred choice.
References
[1] Kontaki, M., Papadopoulos, A. N., & Manolopoulos, Y. (2007). Adaptive similarity search in streaming time series with sliding windows. Data & Knowledge Engineering, 63(2), 478-502.
[2] Chatfield, C. (2013). The analysis of time series: an introduction. CRC Press.
[3] Han, J., Kamber, M., & Pei, J. (2006). Data mining: Concepts and techniques. Morgan Kaufmann.
[4] Fan, W., & Bifet, A. (2013). Mining big data: current status, and forecast to the future. ACM SIGKDD Explorations Newsletter, 14(2), 1-5.
[5] Lambda Architecture: Design Simpler, Resilient, Maintainable and Scalable Big Data Solutions. http://www.infoq.com/articles/lambda-architecture-scalable-big-data-solutions
[6] Bartholomew, D. (2014). MariaDB Cookbook. Packt Publishing Ltd.
[7] Chen, S., Gibbons, P. B., Mowry, T. C., & Valentin, G. (2002, June). Fractal prefetching B+-trees: Optimizing both cache and disk performance. In Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data (pp. 157-168). ACM.
[8] Bender, M. A., Farach-Colton, M., Fineman, J., Fogel, Y., Kuszmaul, B., & Nelson, J. (2007, June). Cache-oblivious streaming B-trees. In Proceedings of the 19th Annual ACM Symposium on Parallelism in Algorithms and Architectures (pp. 81-92). ACM Press.
[9] TokuDB vs. InnoDB Flash Memory. http://www.tokutek.com/tokudb-for-mysql/benchmarks-vs-innodb-flash/
[10] Lamb, A., Fuller, M., Varadarajan, R., Tran, N., Vandiver, B., Doshi, L., & Bear, C. (2012). The Vertica analytic database: C-store 7 years later. Proceedings of the VLDB Endowment, 5(12), 1790-1801.
[11] Obe, R., & Hsu, L. S. (2012). PostgreSQL: up and running. O'Reilly Media, Inc.
[12] Wlodarczyk, T. W. (2012, December). Overview of time series storage and processing in a cloud environment. In Proceedings of the 2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom) (pp. 625-628). IEEE Computer Society.
[13] OpenTSDB. http://opentsdb.net
[14] George, L. (2011). HBase: the definitive guide. O'Reilly Media, Inc.
[15] Han, D., & Stroulia, E. (2012, September). A three-dimensional data model in HBase for large time-series dataset analysis. In Maintenance and Evolution of Service-Oriented and Cloud-Based Systems (MESOCA), 2012 IEEE 6th International Workshop on (pp. 47-56). IEEE.
[16] Goldschmidt, T., Jansen, A., Koziolek, H., Doppelhamer, J., & Breivold, H. P. (2014, June). Scalability and robustness of time-series databases for cloud-native monitoring of industrial processes. In Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on (pp. 602-609). IEEE.
[17] Planet Cassandra. http://planetcassandra.org/getting-started-with-time-series-data-modeling/
[18] Jennings, C., Arkko, J., & Shelby, Z. (2012). Media types for sensor markup language (SenML).
[19] Geras DB. http://1248.io/geras.php Retrieved: Feb, 2015
[20] Hunkeler, U., Truong, H. L., & Stanford-Clark, A. (2008, January). MQTT-S – A publish/subscribe protocol for Wireless Sensor Networks. In Communication Systems Software and Middleware and Workshops (COMSWARE 2008), 3rd International Conference on (pp. 791-798). IEEE.
[21] Namiot, D., & Sneps-Sneppe, M. (2014). On IoT programming. International Journal of Open Information Technologies, 2(10), 25-28.
[22] Sneps-Sneppe, M., & Namiot, D. (2012, April). About M2M standards and their possible extensions. In Future Internet Communications (BCFIC), 2012 2nd Baltic Congress on (pp. 187-193). IEEE.
[23] Druid. http://druid.io
[24] Druid Whitepaper. http://static.druid.io/docs/druid.pdf
[25] Stonebraker, M., Brown, P., Poliakov, A., & Raman, S. (2011, January). The architecture of SciDB. In Scientific and Statistical Database Management (pp. 1-16). Springer Berlin Heidelberg.
[26] Agarwal, S., Mozafari, B., Panda, A., Milner, H., Madden, S., & Stoica, I. (2013, April). BlinkDB: queries with bounded errors and bounded response times on very large data. In Proceedings of the 8th ACM European Conference on Computer Systems (pp. 29-42). ACM.
[27] Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database – An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
[28] Leighton, B., Cox, S. J., Car, N. J., Stenson, M. P., Vleeshouwer, J., & Hodge, J. (2015). A best of both worlds approach to complex, efficient, time series data delivery. In Environmental Software Systems. Infrastructures, Services and Applications (pp. 371-379). Springer International Publishing.
[29] Blueflood: A new Open Source Tool for Time Series Data at Scale. https://developer.rackspace.com/blog/blueflood-announcement/