<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>MMSBench-Net: Scenario-Based Evaluation of Multi-Model Database Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>David Lengweiler</string-name>
          <email>david.lengweiler@unibas.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Vogt</string-name>
          <email>marco.vogt@unibas.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Heiko Schuldt</string-name>
          <email>heiko.schuldt@unibas.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Basel, Department of Mathematics and Computer Science</institution>
          ,
          <addr-line>Spiegelgasse 1, 4051 Basel</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Multi-model database systems have gained increasing popularity due to their efficient management of diverse types of data and support for complex queries. They offer a unified approach for managing data in various formats, including structured, semi-structured, and unstructured data. However, benchmarking the performance of such systems is a challenging task, given their complexity, mainly due to their support for multiple data models. While significant research exists for benchmarking single-model databases, a comprehensive approach for evaluating multi-model databases is still in an early stage. To address this challenge, we propose MMSBench-Net, a benchmark for evaluating multi-model database systems that support structured relational, semi-structured document, and graph data models. MMSBench-Net enables comparative analysis of database systems and demonstrates how different workloads can reveal the strengths and weaknesses of multi-model database systems. To demonstrate the effectiveness of the benchmark, we compare the performance of two database systems: Polypheny and SurrealDB. Our work is a first step towards a comprehensive evaluation methodology for multi-model database systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Database benchmark</kwd>
        <kwd>Polystore</kwd>
        <kwd>Multi-model database</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The field of data management has experienced a significant transformation in recent years. While relational database systems have long been the dominant approach, various specialized systems have emerged. Two data models that have gained substantial popularity are the graph and the document model. These data models allow data to be represented and queried differently from the relational model [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, these new data models are by no means an evolution of the relational model. As a result, use cases that could be modeled optimally with the relational model might only be modeled poorly with the graph or the document model. Consequently, database management systems supporting multiple data models have gained popularity. These multi-model database systems allow applications to manage their data in a way that best suits the specific domain, but they also introduce greater complexity.
      </p>
      <p>
        While there are well-established
benchmarks like TPC-C [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], TPC-H [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and YCSB [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for
single-model databases, the set of benchmarks targeting
multi-model databases is very limited. Existing
benchmarks for multi-model databases often focus on specific
data models, which restricts the range of systems that
can be evaluated. Moreover, these benchmarks typically
involve complex scenarios that lack fine-grained
workload adjustments, limiting their usefulness for detailed
evaluations and only allowing for broad comparisons.
      </p>
      <p>This paper makes two contributions: Firstly, we propose a benchmark called MMSBench-Net that is tailored to multi-model database systems and based on a real-world scenario that deals with relational, document and graph data. Secondly, we demonstrate the utility of our benchmark by comparing the performance of two multi-model database systems, Polypheny1 and SurrealDB2, and discuss the results.</p>
      <p>The remainder of this paper is structured as follows: In Section 2, we introduce MMSBench-Net, discuss the underlying scenario, and present the data and workload that is being generated. In Section 3, we then briefly introduce the two multi-model database systems subject to the benchmark evaluation presented in this paper. Section 4 then presents and discusses the obtained results. The paper concludes with an overview of related work in Section 5, an outlook towards future work in Section 6 and a conclusion in Section 7.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Benchmark</title>
      <p>To evaluate the performance of multi-model database systems, we propose MMSBench-Net, a benchmark that assesses their ability to manage structured relational, semi-structured document, and graph data. MMSBench-Net is designed to evaluate the efficiency and versatility of multi-model database systems under different workloads. The “-Net” suffix refers to the first scenario introduced in this paper.</p>
      <p>[Figure 1: The complete schema of the MMSBench-Net scenario: a document part with semi-structured Logs (deviceId, timestamp, optional error with message, type and stacktrace, optional users and message), a graph part with Device nodes and Connection edges carrying an id and additional properties, and a relational part with a User table (id, firstname, lastname, birthday, salary) and a Login table (accesstime, deviceId, userId, duration, successful).]</p>
      <sec id="sec-3-2">
        <title>2.1. Scenario</title>
        <p>We plan to add more scenarios (and thus suffixes) in the future, leading to a complete suite.</p>
        <p>The MMSBench-Net benchmark consists of a set of queries that reflect real-world use cases across the three data models. These queries are designed to evaluate various aspects of multi-model database systems, including their ability to handle complex data structures, support complex queries, and efficiently execute transactions.</p>
        <p>MMSBench-Net is inspired by a real-world scenario of a company’s network monitoring application. Network monitoring plays a vital role in identifying and addressing potential issues, threats and vulnerabilities in the network infrastructure, ensuring smooth operations and preventing data breaches or downtime. The monitoring application continuously collects all kinds of information about the network, including logged-in devices, usage statistics and log messages, resulting in huge amounts of heterogeneous data.</p>
        <p>[Figure 2 (Example Status Log Showing an Error): { deviceId: 3, timestamp: "2017-07-23:14-03", error: { message: "Out of Memory", type: "Application Error", stacktrace: ["Error on start of..."] }, user: { id: 34, status: "logged in" }, users: [34, 45] }]</p>
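<p>The shape of the semi-structured status logs can be illustrated with a small generator sketch. This is a minimal illustration in Python; the field names follow Figure 2, while the probabilities and value ranges are hypothetical, not the benchmark's actual configuration.</p>

```python
import random

def generate_status_log(device_id, rng):
    """Build one semi-structured status log like the example in Figure 2.

    The properties deviceId, timestamp and users are always present;
    error details are attached only sometimes, mirroring how the
    benchmark randomly varies the structure of each log entry.
    """
    log = {
        "deviceId": device_id,
        "timestamp": "2017-07-23:14-03",  # fixed here; randomized in the benchmark
        "users": rng.sample(range(1, 100), k=rng.randint(1, 3)),
    }
    if rng.random() > 0.5:  # some logs report an error, others do not
        log["error"] = {
            "message": "Out of Memory",
            "type": "Application Error",
            "stacktrace": ["Error on start of..."],
        }
    return log

rng = random.Random(7)
log = generate_status_log(3, rng)
```

<p>Passing a dedicated random generator rather than using the global one keeps the generated data reproducible across runs.</p>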
        <p>The network monitoring application modeled by MMSBench-Net maintains data in three data models: (i) a graph part, modeling the topology of the network, (ii) a document part, which consists of semi-structured logs produced by the devices, and (iii) a relational part, which holds basic information about the users and recorded data about their access patterns. The complete schema is depicted in Figure 1.</p>
        <p>The topology of the network is saved as a graph, where each device in the network is represented as a node, and a network connection between two devices is modeled as an edge. For both the devices modeled as nodes and the connections modeled as edges, additional information is stored, such as the “manufacturer” of a device, its “purchase year” and other relevant information.</p>
        <p>In irregular intervals, each device produces a semi-structured log entry containing information about its current state. An example of such a log can be seen in Figure 2. These logs might contain error information, indicating problems with the device. All log entries include the properties deviceId, timestamp and users. However, additional properties with varying levels of nesting are randomly generated for each log entry.</p>
        <p>An important piece of information for monitoring a network is which person is currently associated with which devices. For this scenario, we assume a rather simple user database represented as a relational table containing information on the employees. Furthermore, there is also a table for recording successful and failed login attempts and for accounting the usage of devices. Hence, this scenario necessitates the database system to deal with heterogeneous read and write workloads.</p>
        <p>2.2. Schema and Data Generation</p>
        <p>To generate the schema and to populate it with realistic, but artificially created data, MMSBench-Net starts with building a simulation. This simulation includes the graph representing the network that is being monitored, as well as the users interacting with it. The nodes in the graph represent devices (e.g., computers, mobile phones, switches, and routers). The edges between these nodes represent network connections between these devices. The simulation utilizes the defined topology to generate meaningful workloads. By making changes to this topology, it becomes possible to adjust the distribution of available targets for the queries. This makes it easy to align a workload with specific requirements and a desired focus.</p>
        <p>The process of generating this simulated network consists of multiple steps:</p>
        <p>User Generation First, a configurable number of users is generated.</p>
        <p>Generation of Devices For each type of device (e.g., switches, computers), a random number (within a configurable range) of devices is generated.</p>
        <sec id="sec-3-2-1">
          <title>Device Properties and Logs Generation</title>
          <p>For each device, a random set of properties is generated. Furthermore, a set of login logs as well as status logs is added.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>Generation of Connections</title>
          <p>According to the layout of the network, multiple pairs of devices are selected and connections between them are created.</p>
        </sec>
        <sec id="sec-3-2-3">
          <title>Connection Properties Generation</title>
          <p>In contrast to the devices, connections do not have status logs, but they also receive multiple dynamic properties.</p>
          <p>After the generation of the network is done, it is used as a template to create the workload. Each workload consists of queries in a query language supported by the system under evaluation.</p>
          <p>A distinction is made between the three data models. First, the graph data is handled as already seen in Figure 1. For this, each device is represented as a node and each connection is translated to an edge which connects them. The small set of dynamic properties is inserted directly as part of these nodes and edges (if properties are not supported by the data model, they are handled as if they were unstructured data). Then all generated device logs (an example can be seen in Figure 2) are translated to document queries. Each entity of type device translates its nested status logs into multiple document queries, each containing a timestamp and the ID of the device. Finally, all login records are collected from the devices and, together with the user data itself, are translated into relational queries. The collected queries are then sequentially executed on the database systems.</p>
          <p>2.3. Workload Generation</p>
          <p>A workload consists of a collection of randomly chosen queries according to a configurable distribution. Since the order in which queries are executed can impact the performance of a database system (e.g., due to concurrency effects and locking), the implementation needs to make sure that the workload is identical for all systems under evaluation (e.g., by using the same seed). MMSBench-Net uses a variety of queries to build its workloads:</p>
          <p>Read Device or Connection Selects a device or connection and retrieves it partially or fully. One of the static parameters is chosen for this.</p>
          <p>Read Log Selects a device and reads all or parts of its logs. Filters as well as projections of underlying keys are chosen from the target device.</p>
          <p>Remove Device Selects a device and deletes it; all connections to this device are deleted as well. Logs are also deleted, while information on login attempts is kept.</p>
          <p>Remove Connection Randomly selects a connection between two network devices and deletes it.</p>
          <p>Add Device Adds a device to the network and generates new connections to existing devices.</p>
          <p>Remove Logs Randomly selects a device and deletes some of its logs.</p>
          <p>Add Logs Creates a random log message and adds it to existing devices or connections.</p>
          <p>Add User Creates a new user. All attributes are randomly generated.</p>
          <p>Remove User Randomly selects a user who is deleted.</p>
          <p>Change User Randomly selects a user and adjusts an attribute.</p>
          <p>Besides simple queries, there are also more complex retrieval operations which can be chosen; their frequency is also configurable.</p>
          <p>Connectivity Checks “Find all similar connected devices” or “Find connected device of specific type”</p>
          <p>Error Analysis “Identify the top 10 most common errors” or “Calculate the percentage of errors caused by each user”</p>
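<p>The seeded, distribution-driven workload construction can be sketched as follows. This is a minimal illustration in Python; the query kinds and weights shown are hypothetical stand-ins for the benchmark's configurable distribution, not its actual implementation.</p>

```python
import random

# Hypothetical query mix; in the benchmark the distribution is configurable.
QUERY_KINDS = {
    "read_device": 0.30,
    "read_log": 0.25,
    "add_logs": 0.20,
    "remove_connection": 0.10,
    "error_analysis": 0.15,
}

def generate_workload(seed, size):
    """Draw a reproducible sequence of query kinds.

    A dedicated seeded generator ensures that every system under
    evaluation receives the identical query sequence, which matters
    because execution order can affect performance (locking,
    concurrency effects).
    """
    rng = random.Random(seed)
    kinds = list(QUERY_KINDS)
    weights = list(QUERY_KINDS.values())
    return rng.choices(kinds, weights=weights, k=size)

# The same seed reproduces the same workload for each evaluated system.
workload_a = generate_workload(seed=42, size=1000)
workload_b = generate_workload(seed=42, size=1000)
assert workload_a == workload_b
```

<p>Adjusting the weights shifts the ratio between, for example, relational and document queries, which is exactly the knob turned in the evaluation below.</p>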
          <p>Login Activity “Successful logins by user and month” or “Average duration of successful logins by user and hour of the day”</p>
          <p>Firstly, the actions are selected and implemented on the simulated network, while concurrently being captured and converted into queries for the evaluated systems. Once the simulation concludes, the gathered queries are distributed across a configurable number of available threads and executed on the evaluated system. The execution time for each query is measured individually and recorded for subsequent analysis. This facilitates a comprehensive analysis of various aspects of the database systems. Each iteration of this workload generation and execution process is referred to as a cycle; in an evaluation, multiple cycles can be chained together to construct more extensive workloads.</p>
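<p>The execution phase of a cycle can be sketched as follows. This is a minimal illustration in Python; the execute_query callable is a hypothetical stand-in for the evaluated system's client, and the thread count is a placeholder for the configurable value.</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def execute_and_time(execute_query, query):
    """Run one query and return it together with its elapsed nanoseconds."""
    start = time.perf_counter_ns()
    execute_query(query)
    return query, time.perf_counter_ns() - start

def run_cycle(execute_query, queries, threads=4):
    """Distribute the captured queries over worker threads and record
    each query's execution time individually for later analysis."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(execute_and_time, execute_query, q) for q in queries]
        return [f.result() for f in futures]

# Example with a dummy "system" that just sleeps briefly per query.
timings = run_cycle(lambda q: time.sleep(0.001), ["q1", "q2", "q3"])
```

<p>Keeping one timing record per query, rather than a single total, is what later allows the results to be grouped by data model.</p>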
          <p>3. Evaluated Systems</p>
          <p>To showcase the capabilities of MMSBench-Net, two multi-model databases have been chosen to be evaluated: Polypheny and SurrealDB. These two systems have been selected since they follow completely opposite approaches for implementing multiple data models beneath one facade. While Polypheny maintains the individual models independently, SurrealDB follows a more monolithic approach by combining all data models in one unified model.</p>
          <p>3.1. Polypheny</p>
          <p>
            Polypheny [
            <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
            ] is a PolyDBMS [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], a multi-model database system built according to the architecture principle of a polystore and supporting multiple query languages. Data can be represented according to the relational, the document and the labeled-property graph data models. Polypheny utilizes multiple highly optimized database systems like HyperSQL3, MongoDB, Neo4j, and PostgreSQL as storage and execution engines. To achieve competitive performance, Polypheny pushes queries down to these underlying data stores. Queries not supported by the underlying data store are executed within Polypheny itself. Polypheny also provides support for transactions with ACID guarantees.
          </p>
          <p>3.2. SurrealDB</p>
          <p>SurrealDB is a multi-model database management system that provides traditional database guarantees, such as ACID transactions, persistent data storage, and fine-grained data access control. Its primary objective is to provide fast performance while adhering to these guarantees. It also supports unstructured data and basic graph functionality, which makes it a suitable choice for this comparison. SurrealDB was designed with the goal of reducing the number of joins required for retrieval queries. It accomplishes this objective by utilizing a graph structure that allows a tuple to link to any other tuple. SurrealQL, a SQL-like query language, is the primary means of interacting with the system, which can be accessed through either a REST or a web socket interface.</p>
          <p>4. Evaluation</p>
          <p>
            Our evaluation uses Chronos [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ], an “evaluation-as-a-service” framework which allows to easily execute different system evaluations and configurations in parallel. To achieve this, it manages a collection of nodes, which are used to execute these different evaluation configurations. The evaluation machines used for obtaining the results presented in this paper are equipped with an Intel Xeon X5650 24-core CPU with 24 GiB of RAM. All machines run Ubuntu 22.04 LTS (with kernel version 5.15.0-37) and the same patch level. As Java runtime environment, we use OpenJDK version 17.0.3. The presented numbers are the median over three runs.
          </p>
          <p>
            Each run uses either a SurrealDB instance in a Docker4 container, deployed from scratch and configured to use a persistent on-file configuration, or a fresh Polypheny instance. The Polypheny instance uses a MongoDB5 store for the document data, a Neo4j6 store for the graph data, and a PostgreSQL7 store for the relational data. Each of these stores is deployed by Polypheny using Docker containers; this requires less setup than bare-metal deployments and achieves similar performance [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. Both Polypheny and SurrealDB have indexes on their primary keys. We provide a reference implementation of the benchmark, including all configurations and the raw results8.
          </p>
          <p>As a first overview comparison, the default configuration of the benchmark, simulating a network with 10 users and around 65 devices, is being used. All scaling parameters are configured to only allow for a slight growth of the network. The different runtimes after multiple cycles of workloads can be seen in Figure 3. With such a small network and thus a low number of queries, SurrealDB manages to execute the workloads faster than Polypheny, even when the amount of queries increases. If one observes the results grouped by the query model, Polypheny is faster than SurrealDB for the relational queries; this can be seen in Figure 4.</p>
          <p>3https://hsqldb.org/</p>
          <p>[Figures 3 and 4: Total runtime (log scale, ns) of Polypheny and SurrealDB over 10 to 100 workload cycles; Figure 5: total runtime for device scaling factors 1x to 5x.]</p>
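<p>The reported numbers follow a simple aggregation scheme: the median total runtime over three runs per configuration, plus per-model groupings as in Figure 4. The sketch below illustrates this in Python; the record layout is hypothetical, not the raw result format of the reference implementation.</p>

```python
from collections import defaultdict
from statistics import median

def median_runtime(run_totals_ns):
    """Report the median total runtime over the runs of one configuration
    (three runs in the evaluation above)."""
    return median(run_totals_ns)

def runtime_by_model(timings):
    """Sum per-query timings per data model (relational, document, graph),
    enabling the grouped comparison shown in Figure 4."""
    totals = defaultdict(int)
    for model, elapsed_ns in timings:
        totals[model] += elapsed_ns
    return dict(totals)

assert median_runtime([120, 100, 130]) == 120
assert runtime_by_model([("relational", 5), ("graph", 3), ("relational", 2)]) == {
    "relational": 7,
    "graph": 3,
}
```

<p>Using the median rather than the mean makes the reported numbers robust against a single outlier run.</p>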
        </sec>
      </sec>
    </sec>
    <p>However, in most real-world scenarios, the network starts with a significantly higher number than the 10 users used by default. Thus, the number of users is adjusted, leading to a higher number of user logins and therefore more relational workload. With an increasing ratio of relational workload, Polypheny is able to perform similarly to SurrealDB. This behavior is similar if the number of devices in the network is increased. While this does not increase the ratio of the relational workload compared to the other data models, it still results in better overall performance of Polypheny, which is depicted in Figure 5. Figure 6 depicts a comparison of different ratios of complex queries in the workload.</p>
    <p>The results obtained from the evaluation of the two quite different systems confirm the concepts of the MMSBench-Net benchmark, in particular that it is agnostic to the concrete database under evaluation and has a wide applicability for the evaluation of single- and multi-model database systems in realistic settings.</p>
    <sec id="sec-4">
      <title>5. Related Work</title>
      <p>
        One of the first prominent benchmarks for evaluating database management systems was the Wisconsin [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] benchmark, introduced in 1983. The space of multi-model database evaluation, in contrast, has a rather short history. One of the first was BigBench [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], introduced by TPC as TPCx-BB. BigBench uses a schema which combines structured, semi-structured and unstructured data. But besides TPC, there has been an increase in work which provides benchmarks similar to the one proposed in this paper. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], a benchmark using key-value, column, document and graph data is used to compare ArangoDB9 and OrientDB10 against a combination of single-model databases, using a proposed synthetically generated benchmark. They were able to show that, depending on the scenario, multi-model databases can be faster than configurations combining multiple single-model database systems. UniBench [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] targets the same data models as MMSBench-Net, but also considers key-value and XML data. It puts great effort in modeling an as realistic as possible social-commerce scenario. M2Bench [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] relies heavily on existing benchmark datasets and extends the data models used by UniBench by introducing the array model into its evaluation.
      </p>
      <p>9https://www.arangodb.com/
10https://orientdb.org/</p>
    </sec>
    <sec id="sec-5">
      <title>6. Future Work</title>
      <p>Our goal for MMSBench-Net is to extend it into a benchmarking suite that offers various real-world usage scenarios for multi-model data management. However, there are some limitations of MMSBench-Net that we need to address in the future.</p>
      <p>First, the minimal set of queries that we have chosen
for evaluation may not be representative of all possible
ways to query multi-model systems. In future
evaluations, we should include a more diverse set of queries to
reflect the range of possibilities when querying these
systems. This would provide more comprehensive results
and strengthen the obtained conclusions.</p>
      <p>Second, the current composition of workloads is too broad and general to allow for nuanced comparisons of multi-model systems. We need to create more fine-grained workloads that focus on specific aspects of the data models to capture the subtle differences between these systems.</p>
      <p>In addition to the limitations of the benchmark, our
evaluation only compared two systems, leaving a lot of
unexplored territory. Future evaluations should include
additional systems such as ArangoDB and OrientDB to
gain more insights into their performance. Although not
all multi-model databases support the same data models,
it is possible to use parts of unsupported data models or
substitute them with other models to expand the range
of systems that can be evaluated.</p>
      <p>Lastly, we should consider evaluating configurations that use a combination of multiple single-model databases to facilitate interesting comparisons. By addressing these limitations, we can develop a more comprehensive and nuanced benchmarking suite that offers a more accurate evaluation of multi-model systems.</p>
    </sec>
    <sec id="sec-6">
      <title>7. Conclusion</title>
      <p>In this paper, we introduced MMSBench-Net, a new benchmark specifically tailored to multi-model database systems, based on the scenario of a network monitoring application. Our evaluation of Polypheny and SurrealDB demonstrates the effectiveness and applicability of the proposed benchmark.</p>
      <p>Our research represents an important first step towards establishing a comprehensive evaluation methodology for multi-model database systems. The proposed benchmark allows for a fair comparison of different systems, and our results provide insights into the performance of Polypheny and SurrealDB under different workloads. Ultimately, this benchmark will guide the development and evaluation of novel multi-model database systems.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was partly supported by the SNSF
(“PolyphenyDDI: A Flexible Polystore-based Distributed Data
Infrastructure”, grant no. 200020_213121). The authors would
like to thank R. Arnold, R. Gasser, S. Heller, L. Sauter,
F. Spiess and A. Mbilinyi for their valuable feedback.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E. F.</given-names>
            <surname>Codd</surname>
          </string-name>
          ,
          <article-title>A relational model of data for large shared data banks</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>13</volume>
          (
          <year>1970</year>
          )
          <fpage>377</fpage>
          -
          <lpage>387</lpage>
          . doi:10/dwxst4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T. P. P.</given-names>
            <surname>Council</surname>
          </string-name>
          ,
          <source>TPC benchmark c revision 5</source>
          .11,
          <year>2010</year>
          . URL: https://tpc.org/tpc_documents_current_ versions/pdf/tpc-c_
          <year>v5</year>
          .
          <fpage>11</fpage>
          .0.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T. P. P.</given-names>
            <surname>Council</surname>
          </string-name>
          ,
          <source>TPC benchmark h standard revision 3.0.1</source>
          ,
          <year>2022</year>
          . URL: https://tpc.org/tpc_documents_ current_versions/pdf/tpc-h
          <source>_v3.0</source>
          .1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Silberstein</surname>
          </string-name>
          , E. Tam,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramakrishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sears</surname>
          </string-name>
          ,
          <article-title>Benchmarking cloud serving systems with YCSB</article-title>
          ,
          <source>in: Proc. SoCC'10</source>
          , ACM Press,
          <year>2010</year>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>154</lpage>
          . doi:10/cxjrfd.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vogt</surname>
          </string-name>
          ,
          <article-title>Adaptive Management of Multimodel Data and Heterogeneous Workloads</article-title>
          ,
          <source>Ph.D. thesis</source>
          , University of Basel,
          <year>2022</year>
          . doi:10/j44k.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vogt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schönholz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lengweiler</surname>
          </string-name>
          , I. Geissmann,
          <string-name>
            <given-names>S.</given-names>
            <surname>Philipp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stiemer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schuldt</surname>
          </string-name>
          , Polypheny-DB:
          <article-title>Towards bridging the gap between polystores and HTAP systems</article-title>
          ,
          <source>in: Proc. Poly'21, LNCS</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>36</lpage>
          . doi:10/gnxv2h.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vogt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lengweiler</surname>
          </string-name>
          , I. Geissmann,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hennemann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mendelin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Philipp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schuldt</surname>
          </string-name>
          ,
          <article-title>Polystore systems and DBMSs: Love marriage or marriage of convenience?</article-title>
          ,
          <source>in: Proc. Poly'21</source>
          , volume
          <volume>12921</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>69</lpage>
          . doi:10/gn8qvm.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vogt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stiemer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Coray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schuldt</surname>
          </string-name>
          ,
          <article-title>Chronos: The swiss army knife for database evaluations</article-title>
          ,
          <source>in: Proc. EDBT'20</source>
          , OpenProceedings.org,
          <year>2020</year>
          , pp.
          <fpage>583</fpage>
          -
          <lpage>586</lpage>
          . doi:10/g8w5.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W.</given-names>
            <surname>Felter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rajamony</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rubio</surname>
          </string-name>
          ,
          <article-title>An updated performance comparison of virtual machines and Linux containers</article-title>
          ,
          <source>in: Proc. ISPASS'15</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>171</fpage>
          -
          <lpage>172</lpage>
          . doi:10/gfvg6d.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Boral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>DeWitt</surname>
          </string-name>
          ,
          <article-title>A methodology for database system performance evaluation</article-title>
          ,
          <source>in: Proc. SIGMOD'84</source>
          ,
          <publisher-name>ACM</publisher-name>
          ,
          <year>1984</year>
          , pp.
          <fpage>176</fpage>
          -
          <lpage>185</lpage>
          . doi:10/fk5fbn.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Baru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bhandarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Curino</surname>
          </string-name>
          , et al.,
          <article-title>Discussion of BigBench: A Proposed Industry Standard Performance Benchmark for Big Data</article-title>
          ,
          <source>in: Performance Characterization and Benchmarking. Traditional to Big Data</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>44</fpage>
          -
          <lpage>63</lpage>
          . doi:10/j44q.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>del Val Cura</surname>
          </string-name>
          ,
          <article-title>Performance Evaluation of NoSQL Multi-Model Data Stores in Polyglot Persistence Applications</article-title>
          ,
          <source>in: Proc. IDEAS'16</source>
          ,
          <publisher-name>ACM</publisher-name>
          ,
          <year>2016</year>
          , pp.
          <fpage>230</fpage>
          -
          <lpage>235</lpage>
          . doi:10/j44n.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>UniBench: A Benchmark for Multi-model Database Management Systems</article-title>
          ,
          <source>in: Performance Evaluation and Benchmarking for the Era of Artificial Intelligence</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>23</lpage>
          . doi:10/j44m.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Koo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Enkhbat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Moon</surname>
          </string-name>
          ,
          <article-title>M2Bench: A Database Benchmark for Multi-Model Analytic Workloads</article-title>
          ,
          <source>Proceedings of the VLDB Endowment</source>
          <volume>16</volume>
          (
          <year>2022</year>
          )
          <fpage>747</fpage>
          -
          <lpage>759</lpage>
          . doi:10/j44p.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>