<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">MMSBench-Net: Scenario-Based Evaluation of Multi-Model Database Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">David</forename><surname>Lengweiler</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Computer Science</orgName>
								<orgName type="institution">University of Basel</orgName>
								<address>
									<addrLine>Spiegelgasse 1</addrLine>
									<postCode>4051</postCode>
									<settlement>Basel</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marco</forename><surname>Vogt</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Computer Science</orgName>
								<orgName type="institution">University of Basel</orgName>
								<address>
									<addrLine>Spiegelgasse 1</addrLine>
									<postCode>4051</postCode>
									<settlement>Basel</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Heiko</forename><surname>Schuldt</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Computer Science</orgName>
								<orgName type="institution">University of Basel</orgName>
								<address>
									<addrLine>Spiegelgasse 1</addrLine>
									<postCode>4051</postCode>
									<settlement>Basel</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">MMSBench-Net: Scenario-Based Evaluation of Multi-Model Database Systems</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">66C6B7D7308C86A496346D1570FE98EE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Database benchmark</term>
					<term>Polystore</term>
					<term>Multi-model database</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Multi-model database systems have gained increasing popularity due to their efficient management of diverse types of data and support for complex queries. They offer a unified approach for managing data in various formats, including structured, semi-structured, and unstructured data. However, benchmarking the performance of such systems is a challenging task, given their complexity, mainly due to their support for multiple data models. While significant research exists for benchmarking single-model databases, a comprehensive approach for evaluating multi-model databases is still in an early stage. To address this challenge, we propose MMSBench-Net, a benchmark for evaluating multi-model database systems that support structured relational, semi-structured document, and graph data models. MMSBench-Net enables comparative analysis of database systems and demonstrates how different workloads can reveal the strengths and weaknesses of multi-model database systems. To demonstrate the effectiveness of the benchmark, we compare the performance of two database systems: Polypheny and SurrealDB. Our work is a first step towards a comprehensive evaluation methodology for multi-model database systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The field of data management has experienced a significant transformation in recent years. While relational databases continue to dominate the market, more specialized systems have emerged. Two data models that have gained substantial popularity are the graph and the document model. These data models allow data to be represented and queried differently than in the relational model <ref type="bibr" target="#b0">[1]</ref>. However, these new data models are by no means an evolution of the relational model. Consequently, use cases that could be modeled optimally with the relational model might only be modeled poorly with the graph or the document model. As a result, database management systems supporting multiple data models have gained popularity. These multi-model database systems allow applications to manage their data in a way that best suits the specific domains, but they also introduce greater complexity. While there are well-established benchmarks like TPC-C <ref type="bibr" target="#b1">[2]</ref>, TPC-H <ref type="bibr" target="#b2">[3]</ref> and YCSB <ref type="bibr" target="#b3">[4]</ref> for single-model databases, the set of benchmarks targeting multi-model databases is very limited. Existing benchmarks for multi-model databases often focus on specific data models, which restricts the range of systems that can be evaluated. Moreover, these benchmarks typically involve complex scenarios that lack fine-grained workload adjustments, limiting their usefulness for detailed evaluations and only allowing for broad comparisons.</p><p>34th GI-Workshop on Foundations of Databases (Grundlagen von Datenbanken), June 7-9, 2023, Hirsau, Germany. Contact: david.lengweiler@unibas.ch (D. Lengweiler); marco.vogt@unibas.ch (M. Vogt); heiko.schuldt@unibas.ch (H. Schuldt). ORCID: 0009-0004-0588-8210 (D. Lengweiler); 0000-0002-2674-2219 (M. Vogt); 0000-0001-9865-6371 (H. Schuldt).</p><p>This paper makes two contributions: Firstly, we propose a benchmark called MMSBench-Net that is tailored to benchmarking multi-model database systems. It is based on a real-world scenario that deals with relational, document and graph data. Secondly, we demonstrate the utility of our benchmark by comparing the performance of two multi-model database systems, Polypheny 1 and SurrealDB 2, and discuss the results.</p><p>The remainder of this paper is structured as follows: In Section 2, we introduce MMSBench-Net, discuss the underlying scenario and present the data and workload that are being generated. In Section 3, we briefly introduce the two multi-model database systems subject to the benchmark evaluation presented in this paper. Section 4 then presents and discusses the obtained results. The paper concludes with an overview of related work in Section 5, an outlook towards future work in Section 6 and a conclusion in Section 7.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Benchmark</head><p>To evaluate the performance of multi-model database systems, we propose MMSBench-Net, a benchmark that assesses their ability to manage structured relational, semi-structured document, and graph data models. MMSBench-Net is designed to evaluate the efficiency and versatility of multi-model database systems under different workloads. The "-Net" suffix refers to the first scenario introduced in this paper. We plan to add more scenarios (and thus suffixes) in the future, leading to a complete suite.</p><p>The MMSBench-Net benchmark consists of a set of queries that reflect real-world use cases across the three data models. These queries are designed to evaluate various aspects of multi-model database systems, including their ability to handle complex data structures, support complex queries, and efficiently execute transactions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Scenario</head><p>MMSBench-Net is inspired by a real-world scenario of a company's network monitoring application. Network monitoring plays a vital role in identifying and addressing potential issues, threats and vulnerabilities in the network infrastructure, ensuring smooth operations and preventing data breaches or downtime. The monitoring application continuously collects all kinds of information about the network, including logged-in devices, usage statistics and log messages, resulting in huge amounts of heterogeneous data.</p><p>The network monitoring application modeled by MMSBench-Net maintains data in three data models: (i) a graph part, modeling the topology of the network, (ii) a document part, which consists of semi-structured logs produced by the devices, and (iii) a relational part, which holds basic information about the users and recorded data about their access patterns. The complete schema is depicted in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>The topology of the network is saved as a graph, where each device in the network is represented as a node, and a network connection between two devices is modeled as an edge. For both the devices modeled as nodes and the connections modeled as edges, additional information is stored, such as the "manufacturer" of a device.</p><p>At irregular intervals, each device produces a semi-structured log entry containing information about its current state. An example of such a log can be seen in Figure <ref type="figure" target="#fig_1">2</ref>. These logs might contain error information, indicating problems with the device. All log entries include the properties deviceId, timestamp and users. However, additional properties with varying levels of nesting are randomly generated for each log entry.</p><p>An important piece of information for monitoring a network is which person is currently associated with which devices. For this scenario, we assume a rather simple user database represented as a relational table containing information on the employees. Furthermore, there is also a table for recording successful and failed login attempts and for accounting for the usage of devices. Hence, this scenario requires the database system to deal with heterogeneous read and write workloads.</p></div>
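To make the shape of these log entries concrete, the following Python sketch builds such a semi-structured status log. It is our own illustration: only deviceId, timestamp and users are fixed by the scenario; the optional property names (error, state, etc.) and probabilities are hypothetical.

```python
import random

# The three properties every log entry is guaranteed to carry.
REQUIRED_KEYS = {"deviceId", "timestamp", "users"}

def make_status_log(device_id, timestamp, users, rng):
    """Build a semi-structured status log: the required keys plus
    randomly chosen, possibly nested optional properties (names are illustrative)."""
    log = {"deviceId": device_id, "timestamp": timestamp, "users": users}
    if rng.random() < 0.5:  # optionally attach an error block
        log["error"] = {"code": rng.randint(100, 599), "fatal": rng.random() < 0.1}
    if rng.random() < 0.5:  # optionally attach nested state information
        log["state"] = {"cpu": {"load": round(rng.random(), 2)},
                        "memoryFreeMb": rng.randint(0, 4096)}
    return log

rng = random.Random(42)
log = make_status_log(3, "2017-07-23T10:15:00", ["alice"], rng)
assert REQUIRED_KEYS <= log.keys()
```

Because each optional branch is drawn from a seeded random generator, two runs with the same seed produce the same log entry, which matters for keeping workloads identical across systems.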
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Schema and Data Generation</head><p>To generate the schema and to populate it with realistic, but artificially created data, MMSBench-Net starts by building a simulation. This simulation includes the graph representing the network that is being monitored, as well as the users interacting with it. The nodes in the graph represent devices (e.g., computers, mobile phones, switches, and routers). The edges between these nodes represent network connections between these devices. The simulation utilizes the defined topology to generate meaningful workloads. By making changes to this topology, it becomes possible to adjust the distribution of available targets for the queries. This makes it easy to align a workload with specific requirements and a desired focus.</p><p>The process of generating this simulated network consists of multiple steps:</p><p>User Generation First, a configurable number of users is generated.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Generation of Devices</head><p>For each type of device (e.g., switches, computers), a random number (within a configurable range) of devices is generated.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Device Properties and Logs Generation</head><p>For each device, a random set of properties is generated. Furthermore, a set of login logs as well as status logs is added.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Generation of Connections</head><p>According to the layout of the network, multiple pairs of devices are selected and connections between them are created.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Connection Properties Generation</head><p>In contrast to the devices, connections do not have status logs, but they, too, carry multiple dynamic properties.</p><p>Once the generation of the network is done, it is used as a template to create the workload. Each workload consists of queries in a query language supported by the system under evaluation.</p><p>A distinction is made between the three data models. First, the graph data is handled as already seen in Figure <ref type="figure" target="#fig_0">1</ref>. For this, each device is represented as a node and each connection is translated to an edge which connects them. The small set of dynamic properties is inserted directly as part of these nodes and edges (if properties are not supported by the data model, they are handled as if they were unstructured data). Then, all generated device logs (an example can be seen in Figure <ref type="figure" target="#fig_1">2</ref>) are translated to document queries. Each entity of type device translates its nested status logs into multiple document queries, each containing a timestamp and the ID of the device.</p><p>Finally, all login records are collected from the devices and, together with the user data itself, are translated into relational queries. The collected queries are then sequentially executed on the database systems.</p></div>
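The multi-step generation process described above can be sketched roughly as follows. This is a minimal Python illustration under our own assumptions: the device types, count ranges, property names, and the random pairing of connections are hypothetical stand-ins, not the benchmark's actual defaults or topology layout.

```python
import random

def build_network(num_users=10, device_ranges=None, seed=0):
    """Sketch of the generation pipeline: users, then devices per type
    (with properties and logs), then random connections between devices."""
    rng = random.Random(seed)  # same seed -> identical network for every system
    device_ranges = device_ranges or {"switch": (2, 5), "computer": (10, 20)}
    # Step 1: user generation
    users = [f"user{i}" for i in range(num_users)]
    # Steps 2-3: devices per type, each with properties and logs
    devices = []
    for dtype, (lo, hi) in sorted(device_ranges.items()):
        for _ in range(rng.randint(lo, hi)):
            dev_id = len(devices)
            devices.append({
                "id": dev_id,
                "type": dtype,
                "props": {"manufacturer": rng.choice(["acme", "globex"])},
                "logs": [{"deviceId": dev_id, "timestamp": t,
                          "users": [rng.choice(users)]}
                         for t in range(rng.randint(1, 3))],
            })
    # Steps 4-5: connections between random device pairs
    # (a real topology would be layered, e.g. routers -> switches -> hosts)
    edges = set()
    while len(edges) < len(devices):
        a, b = rng.sample(range(len(devices)), 2)
        edges.add((min(a, b), max(a, b)))
    return {"users": users, "devices": devices, "edges": sorted(edges)}
```

Seeding the generator is the point of the sketch: it lets the same simulated network be replayed as a template for every system under evaluation.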
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Workload Generation</head><p>A workload consists of a collection of randomly chosen queries according to a configurable distribution. Since the order in which queries are executed can impact the performance of a database system (e.g., due to concurrency effects and locking), the implementation needs to make sure that the workload is identical for all systems under evaluation (e.g., by using the same seed). MMSBench-Net uses a variety of queries to build its workloads:</p><p>Read Device or Connection Selects a device or connection and retrieves it partially or fully. One of the static parameters is chosen for this.</p><p>Read Log Selects a device and reads all or parts of its logs. Filters as well as projections of underlying keys are chosen from the target device.</p><p>Remove Device Selects a device and deletes it; all connections to this device are deleted as well. Its logs are also deleted, while information on login attempts is kept.</p><p>Remove Connection Randomly selects a connection between two network devices and deletes it.</p><p>Add Device Adds a device to the network and generates new connections to existing devices.</p><p>Remove Logs Randomly selects a device and deletes some of its logs.</p><p>Add Logs Creates a random log message and adds it to existing devices or connections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Add User</head><p>Creates a new user. All attributes are randomly generated.</p><p>Remove User Randomly selects a user and deletes them.</p><p>Change User Randomly selects a user and adjusts an attribute.</p><p>Besides these simple queries, there are also more complex retrieval operations which can be chosen; their frequency is also configurable.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Connectivity Checks "Find all similar connected devices" or "Find connected device of specific type"</head><p>Error Analysis "Identify the top 10 most common errors" or "Calculate the percentage of errors caused by each user"</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Login Activity "Successful logins by user and month" or "Average duration of successful logins by user and hour of the day"</head><p>First, the actions are selected and applied to the simulated network, while concurrently being captured and converted into queries for the evaluated systems. Once the simulation concludes, the gathered queries are distributed across a configurable number of available threads and executed on the evaluated system. The execution time for each query is measured individually and recorded for subsequent analysis. This facilitates a comprehensive analysis of various aspects of the database systems. Each iteration of this workload generation and execution process is referred to as a cycle; in an evaluation, multiple cycles can be chained together to construct more extensive workloads.</p></div>
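The cycle just described, i.e. drawing query kinds from a seeded, configurable distribution and executing the captured queries on a configurable number of threads while timing each one, can be sketched as follows. This is an illustrative outline only: the function names, the distribution values, and the `execute` adapter interface are our own assumptions, not MMSBench-Net's actual API.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def generate_workload(distribution, n, seed):
    """Draw n query kinds according to a configurable weight distribution.
    A fixed seed keeps the workload identical across all evaluated systems."""
    rng = random.Random(seed)
    kinds = sorted(distribution)          # stable order for reproducibility
    weights = [distribution[k] for k in kinds]
    return rng.choices(kinds, weights=weights, k=n)

def run_cycle(queries, execute, threads=4):
    """Execute the captured queries via `execute` (a system-specific adapter)
    on a configurable number of threads, timing each query individually."""
    def timed(q):
        start = time.perf_counter()
        execute(q)
        return q, time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(timed, queries))  # (query, runtime) per query
```

Recording a per-query runtime, rather than only the total, is what enables the grouped-by-model analysis discussed in the evaluation.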
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Evaluated Systems</head><p>To showcase the capabilities of MMSBench-Net, two multi-model databases have been chosen to be evaluated: Polypheny and SurrealDB. These two systems have been selected since they follow completely opposite approaches for implementing multiple data models beneath one facade. While Polypheny maintains the individual models independently, SurrealDB follows a more monolithic approach by combining all data models in one unified model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Polypheny</head><p>Polypheny <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref> is a PolyDBMS <ref type="bibr" target="#b6">[7]</ref>, which is a multi-model database system built according to the architecture principle of a polystore and supporting multiple query languages. Data can be represented according to the relational, the document and the labeled-property graph data models. Polypheny utilizes multiple highly optimized database systems like HyperSQL<ref type="foot" target="#foot_0">3</ref>, MongoDB, Neo4j, and PostgreSQL as storage and execution engines. To achieve competitive performance, Polypheny pushes queries down to these underlying data stores. Queries not supported by the underlying data store are executed within Polypheny itself. Polypheny also provides support for transactions with ACID guarantees.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">SurrealDB</head><p>SurrealDB is a multi-model database management system that provides traditional database guarantees, such as ACID transactions, persistent data storage, and fine-grained data access control. Its primary objective is to provide fast performance while adhering to these guarantees. It also supports unstructured data and basic graph functionality, which makes it a suitable choice for this comparison. SurrealDB was designed with the goal of reducing the number of joins required for retrieval queries. It accomplishes this objective by utilizing a graph structure that allows a tuple to link to any other tuple. SurrealQL, a SQL-like query language, is the primary means of interacting with the system, which can be accessed through either a REST or a web socket interface.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head><p>Our evaluation uses Chronos <ref type="bibr" target="#b7">[8]</ref>, an 'evaluation-as-a-service' framework which allows different system evaluations and configurations to be easily executed in parallel. To achieve this, it manages a collection of nodes, which are used to execute these different evaluation configurations. The evaluation machines used for obtaining the results presented in this paper are equipped with an Intel Xeon X5650 24-core CPU with 24 GiB of RAM. All machines run Ubuntu 22.04 LTS (with kernel version 5.15.0-37) and the same patch level. As Java runtime environment, we use OpenJDK version 17.0.3. The presented numbers are the median over three runs.</p><p>Each run uses either a SurrealDB instance in a Docker<ref type="foot" target="#foot_1">4</ref> container, deployed from scratch and configured to use persistent on-file storage, or a fresh Polypheny instance. The Polypheny instance uses a MongoDB<ref type="foot" target="#foot_2">5</ref> store for the document data, a Neo4j<ref type="foot" target="#foot_3">6</ref> store for the graph data, and a PostgreSQL<ref type="foot" target="#foot_4">7</ref> store for the relational data. Each of these stores is deployed by Polypheny using Docker containers; this requires less setup than bare-metal deployments and achieves similar performance <ref type="bibr" target="#b8">[9]</ref>. Both Polypheny and SurrealDB have indexes on their primary keys. We provide a reference implementation of the benchmark, including all configurations and the raw results<ref type="foot" target="#foot_5">8</ref>.</p><p>As a first overview comparison, the default configuration of the benchmark, simulating a network with 10 users and around 65 devices, is used. All scaling parameters are configured to only allow for a slight growth of the network. The different runtimes after multiple cycles of workloads can be seen in Figure <ref type="figure">3</ref>.</p><p>With such a small network and thus a low number of queries, SurrealDB manages to execute the workloads faster than Polypheny, even when the number of queries increases. If one groups the results by query model, Polypheny is faster than SurrealDB for the relational queries, as can be seen in Figure <ref type="figure">4</ref>. However, in most real-world scenarios, the network starts with significantly more than the 10 users used by default. Thus, the number of users is increased, leading to a higher number of user logins and therefore more relational workload. With an increasing ratio of relational workload, Polypheny is able to perform similarly to SurrealDB. The behavior is similar if the number of devices in the network is increased. While this does not increase the ratio of the relational workload compared to the other data models, it still results in better overall performance of Polypheny, which is depicted in Figure <ref type="figure">5</ref>. Figure <ref type="figure">6</ref> depicts a comparison of different ratios of complex queries in the workload.</p><p>The results obtained from the evaluation of these two quite different systems confirm the concepts of the MMSBench-Net benchmark, in particular that it is agnostic to the concrete database under evaluation and has wide applicability for the evaluation of single- and multi-model database systems in realistic settings.</p></div>
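Reporting the median over three runs, grouped by query model, amounts to a simple aggregation over the recorded per-query timings. The sketch below illustrates this with made-up numbers; it is not the measured data from the evaluation.

```python
from statistics import median

def median_runtime_by_model(runs):
    """runs: one {model: total_runtime} dict per benchmark run.
    Returns the per-model median across runs, as reported in the paper."""
    models = runs[0].keys()
    return {m: median(r[m] for r in runs) for m in models}

# Illustrative numbers only (seconds), not measured results.
runs = [{"relational": 1.2, "document": 3.0, "graph": 2.1},
        {"relational": 1.0, "document": 3.4, "graph": 2.0},
        {"relational": 1.1, "document": 3.2, "graph": 2.2}]
assert median_runtime_by_model(runs) == {"relational": 1.1,
                                         "document": 3.2,
                                         "graph": 2.1}
```

Using the median rather than the mean makes the reported numbers robust against a single outlier run, e.g. one slowed down by container startup effects.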
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Related Work</head><p>One of the first prominent benchmarks for evaluating database management systems was the Wisconsin <ref type="bibr" target="#b9">[10]</ref> benchmark, introduced in 1983. The space of multi-model database evaluation, in contrast, has a rather short history. One of the first benchmarks in this space was BigBench <ref type="bibr" target="#b10">[11]</ref>, standardized by the TPC as TPCx-BB. BigBench uses a schema which combines structured, semi-structured and unstructured data. Besides the TPC, there has also been an increase in work providing benchmarks similar to the one proposed in this paper. In <ref type="bibr" target="#b11">[12]</ref>, a synthetically generated benchmark using key-value, column, document and graph data is proposed and used to compare ArangoDB <ref type="foot" target="#foot_6">9</ref> and OrientDB<ref type="foot" target="#foot_7">10</ref> against a combination of single-model databases. The authors were able to show that, depending on the scenario, multi-model databases can be faster than configurations combining multiple single-model database systems. UniBench <ref type="bibr" target="#b12">[13]</ref> targets the same data models as MMSBench-Net, but also considers key-value and XML data. It puts great effort into modeling a social-commerce scenario as realistically as possible. M2Bench <ref type="bibr" target="#b13">[14]</ref> relies heavily on existing benchmark datasets and extends the data models used by UniBench by introducing the array model into its evaluation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Future Work</head><p>Our goal for MMSBench-Net is to extend it into a benchmarking suite that offers various real-world usage scenarios for multi-model data management. However, there are some limitations with MMSBench-Net that we need to address in the future.</p><p>First, the minimal set of queries that we have chosen for evaluation may not be representative of all possible ways to query multi-model systems. In future evaluations, we should include a more diverse set of queries to reflect the range of possibilities when querying these systems. This would provide more comprehensive results and strengthen the obtained conclusions.</p><p>Second, the current composition of workloads is too broad and general to allow for nuanced comparisons of multi-model systems. We need to create more finegrained workloads that focus on specific aspects of data models to capture the subtle differences between these systems.</p><p>In addition to the limitations of the benchmark, our evaluation only compared two systems, leaving a lot of unexplored territory. Future evaluations should include additional systems such as ArangoDB and OrientDB to gain more insights into their performance. Although not all multi-model databases support the same data models, it is possible to use parts of unsupported data models or substitute them with other models to expand the range of systems that can be evaluated.</p><p>Lastly, we should consider evaluating configurations that use a combination of multiple single-model databases to facilitate interesting comparisons. By addressing these limitations, we can develop a more comprehensive and nuanced benchmarking suite that offers a more accurate evaluation of multi-model systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>In this paper, we introduced MMSBench-Net, a new benchmark specifically tailored to benchmarking multi-model database systems that is based on the scenario of a network monitoring application. Our evaluation of Polypheny and SurrealDB demonstrates the effectiveness and applicability of the proposed benchmark.</p><p>Our research represents an important first step towards establishing a comprehensive evaluation methodology for multi-model database systems. The proposed benchmark allows for a fair comparison of different systems, and our results provide insights into the performance of Polypheny and SurrealDB under different workloads. Ultimately, this benchmark will guide the development and evaluation of novel multi-model database systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: Multi-Model Schema of MMSBench-Net</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: Example Status Log Showing an Error</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 / Figure 4:</head><label>3, 4</label><figDesc>Figure 3: Runtime with Increasing Number of Cycles</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 / Figure 6:</head><label>5, 6</label><figDesc>Figure 5: Runtime with Increasing Number of Devices</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">https://hsqldb.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">https://www.docker.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_2">https://www.mongodb.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3">https://neo4j.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_4">https://www.postgresql.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_5">https://download-dbis.dmi.unibas.ch/paper/GvDB23.zip</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_6">https://www.arangodb.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_7">https://orientdb.org/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was partly supported by the SNSF ("Polypheny-DDI: A Flexible Polystore-based Distributed Data Infrastructure", grant no. 200020_213121). The authors would like to thank R. Arnold, R. Gasser, S. Heller, L. Sauter, F. Spiess and A. Mbilinyi for their valuable feedback.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A relational model of data for large shared data banks</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">F</forename><surname>Codd</surname></persName>
		</author>
		<idno>doi:10/dwxst4</idno>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="377" to="387" />
			<date type="published" when="1970">1970</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P P</forename><surname>Council</surname></persName>
		</author>
		<ptr target="https://tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf" />
		<title level="m">TPC benchmark c revision 5</title>
				<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">11</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P P</forename><surname>Council</surname></persName>
		</author>
		<ptr target="https://tpc.org/tpc_documents_current_versions/pdf/tpc-h_v3.0.1.pdf" />
		<title level="m">TPC benchmark h standard revision 3</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Benchmarking cloud serving systems with YCSB</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">F</forename><surname>Cooper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Silberstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Tam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ramakrishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sears</surname></persName>
		</author>
		<idno>doi:10/cxjrfd</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. SoCC&apos;10</title>
				<meeting>SoCC&apos;10</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="143" to="154" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Adaptive Management of Multimodel Data and Heterogeneous Workloads</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vogt</surname></persName>
		</author>
		<idno>doi:10/j44k</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
		<respStmt>
			<orgName>University of Basel</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Polypheny-DB: Towards bridging the gap between polystores and HTAP systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vogt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Hansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schönholz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lengweiler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Geissmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Philipp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Stiemer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schuldt</surname></persName>
		</author>
		<idno>doi:10/gnxv2h</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. Poly&apos;21</title>
		<meeting>Poly&apos;21</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="25" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Polystore systems and DBMSs: Love marriage or marriage of convenience?</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vogt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lengweiler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Geissmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Hansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hennemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mendelin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Philipp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schuldt</surname></persName>
		</author>
		<idno>doi:10/gn8qvm</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. Poly&apos;21</title>
		<meeting>Poly&apos;21</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">12921</biblScope>
			<biblScope unit="page" from="65" to="69" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Chronos: The swiss army knife for database evaluations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vogt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Stiemer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Coray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schuldt</surname></persName>
		</author>
		<idno>doi:10/g8w5</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. EDBT&apos;20</title>
		<meeting>EDBT&apos;20</meeting>
		<imprint>
			<publisher>OpenProceedings.org</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="583" to="586" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">An updated performance comparison of virtual machines and Linux containers</title>
		<author>
			<persName><forename type="first">W</forename><surname>Felter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ferreira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rajamony</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rubio</surname></persName>
		</author>
		<idno>doi:10/gfvg6d</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. ISPASS&apos;15</title>
		<meeting>ISPASS&apos;15</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="171" to="172" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A methodology for database system performance evaluation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Boral</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Dewitt</surname></persName>
		</author>
		<idno>doi:10/fk5fbn</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. SIGMOD&apos;84</title>
		<meeting>SIGMOD&apos;84</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="1984">1984</date>
			<biblScope unit="page" from="176" to="185" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Discussion of BigBench: A Proposed Industry Standard Performance Benchmark for Big Data</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bhandarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Curino</surname></persName>
		</author>
		<idno>doi:10/j44q</idno>
	</analytic>
	<monogr>
		<title level="m">Performance Characterization and Benchmarking. Traditional to Big Data</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="44" to="63" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Performance Evaluation of NoSQL Multi-Model Data Stores in Polyglot Persistence Applications</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">R</forename><surname>Oliveira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Del Val Cura</surname></persName>
		</author>
		<idno>doi:10/j44n</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. IDEAS&apos;16</title>
		<meeting>IDEAS&apos;16</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="230" to="235" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">UniBench: A Benchmark for Multi-model Database Management Systems</title>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<idno>doi:10/j44m</idno>
	</analytic>
	<monogr>
		<title level="m">Performance Evaluation and Benchmarking for the Era of Artificial Intelligence</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="7" to="23" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">M2Bench: A Database Benchmark for Multi-Model Analytic Workloads</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Koo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Enkhbat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Moon</surname></persName>
		</author>
		<idno>doi:10/j44p</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the VLDB Endowment</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="747" to="759" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
