<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Assessing Non-volatile Memory in Modern Heterogeneous Storage Landscape using a Write-optimized Storage Stack</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sajad</forename><surname>Karim</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Databases and Software Engineering</orgName>
								<orgName type="institution">Otto von Guericke University Magdeburg</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Johannes</forename><surname>Wünsche</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Parallel Computing and I/O</orgName>
								<orgName type="institution">Otto von Guericke University Magdeburg</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">David</forename><surname>Broneske</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Databases and Software Engineering</orgName>
								<orgName type="institution">Otto von Guericke University Magdeburg</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Kuhn</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Parallel Computing and I/O</orgName>
								<orgName type="institution">Otto von Guericke University Magdeburg</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gunter</forename><surname>Saake</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Databases and Software Engineering</orgName>
								<orgName type="institution">Otto von Guericke University Magdeburg</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Assessing Non-volatile Memory in Modern Heterogeneous Storage Landscape using a Write-optimized Storage Stack</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">5376C36FD1E8972A0B79E28D5AD6BC5D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Non-Volatile Memory</term>
					<term>Persistent Memory</term>
					<term>Storage Class Memory</term>
					<term>Non-Volatile Random Access Memory</term>
					<term>Persistent Memory Programming</term>
					<term>Modern Heterogeneous Storage Landscape</term>
					<term>Write-Optimized Storage Engine (Haura)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Non-volatile memory (NVM), or persistent memory, is a promising and emerging storage technology that has not only disrupted the long-established memory hierarchy but also invalidated the prevalent programming paradigm used in traditional database management systems and file systems. It bridges the gap between primary and secondary storage and, hence, shares the characteristics of both categories. However, there currently exists no common storage engine built particularly to study the characteristics of the modern storage landscape, which has become more heterogeneous with the addition of NVM. Therefore, a general-purpose storage engine, Haura, is utilized to study the benefits of the modern storage landscape. In this work, NVM is integrated into the storage stack of Haura, and the access patterns of the storage devices involved and their impact on the performance of Haura are studied. Our work shows that NVM performs best under sequential workloads, whereas random access works better with larger block sizes. Furthermore, the block size has a significant impact on the performance of storage devices, with smaller block sizes favoring NVM and larger block sizes favoring NVMe-supported devices.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>According to Kazemi <ref type="bibr" target="#b0">[1]</ref>, the volume of data in 2011 was around 1.8 Zettabytes, and it doubles approximately every two years. At least, this was the case before the onset of COVID-19, whose outbreak further fueled this growth as the use of digital services rose exponentially. This data deluge is unprecedented and has created new challenges for database management systems and file systems, which are used in a wide range of applications for data analysis and management.</p><p>Traditional database management systems and file systems are developed for the typical storage hierarchy, where memory is fast but volatile and limited, and secondary storage is persistent and vast but has high latency. In such systems, the data is logically split into two copies: a working copy and a consistent copy. The working copy resides in main memory, whereas the persistent copy resides on one or more secondary storage devices. Making data persistent is an error-prone process, as problems like crashes and race conditions can corrupt data or leave it in an inconsistent state. Therefore, strategies like journaling and copy-on-write are used to ensure the consistency of data. Moreover, the scalability of DRAM gave rise to main memory database systems <ref type="bibr" target="#b1">[2,</ref><ref type="bibr">3,</ref><ref type="bibr" target="#b3">4]</ref>. However, scaling DRAM further has become quite a challenging task <ref type="bibr" target="#b4">[5]</ref>, and because of its energy consumption, this solution is unaffordable for most businesses.</p><p>Persistent memory, on the other hand, is considered an alternative that deals with the above-mentioned issues. It is a new category in the storage hierarchy that is non-volatile, byte-addressable, provides DRAM-like latency, and offers much higher capacity than DRAM. This new storage class has not only opened opportunities for new system designs but also for enhancements in existing storage engines. For instance, some work has already been done in traditional database systems in which NVM is used to improve traditional disk-based (centralized/decentralized) logging <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9]</ref>. In prior work, NVM is used as a buffer between DRAM and secondary storage devices <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b10">11]</ref>. Moreover, several index data structures like NV-Tree <ref type="bibr" target="#b11">[12]</ref> and FPTree <ref type="bibr" target="#b12">[13]</ref> have been introduced that exploit the properties of NVM. 
Nevertheless, there is presently no common storage engine built specifically to study the characteristics of the modern storage landscape, which has become more heterogeneous after the addition of NVM; consequently, there is a research gap in this direction.</p><p>In order to investigate the benefits of a common storage engine that manages all the storage devices in the modern storage landscape, a prototype of a general-purpose storage engine, called Haura <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>, is used. It runs in user space and supports object and key-value interfaces. The key contributions of our work are as follows:</p><p>• We used PMDK (Persistent Memory Development Kit) to supplement Haura with persistent memory. PMDK is an open-source (C/C++) kit that offers different libraries/utilities to interact with persistent memory. We selected the most appropriate library after a brief evaluation. • We investigated the impact of the above-mentioned change on Haura and used persistent memory to store the B 𝜖 -tree nodes (Section 2.3). • We identified the access patterns for different storage devices that supplement or affect the throughput of Haura.</p><p>The remainder of this paper is structured as follows. Section 2 provides background on non-volatile memory and the related programming techniques, and also briefly describes Haura. The implementation is discussed in Section 3, followed by the evaluation in Section 4. Next, Section 5 details the related work, and the paper concludes with a summary and open challenges in Section 6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>In this section, we discuss non-volatile memory and different programming models to access it. We then describe Haura and briefly touch upon its key components.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Non-Volatile Memory</head><p>Persistent Memory (PMem), Storage Class Memory (SCM), Non-Volatile Memory (NVM), and Non-Volatile RAM (NVRAM) are names often used to refer to this new class of storage. It sits between primary and secondary storage in the typical storage hierarchy, and it is considered a disruptive technology as it has disrupted the traditional memory paradigm. It is non-volatile, has DRAM-like latency, and offers much higher capacity than DRAM. It is byte-addressable, and its property of being directly accessible by the CPU at cache-line granularity demands a different architecture than the one used in typical storage engines. Some example technologies are Phase Change Memory <ref type="bibr" target="#b15">[16]</ref>, Spin Transfer Torque RAM (STT-RAM) <ref type="bibr" target="#b16">[17]</ref>, Carbon NanoTube RAM (NRAM, NanoRAM) <ref type="bibr" target="#b17">[18]</ref>, and Memristors <ref type="bibr" target="#b18">[19]</ref>.</p><p>Presently, only Intel ® produces persistent memory modules, under the brand name Intel ® Optane ™ DC Persistent Memory. It offers different generations that vary in performance and capacity, and the modules are designed to be used with specific generations of Intel ® processors, Intel ® Xeon Scalable Processors, for instance.</p><p>They are available in DIMM form factor and compatible with conventional DDR4 sockets. They co-exist with conventional DDR4 DRAM DIMMs and use the same memory channel. The internal granularity of the modules is 256 bytes, and they can be operated in three different modes: memory, app direct, and dual.</p><p>In the memory mode, NVRAM supplements DRAM: DRAM acts as an L4 cache, and NVRAM serves as the main (volatile) memory. The host memory controller integrated into the processor manages the movement of data between DRAM and NVRAM. In the app direct mode, on the other hand, NVRAM acts as a persistent memory module: applications have direct access to the device, it is still byte-addressable, and applications can use it as a storage device. Lastly, in the dual mode, part of the NVRAM can be allocated to applications as persistent memory, and the rest can be utilized as volatile main memory.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Non-Volatile Memory Programming</head><p>Typical programming models categorize data structures into two broad categories: memory-resident and storage-resident data structures <ref type="bibr" target="#b19">[20]</ref>. This is mainly due to the underlying system architecture, where main memory is attached directly to the memory bus and secondary storage, due to its high latency, communicates with the system via an I/O controller. The models operate on the data in main memory at byte granularity and ensure its persistence by explicitly writing it to secondary storage. A key challenge of such models is to ensure the consistency and integrity of data across storage classes with different characteristics. For example, the integrity of data in main memory can be ensured using mutexes, whereas the consistency and durability of data on secondary storage are ensured using strategies like journaling and write-ahead logging <ref type="bibr" target="#b20">[21]</ref>.</p><p>The above-mentioned programming paradigm cannot be followed when working with persistent memory, as it is attached to the memory bus and is non-volatile. Therefore, a new model is needed that simultaneously addresses all the atomicity and consistency issues, such as concurrency and power failures <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b22">23]</ref>. Moreover, persistent memory is a comparatively new technology, and writing software with all these considerations requires in-depth knowledge of the hardware and caches. Therefore, several APIs are available that handle the hardware-related intricacies internally. PMDK<ref type="foot" target="#foot_0">1</ref> (Persistent Memory Development Kit) is one such example, which is based on the SNIA NVM programming model<ref type="foot" target="#foot_1">2</ref> .</p></div>
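<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make these hardware-related intricacies concrete, the following is a minimal sketch (our illustration, not part of PMDK or Haura) of the persist step such libraries perform on x86: every cache line covering a written buffer must be flushed, and the flushes must be ordered, before the data can be considered durable.</p><code lang="rust"><![CDATA[
// A hedged sketch of the low-level persist step that libraries like libpmem
// wrap. Real libraries pick CLWB/CLFLUSHOPT when available; CLFLUSH is used
// here only because it is the baseline intrinsic.
#[cfg(target_arch = "x86_64")]
unsafe fn persist(ptr: *const u8, len: usize) {
    use core::arch::x86_64::{_mm_clflush, _mm_sfence};
    const CACHE_LINE: usize = 64;
    // Align down to the first cache line touched by the buffer.
    let mut line = (ptr as usize) & !(CACHE_LINE - 1);
    while line < ptr as usize + len {
        _mm_clflush(line as *const u8); // write this line back to the media
        line += CACHE_LINE;
    }
    _mm_sfence(); // order the flushes before any subsequent store
}
]]></code></div>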
<div xmlns="http://www.tei-c.org/ns/1.0"><p>SNIA (Storage Networking Industry Association) proposes different programming models <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b23">24,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b22">23]</ref> to program persistent memory. The simplest way is to use the module as a block device and access it using a standard file API. Another approach is via a file system adapted specifically for persistent memory, such as ext4 and XFS on Linux and NTFS on Windows. This approach, contrary to the previous one, allows small read-and-write operations and is more efficient. Last but not least, DAX, or Direct Access, is another approach, in which the persistent memory is accessed as a memory-mapped file. Nevertheless, contrary to memory-mapping files on secondary storage, the operating system does not maintain pages for persistent memory in main memory.</p></div>
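<div xmlns="http://www.tei-c.org/ns/1.0"><p>As an illustration of the DAX approach, the sketch below memory-maps a file on an fsdax-mounted file system and writes to it at byte granularity. The path and size are hypothetical, and the memmap2 crate is used only for brevity; it is a generic memory-mapping crate, not a PMem-specific API.</p><code lang="rust"><![CDATA[
use std::fs::OpenOptions;
use memmap2::MmapOptions; // generic mmap wrapper, not PMem-specific

fn main() -> std::io::Result<()> {
    // Hypothetical file on a DAX-capable (fsdax) mount.
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open("/mnt/pmem/example")?;
    file.set_len(4096)?;

    // With DAX, loads and stores through this mapping reach the PMem media
    // directly; the operating system keeps no page-cache copy in DRAM.
    let mut map = unsafe { MmapOptions::new().map_mut(&file)? };
    map[..5].copy_from_slice(b"hello");
    map.flush()?; // request write-back of the dirtied range
    Ok(())
}
]]></code></div>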
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Haura</head><p>Haura is a general-purpose and write-optimized tiered storage stack that runs in user space and supports object and key-value interfaces <ref type="bibr" target="#b13">[14]</ref>. It has the ability to handle multiple datasets, provides data integrity, and supports advanced features like snapshots, data placement, and fail-over strategies. The core of the engine is the B 𝜖 -tree, which is the sole index data structure in the engine. The engine supports block storage devices, solid-state drives, for instance, and also has its own caching layer using DRAM (separate from the operating system). It also offers features like data striping, mirroring, and parity. It follows the ZFS <ref type="bibr" target="#b24">[25]</ref> architecture and uses a similar layered approach <ref type="bibr" target="#b13">[14]</ref>, where layers or modules interact with each other using interfaces. A schematic diagram of Haura is presented in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>Haura was initially built as a key-value storage engine where arbitrary-sized keys and values can be stored; an extension to support objects was made later in <ref type="bibr" target="#b14">[15]</ref>, in which the ObjectStore module was added to the stack. The ObjectStore module exposes the necessary routines to interact with the engine and supports all the primitive operations like create, read, write, and query. It uses the same key-value interface to store the objects. However, a key challenge it addresses is the transformation of objects into key-value pairs. In the key-value version of Haura, although keys and values can have variable sizes, an upper limit is still defined on their sizes. Objects, on the other hand, can contain data of several gigabytes; therefore, a mechanism is devised in which objects are split into chunks, and each chunk is assigned a unique identifier. Moreover, the object name is primarily the key of the object; it can have a variable size and can spread over a few kilobytes. Therefore, an indirection is added where the object name and other metadata are stored separately, and a unique fixed-size identifier is assigned to each chunk (a minimal sketch of this indirection is given at the end of this section). Furthermore, the ObjectStore module, via the Database module, maintains two different datasets to store the data and the metadata of the objects. The first dataset stores the chunks of the object, whereas the second dataset stores the indirection-related information and other metadata, like modification time and size, to name a few.</p><p>The Database module controls and manages all the activities regarding a database. A database in Haura consists of one or more datasets and their respective snapshots. Datasets and snapshots are actually B 𝜖 -trees. Moreover, the module also maintains a separate B 𝜖 -tree, named the root tree, to store all the information regarding the database. For example, it maintains active datasets and their pointers in the storage. It also maintains information regarding the usage of storage devices in the form of bitmaps.</p><p>The Tree module contains the actual implementation of the B 𝜖 -tree, encapsulates all the tree-related operations, and exposes the methods to the upper layer.</p><p>The DataManagement module ensures the persistence of the underlying data and its retrieval when requested. It internally interacts with a wide range of modules, especially the helper modules, and plays a vital role in achieving their internal functionalities. 
The cache module, for instance, is managed by this module. The write and update requests from the upper layer first land in this module and are then passed on to the cache module. Similarly, in the case of a cache miss, this module fetches the required data using the StoragePool module and passes the data to the cache module for later usage. Moreover, this module is also responsible for the compression and decompression of the data, using the compression module for this purpose. Furthermore, the decision as to which blocks on the storage media are used to write the data is also made in this module, using the AllocationHandler module to allocate the blocks. Last but not least, it communicates with the StoragePool module to perform the write and read I/O operations.</p><p>The StoragePool module performs two key operations. First, it maintains queues for asynchronous I/O operations. Second, it dispatches the I/O calls to the respective virtual devices in the Vdev module. It also exposes methods for synchronous calls, for which it bypasses the queues. The interface of this layer matches the Vdev module; however, it requires an additional parameter to address the desired virtual device, as the Vdev module may contain more than one virtual device.</p><p>Lastly, the Vdev module provides different implementations to interact with the storage devices, which are referred to as virtual devices in the system. Currently, Haura supports single, mirror, and parity implementations. The single version works on a single storage device and has two further sub-implementations, file and memory, for SSD/HDD and DRAM (as volatile storage), respectively. It is the simplest implementation provided by this module; it is not fault-tolerant, and the underlying data is lost in case of error or failure. The other implementations, mirror and parity, are introduced to support mirroring and parity functionalities, respectively.</p></div>
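<div xmlns="http://www.tei-c.org/ns/1.0"><p>The following minimal sketch illustrates the chunk indirection described above for the ObjectStore module; the key layout, chunk size, and function names are our assumptions for illustration and not Haura's actual encoding.</p><code lang="rust"><![CDATA[
// A hedged sketch of the chunk indirection: the variable-length object name
// maps to a fixed-size object id, and each chunk key is that id plus a
// chunk index.
const CHUNK_SIZE: usize = 128 * 1024; // illustrative chunk size

/// Fixed-size key identifying one chunk of one object.
fn chunk_key(object_id: u64, chunk_index: u64) -> [u8; 16] {
    let mut key = [0u8; 16];
    key[..8].copy_from_slice(&object_id.to_be_bytes());
    key[8..].copy_from_slice(&chunk_index.to_be_bytes());
    key
}

/// Split an object's payload into key-value pairs for the data dataset;
/// the name-to-id mapping and other metadata live in the second dataset.
fn to_chunks(object_id: u64, data: &[u8]) -> Vec<([u8; 16], &[u8])> {
    data.chunks(CHUNK_SIZE)
        .enumerate()
        .map(|(i, chunk)| (chunk_key(object_id, i as u64), chunk))
        .collect()
}
]]></code></div>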
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Implementation</head><p>In this section, we briefly discuss the implementation of the new virtual device for persistent memory and touch upon the important steps considered during this phase.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Programming Model Selection</head><p>PMDK provides several high- and low-level libraries to interact with persistent memory. Haura, on the other hand, is a well-developed engine and expects a virtual device to implement a certain interface. Therefore, in the initial phase, a list of properties is formulated as criteria, and each new implementation of a virtual device must adhere to the list to work properly with the existing interfaces in Haura. The properties are:</p><p>• Haura stores the nodes of the B 𝜖 -tree using virtual devices; therefore, the virtual device should be able to perform read and write operations in varied block sizes, from a few kilobytes to megabytes.</p><p>• The virtual device should be able to perform both synchronous and asynchronous calls to the underlying storage device. • Haura uses bitmaps to manage the allocation of the blocks on the storage devices. It partitions the whole space into equal-size blocks and allocates and de-allocates the blocks internally; therefore, this bookkeeping is not required in the virtual device or any library used in it. • Haura uses the copy-on-write technique to update the nodes. It first copies nodes to main memory, applies changes to the nodes, and then writes the data back to the device at a new location. It never performs in-place updates. Considering the above properties, libpmem and libpmemblk from PMDK's library suite suit the current architecture of Haura and can be used to implement the functionality.</p><p>Libpmemblk is a high-level library that provides functionality to manage an array of fixed-size blocks. The blocks can be updated and read using their indices in the array. It does not provide byte-level access to the blocks, and any update requires re-writing the whole block. Libpmem, in contrast, is one of the low-level libraries in the kit, and the other high-level libraries are built on top of it. It wraps the basic operations exposed by the operating system and adds optimizations for persistent memory.</p><p>The other libraries from PMDK's suite, like libpmemobj and libpmemkv, are not a suitable choice because, first, they do not provide the required interface to implement the virtual device, and second, they internally perform operations that are already addressed in Haura. For example, libpmemobj internally implements object store functionality on top of memory-mapped files, whereas the current architecture of Haura expects the virtual device to perform raw read and write operations at a specified location in memory. Moreover, the management of key-value data at the library level, as in the case of libpmemkv, is the core functionality of Haura and is therefore redundant.</p><p>The final selection from the shortlisted libraries (a PMem-aware file API is also added to the list for comparison) is made using an experiment whose results are shown in Figure <ref type="figure" target="#fig_1">2</ref>. First, it is quite evident from the graph that accessing persistent memory via the PMem-aware file API is not feasible, as its latency to read and write data, especially using small buffers, is significantly high. A prominent drop can be observed in its latency as the buffer size increases, but it remains higher than the rest. The approaches that compete with each other are libpmem and libpmemblk. 
Libpmemblk, in some instances, performed better than libpmem, but its performance is worst with small buffers, and it also crashes when the block size exceeds 8 MB. Therefore, libpmem is finally selected, as it is (comparatively) consistent throughout the experiment.</p></div>
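<div xmlns="http://www.tei-c.org/ns/1.0"><p>To illustrate why libpmem fits Haura's raw, offset-addressed access model, the sketch below shows its core primitives declared as hand-written Rust FFI; in the actual implementation, the bindings are generated (Section 3.2). The signatures mirror libpmem's C API, while the pool path and sizes are hypothetical.</p><code lang="rust"><![CDATA[
use std::ffi::CString;
use std::os::raw::{c_char, c_int, c_void};

// Declarations mirroring the libpmem header; in the implementation these
// are generated by bindgen rather than written by hand (Section 3.2).
extern "C" {
    fn pmem_map_file(path: *const c_char, len: usize, flags: c_int,
                     mode: u32, // mode_t on Linux
                     mapped_lenp: *mut usize, is_pmemp: *mut c_int) -> *mut c_void;
    fn pmem_memcpy_persist(dest: *mut c_void, src: *const c_void, len: usize) -> *mut c_void;
    fn pmem_unmap(addr: *mut c_void, len: usize) -> c_int;
}

const PMEM_FILE_CREATE: c_int = 1 << 0; // from the libpmem header

fn main() {
    // Hypothetical pool file on an fsdax mount.
    let path = CString::new("/mnt/pmem/haura-pool").unwrap();
    let mut mapped_len = 0usize;
    let mut is_pmem: c_int = 0;
    unsafe {
        // Map (creating it if absent) a 16 MiB pool backed by PMem.
        let base = pmem_map_file(path.as_ptr(), 16 << 20, PMEM_FILE_CREATE,
                                 0o666, &mut mapped_len, &mut is_pmem);
        assert!(!base.is_null(), "pmem_map_file failed");

        // Copy a 4 KiB block to byte offset 8192 and make it durable in a
        // single call: the raw, offset-addressed write a virtual device needs.
        let block = vec![1u8; 4096];
        pmem_memcpy_persist((base as *mut u8).add(8192) as *mut c_void,
                            block.as_ptr() as *const c_void, block.len());
        pmem_unmap(base, mapped_len);
    }
}
]]></code></div>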
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Rust Wrapper for libpmem</head><p>Haura is written in Rust, and while PMDK supports other languages, only its C/C++ support is fully tested. Therefore, the second step, after selecting the library from PMDK's suite, involved writing a Rust wrapper for the selected library (i.e., libpmem). In this regard, Rust provides a utility named bindgen that generates the FFI <ref type="foot" target="#foot_2">4</ref> bindings to C/C++ libraries. The tool requires two files to generate the bindings. The first file is wrapper.h, which should contain all the header files and declarations that the target application intends to use. The other file is build.rs, which should contain the details regarding the generation of the bindings. This file is part of the folder structure followed in Rust; Cargo, before compiling the code, looks for this file in the package root and executes it so that the bindings are generated (and made available) before the compilation of the actual program.</p><p>Once the bindings are generated, the next step involved writing the methods to perform read and write operations on persistent memory and exposing them to be used in the respective virtual device in the Vdev module, which is discussed in the following section.</p></div>
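<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal build.rs of the kind described above might look as follows; the builder options are kept to bindgen's documented essentials, and wrapper.h is assumed to contain little more than an include of the libpmem header (libpmem.h).</p><code lang="rust"><![CDATA[
// build.rs -- a minimal sketch of generating the libpmem bindings with
// bindgen (declared as a build-dependency); Cargo runs this before
// compiling the crate itself.
use std::{env, path::PathBuf};

fn main() {
    // Link the final binary against the system-installed libpmem (PMDK).
    println!("cargo:rustc-link-lib=pmem");
    // Re-run binding generation whenever the header list changes.
    println!("cargo:rerun-if-changed=wrapper.h");

    let bindings = bindgen::Builder::default()
        .header("wrapper.h") // lists the PMDK headers to bind
        .generate()
        .expect("unable to generate libpmem bindings");

    // Place bindings.rs in OUT_DIR, from where the crate can include! it.
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("could not write bindings");
}
]]></code></div>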
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">NVM as a Virtual Device</head><p>As discussed in Section 2.3, Haura interacts with storage devices using different virtual device implementations in the Vdev module; currently, there are four implementations available, namely file, memory, parity1, and mirror. Similarly, a new implementation of a virtual device for persistent memory is added to the Vdev module. Moreover, as mentioned in Section 3.1, the library from PMDK is chosen with careful consideration to avoid any architecture-related changes to Haura. Therefore, this new virtual device implementation exposes the same interface as the other implementations, and it internally makes use of the wrapper library mentioned in the previous section to perform the storage-specific operations.</p><p>The integration of the new virtual device required alterations in a few traits <ref type="foot" target="#foot_3">5</ref> and structs in different modules. For example, the DataManagement module interacts with the StoragePool module using a trait called StoragePoolLayer, and StoragePoolUnit, which is a struct in the StoragePool module, implements StoragePoolLayer and maintains the information regarding the configured virtual devices in an array of type StorageTier. Furthermore, the type StorageTier is an array of Dev, and Dev is an enum that stores the instance of the associated virtual device. The enum Dev provides three variants: Leaf, Mirror, and Parity1. Leaf further offers two variants, File and Memory. All these traits and structs are affected by the new implementation.</p><p>Furthermore, virtual devices are accessed using different traits that define distinct behaviors. The first trait is Vdev. It exposes functions to query different properties and states of the virtual devices. For example, it can be used to fetch the id and size of a device. On the other hand, the traits VdevWrite and VdevRead, as their names suggest, are used to perform read and write operations on virtual devices, and they provide methods to perform the operations synchronously and asynchronously. These traits are implemented for the new virtual device. Last but not least, other changes have been added to make the virtual device visible to Haura through the configuration details.</p></div>
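<div xmlns="http://www.tei-c.org/ns/1.0"><p>For orientation, the following sketch restates the types and traits just described with illustrative signatures; the exact definitions in Haura differ, and the PMem variant shown is the addition made in this work.</p><code lang="rust"><![CDATA[
use std::io;

// Variants mirror the prose: Leaf (File or Memory) plus the new PMem leaf,
// and Mirror/Parity1 built over leaves. Fields are illustrative.
enum LeafVdev {
    File { path: String },             // SSD/HDD via a file or block device
    Memory { size: usize },            // DRAM-backed volatile storage
    PMem { path: String, len: usize }, // new virtual device using libpmem
}

enum Dev {
    Leaf(LeafVdev),
    Mirror(Vec<LeafVdev>),
    Parity1(Vec<LeafVdev>),
}

/// A storage tier is an array of devices; the StoragePool addresses a
/// virtual device by (tier, index) when dispatching I/O.
type StorageTier = Vec<Dev>;

/// Properties and state of a virtual device.
trait Vdev {
    fn id(&self) -> u32;
    fn size(&self) -> u64; // capacity in blocks
}

/// Read side; Haura offers synchronous and asynchronous variants.
trait VdevRead: Vdev {
    fn read(&self, buf: &mut [u8], offset: u64) -> io::Result<()>;
}

/// Write side; updates are copy-on-write, so `offset` always addresses
/// freshly allocated blocks and in-place updates never occur.
trait VdevWrite: Vdev {
    fn write(&self, buf: &[u8], offset: u64) -> io::Result<()>;
}
]]></code></div>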
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Performance Analysis</head><p>In this section, we analyze the impact of the newly added virtual device on Haura. We start by testing Haura with different workloads, where the baseline is the existing best-performing implementation of the virtual device. We then study the impact of different configurations, with different thread counts and cache sizes, for instance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Experimental Setup</head><p>The experiments are performed on a dual-socket server with Intel ® Xeon ® Gold 5220R processors (2.20 GHz base frequency); each CPU contains 24 physical cores, and each core supports two threads. Each CPU socket contains two integrated memory controllers (iMCs) with three memory channels each; every channel is connected to one DRAM DIMM and, except the last channel of each controller, to one PMem DIMM, resulting in 4 interleaved <ref type="foot" target="#foot_4">6</ref> PMem and 6 DRAM DIMMs per socket. The PMem used is 128 GB Intel ® Optane ™ DC Persistent Memory Series 100 DIMMs, resulting in a total persistent memory capacity of 1 TB (128 GB x 4 DIMMs x 2 sockets), and the capacity of DRAM is 384 GB (32 GB x 6 DIMMs x 2 sockets). Moreover, the server contains two NUMA nodes, each with 48 logical cores, 4 PMem DIMMs, and 6 DRAM DIMMs. To avoid remote memory access overhead, the experiments are run on socket 0. Lastly, the machine runs Ubuntu 20.04.3 LTS (5.4.0-126-generic), and PMem is accessed in the app direct mode using fsdax <ref type="foot" target="#foot_5">7</ref> .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Sequential Workload</head><p>In this experiment, Haura is configured with three different storage devices: PMem, SSD NVMe, and SSD SATA. The experiment writes 5 objects, each of size 5 GB, and then reads them sequentially in the same order; the write requests are asynchronous, which allows multiple write requests to be dispatched, whereas the read requests are synchronous. The results in Figure <ref type="figure" target="#fig_2">3</ref> show that PMem performed better than the rest for the write I/O, and it lagged behind SSD NVMe for the read I/O. However, the resulting throughput for PMem in both cases is quite far from the expected values: as per the specifications, a single PMem DIMM (with four cache-lines) can write and read up to 1,800 and 6,800 MB/s, respectively. (For comparison, the SSD NVMe can perform sequential read and write operations of up to 3,200 and 2,000 MB/s and random read and write operations of up to 540,000 and 55,000 IOPS, and the SSD SATA sequential read and write operations of up to 550 and 510 MB/s and random read and write operations of up to 86,000 and 30,000 IOPS, respectively.)</p><p>The key reason behind the poor performance of the read I/O is the layout in which Haura stores the data. Haura starts by splitting the objects into chunks and transforming them into messages. The messages are then pushed into the root node, and they descend gradually to the target leaf node; during the descent, they are buffered in internal nodes and flushed down to the child node only when the buffer is full. Later, when the sync operation is performed, Haura follows the postorder <ref type="bibr" target="#b25">[26]</ref> approach to persist the data, which does not guarantee the ordering of the chunks on the storage device. On the other hand, when Haura fetches an object, it starts fetching its chunks sequentially from the root node and keeps fetching the child nodes until it reaches the leaf node or finds the messages for the queried chunk. Therefore, this reading approach cannot benefit from sequential access, as the tree data is stored in the postorder layout. Furthermore, another main reason, which applies to both scenarios, is the use of a single thread, which leaves the device underutilized. Last but not least, the write I/Os performed better because they were asynchronous calls, where the thread was capable of issuing multiple asynchronous I/Os, whereas the read I/Os were synchronous calls.</p><p>Moreover, another interesting pattern that surfaced during the detailed analysis is that the relative performance of the storage devices was not consistent all the time. As shown in Figure <ref type="figure" target="#fig_3">4</ref>, the difference between PMem and SSD NVMe is significant for small block sizes; however, the difference shrinks considerably for large blocks.</p></div>
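<div xmlns="http://www.tei-c.org/ns/1.0"><p>To see why the postorder layout defeats sequential reads, consider the following minimal sketch (names are illustrative, not Haura's API): every child is persisted before its parent, so consecutive chunks of an object are not guaranteed to be placed consecutively on the device.</p><code lang="rust"><![CDATA[
// Minimal postorder persist, as in the sync described above: children
// first, then the parent. The write order therefore follows the tree
// shape, not the logical byte order of the stored object.
struct Node {
    children: Vec<Node>,
    payload: Vec<u8>, // buffered messages / chunk data
}

fn sync(node: &Node, device: &mut Vec<Vec<u8>>) {
    for child in &node.children {
        sync(child, device); // descend and persist the subtree first...
    }
    device.push(node.payload.clone()); // ...then persist this node
}
]]></code></div>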
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Random I/O and Worker Threads</head><p>This experiment evaluates the impact of the number of threads and the cache size on Haura when configured with different storage devices. It writes an archive file <ref type="foot" target="#foot_7">10</ref> (size 1011 MB) as an object to the engine. The file contains 80,690 entries, with the metadata, the central directory used to locate the individual files, stored in the first 9.3 MiB. Moreover, the scenarios with circles (Figure <ref type="figure" target="#fig_4">5</ref>) store the metadata on the first device (i.e., SSD SATA) and the remaining content on the second mentioned device. Lastly, the script fetches 50,000 files randomly <ref type="foot" target="#foot_8">11</ref> .</p><p>The results are presented in Figure <ref type="figure" target="#fig_4">5</ref>, and it is evident from all sub-plots that the execution time improves with an increase in worker threads and cache size.</p><p>The scenarios that performed worst are the ones that used SSD SATA to store the contents and a faster device to store the metadata of the file. However, a minute difference can be observed: with PMem, the execution time is slightly worse for fewer than 6 threads; nevertheless, the difference diminishes, and with more threads (e.g., 30 threads), PMem performed better than SSD NVMe. On the other hand, a significant difference in performance can be seen when only PMem or SSD NVMe is used to store the whole file. As can be seen, SSD NVMe performs marginally better when fewer threads are used; however, once the thread count passes 9 threads, PMem surpasses SSD NVMe, with a significant difference at the end.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Node-Size Significance</head><p>An intriguing behavior we came across while discussing the sequential workload is that the size of the payload influences the performance of Haura. Therefore, to further investigate this behavior, this experiment evaluates Haura (for PMem and SSD NVMe) with different internal and leaf node sizes, which were thus far set to 4 MB for both node types. The experiment first sets the size of the internal nodes, then repeats the experiment with different leaf node sizes, and writes and reads an object of size 128 MB sequentially. The results are illustrated using a heatmap in Figure <ref type="figure">6</ref>. First, it is evident that both storage devices share almost the same temperatures. Second, the engine performs worst for small block sizes, especially for internal nodes. However, the performance improves with increasing size, and the concentration of blue color indicates that the engine performs better under large block sizes.</p><p>One reason for the high temperatures is the height of the tree, which grows deep when the nodes, internal nodes in particular, are small. For instance, when the node size is 512 bytes, an object of size 128 MB would result in 261,376 nodes <ref type="foot" target="#foot_6">12</ref> , and the internal nodes contain a limited number of messages and pivots, whereas, when the node size is 4 MB, the tree only needs 32 nodes to accommodate the object. Therefore, when the tree is deep, Haura spends considerable time flushing and merging the nodes. Moreover, another obvious reason is that when the nodes are small, multiple requests are dispatched to the storage devices. However, further analysis is required to capture the time taken only by the virtual devices.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Related Work</head><p>In existing engines, NVM is mostly utilized to improve caching and recovery. In <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>, the logging component uses NVM at different levels to improve the logging and recovery of the engine. Moreover, <ref type="bibr" target="#b26">[27]</ref> discusses three different logging techniques and implements their equivalent NVM designs; the results show that in-place update is the most appropriate technique for NVM. Furthermore, SOFORT <ref type="bibr" target="#b27">[28]</ref> and FOEDUS <ref type="bibr" target="#b28">[29]</ref> are examples of main memory database systems that utilize NVM to improve the recovery of the system.</p><p>In some literature, NVM is also utilized as a buffer, and there are two main designs in this approach. The first is to use NVM to supplement DRAM, as already mentioned in <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>, and the second is to use NVM as another layer between DRAM and secondary storage; in this regard, a technique called three-tier buffer management is suggested in <ref type="bibr" target="#b30">[31]</ref>.</p><p>Furthermore, data structures are also optimized to exploit the full potential of NVM. FPTree <ref type="bibr" target="#b12">[13]</ref> is an NVM-aware B + -tree that stores leaf nodes in NVM and internal nodes in DRAM, and it performs better than other NVM-optimized trees, NV-Tree <ref type="bibr" target="#b11">[12]</ref> and wBTree <ref type="bibr" target="#b31">[32]</ref>, for instance. Moreover, FOEDUS <ref type="bibr" target="#b28">[29]</ref> also uses a customized tree called Master-Tree.</p><p>Our work enables Haura to persist the tree nodes on NVM, which, along with an allocation strategy, can be used to improve recovery and caching. However, the migration of the persisted nodes is presently not possible. Haura could also store the internal nodes on NVM, as done in FPTree <ref type="bibr" target="#b12">[13]</ref>. Lastly, depending on the size of the data, the engine can be used as an NVM-DRAM engine.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>In this work, Haura, a general-purpose and write-optimized storage engine, is used to study the characteristics of the modern storage landscape, which has become more heterogeneous with the advent of PMem, a promising technology that shares the characteristics of primary and secondary storage and has disrupted the traditional memory paradigm. A few important findings our work uncovered are: first, persistent memory performs optimally when accessed using the largest possible blocks in random workloads. Second, the size of the cache and the thread count impact Haura's throughput. Last but not least, the size of the nodes also determines the throughput of the engine, with the internal node size having more influence. The insights gathered in this paper can be used to significantly improve Haura's performance and further exploit the characteristics of PMem. However, two aspects that need to be investigated are the use of in-place updates for PMem and accessing it using devdax, which produces better results than DAX in certain cases <ref type="bibr" target="#b32">[33]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A layered conceptual diagram illustrating the main components of Haura. The objects in grey represent the main modules, whereas the objects in yellow and sky blue colors are the helper modules and classes, respectively. The key classes in the modules are represented using white blocks.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: An object (size 5 GB) is written and read sequentially multiple times with different block sizes using libpmem, libpmemblk, and a standard File API.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Three different executions representing the sequential I/O throughput of Haura when configured with different storage devices. The data is recorded at an interval of 500ms.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The plots group the calls (write and read respectively) from Figure 3 with respect to their payloads.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The impact of threads and different cache sizes on Haura when used with different storage configurations.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Multiple end-to-end executions to analyze the impact of internal and leaf node sizes on the throughput of Haura.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_7">https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.12.13.tar.xz</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_8">https://docs.rs/xoshiro/latest/xoshiro/struct.Xoshiro256Plus.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://pmem.io/pmdk/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://snia.org/tech_activities/standards/curr_standards/npm</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_2">Foreign function interface (FFI) is a method to invoke calls from a library written and compiled in a different language.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_3">A trait, in Rust, can be considered as an equivalent to an interface in object-oriented languages like Java and C# [SK22d].</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_4">In DIMM interleaving, the data is interleaved as per the configured block size (i.e., 4 KB in the current settings) across the DIMMs.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_5">https://docs.pmem.io/ndctl-user-guide/managing-namespaces</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_6">The actual count of the nodes for an object of size 128 MB is higher than 261,376 because each object chunk is assigned a key as well.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">U</forename><surname>Kazemi</surname></persName>
		</author>
		<title level="m">A survey of big data: challenges and specifications</title>
				<imprint>
			<publisher>CiiT IJSETA</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Faerber</surname></persName>
		</author>
		<title level="m">Main memory database systems, Foundations and Trends ® in Databases</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><surname>Levandoski</surname></persName>
		</author>
		<title level="m">Modern main-memory database systems</title>
				<imprint>
			<publisher>VLDB Endowment</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Anti-caching: A new approach to database management system architecture</title>
		<author>
			<persName><forename type="first">J</forename><surname>Debrabant</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>VLDB Endowment</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Oukid</surname></persName>
		</author>
		<title level="m">Storage class memory and databases: Opportunities and challenges</title>
				<imprint>
			<publisher>it-Information Technology</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">N</forename><surname>Gray</surname></persName>
		</author>
		<title level="m">Notes on data base operating systems</title>
				<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1978">1978</date>
			<biblScope unit="page" from="393" to="481" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Scalable logging through emerging non-volatile memory</title>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the VLDB Endowment</title>
				<meeting>the VLDB Endowment</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">7</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">How the Rdb/VMS data sharing system became fast</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lomet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Anderson</surname></persName>
		</author>
		<idno>CRL 92</idno>
		<imprint>
			<date type="published" when="1992">1992</date>
		</imprint>
		<respStmt>
			<orgName>DEC Cambridge Research Lab</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">NVRAM-aware logging in transaction systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Schwan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Qureshi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>VLDB Endowment</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Götze</surname></persName>
		</author>
		<title level="m">Data management on non-volatile memory: a perspective</title>
				<imprint>
			<publisher>Datenbank-Spektrum</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Reducing DRAM footprint with NVM in Facebook</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eisenman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>EuroSys</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">NV-Tree: Reducing Consistency Cost for NVMbased Single Level Systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Yang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>USENIX FAST</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">FPTree: A hybrid SCM-DRAM persistent and concurrent B-tree for SCM</title>
		<author>
			<persName><forename type="first">I</forename><surname>Oukid</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>SIGMOD</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Modern Storage Stack with Key-Value Store Interface and Snapshots Based on CoW B 𝜖 -Trees</title>
		<author>
			<persName><forename type="first">F</forename><surname>Wiedemann</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Design and Implementation of an Object Store with Tiered Storage</title>
		<author>
			<persName><forename type="first">T</forename><surname>Höppner</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Modeling of set and reset operations of phasechange memory cells</title>
		<author>
			<persName><forename type="first">A</forename><surname>Faraclas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE electron device letters</title>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Architecting on-chip interconnects for stacked 3D STT-RAM caches in CMPs</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Mishra</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>IEEE</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Will carbon nanotube memory replace DRAM?</title>
		<author>
			<persName><forename type="first">B</forename><surname>Gervasi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Micro</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The missing memristor found</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Strukov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">S</forename><surname>Snider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Stewart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Programming persistent memory: A comprehensive guide for developers</title>
		<author>
			<persName><forename type="first">S</forename><surname>Scargall</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Springer Nature</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Transaction processing: Management of the logical database and its underlying structure</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sippu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Persistent Memory: A Survey of Programming Support and Implementations</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baldassin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM CSUR</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Persistent memory programming</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rudoff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>The Usenix Magazine</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Programming models for emerging non-volatile memory technologies</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rudoff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>USENIX &amp; SAGE</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Dynamic data compression algorithm selection for big data processing on local file system</title>
		<author>
			<persName><forename type="first">W</forename><surname>Fuzong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Helin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Jian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Computer Science and Artificial Intelligence</title>
				<meeting>the International Conference on Computer Science and Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="110" to="114" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Valiente</surname></persName>
		</author>
		<title level="m">Algorithms on trees and graphs</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">112</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Let&apos;s talk about storage &amp; recovery methods for non-volatile memory database systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Arulraj</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>SIGMOD</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Oukid</surname></persName>
		</author>
		<title level="m">SOFORT: A hybrid SCM-DRAM storage engine for fast data recovery</title>
				<imprint>
			<publisher>DaMoN</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<title level="m" type="main">FOEDUS: OLTP engine for a thousand cores and NVRAM</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kimura</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>SIGMOD</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<title level="m" type="main">An analysis of LSM caching in NVRAM</title>
		<author>
			<persName><forename type="first">L</forename><surname>Lersch</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>DaMoN</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<author>
			<persName><forename type="first">A</forename><surname>Van Renen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Managing non-volatile memory in database systems</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Rethinking database algorithms for phase change memory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cidr</title>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">Maximizing persistent memory bandwidth utilization for OLAP workloads</title>
		<author>
			<persName><forename type="first">B</forename><surname>Daase</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>PODS SIGMOD</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
