<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Performance Analysis on DNA Alignment Workload with Intel SGX Multithreading</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Lorenzo</forename><surname>Brescia</surname></persName>
							<email>lorenzo.brescia@unito.it</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution" key="instit1">University of Turin</orgName>
								<orgName type="institution" key="instit2">Alpha research group</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Iacopo</forename><surname>Colonnelli</surname></persName>
							<email>iacopo.colonneli@unito.it</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution" key="instit1">University of Turin</orgName>
								<orgName type="institution" key="instit2">Alpha research group</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marco</forename><surname>Aldinucci</surname></persName>
							<email>marco.aldinucci@unito.it</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution" key="instit1">University of Turin</orgName>
								<orgName type="institution" key="instit2">Alpha research group</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Performance Analysis on DNA Alignment Workload with Intel SGX Multithreading</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">528E20C8D62078908ED34EC8E00320EE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Confidential computing</term>
					<term>Parallel computing</term>
					<term>Intel SGX</term>
					<term>Gramine</term>
					<term>Occlum</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Data confidentiality is a critical issue in the digital age, impacting interactions between users and public services and between scientific computing organizations and Cloud and HPC providers. Performance in parallel computing is essential, yet techniques for establishing Trusted Execution Environments (TEEs) to ensure privacy in remote environments often negatively impact execution time. This paper aims to analyze the performance of a parallel bioinformatics workload for DNA alignment (Bowtie2) executed within the confidential enclaves of Intel SGX processors. The results provide encouraging insights regarding the feasibility of using SGX-based TEEs for parallel computing on large datasets. The findings indicate that, under conditions of high parallelization and with twice as many threads, workloads executed within SGX enclaves perform, on average, 15% faster than non-confidential execution. This empirical demonstration supports the potential of SGX-based TEEs to effectively balance the need for privacy with the demands of high-performance computing.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, the awareness of the need for privacy has gained significant prominence. In the digital age, where information is predominantly stored and transmitted electronically, concerns regarding the protection of sensitive data have become increasingly prevalent. This confidential information can be extracted and reused without the knowledge or consent of the data owner, posing severe privacy risks. This issue is not confined to the interaction between individuals and digital services; it extends across various fields of scientific computing where data confidentiality is indispensable. Notable examples include bioinformatics, which processes DNA and genomic data; medical research, which handles patient health records; epidemiology, particularly highlighted during the recent COVID-19 pandemic; and social sciences that address sensitive topics such as mental health, income levels, and political polarization. Economic considerations also drive the imperative to safeguard sensitive information. For instance, in finance, processing data for trading purposes necessitates stringent privacy measures. Similarly, in chemoinformatics, drug discovery and molecular simulations, which possess significant commercial value, require robust data protection to prevent unauthorized access and exploitation.</p><p>For these reasons, it is imperative to adopt techniques that protect sensitive data at all stages. In scientific computing, private organizations often lack the computational power to perform their calculations. The simplest and most commonly used solution is outsourcing computation to a remote location by renting the necessary hardware resources. A typical example of this is cloud computing, where resources are allocated on demand and an ecosystem exists to facilitate the seamless execution of workloads. 
Data protection is typically considered in two primary contexts: at rest (in storage) and in transit (during transmission over the network). However, it is less common to consider the vulnerability of data during computation. Once a program starts executing on a remote machine, such as in cloud computing, there is often no control or protection over the data in main memory. Confidential computing addresses this issue using trusted hardware to ensure data protection during execution. This approach breaks the chain of trust between the user and the external provider by introducing an additional entity into the trust process: the hardware manufacturer. This indirection step helps safeguard data while it is being processed, enhancing overall data security in outsourced computational environments. Figure <ref type="figure" target="#fig_0">1</ref> illustrates the entities involved and their relationships when a general user utilizes a provider's remote resources. Without confidential computing, the user transfers the computation to the provider. Even if the sensitive data is encrypted during transmission and on storage, it becomes vulnerable once it is decrypted for execution in main memory. This exposure occurs because the data is no longer encrypted during processing, making it susceptible to risks in a multitenant environment, where potentially malicious workloads from other users may coexist, or when the provider itself is compromised or has malicious intent. In such scenarios, the user has no alternative: she must unconditionally trust a provider that is inherently untrusted. Confidential computing changes this dynamic by breaking the direct trust relationship between the user and the provider. Trusted hardware components designed by the hardware manufacturer (e.g., CPUs or GPUs) incorporate specific features that ensure the confidentiality and integrity of the user's program during execution. 
This enables the user to establish an indirect trust relationship with the provider. Instead of trusting the provider directly, the user trusts the hardware manufacturer, which in turn supplies trusted components to the provider. This approach ensures that the user's data remains secure while being processed on the provider's infrastructure.</p><p>The purpose of this paper is to conduct a performance analysis on the use of Intel SGX processors as trusted hardware. The study is performed on Bowtie2, a bioinformatics application. Section 2 explains all the necessary background: what Intel SGX CPUs are and how they can be exploited with Gramine and Occlum to facilitate their use. Furthermore, some reasons are given for the choice of Bowtie2 as the workload to assess performance. Section 3 discusses related works, considering other SGX frameworks besides Gramine and Occlum; in addition, an overview of previous SGX performance studies in the High-Performance Computing (HPC) domain is provided. In Section 4, the configurations implemented to execute Bowtie2 natively and within SGX enclaves are explained. In Section 5, the results obtained with the previously configured environments are illustrated, and finally, in Section 6, conclusions and possible future works are presented.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Intel SGX</head><p>Intel Software Guard Extensions (SGX) <ref type="bibr" target="#b0">[1]</ref> is a technology implemented in Intel processors designed to protect processes during execution by ensuring confidentiality and integrity of the main memory. Intel SGX extends the Instruction Set Architecture (ISA) with instructions that enable the creation of Trusted Execution Environments (TEEs) <ref type="bibr" target="#b1">[2]</ref>, referred to as enclaves in Intel's terminology. These enclaves are secure memory regions that provide protection even against privileged system software, such as operating systems or hypervisors. Activating SGX features is a non-trivial process. There are primarily two approaches to achieve this:</p><p>Rewriting application code involves modifying the application using the libraries provided by Intel's Software Development Kit (SDK) <ref type="bibr" target="#b2">[3]</ref> to manage enclaves. While this approach allows for granular control over what should be protected -down to the level of a single instruction -the porting effort is considerable.</p><p>Using frameworks to execute existing applications aims to simplify deployment by allowing applications to run entirely within an enclave without significant rewriting. Several frameworks support this method, including the Gramine and Occlum Library Operating Systems (LibOSes), which facilitate the execution of legacy applications within enclaves.</p><p>Intel SGX has evolved, and the community recognizes two main versions: SGXv1 and SGXv2. These versions differ primarily in efficiency improvements and enclave size capacities, with SGXv2 supporting enclaves up to 512 GB (versus 128 MB in SGXv1) and introducing Enclave Dynamic Memory Management (EDMM) <ref type="bibr" target="#b3">[4]</ref>. 
EDMM allows dynamic allocation of enclave pages (EPC pages) as needed, rather than requiring a predefined enclave size at startup, although this feature can be complex and inefficient to implement. A notable capability of Intel SGX processors is the concurrent execution of the same enclave code using multiple threads. Each thread is associated with an EPC page of type Thread Control Structure (TCS); this requires prior knowledge of the number of threads to ensure sufficient EPC allocation. This requirement is alleviated when EDMM is enabled, thanks to the capability of allocating EPC pages after the enclave's creation. Another key feature of Intel SGX is remote attestation, which allows a remote user to verify the correct instantiation of an enclave on an SGX processor. This is not the focus of our work; in short, the remote attestation process verifies the hash of the enclave and relies on Intel's certificates as the root of trust. There are two main attestation schemes for SGX: Enhanced Privacy ID (EPID) <ref type="bibr" target="#b4">[5]</ref> and Data Center Attestation Primitives (DCAP) <ref type="bibr" target="#b5">[6]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Gramine</head><p>Gramine <ref type="bibr" target="#b6">[7]</ref>, initially known as Graphene <ref type="bibr" target="#b7">[8]</ref>, is a LibOS designed to enable unmodified Linux binaries to run within Intel SGX enclaves. The core purpose of a LibOS is to intercept system calls from an application and resolve them directly within user space whenever possible. Gramine extends this capability by integrating support for SGX, ensuring that the entire application, including the LibOS itself, operates within an SGX enclave transparently to the user. To execute an application with Gramine, the required effort is minimal and involves writing a manifest in a declarative manner. This manifest specifies all the options necessary for the execution and the customization of SGX features. Once the manifest is prepared, the workload can be executed using a set of commands from the Gramine toolchain. Although this LibOS was one of the first to support SGX, it remains highly competitive and continuously evolves to incorporate new SGX features, such as SGXv2's EDMM.</p><p>One of Gramine's most notable properties is its support for multiprocessing and the related system calls, such as fork, vfork, clone, and execve. This support allows multiprocessing to be handled transparently, much like in non-SGX environments. For example, when a fork occurs, a second enclave is created, and the content is copied using message passing. Before this, a local attestation procedure is conducted between the enclaves, establishing a secure TLS channel for future communications. This method of handling multiprocessing is known as Enclave-Isolated Processes (EIP) (Figure <ref type="figure" target="#fig_1">2a</ref>), where each enclave contains an instance of the LibOS.</p><p>The EIP approach is inherently expensive in terms of execution time. 
Creating a process within an enclave is costly, and inter-enclave communication requires exchanging encrypted messages over a secure TLS channel. However, despite these disadvantages, the EIP method has significant advantages. The primary purpose of a LibOS with SGX integration is to facilitate the transition of workloads from an unsafe environment to an enclave. By supporting system calls like fork and adopting EIP for multiprocessing, Gramine allows applications that use multiple processes to be deployed quickly, with no more effort than single-process applications. This ease of deployment is crucial for transitioning existing applications to secure SGX environments. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Occlum</head><p>Occlum <ref type="bibr" target="#b8">[9]</ref> is a toolchain that includes a LibOS designed to run applications inside SGX enclaves. To facilitate the transition of existing applications, the Occlum toolchain provides various utilities to prepare all necessary configurations for the building and running phases. Occlum aims to implement a LibOS that efficiently handles multitasking, a generic term referring to the parallel execution of multiple tasks. Occlum achieves this through a Software Fault Isolation (SFI) scheme called MPX-based Multi-Domain SFI (MMDSFI). In the MMDSFI scheme, each process resides alongside the LibOS within the single address space of an enclave. This approach, known as SFI-Isolated Processes (SIPs) (Figure <ref type="figure" target="#fig_1">2b</ref>), contrasts with the EIP scheme used by other LibOSes such as Gramine. The term "process" in the SIP scheme is somewhat misleading because the enclave maintains a single address space. Consequently, traditional process creation using the fork system call is not feasible, as fork requires giving the child process a separate copy of the parent's address space. Instead, Occlum creates processes using the spawn system call, mapping each process to an SGX thread. This limitation means that applications relying on fork-like system calls cannot run within Occlum's LibOS without modification. However, the SIP scheme offers significant advantages, such as reducing the cost of setting up new enclaves (creation, local attestation, and duplication of the parent process state) and lowering the communication cost between enclaves. The primary disadvantage of the SIP scheme is the reduced portability of existing applications that utilize fork. To address this, intermediate work -potentially nontrivial, or even impossible -may be required to replace fork calls with spawn. 
This additional effort can be a barrier for some applications, but the overall benefits of the SIP scheme can make it a worthwhile trade-off for some use cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Bowtie2: DNA alignment</head><p>Bowtie2<ref type="foot" target="#foot_0">1</ref> ([10], <ref type="bibr" target="#b10">[11]</ref> and <ref type="bibr" target="#b11">[12]</ref>) is a tool used for aligning sequencing reads to large genomes. During the alignment, the DNA sequences are compared to identify regions of similarity. This process is crucial for various applications, such as identifying genetic variations. Bowtie2 was selected as the performance evaluation workload in this paper for several reasons:</p><p>Memory-intensive application: Bowtie2 is memory-intensive, making it an ideal candidate for evaluating the overhead associated with SGX, which secures the main memory using encryption techniques.</p><p>Sensitive data analysis: DNA sequence analysis involves highly sensitive data that must be protected, especially in remote environments such as cloud providers. Using Bowtie2 helps assess the effectiveness of SGX in safeguarding this data.</p><p>Multithreading performance: Bowtie2's performance can be tuned through multithreading. While using multiple threads typically enhances performance, evaluating this in the context of SGX threads is particularly insightful, as the benefits may not be as straightforward due to the additional overhead and security constraints imposed by SGX.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Related work</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Other SGX technologies</head><p>Besides Gramine and Occlum, there are other technologies whose purpose is to make it easy to run existing applications inside SGX enclaves:</p><p>• Haven <ref type="bibr" target="#b12">[13]</ref> is one of the pioneering approaches to execute an entire LibOS within an SGX enclave, enabling the execution of unmodified Windows binaries securely. • SCONE <ref type="bibr" target="#b13">[14]</ref> ensures the confidentiality and integrity of containerized applications by leveraging SGX. Unlike a LibOS, SCONE uses a thinner shielding layer to protect the application from the untrusted host OS. This means there is no entire LibOS within the enclave, but only some much lighter shielding modules. • Panoply <ref type="bibr" target="#b14">[15]</ref> is another approach that tries to minimize the amount of code that needs to reside inside an SGX enclave. It introduces the concept of a micro-container, which encapsulates units of code and data isolated within SGX enclaves. • SGX-LKL <ref type="bibr" target="#b15">[16]</ref> enables Linux binaries to run inside SGX enclaves, similar to a LibOS approach but based on the Linux Kernel Library (LKL). It combines the flexibility of Linux with the security benefits of SGX, providing a lightweight solution for running Linux-based applications securely within enclaves. • Ryoan <ref type="bibr" target="#b16">[17]</ref> leverages SGX to process sensitive data securely in environments considered untrusted, both in terms of the application to run and the platform itself.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">SGX performance analysis</head><p>Performance represents a significant concern in the realm of confidential computing. Although the goal is to achieve privacy, it is crucial not to compromise the execution time in chasing it. The study <ref type="bibr" target="#b17">[18]</ref> conducted a performance evaluation using HPC benchmarks within SGX enclaves. The work included a comparison of performance between Gramine and Occlum, although this comparison is inherently limited due to Occlum's lack of support for multiprocessing, which is particularly relevant in HPC contexts. To address this limitation, our work focuses on evaluating a single real-world multithreaded workload rather than synthetic benchmarks. This approach ensures a fair comparison between Gramine and Occlum, providing valuable insights into their performance. Another performance study <ref type="bibr" target="#b18">[19]</ref> compares Intel SGX and AMD Secure Encrypted Virtualization (SEV)-based TEEs. Specifically, SCONE is employed for execution on SGX. HPC benchmarks have been used, encompassing traditional scientific computing, machine learning tasks, and graph analytics.</p><p>In our recent work <ref type="bibr" target="#b19">[20]</ref>, the reference workload focused on the initial two steps of the Next Generation Sequencing (NGS) variant calling pipeline, which has been fully migrated to a cloud-based HPC environment <ref type="bibr" target="#b20">[21]</ref>. Specifically, one of these steps involves the execution of Bowtie2 using Gramine.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Methods</head><p>This section outlines the setup of the execution environments for the Bowtie2 DNA alignment bioinformatics workload. The configurations were designed to ensure fairness across the different LibOS environments (Gramine and Occlum). Only the crucial aspects of the configuration files are presented for each setup. Both LibOSes were established using Dockerfiles, created based on the existing Docker images provided by the respective maintainers. A public GitHub repository<ref type="foot" target="#foot_1">2</ref> was established to provide insight into the configurations implemented for running in the various environments. However, due to confidentiality concerns, it was not possible to publish the DNA reads input data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Bare-metal</head><p>To use Bowtie2 on a native system, it is possible to easily rely on package managers such as Bioconda<ref type="foot" target="#foot_2">3</ref>, which provides a distribution of bioinformatics software as a channel for the versatile Conda<ref type="foot" target="#foot_3">4</ref> package manager. However, in this study, the executables were built directly from the downloaded sources to facilitate fair comparisons between all execution environments (bare-metal, Gramine, and Occlum). In order to run Bowtie2, it is necessary to specify the basename of the index for the reference genome and the two files containing the paired-end reads (short DNA sequences). An example command for performing the alignment against the human hg38 genome is:</p><p>bowtie2 -S "out.sam" -x "Homo_sapiens_assembly38" \
    -1 "sample.r_1_val_1.fq.gz" -2 "sample.r_2_val_2.fq.gz" \
    -p num_of_threads</p><p>In this command, the -x option is used to specify the reference genome. The -S option designates the output file in .sam (Sequence Alignment/Map) format, and the -1 and -2 options are for the compressed paired-end reads in .fq (FASTQ) format. The -p option specifies the number of parallel threads to be used for searching; each thread runs on a different core, enabling all threads to find alignments in parallel.</p></div>
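The paper states that the binaries were built from the downloaded sources rather than installed via Bioconda. A typical sequence for doing so, assuming the upstream GitHub repository and its default Make-based build (neither is quoted from the paper's configuration), might look like:

```shell
# Hypothetical build-from-source sequence; the repository URL and Make
# targets are assumptions, not quoted from the paper.
git clone https://github.com/BenLangmead/bowtie2.git
cd bowtie2
make                        # builds bowtie2-align-s among other binaries
./bowtie2-align-s --version
```

Building from identical sources in every environment ensures that any measured difference comes from the execution environment, not from compiler flags or packaging choices.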
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Gramine</head><p>A manifest must be compiled to run an unmodified Linux binary inside an SGX enclave using Gramine. This manifest contains all the configuration information about the LibOS and the SGX enclave. In the Gramine toolchain, the gramine-manifest executable processes a manifest template, which can include Jinja syntax for customization. Using this template simplifies the creation of the manifest and allows for more flexible configuration. To streamline the process of creating the manifest required to run Bowtie2 (bow.manifest), a Makefile was written that also includes the recipe below:</p><p>bow.manifest: manifest.template
	gramine-manifest -Dthreads=num_of_threads $&lt; &gt;$@</p><p>As can be observed from the previous recipe, a manifest.template must be prepared in order to generate bow.manifest. In the template file, all the arguments needed for execution are passed as environment variables in the following Gramine option:</p><p>loader.argv = ["/bowtie2-align-s", "-S", "/out.sam",
               "-x", "/Homo_sapiens_assembly38",
               "-1", "/sample.r_1.fq.gz", "-2", "/sample.r_2.fq.gz",
               "-p", "{{ threads }}"]</p><p>The options specified in the manifest.template are self-explanatory in relation to the bare-metal execution of Bowtie2. It is important to note that the bowtie2-align-s binary is run directly, rather than Bowtie2 itself. The latter is a Perl wrapper that selects the appropriate aligner to use. The wrapper is bypassed to simplify the process and ensure a smoother comparison with Occlum. Consequently, bowtie2-align-s is set as the LibOS entry point in the manifest.template, meaning it is the code executed immediately after the enclave is ready:</p><p>libos.entrypoint = "/bowtie2-align-s"</p><p>For handling the EDMM feature, Jinja syntax was used, still within manifest.template. If the environment variable edmm is set to 1, the feature is enabled; otherwise, it is not. This configuration also allows specifying the size of the enclave and the number of threads available inside the enclave. The semantics of these configurations differ depending on whether EDMM is enabled. With EDMM enabled, sgx.enclave_size refers to the maximum size the enclave can reach, and sgx.max_threads represents the number of TCS EPC pages allocated before execution. If more threads are required during execution, additional TCS pages will be created on demand. If EDMM is disabled, the options are straightforward: sgx.enclave_size sets the fixed size of the enclave, and sgx.max_threads specifies the total number of threads that can be used, both set at the time of enclave creation. The following snippet implements what has just been described:</p><p>{% if env.get('edmm', 0) == '1' %}
sgx.edmm_enable = true
sgx.enclave_size = "max_enclave_size"
sgx.max_threads = number_of_preallocated_threads
{% else %}
sgx.edmm_enable = false
sgx.enclave_size = "enclave_size"
sgx.max_threads = max_number_of_threads
{% endif %}</p><p>Once the bow.manifest is obtained from the Makefile, the SGX manifest (bow.manifest.sgx) is also created using the Gramine toolchain, and finally, the application is run simply with the command:</p><p>gramine-sgx bow</p></div>
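A complete manifest.template also needs a loader entry point, filesystem mounts, and trusted-file entries, which the excerpts in the text omit. The sketch below shows what those remaining parts might look like; the paths are illustrative assumptions, and the key names follow recent Gramine releases, which may differ from the version used in the paper:

```toml
# Illustrative sketch of the remaining manifest pieces; the mounted paths
# are assumptions, and key names follow recent Gramine releases.
loader.entrypoint = "file:{{ gramine.libos }}"
libos.entrypoint = "/bowtie2-align-s"

fs.mounts = [
  { path = "/lib", uri = "file:{{ gramine.runtimedir() }}" },
  { path = "/bowtie2-align-s", uri = "file:bowtie2-align-s" },
]

sgx.trusted_files = [
  "file:{{ gramine.libos }}",
  "file:bowtie2-align-s",
]
```

Input files such as the genome index and the read archives would likewise need mount and trusted- or allowed-file entries before gramine-sgx can open them.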
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Occlum</head><p>To launch a Linux executable inside Occlum, it is necessary to create a workspace that includes the LibOS image that will host the executable inside the enclave. Occlum provides a comprehensive toolchain to facilitate the deployment of this instance. First, the workspace is created using the occlum init command. Subsequently, the file system inside the LibOS must be configured. This configuration is achieved using the copy_bom tool, where an input file bow.yaml specifies that the bowtie2-align-s executable is to be mounted inside the /bin folder. This process ensures the executable is correctly placed within the LibOS image for execution inside the SGX enclave. To achieve this, the file bow.yaml must contain the corresponding copy directives.</p><p>Table <ref type="table">1</ref>: Experiment configurations. Both LibOSes have been created starting from the authors' public Docker images: Gramine (gramineproject/gramine:stable-focal) and Occlum (occlum/occlum:0.30.0-ubuntu20.04). Some options are N/A because they cannot be specified in the referenced LibOS.</p></div>
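The exact contents of bow.yaml are not reproduced in this extraction. A minimal copy_bom BOM that places bowtie2-align-s under /bin could look like the sketch below, assuming the targets/copy syntax of current copy_bom releases (the source path is an illustrative assumption):

```yaml
# Illustrative copy_bom BOM; the source path is an assumption, and the
# exact syntax may vary across Occlum versions.
targets:
  - target: /bin
    copy:
      - files:
          - ../bowtie2-align-s
```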
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Next, it is necessary to configure the Occlum.json file, which describes all the characteristics of the SGX enclave. When EDMM is not active, it is possible to specify the enclave size and the maximum number of threads in this way:</p><p>"resource_limits": {
    "user_space_size": "enclave_size",
    "max_num_of_threads": num_max_of_threads
}</p><p>Instead, the following options should be additionally specified to configure EDMM:</p><p>"resource_limits": {
    ...
    "user_space_max_size": "enclave_max_size",
    "init_num_of_threads": num_of_preallocated_threads
}</p><p>Thus, a single Occlum.json file can turn the EDMM features on or off. Consequently, two different .json configuration files were created to delineate the desired features for the experiments. The occlum build command is used to construct the Occlum SGX enclave and generate its associated file system image according to the specifications in the Occlum.json configuration file. Finally, to run Bowtie2, the following command must be executed, specifying all the necessary options:</p><p>occlum run bowtie2-align-s -x "/Homo_sapiens_assembly38" \
    -1 "sample.r_1_val_1.fq.gz" -2 "sample.r_2_val_2.fq.gz" \
    -S "out.sam" -p num_of_threads</p></div>
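Assembled end to end, the Occlum deployment described above amounts to roughly the following command sequence. The copy_bom flags and the /bin-prefixed executable path are assumptions based on current Occlum tooling, not commands quoted from the paper:

```shell
# Sketch of the Occlum workflow from the text; the copy_bom flags and
# the /bin path are assumptions based on current Occlum tooling.
mkdir occlum_workspace
cd occlum_workspace
occlum init
copy_bom -f ../bow.yaml --root image --include-dir /opt/occlum/etc/template
occlum build          # reads Occlum.json in the workspace
occlum run /bin/bowtie2-align-s -x "/Homo_sapiens_assembly38" \
    -1 "sample.r_1_val_1.fq.gz" -2 "sample.r_2_val_2.fq.gz" \
    -S "out.sam" -p num_of_threads
```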
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>To assess the performance of Bowtie2 across various environments, we utilized the configurations detailed in Table <ref type="table">1</ref>. The experimental setup involved a machine powered by an Intel Xeon Gold 6346 CPU operating at 3.10 GHz, with approximately 400 GB of available RAM. Figure <ref type="figure" target="#fig_2">3</ref> illustrates the execution times of Bowtie2 under various configurations for both small and large input sizes. Each experiment was performed 10 times. Since no significant variance or outliers were observed, the mean value was considered representative of the configurations. A small input size refers to aligning approximately 10,000 reads, while a large input size involves aligning about 3 million reads. As shown in Figure <ref type="figure" target="#fig_2">3a</ref>, native execution completes rapidly, within seconds, for small workloads. However, both LibOSes exhibit poor performance in this scenario, although, as can be noticed, Occlum outperforms Gramine. Furthermore, there is a lack of scalability: increasing the number of threads does not significantly reduce execution time, even in the bare-metal configuration. Enabling EDMM generally leads to a stable and acceptable increase in execution times across most cases, except when Bowtie2 requires 32 threads on Gramine. The excessive overhead observed may result from the dynamic management of the TCS enclave pages. As indicated in Table <ref type="table">1</ref>, Gramine's EDMM configuration preallocates 32 threads. Although Bowtie2 operates with exactly 32 threads, Gramine requires at least three additional threads for managing inter-process communication (IPC), asynchronous tasks, and secure TLS communication within the LibOS, and the overhead likely arises from the effort needed to allocate these supplementary threads. 
These findings discourage the adoption of SGX technologies due to the unacceptable overhead compared to the native case and the absence of scalability. However, it is worth noting that scalability is also lacking in the native case. Consequently, the experiment was repeated with the same configurations detailed in Table <ref type="table">1</ref> but applied to a much larger number of sequences, and the results are depicted in Figure <ref type="figure" target="#fig_2">3b</ref>. Some patterns evident in the small input size scenario are also observed here. For instance, in the bare-metal environment, execution times are significantly faster than in the LibOSes, and Gramine's dynamic thread management severely impacts performance when Bowtie2 uses 32 threads. However, unlike the small input size case, the plot indicates some scalability. All configurations exhibit good scaling, with Occlum again performing slightly better than Gramine.</p><p>The critical consideration is whether the use of trusted hardware techniques such as Intel SGX is justified. Although SGX provides privacy guarantees, it also significantly increases execution times. In the case illustrated in Figure <ref type="figure" target="#fig_2">3a</ref>, the technology appears unfeasible due to the uneven trade-off between overhead and privacy. Conversely, Figure <ref type="figure" target="#fig_2">3b</ref> suggests that if the application is parallelizable, good scaling can be achieved even with SGX computations as the number of threads increases. In detail, empirical evidence indicates that running Bowtie2 on bare metal and then re-running the same workload on SGX with twice as many threads often increases performance. This effect is further highlighted in Figure <ref type="figure" target="#fig_3">4</ref>, which presents scalability comparison plots. Figure <ref type="figure" target="#fig_3">4a</ref> demonstrates the performance gains of bare-metal execution when the number of threads is doubled. 
Figures <ref type="figure" target="#fig_3">4b and 4c</ref> provide comparisons between bare-metal and Gramine, and between bare-metal and Occlum, respectively, under the same conditions. As observed, doubling the number of threads under SGX often results in a performance gain compared to the native case. In some instances, the gain can be substantial; for instance, Occlum with two threads shows a 38.96% performance increase over single-threaded bare-metal execution. Nevertheless, performance gains are not always achievable, particularly when approaching the scalability limits of the problem. For example, in this bioinformatics workload, the performance gain from 16 to 32 threads is marginal, even in the native case, yielding just a 67% improvement compared to the average 95% increase. Specifically, when comparing 16 native threads to 32 threads in Gramine, there is a performance decrease of 46%, while Occlum shows a decrease of 23% under the same conditions. However, excluding the latter case, SGX with twice as many threads not only eliminates the overhead compared to non-confidential native execution but also achieves, on average, a 15% performance gain. A final consideration that emerges from the experiments is that Occlum generally outperformed Gramine in terms of execution time and scalability. However, it is worth noting that Gramine supports multiprocess applications, unlike Occlum, which makes Gramine particularly attractive for the portability of legacy workloads. A similar argument applies to the EDMM feature. Although, on average, EDMM increases execution time, it simplifies the configuration of LibOSes by eliminating the need to estimate the memory footprint, thus facilitating the portability of existing applications.</p></div>
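The performance-gain percentages quoted above follow from a simple comparison of mean execution times. The following Python sketch shows the computation; the timing values are hypothetical placeholders, not the measured results:

```python
def performance_gain(t_base: float, t_new: float) -> float:
    """Percentage reduction in execution time of t_new relative to t_base.

    Positive values mean the new configuration is faster;
    negative values mean it introduces overhead.
    """
    return (t_base - t_new) / t_base * 100.0


# Hypothetical mean execution times in seconds (NOT measured data):
# bare-metal with n threads vs. an SGX LibOS with 2n threads.
t_bare_n = 400.0   # placeholder: bare-metal, n threads
t_sgx_2n = 340.0   # placeholder: SGX LibOS, 2n threads

print(f"SGX with doubled threads: {performance_gain(t_bare_n, t_sgx_2n):.1f}% gain")
```

Under this convention, a negative value corresponds to the slowdowns reported for the LibOSes near the 32-thread scalability limit of the workload.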
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and future work</head><p>This study provides a foundational analysis of Intel SGX's performance for parallel executions. The introductory empirical observations obtained in our study offer essential insights into the feasibility of employing SGX for this kind of execution. The results indicate that doubling the threads almost invariably improves performance compared to an environment without hardware encryption techniques. This scenario is entirely plausible in remote environments managed by external providers, as users typically offload computations to remote systems due to insufficient local computational resources. In addition, these findings suggest that SGX could effectively mitigate the inherent overhead associated with encryption, thereby preserving privacy at runtime.</p><p>Future work may expand this performance analysis in two directions. The first direction involves a deeper exploration of SGX technologies as highlighted in Section 3.1, and a broader examination of other types of hardware that enable the establishment of a TEE, such as AMD SEV or Intel Trust Domain Extensions (TDX). The second direction focuses on an extensive analysis of multiprocess applications designed for HPC centers, extending beyond bioinformatics to encompass more general applications. By pursuing these two avenues, future research can provide a more comprehensive understanding of the capabilities and limitations of various hardware-based security technologies in different computational environments.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Remote computing scheme with or without confidential computing involved</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Different ways of handling parallelism using SGX enclaves. (a) Enclave-Isolated Processes (EIP), where each enclave is a separate process. (b) SFI-Isolated Processes (SIP), where a single enclave is used, and tasks are executed within a single address space.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Comparison of execution times of Bowtie2 in different environments. The results are the mean of 10 executions. (a) The input for Bowtie2 consists of 9,997 reads (small input size). (b) The input for Bowtie2 consists of 2,886,533 reads (big input size)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Bowtie2 with big input size: comparison of the scalability of different environments with respect to the bare-metal execution. (a) Performance gain of bare-metal vs. bare-metal with double threads. (b) Gramine performance gain of bare-metal vs. Gramine with double threads. (c) Occlum performance gain of bare-metal vs. Occlum with double threads</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/BenLangmead/bowtie2</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://github.com/lorenzobrescia/performance-SGX-Bowtie2</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://bioconda.github.io</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://docs.conda.io/en/latest/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://jinja.palletsprojects.com</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by the Spoke 1 "FutureHPC &amp; BigData" of ICSC -Centro Nazionale di Ricerca in High-Performance Computing, Big Data and Quantum Computing, funded by European Union -NextGenerationEU.</p></div>
			</div>


			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>(M. Aldinucci) https://alpha.di.unito.it/lorenzo-brescia/ (L. Brescia); https://alpha.di.unito.it/iacopo-colonnelli/ (I. Colonnelli); https://alpha.di.unito.it/marco-aldinucci/ (M. Aldinucci) 0009-0005-1147-496X (L. Brescia); 0000-0001-9290-2017 (I. Colonnelli); 0000-0001-8788-0829 (M. Aldinucci)</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Intel SGX Explained</title>
		<author>
			<persName><forename type="first">V</forename><surname>Costan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Devadas</surname></persName>
		</author>
		<ptr target="https://eprint.iacr.org/2016/086" />
		<imprint>
			<date type="published" when="2016">2016. 2024-07</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Trusted Execution Environment: What It is, and What It is Not</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sabt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Achemlal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bouabdallah</surname></persName>
		</author>
		<idno type="DOI">10.1109/Trustcom.2015.357</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Trustcom/BigDataSE/ISPA</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2015">2015. 2015</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="57" to="64" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="https://download.01.org/intel-sgx/latest/linux-latest/docs" />
		<title level="m">Intel, Intel Software Guard Extensions (Intel SGX) SDK for Linux* OS</title>
				<imprint>
			<date type="published" when="2024-07">2024. 2024-07</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Intel software guard extensions (intel sgx) support for dynamic memory management inside an enclave</title>
		<author>
			<persName><forename type="first">F</forename><surname>Mckeen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Alexandrovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Anati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Caspi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Leslie-Hurd</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rozas</surname></persName>
		</author>
		<idno type="DOI">10.1145/2948618.2954331</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Hardware and Architectural Support for Security and Privacy 2016</title>
				<meeting>the Hardware and Architectural Support for Security and Privacy 2016</meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Scarlata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rozas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Brickell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Mckeen</surname></persName>
		</author>
		<ptr target="https://community.intel.com/legacyfs/online/drupal_files/managed/57/0e/ww10-2016-sgx-provisioning-and-attestation-final.pdf" />
		<title level="m">Intel software guard extensions: EPID provisioning and attestation services</title>
				<imprint>
			<date type="published" when="2016">2016. 2024-07</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Supporting third party attestation for intel sgx with intel data center attestation primitives</title>
		<author>
			<persName><forename type="first">V</forename><surname>Scarlata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Beaney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zmijewski</surname></persName>
		</author>
		<ptr target="https://www.intel.com/content/dam/develop/external/us/en/documents/intel-sgx-support-for-third-party-attestation-801017.pdf" />
		<imprint>
			<date type="published" when="2018">2018. 2024-07</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Graphene-SGX: a practical library OS for unmodified applications on SGX</title>
		<author>
			<persName><forename type="first">C.-C</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Porter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vij</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 USENIX Conference on Usenix Annual Technical Conference, USENIX Association</title>
				<meeting>the 2017 USENIX Conference on Usenix Annual Technical Conference, USENIX Association</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="645" to="658" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Cooperation and security isolation of library OSes for multi-process applications</title>
		<author>
			<persName><forename type="first">C.-C</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Arora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bandi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Jannen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>John</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Kalodner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kulkarni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Oliveira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Porter</surname></persName>
		</author>
		<idno type="DOI">10.1145/2592798.2592812</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth European Conference on Computer Systems</title>
				<meeting>the Ninth European Conference on Computer Systems<address><addrLine>Amsterdam The Netherlands</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Occlum: Secure and Efficient Multitasking Inside a Single Enclave of Intel SGX</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3373376.3378469</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems</title>
				<meeting>the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems</meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="955" to="970" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Ultrafast and memory-efficient alignment of short DNA sequences to the human genome</title>
		<author>
			<persName><forename type="first">B</forename><surname>Langmead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Trapnell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pop</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Salzberg</surname></persName>
		</author>
		<idno type="DOI">10.1186/gb-2009-10-3-r25</idno>
	</analytic>
	<monogr>
		<title level="j">Genome Biology</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">R25</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Fast gapped-read alignment with Bowtie 2</title>
		<author>
			<persName><forename type="first">B</forename><surname>Langmead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Salzberg</surname></persName>
		</author>
		<idno type="DOI">10.1038/nmeth.1923</idno>
	</analytic>
	<monogr>
		<title level="j">Nature Methods</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="357" to="359" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Scaling read aligners to hundreds of threads on general-purpose processors</title>
		<author>
			<persName><forename type="first">B</forename><surname>Langmead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wilks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Antonescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Charles</surname></persName>
		</author>
		<idno type="DOI">10.1093/bioinformatics/bty648</idno>
	</analytic>
	<monogr>
		<title level="j">Bioinformatics</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="421" to="432" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Shielding Applications from an Untrusted Cloud with Haven</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Peinado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hunt</surname></persName>
		</author>
		<idno type="DOI">10.1145/2799647</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation</title>
				<meeting>the 11th USENIX Symposium on Operating Systems Design and Implementation</meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="267" to="283" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">SCONE: Secure Linux Containers with Intel SGX</title>
		<author>
			<persName><forename type="first">S</forename><surname>Arnautov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Trach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gregor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Knauth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Priebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lind</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Muthukumaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>O'keeffe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Stillwell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Goltzsche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Eyers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kapitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pietzuch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Fetzer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation</title>
				<meeting>the 12th USENIX Conference on Operating Systems Design and Implementation</meeting>
		<imprint>
			<publisher>USENIX Association</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="689" to="703" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Panoply: Low-TCB Linux Applications With SGX Enclaves</title>
		<author>
			<persName><forename type="first">S</forename><surname>Shinde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Le Tien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tople</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Saxena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">NDSS Symposium</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Priebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Muthukumaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lind</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Sartakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pietzuch</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1908.11143</idno>
		<title level="m">SGX-LKL: Securing the Host OS Interface for Trusted Execution</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Ryoan: A Distributed Sandbox for Untrusted Computation on Secret Data</title>
		<author>
			<persName><forename type="first">T</forename><surname>Hunt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Peter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Witchel</surname></persName>
		</author>
		<idno type="DOI">10.1145/3231594</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Comput. Syst</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="533" to="549" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Analyzing the Performance Impact of HPC Workloads with Gramine+SGX on 3rd Generation Xeon Scalable Processors</title>
		<author>
			<persName><forename type="first">S</forename><surname>Miwa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Matsuo</surname></persName>
		</author>
		<idno type="DOI">10.1145/3624062.3624267</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SC &apos;23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, SC-W &apos;23</title>
				<meeting>the SC &apos;23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, SC-W &apos;23</meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1850" to="1858" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Performance Analysis of Scientific Computing Workloads on General Purpose TEEs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Akram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Giannakou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Akella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lowe-Power</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Peisert</surname></persName>
		</author>
		<idno type="DOI">10.1109/IPDPS49936.2021.00115</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Parallel and Distributed Processing Symposium (IPDPS)</title>
				<imprint>
			<date type="published" when="2021">2021. 2021</date>
			<biblScope unit="page" from="1066" to="1076" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Secure Generic Remote Workflow Execution with TEEs</title>
		<author>
			<persName><forename type="first">L</forename><surname>Brescia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Aldinucci</surname></persName>
		</author>
		<idno type="DOI">10.1145/3642978.3652834</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Workflows in Distributed Environments, WiDE &apos;24</title>
				<meeting>the 2nd Workshop on Workflows in Distributed Environments, WiDE &apos;24</meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="8" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Porting the Variant Calling Pipeline for NGS data in cloud-HPC environment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Mulone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Awad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chiarugi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Aldinucci</surname></persName>
		</author>
		<idno type="DOI">10.1109/COMPSAC57700.2023.00288</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC)</title>
				<imprint>
			<date type="published" when="2023">2023. 2023</date>
			<biblScope unit="page" from="1858" to="1863" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
