<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1109/SP46214.2022.9833768</article-id>
      <title-group>
        <article-title>Confidential Computing: A Security Overview and Future Research Directions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Alessandro Bertan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Danilo Caracci</string-name>
        </contrib>
        <contrib contrib-type="author">
<string-name>Stefano Zanero</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Mario Polino</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Politecnico di Milano, Department of Electronics, Information and Bioengineering</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>42</volume>
      <issue>2014</issue>
      <fpage>03</fpage>
      <lpage>8</lpage>
      <abstract>
<p>By performing computations within hardware-based Trusted Execution Environments (TEEs), Confidential Computing protects data in use, which has been a longstanding challenge in data security. This paper provides an overview of Confidential Computing technologies, with a focus on security implications and recent developments. We begin with an introduction to Confidential Computing, its principles, and its relevance to data security. We outline the threat model for Confidential Computing, considering in-scope and out-of-scope attack vectors. We analyze published attacks, their complexities, and mitigation approaches in the context of Confidential Computing. We analyze data security within TEEs, including encryption, access control, and memory protection mechanisms across different technologies (e.g., Intel TDX, AMD SEV, Arm CCA). Finally, we explore future research directions, including the challenges related to the integration of TEEs and emerging technologies like Compute Express Link (CXL) to further enhance data-in-use security, and the use of Confidential Computing in Machine Learning applications.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Data may exist in three different states: data in transit, data at rest, and data in use. Data is in transit
when it is traversing the network, at rest when it resides on a storage or memory device, and in
use while it is being processed. Protecting sensitive data in all of these states is of critical importance:
while cryptography has been successfully applied to protect data in transit and data
at rest, the protection of data in use is still an open problem, with only a few proposals that aim at solving it.</p>
      <p>
        Confidential Computing [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is the protection of data in use by performing computation in a
hardware-based and attested Trusted Execution Environment (TEE).
      </p>
<p>The definition specifies
hardware-based for a good reason: security at any layer of the computing stack can be circumvented
by exploiting a vulnerability at a lower level. By providing security at the lowest possible level, the
number of parties that must be trusted is reduced.</p>
<p>Confidential Computing is particularly relevant in the context of third-party cloud services.
Confidential Computing-enabled hardware allows Cloud Service Providers (CSPs) to give users the
possibility to create, deploy, and manage Virtual Machines (VMs), with guarantees on the confidentiality
and integrity of the data on which they perform their computations.</p>
<p>In a traditional setting, CSPs give their users the possibility to create VMs that share the same
hardware and are managed by a hypervisor. The data transferred to the VM by the users is then
managed by the CSP, which is trusted with the confidentiality and integrity of this data. This setting
requires trust at multiple levels: the user needs to trust both the provider of the hypervisor and the
CSP not to share their data or tamper with them. In a Confidential Computing-enabled setting, the
only point of trust is the hardware manufacturer, i.e., the one that provides the authenticated firmware
which guarantees the confidentiality and integrity of the data in use inside the secure VM.</p>
      <p>
        Gradually, all the main players in the processor industry are designing new chips with hardware
extensions to support Confidential Computing in their server CPUs. At the moment, there are mainly
four technologies that can be used to provide Confidential Computing capabilities to users: Intel Trust
Domain Extensions (TDX) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], AMD Secure Encrypted Virtualization (SEV) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], IBM Protected
Execution Facilities (PEF) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and Arm Confidential Compute Architecture (CCA) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. These technologies
have been gradually introduced and refined in server-side processors (e.g., Intel Xeon or AMD EPYC
processors) since 2016, when AMD SEV was first introduced. They enable CSPs to give users the possibility
to create and deploy confidential VMs, and have lately been adopted by the main CSPs. At the time
of writing, Microsoft Azure allows deploying confidential VMs with TDX-enabled fourth-generation
Intel Xeon processors [6], while Google Cloud and Amazon Web Services use the latest iteration of AMD
SEV, named Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP) [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. A detailed
description of the technologies mentioned above is available in Appendix A.
      </p>
<p>In this paper, we analyze the current status of existing Confidential Computing technologies and
provide insights about future research directions in the field. In particular, the contributions of
this work are the following:
• A security-oriented survey on Confidential Computing technologies, which focuses on existing
attacks and mitigations.
• A comparison of the threat models of commercially available Confidential Computing solutions.
• Future research directions on Confidential Computing, including the integration of TEEs with the CXL
protocol and Machine Learning applications.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Threat model</title>
<p>As defined by the Confidential Computing Consortium (CCC), the goal of Confidential Computing is to
reduce the ability of the owner, operator, or exploiter of a platform to access the private data and code
inside TEEs, so that it is not economically or logically viable to attack the platform during execution.</p>
<p>There are various threat vectors that can be used to exploit the vulnerabilities in a system: not all of
them are addressed by Confidential Computing; some are explicitly considered in scope while
others are considered out of scope. In particular, the following threat vectors are considered to be in
scope for Confidential Computing: (1) software attacks, i.e., attacks on software and firmware installed
on the host, including the OS, the hypervisor, BIOS, and so on; (2) protocol attacks, i.e., attacks on
protocols associated with attestation as well as workload and data transport; (3) cryptographic attacks;
(4) basic physical attacks: cold DRAM extraction, bus and cache monitoring, plugging of attack devices
into an existing port; (5) basic upstream supply-chain attacks, i.e., attacks that compromise TEEs, such
as adding debugging ports.</p>
<p>There is a set of threat vectors for which the mitigations vary significantly based on the silicon
implementation, and there are some grey areas (such as integrity, rollback, and replay attacks) that may
be considered in scope by some vendors and out of scope by others. Sophisticated physical attacks
are out of scope, as well as availability attacks on the TEEs. A key assumption behind the guarantees
provided by Confidential Computing is that there are no exploitable side channels that the owner (or
other entities with access to the system) could use to infer information about the data or execution.
Any existing side channel could allow attackers to infer information about data or operations inside a
TEE by exploiting the knowledge of the architecture of the TEE itself. The CCC states that preventing
side-channel attacks depends not only on the TEE manufacturers but also on third-party vendors and
application developers, thus considering this class of attacks out of scope.</p>
<p>Differences among the threat models of commercial Confidential Computing solutions are
summarized in Table 1.</p>
      <sec id="sec-2-1">
        <title>2.1. Data Security and Memory Protection</title>
<p>The main memory, i.e., where data in use resides, is the main asset that Confidential Computing aims
to protect. Different Confidential Computing solutions make different assumptions about the threats
that the main memory is subject to and this, as explained in the previous section, leads to slightly
different threat models between them.</p>
<p>There are three security requirements that all Confidential Computing technologies must meet:
• Data confidentiality: unauthorized entities cannot view data while it is in use within the TEE.
• Data integrity: unauthorized entities cannot add, remove, or alter data while it is in use within
the TEE.</p>
        <p>• Code integrity: unauthorized entities cannot add, remove, or alter code executing in the TEE.</p>
<p>Among the different technologies, the most common solution to provide confidentiality and integrity
to the data residing in the main memory is a combination of encryption and access control. Every
technology implements this in a different way, with different results on the guarantees provided.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Existing attacks and mitigation proposals</title>
<p>In the past years, researchers from both industry and academia have found vulnerabilities in Confidential
Computing platforms and published attacks that exploit them. Mitigations for the previously mentioned
attacks have been proposed in research papers or distributed by the manufacturers in the form of
microcode updates. In this section, we describe existing attacks and mitigation proposals for Intel TDX
and AMD SEV-SNP. To the best of our knowledge, no attacks have been published regarding Arm CCA.</p>
      <sec id="sec-3-1">
        <title>3.1. Intel TDX</title>
<p>The main vulnerabilities that have been disclosed for Intel TDX come from a report published by
Google Project Zero and Google Cloud Security in 2023 [9]. To the best of our knowledge, no academic
work has been published regarding attacks on Intel TDX.</p>
<p>Exit Path Interrupt Hijacking The attack described in this section exploits vulnerabilities in one of
the Attested Code Modules (ACMs) provided by Intel, which are the TDX module, the Non-Persistent
SEAM Loader (NP-SEAMLDR), and the Persistent SEAM Loader (P-SEAMLDR). The latter two are code
modules whose function is to ultimately load the TDX module into secure memory.</p>
<p>Since the startup BIOS code is outside the Trusted Computing Base (TCB) for Intel TDX, Intel
designed the NP-SEAMLDR to dynamically establish a root of trust on which the rest of the TDX
infrastructure is loaded. The NP-SEAMLDR performs two main tasks before returning control to
the hypervisor: it validates the system configuration and installs the P-SEAMLDR into the SEAMRR
memory region. The hypervisor can then interact with the trusted P-SEAMLDR to install the signed
TDX module into the SEAMRR memory region.</p>
<p>All the code outside these ACMs is outside the TCB, and can thus attack the NP-SEAMLDR. For this
reason, the ACM protects itself from exploitation in different ways. First of all, all external interrupts
are masked and hardware breakpoints are disabled. Then, software exceptions are inhibited by setting
the Interrupt Descriptor Table Register (IDTR) limit to zero, which leads to any exception causing a
triple fault and system shutdown. Finally, the binary is loaded at a known virtual address and no ASLR
is applied, unlike P-SEAMLDR and the TDX module.</p>
<p>From an attacker perspective, there are two interesting windows during the execution of the
NP-SEAMLDR: shortly after ACM entry, the host’s Interrupt Descriptor Table (IDT) is still configured
before the IDTR is set to zero, and shortly before ACM exit, the IDT is restored to the host’s. If an exception
can be forced to occur within these windows, the attacker can gain control over the instruction pointer
while in privileged mode. Intel fixed this vulnerability in the 1.0 release of TDX by checking that every
return address in the exit path is canonical and non-malicious.</p>
<p>ECC Disablement Vulnerability This vulnerability depends on the ability of the attacker to
misconfigure the system. If a privileged attacker can successfully disable Error-Correcting Code (ECC),
Rowhammer [10] bit flips could be more likely.</p>
<p>This is not an issue if TDX cryptographic integrity is enabled: the HMAC provides a protection
that is similar to ECC with respect to memory integrity attacks. However, if only TDX logical integrity
is enabled, there is a single bit per cache line: if the attacker is able to disable ECC, then they would
only need to flip a single bit in order to bypass the TDX logical integrity checks. This leads to TDX
being vulnerable to Rowhammer-style attacks, where a malicious VMM tries to flip bits in memory
owned by the TDX module or by Trust Domains (TDs).</p>
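<p>To illustrate why a single metadata bit is so much weaker than a keyed MAC, the following Python sketch compares the two integrity schemes under a single bit flip. This is a toy model, not the actual TDX implementation: the key, the 64-byte line size, and the 8-byte tag length are illustrative assumptions.</p>

```python
import hmac, hashlib, os

KEY = os.urandom(16)  # stand-in for a per-TD integrity key (hypothetical)

def hmac_tag(line: bytes) -> bytes:
    # Cryptographic integrity: keyed MAC computed over the whole cache line.
    return hmac.new(KEY, line, hashlib.sha256).digest()[:8]

# --- cryptographic integrity: any bit flip is detected ---
line = bytearray(os.urandom(64))        # one 64-byte cache line
tag = hmac_tag(bytes(line))
line[0] ^= 0x01                         # Rowhammer-style single bit flip
assert not hmac.compare_digest(hmac_tag(bytes(line)), tag)  # flip detected

# --- logical integrity: a single metadata bit per line ---
# Toy model: the check only consults one owner bit per cache line.
owner_bit = 1                           # line marked as TD-owned
owner_bit ^= 1                          # attacker flips that single bit...
assert owner_bit == 0                   # ...and the check no longer protects the line
```

With only one bit of integrity metadata, a single well-placed flip defeats the check; with a keyed MAC, every flip changes the expected tag.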
<p>Intel resolved this issue in fourth-generation Intel Xeon Scalable CPUs so that the control registers
that contain configuration values for ECC are locked before MCHECK runs. Then, MCHECK validates that
their values are configured properly before enabling TDX.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. AMD SEV, SEV-ES, SEV-SNP</title>
        <p>Since the introduction of AMD SEV in 2016, several attacks against this new architectural extension
have been published. Some of these attacks have been mitigated with the introduction of Secure
Encrypted Virtualization with Encrypted State (SEV-ES) and SEV-SNP, while others may still be
applicable under the right conditions. Google Project Zero and Google Cloud Security published a
security report on AMD SEV-SNP as well [11].</p>
<p>Crossline When a secure VM is created, it is assigned an Address Space Identifier (ASID) by the
hypervisor, which then notifies the AMD Secure Processor (SP) that a new secure VM has been created. The
AMD SP creates the ephemeral encryption key associated with the newly assigned ASID, which is used to
look up the key whenever a private memory page belonging to the VM needs to be decrypted. The ASIDs
are not authenticated: this is the logic flaw behind Crossline [12], a class of attacks that rely on the
ability of a malicious VM to change its ASID into the one of the victim VM, thus being able to decrypt its
memory. There are no assumptions on the adversary’s knowledge about the contents of the VM: the only
assumption is that they control the hypervisor. This assumption is in line with the threat model of SEV.</p>
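<p>The logic flaw can be sketched in a few lines of Python. This is a toy model of the unauthenticated ASID-to-key lookup, not AMD's actual implementation: the key table, the XOR-keystream cipher, and the ASID values are illustrative assumptions.</p>

```python
import os
from hashlib import blake2b

# Toy stand-in for the AMD SP's per-ASID ephemeral keys (hypothetical model).
asid_keys = {1: os.urandom(16), 2: os.urandom(16)}   # ASID 1 = victim, 2 = attacker

def toy_encrypt(asid: int, plaintext: bytes) -> bytes:
    # Deterministic keyed keystream as a stand-in for SEV memory encryption;
    # the key is looked up purely by the (unauthenticated) ASID.
    ks = blake2b(asid_keys[asid], digest_size=len(plaintext)).digest()
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# Victim VM (ASID 1) writes a secret to its encrypted memory.
ciphertext = toy_encrypt(1, b"victim secret")

# The ASID is hypervisor-controlled metadata and is never authenticated: a
# malicious hypervisor relabels the attacker VM with the victim's ASID, so the
# memory controller decrypts the victim's page with the victim's key.
attacker_claimed_asid = 1
recovered = toy_encrypt(attacker_claimed_asid, ciphertext)  # XOR cipher: decrypt = encrypt
assert recovered == b"victim secret"
```

The fix in SEV-SNP is not to authenticate the ASID directly but to bind pages to owners through the RMP, so that this relabeling no longer grants access.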
<p>Two versions of Crossline attacks have been proposed. Crossline v1 explores the use of nested
page table walks to decrypt the victim’s memory, while Crossline v2 is a more powerful variant that
allows the attacker VM to execute an instruction inside the encrypted memory of the victim VM. The
huge advantage of these attacks is that they are stealthy: they rely on modifying the state of the attacker
VM alone, and these changes are not propagated to the victim VM’s state, so there is no way for it
to notice the ongoing attack. Moreover, it is even possible for the attacker to rewind the state of their
VM to eliminate any trace of the attack.</p>
<p>The introduction of SEV-ES, which also encrypts the control structures of the VMs, increased
the difficulty of successfully executing these attacks. In fact, while version 1 is still applicable with
some further steps, the impossibility of manipulating the values in specific registers makes version 2
unfeasible. With the introduction of SEV-SNP, specifically aimed at preventing attacks against memory
integrity, both versions of this attack have been rendered unfeasible due to the new Reverse Map
Table (RMP) walk mechanism.</p>
<p>Ciphertext Side-Channel Attacks As explained in Appendix A.2, the RMP is used to perform
different checks depending on who is requesting access to a memory page. Specifically, if the hypervisor
is requesting read access to a memory page, there will be no RMP table walk, even if the requested
page belongs to a secure VM.</p>
<p>This behavior opens the door to Cipherleaks [13]: this attack, called a “ciphertext side-channel
attack”, allows the privileged hypervisor to monitor the changes of the ciphertext blocks on the guest
VM’s memory pages and exfiltrate secrets from the guest. This is also possible thanks to the mode of
operation used by SEV’s memory encryption: XOR-Encrypt-XOR (XEX) encrypts each 16-byte memory
block independently and preserves the one-to-one mapping between the plaintext and ciphertext pairs
for each physical address.</p>
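<p>The property that makes this leak possible is determinism: for a fixed physical address, the same plaintext block always produces the same ciphertext block. The Python sketch below models this with a toy address-tweaked keystream; it is not AES-XEX, and the key and addresses are illustrative assumptions, but the observable behavior is the same.</p>

```python
import os
from hashlib import blake2b

KEY = os.urandom(16)

def xex_like_encrypt(block: bytes, phys_addr: int) -> bytes:
    # Toy stand-in for XEX: deterministic, tweaked only by the physical
    # address, so equal plaintexts at the same address give equal ciphertexts.
    assert len(block) == 16
    tweak = phys_addr.to_bytes(8, "little") + b"\0" * 8
    ks = blake2b(KEY, digest_size=16, salt=tweak).digest()
    return bytes(b ^ k for b, k in zip(block, ks))

addr = 0x1000
c1 = xex_like_encrypt(b"A" * 16, addr)
c2 = xex_like_encrypt(b"B" * 16, addr)
c3 = xex_like_encrypt(b"A" * 16, addr)   # same value written back later

# A hypervisor that can only read ciphertext still learns when a block
# changes, and when a previously seen value recurs (enabling dictionary
# attacks with known plaintext-ciphertext pairs).
assert c1 != c2 and c1 == c3
```

A randomized mode (e.g., one that mixes in a fresh nonce per write, as in AMD's VMSA mitigation) breaks the `c1 == c3` equality that the attack relies on.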
<p>In particular, the Cipherleaks attack monitors the ciphertext of the VM Save Area (VMSA)
during VMEXITs; by comparing the ciphertext blocks with the ones collected during previous
VMEXITs, the adversary can learn that the corresponding register values have changed, and infer the
execution state of the guest VM. Moreover, by looking up a dictionary of plaintext-ciphertext pairs
collected during the VM startup, the adversary is able to recover some selected values of the registers.
Due to the severity of this attack, AMD released a microcode patch to mitigate it. This patch enables
the third-generation AMD EPYC processors to include a nonce in the encryption of the VMSA area,
thus breaking the link between the plaintext and the ciphertext. However, even if this patch is enough
to make Cipherleaks unfeasible, it is not enough to prevent other ciphertext side-channel attacks that
exploit memory leakage from any other memory page than the VMSA. In fact, it was demonstrated that
ciphertext side-channel attacks can be successfully performed by targeting any memory region [14].</p>
        <p>
          As a defense against generalized ciphertext side-channel attacks, the Cipherfix framework [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]
has been recently proposed. This solution is based on binary instrumentation and tracks secret data
to identify critical memory accesses, which are then safeguarded by randomizing observable write
patterns. This way, the resulting binary does not leak information through the ciphertext side channel.
Since Cipherfix is a software-based defense mechanism, there is a trade-off to be made in terms of
performance: in the worst possible case, the slowdown is up to 40 times the original runtime.
        </p>
        <p>RMP Degradation Attack An unchecked write is defined as a memory write that does not go
through the RMP access control. Such an operation can be achieved by exploiting programming errors
in the code of the AMD trusted firmware: researchers from Google Project Zero and Google Cloud
Security found a bug that allowed writes in a 2 MiB reserved memory area when only 1 MiB of this
memory area was actually used by the firmware.
        </p>
<p>The RMP contains self-protecting entries, i.e., entries that cover the address where the RMP itself resides,
marked as belonging to the trusted firmware so that the hypervisor cannot modify them. An unchecked
write could be leveraged to modify these entries to be marked as belonging to the hypervisor. In this state,
a malicious hypervisor can transition any page to any state simply by writing to the RMP: all SEV-SNP
security features are lost. This bug has been fixed by modifying the size of the memory region initialized
by the firmware from 1 MiB to 2 MiB. However, any possible bug that allows performing unchecked
writes could, under the right conditions, be leveraged to perform an RMP degradation attack.</p>
        <p>Microarchitectural Side-Channel Attacks A common technique when attacking TEEs, especially
Intel Software Guard Extensions (SGX), is called single-stepping. This technique involves using the system’s
APIC timer to interrupt the enclave after the execution of each instruction, to increase the temporal
resolution of microarchitectural attacks. The same technique has been successfully applied to AMD SEV VMs
with SEV-Step [16]. This is a framework that allows performing single-stepping inside SEV VMs, and
gives access to common attack primitives like page fault tracking and cache attacks against SEV. Since
side-channel attacks are out of scope with respect to SEV, there is no specific countermeasure in place.</p>
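<p>The self-protection logic of the RMP, and why a single unchecked write collapses it, can be sketched as follows. This is a deliberately simplified toy model: real RMP entries carry far more state than an owner label, and the page numbers are illustrative.</p>

```python
# Toy model of RMP ownership checks (hypothetical; real RMP entries are richer).
FIRMWARE, HYPERVISOR, GUEST = "firmware", "hypervisor", "guest"

# page number -> owner; page 0 holds the RMP itself (self-protecting entry).
rmp = {0: FIRMWARE, 1: GUEST, 2: HYPERVISOR}

def checked_write(requestor: str, page: int, rmp_table: dict) -> bool:
    # Normal path: every write is validated against the page's RMP entry.
    return rmp_table[page] == requestor

# The hypervisor cannot touch the RMP's own (self-protecting) entries...
assert not checked_write(HYPERVISOR, 0, rmp)

# ...but an unchecked write (e.g., via the firmware bug described above)
# bypasses the RMP walk entirely and relabels page 0 as hypervisor-owned.
rmp[0] = HYPERVISOR          # the unchecked write itself

# From now on even the *checked* path lets the hypervisor rewrite the RMP,
# so any guest page can be reassigned: all SEV-SNP guarantees are void.
assert checked_write(HYPERVISOR, 0, rmp)
rmp[1] = HYPERVISOR
```

The sketch makes the degradation explicit: one write that skips the check converts every subsequent checked write into an authorized one.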
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Future Research Directions</title>
<p>Confidential Computing technologies are rapidly evolving, with revisions and new iterations being
developed year after year. The proliferation of Confidential Computing technologies has introduced
a demand for interoperability across a wide variety of devices spanning CPUs, GPUs, accelerators,
and memory, which requires the introduction of standards to allow these devices to be compatible with
all Confidential Computing solutions. In this section, we propose and analyze two research directions
related to Confidential Computing: the integration of TEEs with the Compute eXpress Link standard
and the use of Confidential Computing in Machine Learning applications.</p>
      <sec id="sec-4-1">
        <title>4.1. Compute eXpress Link and TEEs</title>
<p>Compute eXpress Link (CXL) is a multi-protocol technology designed to support accelerators and
memory devices over the PCIe protocol [17]. The purpose of CXL is sharing computing or memory
resources in datacenters. Since its introduction in 2019, several revisions of the standard have been
released: the CXL 3.1 specification, released in 2023, introduced the support of Confidential Computing
technologies in CXL.</p>
        <p>TEE Security Protocol The TEE Security Protocol defines the architecture to support workload
confidentiality in a CXL system. Its scope is limited to directly connected CXL Type 3 (memory
expansion device) Single Logical Devices (SLDs) or Multi-Headed SLDs (MH-SLDs) that might support
dynamic capacity features for memory pooling architectures.</p>
        <sec id="sec-4-1-1">
          <title>Confidential Computing Components in a CXL System</title>
          <p>[Figure 2: VMs and the VMM/OS on the host side with the TSM and TSM RoT, communicating with the DSM on the TEE-capable target device.]</p>
<p>The diagram in Figure 2 outlines the components of a confidential computing architecture in a CXL
system. The TEE Security Manager (TSM) and the TEE Security Manager Root-of-Trust (TSM RoT)
are responsible for the authentication and attestation of the device and for the exchange of security
protocol transactions in order to discover security properties or to configure and securely lock the
device. These operations are performed via the SPDM protocol, which guarantees the confidentiality
of the messages exchanged. The transactions on the device are processed by the Device Security
Manager (DSM). The TSM and TSM RoT are the agents that can assign security properties (such as memory
encryption and access control) on a per-VM basis, distinguishing between secure VMs and legacy VMs.
Memory Encryption The memory encryption feature is introduced to support data-in-use
confidentiality. Two modes are defined: (1) initiator-based encryption, where data are encrypted by the CXL
host and exchanged encrypted with the target, and (2) target-based encryption, where data are exchanged
in clear text and encrypted/decrypted by the device. The recommended encryption algorithm is AES-XTS
256, but the standard is open to other types or future vendor-specific solutions. The main requirement
of the encryption engine is to minimize the impact on the memory access latency, power, and costs.</p>
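<p>The security difference between the two modes can be made concrete by asking what an interposer on the CXL link would observe. The Python sketch below is a toy dataflow model under stated assumptions: the placeholder XOR cipher stands in for AES-XTS, and the link-observation hook is hypothetical.</p>

```python
# Toy dataflow comparison of the two CXL memory-encryption modes.
observed_on_link = []                 # what a link interposer would capture

def link_transfer(payload: bytes, encrypted: bool) -> bytes:
    # Model of the CXL link: every transferred payload is observable.
    observed_on_link.append((payload, encrypted))
    return payload

toy_encrypt = lambda b: bytes(x ^ 0xAA for x in b)   # placeholder cipher, not AES-XTS

secret = b"data-in-use"

# (1) Initiator-based: the host encrypts before the data leaves the CPU,
#     so only ciphertext ever crosses the link.
stored_1 = link_transfer(toy_encrypt(secret), encrypted=True)

# (2) Target-based: plaintext crosses the link; the device encrypts at rest.
stored_2 = toy_encrypt(link_transfer(secret, encrypted=False))

# Both modes store ciphertext on the device, but only in mode (2) does an
# interposer on the link see the plaintext.
assert stored_1 == stored_2
assert observed_on_link[0][0] != secret
assert observed_on_link[1][0] == secret
```

This is why target-based encryption alone does not defend against the basic physical (bus-monitoring) attacks that the threat model places in scope; it must be combined with link protection such as CXL IDE.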
        </sec>
      </sec>
      <sec id="sec-4-2">
<title>TEE Exclusive State Tracking and Access Control</title>
        <p>To meet integrity protection requirements, the
TEE Exclusive State (TE State) is used to indicate whether the content of the memory is TEE or
non-TEE data. Initiators that generate memory accesses shall determine the TEE status of each memory
transaction (TE Intent). TEEs are permitted to access both exclusive and non-exclusive memory, while
non-TEE entities are permitted to access only non-TEE memory.</p>
        <p>Threats to mitigate are related to integrity attacks: changing data of a VM in memory (even with
encryption) and replay attacks. Integrity violations impact software running on trusted VMs in an
unpredictable way. The basic principle of integrity protection is that if a trusted VM can read a private
(encrypted) page of memory, it must always read the last value it wrote.</p>
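<p>The "must always read the last value it wrote" principle is, in essence, replay protection, and a minimal way to obtain it is to bind each value to a monotonic version counter under a keyed MAC. The sketch below is a toy model of this idea under stated assumptions (the key, tag construction, and version scheme are illustrative, not taken from any vendor's implementation).</p>

```python
import hmac, hashlib, os

KEY = os.urandom(16)   # stand-in for a key held inside the TEE (hypothetical)

def seal(value: bytes, version: int) -> tuple:
    # Bind the value to its version: replaying an old pair cannot forge a
    # tag for the current version without the key.
    tag = hmac.new(KEY, version.to_bytes(8, "little") + value,
                   hashlib.sha256).digest()
    return value, tag

def check(value: bytes, tag: bytes, expected_version: int) -> bool:
    _, expected = seal(value, expected_version)
    return hmac.compare_digest(tag, expected)

old = seal(b"balance=100", 1)   # attacker snapshots this (value, tag) pair
new = seal(b"balance=0", 2)     # the TEE writes a newer value, bumping the version

# Replaying the stale snapshot fails: the TEE expects version 2.
assert check(*new, expected_version=2)
assert not check(*old, expected_version=2)
```

Hardware schemes achieve the same effect with far less overhead (e.g., counters folded into integrity trees or encryption tweaks), but the invariant being enforced is the one shown here.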
<p>Access control is the verification of TE Intent against TE State in the memory being accessed and the
resulting behavior if the verification fails. Access control can be on read and/or write. The device can
advertise the supported access control types. The host can enumerate and enable access control types.
TE State can be changed with different methods. One method, called Implicit TE State Change, uses
memory write operations to change the TE State at the host cache-line level (64 bytes). A second method
is based on specific commands that can have larger granularities (typically 4 kilobytes). The main
architectural challenges on the CXL Type 3 device related to TE State tracking and access control are
the impact on the memory space to store TE State on a 64-byte basis and the performance degradation
due to the storage of the TE State and to the checks to be performed when the device is accessed.</p>
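<p>The verification of TE Intent against TE State described above can be sketched as a per-cache-line lookup. This is a toy model under stated assumptions: the dictionary, addresses, and boolean encoding of TE State are illustrative, not the specification's actual entry format.</p>

```python
# Toy model of TE State tracking and access control on a CXL Type 3 device.
LINE = 64                              # TE State tracked per 64-byte cache line

te_state = {}                          # line index -> True if TEE-exclusive

def access_allowed(addr: int, te_intent: bool) -> bool:
    line_is_tee = te_state.get(addr // LINE, False)
    # TEEs may access both exclusive and non-exclusive memory;
    # non-TEE initiators may only touch non-TEE lines.
    return te_intent or not line_is_tee

def implicit_te_state_change(addr: int, te_intent: bool) -> None:
    # Implicit method: a memory write updates TE State at line granularity.
    te_state[addr // LINE] = te_intent

implicit_te_state_change(0x1000, te_intent=True)    # TEE writes the line
assert access_allowed(0x1000, te_intent=True)       # TEE access passes
assert not access_allowed(0x1000, te_intent=False)  # non-TEE access rejected
assert access_allowed(0x2000, te_intent=False)      # untouched line stays open
```

The model also makes the stated cost visible: one state entry per 64-byte line, plus a lookup on every access, which is exactly the storage and latency challenge the text describes.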
        <p>[Figure 3: (a) with NVIDIA CC off, the hypervisor and NVIDIA driver have complete access to the GPU and transfers are unencrypted; (b) with NVIDIA CC on, transfers between the confidential VM and the GPU are encrypted and the hypervisor has no read/write access.]</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2. Confidential Computing in Machine Learning Applications</title>
        <p>Machine learning and data-driven technologies have been subject to rapid and pervasive development
in the last few years. Machine learning models may be trained using sensitive information and, as the
use of cloud-based machine learning platforms has increased, robust privacy and security guarantees
have become a necessity.</p>
        <p>
          The security of training and inference processes of machine learning models in cloud environments
is a relevant topic [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] because there are many different parties involved in the process: the data
owners, the model owners, the result receivers, and the host of the ML computation. This implies
that there are different entities that need to be trusted not to divulge, tamper with, or steal data, and
therefore the attack surface is quite large.
        </p>
        </p>
<p>Confidential Computing has emerged as a promising approach to achieve secure and trustworthy
machine learning, mainly because of the high performance coming from hardware-based TEEs. Most of
the computation in the context of machine learning models’ training is performed on external devices,
like Graphics Processing Units (GPUs) or other accelerators. For this reason, in the last few years, there
have been efforts, both in academia and in industry, to provide solutions for the integration of external
devices in Confidential Computing-enabled environments. Even if the research interest on this topic
converges on machine learning applications because of their diffusion, any application that offloads its
computation to GPUs or other external devices could also benefit from the application of these solutions.</p>
        <p>NVIDIA Confidential Computing NVIDIA has recently announced its new H100 GPU, based
on the Hopper architecture, which is the first commercial GPU solution introducing hardware-based
confidential computing capabilities. In July 2023, they released a whitepaper explaining the
goals of NVIDIA Confidential Computing, which features are enabled on the H100 GPU, and how
it integrates into the TEEs created by Confidential Computing-enabled CPUs. NVIDIA specifically
requires that the CPU used together with their H100 be from Intel, AMD, or Arm, and
support TDX, SEV-SNP, or CCA, respectively. With NVIDIA Confidential Computing enabled, all the
data transferred between the secure VM and the GPU will be fully encrypted, with no possibility for
the hypervisor to read or write the data. This behavior is represented in Figure 3.</p>
<p>The main goals set by NVIDIA for the H100 GPU are to provide data and code confidentiality, data
and code integrity, and to provide protection against basic physical attacks, so that interposers on buses
such as PCIe and DDR memory cannot leak data or code.</p>
<p>NVIDIA outlines three main modes of delivering GPUs to a VM: assigning an entire confidential
GPU to a single trusted VM (mainly used for inference, HPC, or lightweight training), assigning multiple
confidential GPUs to a single trusted VM (with NVLink support and multiple possible topologies,
typically used for training) and, finally, assigning each confidential GPU to multiple tenants. The threat
model of NVIDIA Confidential Computing is essentially the same as for TDX, CCA, and SEV-SNP. Also
in this scenario, sophisticated hardware attacks are out of scope, as well as denial-of-service attacks.</p>
        <p>Other Solutions The NVIDIA H100 GPU is the only available commercial solution that provides
Confidential Computing capabilities on GPUs. However, researchers from both academia and industry
published several proposals for the extension of confidential computing to GPUs or, more generally,
accelerators.</p>
        <p>
          Graviton 1[
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] is a joint work from Microsoft and the University of Lisbon, and is one of the first
proposals of a Confidential Computing extension to accelerators. Graviton is an architecture that
supports TEEs on GPUs. It enables applications to offload security-sensitive kernels and data to a GPU
and execute them in isolation from other code running on the GPU and on the host, including the
device driver that communicates with the GPU, the operating system, and the hypervisor. In Graviton,
a TEE is a set of GPU resources that are cryptographically bound to a public/private key pair and
isolated from untrusted software on the host and from all other GPU contexts. Graviton then guarantees
that once a secure context has been created, its resources can only be accessed by a user application
in possession of the corresponding private key. Graviton works by modifying the interface between the
GPU driver and the hardware: the driver can no longer access security-sensitive resources (e.g., page
tables, page directories, and memory in general) because Graviton forces all the resource allocation
requests coming from the driver to pass through the GPU’s command processor. This component
tracks ownership of resources and ensures that no resource owned by a secure context can be accessed
by other entities. This design has low hardware complexity and low performance overheads, requiring
minimal changes for it to be integrated into an existing GPU architecture.
        </p>
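Graviton's ownership-tracking rule can be illustrated with a minimal sketch. All names here are ours, not Graviton's API, and the real command processor operates on GPU page tables and channel state rather than Python objects:

```python
# Illustrative sketch of Graviton-style ownership tracking in a GPU
# command processor: the driver's allocation requests are mediated,
# and only the owning secure context may touch its resources.

class CommandProcessor:
    def __init__(self):
        self.owner = {}  # resource id -> owning context (or absent)

    def allocate(self, resource, context):
        # All allocation requests from the driver are funneled here, so
        # the driver can never reassign a secure context's resources.
        if resource in self.owner:
            raise PermissionError(f"{resource} already owned")
        self.owner[resource] = context

    def access(self, resource, context):
        # Only the owning (secure) context may access the resource.
        if self.owner.get(resource) != context:
            raise PermissionError("access denied by command processor")
        return f"{context} accessed {resource}"

cp = CommandProcessor()
cp.allocate("page_table_0", "secure_ctx_A")
print(cp.access("page_table_0", "secure_ctx_A"))  # owner: allowed
try:
    cp.access("page_table_0", "driver")           # non-owner: blocked
except PermissionError as e:
    print("blocked:", e)
```

The point of the design is that ownership state lives in the command processor, not in driver-writable memory, which is why Graviton needs only modest hardware changes.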
        <p>Acai [20] is a CCA-based solution that allows confidential VMs to use accelerators while relying
on hardware-based memory protection to preserve security. There are three modes in which VMs
can access accelerators in Acai. If the SoC has integrated accelerators, then Acai uses existing CCA
primitives to enable VM access. For PCIe devices, Acai supports encrypted access. This mode creates
redundant data copies (at least three). For this reason, the third mode is the protected mode, which
reduces the number of copies to one by allowing accelerators to directly access the VM’s memory. This
requires careful consideration, as in the original CCA specification external accelerators connected over
PCIe cannot access realm memory. Acai addresses this by modifying the granule protection mechanism
to disallow any other software from accessing the normal world shared memory: the VM and the accelerator
can communicate over a shared memory area in normal world memory. Acai also establishes mutual
attestation between accelerators and VMs, leveraging existing attestation mechanisms from PCIe 5.</p>
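Acai's modification to the granule protection mechanism can be sketched as a per-granule world tag plus an access predicate. The tag names and actor names below are illustrative, not Acai's or Arm's terminology:

```python
# Sketch of an Acai-style granule protection check: each memory
# granule carries a world tag, and the shared communication area is
# reachable only by the realm VM and its assigned accelerator.

gpt = {
    0x0: "realm",             # realm-private memory
    0x1: "normal",            # ordinary normal-world memory
    0x2: "protected-shared",  # VM <-> accelerator channel
}

def can_access(granule: int, actor: str) -> bool:
    tag = gpt[granule]
    if tag == "realm":
        return actor == "realm_vm"
    if tag == "protected-shared":
        # Acai's change: other software (e.g., the hypervisor) is
        # excluded from the shared area, unlike in baseline CCA.
        return actor in ("realm_vm", "assigned_accelerator")
    return True  # plain normal-world memory stays generally accessible

assert can_access(0x2, "assigned_accelerator")
assert not can_access(0x2, "hypervisor")
assert not can_access(0x0, "assigned_accelerator")
print("granule checks passed")
```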
        <p>
          IPU Trusted Extensions (ITX) [2
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] is another proposal made by researchers from several companies
(Microsoft, Meta, Graphcore, and others) that presents a set of hardware extensions to enable TEEs
in Graphcore’s GC200 Intelligence Processing Unit (IPU), a state-of-the-art AI accelerator. ITX isolates
workloads from untrusted hosts and ensures their data and models remain encrypted at all times (except
for when they’re inside the accelerator’s chip). Trust in ITX is rooted in the Confidential Compute Unit
(CCU), a new hardware root of trust on the IPU board: the CCU provides each device with a unique
identity based on a hardware secret. The new execution mode that ITX introduces, called trusted mode,
guarantees that all security-sensitive information is isolated from a potentially malicious host. Once
the IPU enters this mode, its configuration registers and tile memory can only be accessed by the CCU
and the IPU Control Unit (ICU). In the paper, the authors present a specific use case which they call
“offline mode”: in this mode, ITX requires no CPU-based TEE. Suppose that there are multiple parties:
the model provider, the data providers, and the untrusted cloud provider. In trusted offline mode, the
model provider and the data providers upload the encrypted model and data, verify the attestation
report coming from the CCU, and provide their encryption keys to the CCU, encrypting them with
the CCU’s public key. Then, they can remain offline while the training of the model goes on, with strong
guarantees on the security of their model and data.
        </p>
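The offline-mode flow described above reduces to "verify the attestation report, then release the wrapped key." A minimal protocol sketch follows; the measurement scheme is illustrative, and the key-wrapping step is a labeled placeholder for encryption under the CCU's real public key:

```python
# Protocol-flow sketch of ITX "offline mode" (all names illustrative).
# A real deployment would encrypt the data key under the CCU's
# public key; here wrapping is a placeholder so the flow stays runnable.

import hashlib

def measurement(firmware: bytes) -> str:
    # Stand-in for the CCU's attestation report over device state.
    return hashlib.sha256(firmware).hexdigest()

EXPECTED = measurement(b"trusted-ipu-firmware")

def party_uploads(report: str, data_key: bytes) -> dict:
    # Step 1: verify the attestation report before releasing any key.
    if report != EXPECTED:
        raise ValueError("attestation failed: refusing to release key")
    # Step 2: placeholder for wrapping data_key for the CCU.
    return {"wrapped_for_ccu": data_key}

# Honest device: the key is released; parties can then go offline.
pkg = party_uploads(measurement(b"trusted-ipu-firmware"), b"model-key")
print("key released")

# Tampered device: attestation mismatch, no key leaves the provider.
try:
    party_uploads(measurement(b"backdoored-firmware"), b"model-key")
except ValueError as e:
    print(e)
```

The security of the scheme rests on the fact that key release is gated on attestation, so the untrusted cloud provider never observes plaintext keys, model, or data.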
        <p>All these models take different approaches to including external devices inside the CPU TEE’s trust
boundary. However, devices implementing these approaches are not commercially available at the time
of writing, and, while they are surely promising, it is impossible for us to say whether these approaches
are being taken into account in the development of the new generation of Confidential Computing-enabled
AI accelerators.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Confidential Computing technologies are rapidly evolving. New use cases and scenarios, like the
integration with the CXL standard and the use of Confidential Computing with Machine Learning
applications, need careful threat modeling and security analyses to ensure that the introduction of
external devices into the Trust Boundary of Confidential Computing systems does not weaken their
security guarantees. In this paper we have provided a comparison of the threat models of
commercially available Confidential Computing technologies, as well as a detailed analysis of their
inner workings. We have provided an overview of the attacks and mitigations for these systems, and
an analysis of future research directions in this field, identifying the integration with the CXL standard
and Machine Learning applications as the most interesting and promising.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was initially accepted to be published in the proceedings of the 8th Italian Conference on
Cybersecurity (ITASEC 2024). Alessandro Bertani’s research grant was partially funded by Micron
Technology and Project FARE (PNRR M4.C2.1.1 PRIN 2022, Cod. 202225BZJC, CUP D53D23008380006,
Avviso D.D 104 02.02.2022), which is under the Italian NRRP MUR program funded by the European
Union - NextGenerationEU.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT, Google Gemini and Grammarly to
perform spelling and grammar checks. After using these tools, the authors reviewed and edited the
content as needed and take full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-8">
      <title>A. Confidential Computing Technologies</title>
      <p>Even if commercially available Confidential Computing solutions are similar to one another, in this
appendix we highlight their differences. We also add details with respect to other works, as well as
a description of RISC-V AP-TEE, which was not included in the work of Guanciale et al. [22].</p>
      <sec id="sec-8-1">
        <title>A.1. Intel Trust Domain Extensions</title>
        <p>Intel TDX is a set of architectural extensions that enable the creation and management of
hardware-isolated, secure VMs called TDs. TDX is designed to isolate VMs from every non-TD component,
including the hypervisor.</p>
        <p>Intel TDX uses a CPU-attested software module (the TDX module), Intel Virtual Machine Extensions
(VMX) and Intel Multi-Key Total Memory Encryption (MK-TME), which is used to create and manage
a different private key for each TD, which is then used to encrypt the memory pages of each TD and
their control structures.</p>
        <p>Intel TDX introduces a new CPU mode that helps enforce the security policies for TDs, called
Secure Arbitration Mode (SEAM), which is the CPU mode in which the privileged and trusted TDX
module is executed. Control transfers between the hypervisor and the TDX module happen when
a SEAMCALL instruction is executed. This instruction causes the CPU mode to switch to SEAM,
transferring control to the TDX module.</p>
        <p>To guarantee memory isolation between TDs and from the hypervisor, a Guest Physical
Address (GPA) can be private or shared, depending on the SHARED bit of the GPA. The CPU translates
shared GPAs using the shared Extended Page Table (EPT), which resides unencrypted in host VMM
memory and is directly managed by the VMM. The CPU translates private GPAs using the secure EPT,
which is unique per TD and is encrypted with the private key of its associated TD. The secure EPT is
designed not to be directly accessible by any software other than the TDX module, nor by any devices.</p>
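The two-EPT scheme can be sketched as a dispatch on the SHARED bit. The bit position and page mappings below are assumptions of this sketch (in hardware the bit's position depends on the physical address width, and translation is a multi-level page walk):

```python
# Sketch of TDX's two-EPT address translation: the SHARED bit of a
# GPA selects between the VMM-managed shared EPT and the
# TDX-module-managed secure EPT. Constants are illustrative.

SHARED_BIT = 1 << 47  # assumption for this sketch

shared_ept = {0x1000: 0xA000}   # unencrypted, VMM-managed
secure_ept = {0x1000: 0xB000}   # encrypted with the TD's private key

def translate(gpa: int, caller: str) -> int:
    if gpa & SHARED_BIT:
        # Shared GPAs go through the VMM's shared EPT.
        return shared_ept[gpa & ~SHARED_BIT]
    # Private GPAs use the secure EPT, reachable only via the TDX module.
    if caller != "tdx_module":
        raise PermissionError("secure EPT is not software-accessible")
    return secure_ept[gpa]

print(hex(translate(0x1000 | SHARED_BIT, "vmm")))  # shared mapping
print(hex(translate(0x1000, "tdx_module")))        # private mapping
```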
        <p>During TD launch, its initial contents and configuration are recorded by the TDX module. At
runtime, the Intel TDX architecture reuses the Intel SGX attestation infrastructure to support attesting
to these measurements. Software running inside a TD can request the TDX module to generate an
integrity-protected TDREPORT structure that includes the TD’s measurements and an asymmetric key
that is used to establish a secure channel with the software running inside the TD. An SGX Quoting
Enclave can be used to check the integrity of the report produced in the previous step. If integrity
is successfully verified, the Quoting Enclave can insert the guest TD’s measurements in a quote, which
is crucial in establishing trust between the user and the platform.</p>
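The two-stage chain above (integrity-protected TDREPORT, then a signed quote) can be sketched as follows. HMAC over a serialized report stands in for the CPU-held MAC key, and the Quoting Enclave's signature is a labeled placeholder; none of these are Intel's actual formats:

```python
# Sketch of the TDX attestation chain: the TDX module emits an
# integrity-protected TDREPORT, and a Quoting Enclave verifies it
# before embedding the measurements in a quote.

import hmac, hashlib, json

CPU_MAC_KEY = b"per-platform-secret"  # never leaves the CPU in reality

def tdreport(measurements: dict) -> dict:
    body = json.dumps(measurements, sort_keys=True).encode()
    mac = hmac.new(CPU_MAC_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def quoting_enclave(report: dict) -> dict:
    expected = hmac.new(CPU_MAC_KEY, report["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["mac"]):
        raise ValueError("TDREPORT integrity check failed")
    # Integrity verified: the measurements go into a signed quote.
    return {"quote": report["body"], "sig": "QE-signature-placeholder"}

rep = tdreport({"mrtd": "abc123", "rtmr0": "def456"})
print(quoting_enclave(rep)["sig"])
```

The division of labor mirrors the text: only the platform can produce a valid MAC, and only a verified report is ever promoted to a quote that a remote user will accept.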
        <p>Memory Confidentiality and Integrity TDX uses MK-TME to enable cache-line-level memory
encryption. The TDX module assigns each TD a unique, private KeyID which corresponds to a
128-bit AES cryptographic key managed by the memory controller. The keys saved into the memory
controller are not accessible by software or by using external interfaces to the SoC.</p>
        <p>TDX also provides two memory integrity modes: cryptographic integrity and logical integrity.
When cryptographic integrity is enabled, each cache line is protected with a 28-bit MAC (obtained
by truncating the output of a SHA-3-256-based MAC generation function), in addition to AES-XTS-128
encryption. Moreover, a 1-bit TD ownership tag is maintained with each cache line to identify if the
line is associated with a memory page assigned to a TD. When cryptographic integrity is enabled, the
ownership tag is included in the computation of the MAC. When only logical integrity is enabled, the
TD ownership tag is maintained but there is no MAC computation.</p>
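The truncated-MAC construction can be sketched in a few lines. The key, the way the ownership bit is appended, and the truncation layout are assumptions of this sketch, not Intel's actual construction; only the "SHA-3-256-based MAC truncated to 28 bits, covering the ownership tag" shape comes from the text:

```python
# Sketch of cryptographic-integrity tagging: a 28-bit MAC obtained by
# truncating a SHA-3-256-based MAC over a 64-byte cache line, with
# the TD ownership bit bound into the computation.

import hmac, hashlib

def cacheline_mac(key: bytes, line: bytes, td_ownership_bit: int) -> int:
    data = line + bytes([td_ownership_bit])  # bind the ownership tag
    full = hmac.new(key, data, hashlib.sha3_256).digest()
    # Keep only 28 of the 256 output bits (stored in metadata bits).
    return int.from_bytes(full[:4], "big") >> 4

key = b"memory-controller-key"
line = bytes(64)                  # one 64-byte cache line
tag = cacheline_mac(key, line, 1)
assert tag < 2**28                # truncated to 28 bits
assert tag == cacheline_mac(key, line, 1)  # deterministic per line
print(f"28-bit MAC: {tag:#x}")
```

Because the ownership bit is inside the MAC, flipping a line's TD association invalidates the tag, which is exactly the replay/remap protection the mode is meant to provide.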
      </sec>
      <sec id="sec-8-2">
        <title>A.2. AMD Secure Encrypted Virtualization</title>
        <p>AMD SEV aims at isolating VMs and the hypervisor from one another. It uses one key per virtual
machine, managed by the AMD Secure Processor (SP) and used to encrypt the memory pages owned
by each guest.</p>
        <p>Figure 4: (a) TDX Architecture. (b) SEV-SNP Architecture. (c) CCA Architecture.</p>
        <p>SEV-ES is an
improvement over SEV: it encrypts all CPU register contents when a VM stops running, thus preventing
the leakage of information in CPU registers to untrusted components, and can detect malicious
modifications to CPU register state. SEV-SNP is the latest iteration of this technology and the focus of this
section. It adds strong memory integrity protection to help prevent malicious hypervisor-based attacks
like data replay, memory re-mapping, and more. The architecture of SEV-SNP is depicted in Figure 4b.
The basic principle of SEV-SNP integrity is that if a VM is able to read a private (i.e., encrypted) page
of memory, it must always read the last value it wrote: if this is not possible, it should get an exception
indicating that the value could not be read. This principle is enforced by a component called the RMP.</p>
        <p>As happens for Intel TDX, the guarantees of memory confidentiality and integrity are enforced by
a trusted firmware component, provided by AMD, that runs on the AMD SP. The AMD SP, the SoC
hardware, and the secure VM itself are the only trusted components in this technology’s threat model.</p>
        <p>While SEV and SEV-ES only supported attestation during the launch of a guest VM, SEV-SNP is
more flexible: a guest VM can request an attestation report from the AMD SP at any time. Attestation
reports contain system information and a block of arbitrary data supplied by the guest VM as part
of the request, and are signed by the AMD SP. Attestation reports enable third parties, e.g., the guest
owner, to validate that specific data came from a specific VM.
Memory Confidentiality and Integrity SEV-SNP uses Multi-Key Secure Memory Encryption to
provide memory confidentiality to secure VMs. At boot, the keys for VMs are randomly generated
and stored in the AMD SP. SEV-SNP also uses AES-128 with XEX as mode of operation.</p>
        <p>Many of the integrity guarantees of SEV-SNP are enforced through the RMP, a single data structure,
shared across the system, which contains one entry for every page of DRAM that may be used by VMs.
The purpose of the RMP is to track the owner of each page of memory: the hypervisor, a specific VM,
or the AMD SP. Memory accesses are controlled in a way that only the owner of the page can write
it. The RMP is only checked when the hypervisor is performing write accesses to memory pages: since
SEV-SNP encrypts all the memory pages belonging to secure VMs, the hypervisor being able to read
the encrypted content of a memory page is not considered a threat. Both read and write accesses
inside an SEV-SNP VM require RMP checks. Figure 5 shows the differences in RMP table-walks when
the hypervisor and an SEV-SNP VM request write access to a memory page.</p>
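The RMP's write-mediation rule reduces to a single system-wide owner table. A minimal sketch, with illustrative page numbers and actor names (in hardware, a failed check raises a nested page fault during the table-walk, not a software exception):

```python
# Sketch of RMP-style write mediation: one shared table maps each
# physical page to its owner, and writes are permitted only to the
# owner. Reads of ciphertext by the hypervisor are not prevented.

rmp = {0x100: "vm_A", 0x101: "hypervisor", 0x102: "amd_sp"}

def write(page: int, actor: str) -> str:
    if rmp.get(page) != actor:
        # Hardware raises a nested page fault here, not an exception.
        raise PermissionError(f"{actor} does not own page {page:#x}")
    return "write ok"

print(write(0x100, "vm_A"))        # the owner writes its own page
try:
    write(0x100, "hypervisor")     # hypervisor write is blocked
except PermissionError as e:
    print("blocked:", e)
```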
      </sec>
      <sec id="sec-8-3">
        <title>A.3. Arm Confidential Compute Architecture</title>
        <p>Arm CCA allows deploying VMs while preventing access by other software components, like the
hypervisor. CCA allows the hypervisor to control the VM but removes any right to access code, register
state, or data used by the VM. This separation is enabled by protected execution spaces called
Realms. A Realm is completely isolated from a “normal” execution environment in terms of code
execution and data access. The separation is achieved through a combination of hardware extensions
and trusted firmware. The architecture of Arm CCA is depicted in Figure 4c.</p>
        <p>Armv8-A already introduced the concept of world, i.e., a combination of a security state of a
processing element and a physical address space. The security state a processing element is executing
in determines which physical addresses it can access. Arm CCA introduces the Realm Management
Extension (RME) [23], which adds two new worlds: the Root world is the world with the highest
privilege (the Monitor runs in the Root world), while the Realm world is composed of the Realm security
state and the Realm Physical Address range.</p>
        <p>The RME is composed of a set of hardware extensions that are required by the architecture to allow
isolated Realm VM execution, while the software component that is used to manage the Realm VMs
is called the Realm Management Monitor (RMM). The RMM is part of the TCB of Arm CCA and is the
trusted component in charge of managing Realms and of ensuring their isolation from the Normal
and Secure worlds.</p>
        <p>CCA remote attestation allows the user of a service provided by a Realm to determine the
trustworthiness of the Realm and of the implementation of the CCA platform. The protocols that
should be used for attestation are implementation-specific and are not discussed in the guidelines
provided by Arm. However, the desired outcome of successful attestation is a secure point-to-point
connection between an attested endpoint in the Realm and the relying party (the user).
Memory Confidentiality and Integrity In the case of Arm CCA, the guarantees on the security
of the data are implementation-dependent. Arm provides some rules that must be strictly followed, and
some suggestions. Arm only suggests using encryption algorithms with address tweaking, leaving the
choice of the algorithm and mode of operation to the hardware manufacturer. This allows the selection
of an algorithm depending on the specific power and area requirements. Memory integrity should
also be provided as a guarantee to the Realm owner, but the details are implementation-dependent.</p>
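The point of address tweaking is that identical plaintext stored at two different physical addresses must produce different ciphertext, so an attacker cannot usefully relocate encrypted blocks. A hash-based keystream stands in here for a real tweakable cipher such as AES-XTS; this is an assumption of the sketch, not any vendor's implementation:

```python
# Sketch of address-tweaked memory encryption: the physical address
# is mixed into the keystream, so equal plaintexts at different
# addresses encrypt differently (defeating block relocation).

import hashlib

def encrypt(key: bytes, addr: int, block: bytes) -> bytes:
    # Per-address keystream; XOR makes the function its own inverse.
    stream = hashlib.sha256(key + addr.to_bytes(8, "big")).digest()
    return bytes(b ^ s for b, s in zip(block, stream))

key, block = b"realm-key", b"same-plaintext!!"
c1 = encrypt(key, 0x1000, block)
c2 = encrypt(key, 0x2000, block)
assert c1 != c2                            # tweak separates equal blocks
assert encrypt(key, 0x1000, c1) == block   # same call decrypts
print(c1.hex())
print(c2.hex())
```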
      </sec>
      <sec id="sec-8-4">
        <title>A.4. Other Architectures</title>
        <p>There are other Confidential Computing architectures that have been proposed, both in academia
and in industry. IBM PEF and RISC-V Application Platform TEE (AP-TEE) are two examples of these
proposals. The former doesn’t have publicly available information, apart from the paper published
by IBM in which they describe the architectural extension at a high level. The latter is the result of
the effort of several contributors, whose aim is to provide Confidential Computing capabilities to the
RISC-V open-source platform.</p>
        <p>IBM Protected Execution Facility In 2021, IBM published the description of its own implementation
of a Confidential Computing extension to the OpenPOWER architecture [4] and, to the best of our
knowledge, this paper is the only documentation available on this TEE.</p>
        <p>The goal of PEF is to enable users to create and manage secure VMs, guaranteeing the confidentiality
and integrity of their memory. To do so, PEF utilizes a Trusted Platform Module (TPM) and a new
trusted firmware called the Protected Execution Ultravisor (or just Ultravisor).</p>
        <p>PEF achieves isolation between secure VMs and the outside through hardware-enforced access control
policies, and memory confidentiality and integrity through the use of cryptography. It also introduces a new
CPU state, called the secure state, which is managed by the Ultravisor: this firmware component manages
all security-related hardware features in the processor, and is the only component that can do so.</p>
        <p>The access control mechanism in PEF is based on the assignment of VMs to security domains: each
secure VM has its own security domain in secure memory, while the hypervisor is in another security
domain in normal memory. This approach ensures that the secure VMs are protected from the hypervisor
and from one another, and that the hypervisor’s security domain is protected from all the secure VMs.</p>
        <p>The Ultravisor protects the confidentiality of the secure VM when the hypervisor is paging or dumping
it. When data from secure memory are made available to software that is not running in secure memory,
the Ultravisor performs encryption with integrity, using Galois Counter Mode as the mode of operation,
prior to allowing a page to be moved to normal memory. When a page is accessible to the hypervisor,
it is not accessible to the secure VM: when the latter wants to access the page, the Ultravisor performs
an integrity check and, if it is successful, decrypts the page and allows access from the secure VM.</p>
        <p>RISC-V Application Platform - TEE This section describes the first proposal of a Confidential
Computing extension to the RISC-V architecture, called Confidential Virtual Machine Extension,
or CoVE [24].</p>
        <p>As for previous architectures, CoVE relies on the presence of a trusted software module called the
TEE Security Manager, or TSM, which manages security properties for workload assets to protect
against access from the OS or the hypervisor. The isolation of the TSM from the host is supported
by Instruction Set Architecture (ISA) extensions.</p>
        <p>The CoVE TCB consists of the TSM, which acts as the TCB intermediary between TEE and non-TEE
components, and of hardware elements that enforce confidentiality and integrity properties for
workload data-in-use. As for all other technologies, the hypervisor is untrusted and manages the
resources for all workloads, both confidential and non-confidential.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A Technical Analysis of Confidential Computing, https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf, 2022. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Intel Trust Domain Extensions (TDX), https://cdrdv2.intel.com/v1/dl/getContent/690419, 2022. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] AMD SEV-SNP: Strengthening VM Isolation with Integrity Protection and More, https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/white-papers/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf, 2020. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] G. D. H. Hunt, R. Pai, M. V. Le, H. Jamjoom, S. Bhattiprolu, R. Boivie, L. Dufour, B. Frey, M. Kapur, K. A. Goldman, R. Grimm, J. Janakirman, J. M. Ludden, P. Mackerras, C. May, E. R. Palmer, B. B. Rao, L. Roy, W. A. Starke, J. Stuecheli, E. Valdez, W. Voigt, Confidential computing for OpenPOWER, in: Proceedings of the Sixteenth European Conference on Computer Systems, EuroSys '21, Association for Computing Machinery, New York, NY, USA, 2021, pp. 294-310. doi:10.1145/3447786.3456243.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Introducing Arm Confidential Compute Architecture, https://developer.arm.com/documentation/den0125/latest, 2023. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Azure Confidential Computing on 4th Gen Intel Xeon Scalable Processors with Intel TDX, https://azure.microsoft.com/en-us/blog/azure-confidential-computing-on-4th-gen-intel-xeon-scalable-processors-with-intel-tdx/, 2023. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Amazon EC2 now supports AMD SEV-SNP, https://aws.amazon.com/about-aws/whats-new/2023/04/amazon-ec2-amd-sev-snp/, 2023. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Oh SNP! VMs get even more confidential, https://cloud.google.com/blog/products/identity-security/rsa-snp-vm-more-confidential, 2023. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Intel Trust Domain Extensions (TDX) Security Review, https://services.google.com/fh/files/misc/intel_tdx_-_full_report_041423.pdf, 2023. [Accessed: 24/02/2025].</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>