<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A survey on shared disk I/O management in virtualized environments under real time constraints</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ignacio Sañudo</string-name>
<email>ignacio.sanudoolmedo@unimore.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Cavicchioli</string-name>
          <email>roberto.cavicchioli@unimore.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicola Capodieci</string-name>
<email>nicola.capodieci@unimore.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Valente</string-name>
          <email>paolo.valente@unimore.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marko Bertogna</string-name>
          <email>marko.bertogna@unimore.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
<aff id="aff0">
          <label>0</label>
          <institution>University of Modena and Reggio Emilia</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Modena and Reggio Emilia</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Modena and Reggio Emilia</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <fpage>3</fpage>
      <lpage>8</lpage>
      <abstract>
<p>In the embedded systems domain, hypervisors are increasingly being adopted to guarantee timing isolation and appropriate hardware resource sharing among different software components. However, managing concurrent and parallel requests to shared hardware resources in a predictable way still represents an open issue. We argue that hypervisors can be an effective means to achieve an efficient and predictable arbitration of competing requests to shared devices in order to satisfy real-time requirements. As a representative example, we consider the case for mass storage (I/O) devices like Hard Disk Drives (HDD) and Solid State Disks (SSD), whose access times are orders of magnitude higher than those of central memory and CPU caches, therefore having a greater impact on overall task delays. We provide a comprehensive and up-to-date survey of the literature on I/O management within virtualized environments, focusing on software solutions proposed in the open source community, and discussing their main limitations in terms of real-time performance. Then, we discuss how the research in this subject may evolve in the future, highlighting the importance of techniques that are focused on scheduling not only the processing bandwidth, but also the access to other important shared resources, like I/O devices.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
<p>A hypervisor, also called Virtual Machine Manager (VMM),
is a combination of software and hardware components that
allows emulating the execution of multiple virtual machines
on the same computing platform by properly arbitrating
the concurrent access to shared hardware resources. Most of
the available open source hypervisors are specifically tailored
to server applications and cloud computing. In these areas,
hypervisors are mainly designed to provide isolation, load
balancing, server consolidation and desktop virtualization
within the managed virtual machines. However, the
emergence of new application areas for VMMs, such as automotive
and other embedded systems, and the
possibility of exploiting multi-core embedded processors are
posing new challenges to real-time systems engineers. This is
the case of next-generation automotive architectures, where
cost-effective solutions increasingly require sharing an on-board
computing platform among different applications with
heterogeneous safety and criticality levels, e.g., the
infotainment part on one side, and a safety-critical image-processing
module on the other. These domains are independent,
with different period, deadline, safety and criticality
requirements. However, they need to be properly isolated, with no
mutual interference, or a misbehaving module may endanger
the timely execution of a high-criticality domain, affecting
safety qualification.</p>
      <p>
In order to provide real-time guarantees, hypervisors
either dynamically schedule virtual machines according to a
given on-line policy, or statically partition virtual
machines onto the available hardware resources. An example of
the first category is RT-Xen [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] (now merged into mainline
Xen [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]), which implements a hierarchical virtual machine
scheduler managing both real-time and non-real-time
workloads using the Global Earliest Deadline First (G-EDF)
algorithm. On the other hand, statically partitioned solutions
tend to isolate virtual machines onto dedicated cores, with
an exclusive assignment of hardware resources. An
example of this approach is given by Jailhouse [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], which does
not allow multiple virtual machines to share the same core.
An advantage of this latter approach is that the resulting
hypervisors typically have a smaller code footprint,
implying much lower certification costs. Indeed, a reduced code
size is a prominent characteristic of other recent VMMs, like
NOVA [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] and bhyve (https://wiki.freebsd.org/bhyve).
      </p>
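<p>As a reminder of how G-EDF selects work on a multi-core platform, the selection rule can be sketched as follows (a generic illustration in Python, not RT-Xen's actual implementation or data structures): with m cores, the m ready jobs with the earliest deadlines run.</p>

```python
def gedf_pick(ready_jobs, num_cores):
    """Global EDF: run the num_cores ready jobs with the earliest
    deadlines. ready_jobs is a list of (deadline, job_id) pairs;
    a generic sketch, not RT-Xen's actual data structures."""
    return sorted(ready_jobs)[:num_cores]

# Four ready vCPU jobs competing for two physical cores.
jobs = [(30, "vm1"), (10, "vm2"), (20, "vm3"), (50, "vm4")]
print(gedf_pick(jobs, 2))  # [(10, 'vm2'), (20, 'vm3')]
```

<p>The two jobs with deadlines 10 and 20 are dispatched; the others wait, and any newly released job with an earlier deadline preempts a running one.</p>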
      <p>
        No matter which virtualization approach is taken, most
of the current literature on resource access arbitration for
virtualized environments mainly focuses on CPU scheduling
(see for example surveys [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]), neglecting the huge
impact that the access to other shared hardware resources,
like Hard Disk Drives (HDD) and Solid State Disks (SSD),
may have on time-critical tasks. In view of this
consideration, this paper provides a survey on the state-of-the-art
on I/O virtualization and concurrent HDD/SSD read/write
operations. We will discuss the applicability of previously
introduced solutions to I/O arbitration for enhancing the
real-time guarantees that may be provided in a virtualized
environment. The main limitations of classic fair provisioning
schemes for resource sharing will be highlighted.
      </p>
      <p>We are interested in software-based solutions that do not
require customized device controllers and hardware
mechanisms to obtain the desired behavior. Therefore, most of
the addressed works deal with virtualized approaches that
schedule the access to storage devices by means of a
hypervisor or similar mechanisms, shaping the I/O requests to
guarantee a given I/O bandwidth to multiple partitions/cores.
For each of the presented works, we will highlight the main
weaknesses and limitations, in order to stimulate the
real-time research community to undertake a more rigorous and
structured effort towards achieving the required
predictability guarantees.</p>
<p>Contributions are divided by context. In this respect, a
first coarse-grained distinction is made considering the
technology used for data storage: rotational or non-rotational.
HDDs and flash-based devices, such as SSDs, may have
similar issues when it comes to the arbitration of concurrent
accesses, but their radically different operating principles
entail different problems to solve in order to ensure
predictable behavior. A finer-grained distinction is related to
arbitration policies, examining how different I/O scheduling
algorithms behave in a virtualized environment and whether
they are able to satisfy hard/soft real-time guarantees.</p>
<p>The rest of the paper is organized as follows. The next
section introduces the motivation behind the presented survey.
Section 3 describes the existing solutions based on the Xen
hypervisor for I/O management. Section 4 discusses
statically partitioned solutions for multi-core platforms.
Section 5 highlights performance, predictability and security
issues related to the layered scheduling systems implied by
many virtualization techniques. Existing works introducing
real-time parameters within the I/O scheduler are
summarized in Section 6, while Section 7 discusses the additional
predictability problems incurred with current SSD devices.
A final discussion is provided in Section 8, showing
promising research lines to improve the predictable management
of shared hardware resources by means of properly designed
hypervisor mechanisms.</p>
    </sec>
    <sec id="sec-2">
<title>2. MOTIVATION</title>
<p>There are multiple motivations behind this document. The
initial reason triggering this study relates to the problems
encountered when trying to guarantee bounded shared-resource
access times to tasks concurrently executing on a
multi-core platform. Even if often neglected by
theoretical works on real-time scheduling, a great share of the
predictability problems of modern multi-core platforms is due
to potentially conflicting requests to shared hardware
resources like caches, buses, main memory, I/O devices,
network controllers, acceleration engines, etc. The arbitration
of the access to the mentioned shared resources is often
hardwired and cannot be easily controlled via software. The
enforced policies are mostly tailored to improve average-case
performance and throughput, often conflicting with the
predictability requirements of real-time applications. Finally,
low-level details on the arbitration policies are difficult to
obtain and may significantly vary across different architectures.
This makes it extremely difficult to develop a tight
timing analysis even for simpler platforms. To sidestep these
problems, we are studying scheduling solutions that aim at
avoiding conflicts on the device arbiter, by properly shaping
the device requests from the different cores. Hypervisors are
natural candidates in this sense, providing a centralized
decision point with a global view of the requests from the
various partitions. This would allow taking the unpredictable
arbiters out of the scheduling loop, leaving the resource
management at the hypervisor level. Before implementing such a
solution, we examined the existing related approaches for
managing shared resources in virtualized environments,
taking storage devices as a representative example.</p>
      <p>
This choice is mainly due to the large interest in I/O
scheduling within the open source community. Modifications
to the current Linux schedulers are constantly being
proposed and evaluated. For example, at the time of
writing, a new scheduler denoted BFQ (Budget Fair
Queuing) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] is undergoing evaluation for being merged into
mainline Linux. This I/O scheduler is based on CFQ, the
default I/O scheduler in most Linux systems. Among other
goals, BFQ is designed to outperform CFQ in terms of the
soft real-time requirements that can be guaranteed to
multimedia applications. However, it remains unclear how the
proposed modifications can deal with harder real-time
constraints, given the unpredictable technical characteristics of
storage devices.
      </p>
      <p>A second motivation descends from the consideration that
sub-optimal arbitration policies of an I/O storage device can
be the primary cause of blocking delays and performance
drops. This is due to the considerably worse latencies and
bandwidths of storage devices with respect to other shared
resources, such as central memory or CPU caches. As an
example, the random access times to the L1, L2 and L3 caches
and to DDR main memory on an Intel i7 architecture are in the order
of 1 ns, 10 ns, 50 ns and 100 ns, respectively. In contrast, the
random access times to SSDs and HDDs are considerably
higher, in the order of 100 µs and 10 ms, respectively.
Although the cost of SSD random accesses is predicted to drop
to 10-50 µs in the next few years, the gap with respect to main-memory
access times would still be of at least two orders of
magnitude. Due to this difference, it is of paramount importance
to properly schedule and coordinate the access to storage
peripherals.</p>
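<p>The latency gap quoted above can be made concrete with a small back-of-the-envelope computation (the figures are the approximate orders of magnitude from the text, not measurements on any specific platform):</p>

```python
# Approximate random-access latencies, in nanoseconds (orders of
# magnitude only; actual values vary by platform and workload).
latency_ns = {
    "L1": 1,
    "L2": 10,
    "L3": 50,
    "DDR": 100,
    "SSD": 100_000,       # ~100 us
    "HDD": 10_000_000,    # ~10 ms
}

# Gap between each storage device and DDR main memory.
for dev in ("SSD", "HDD"):
    ratio = latency_ns[dev] / latency_ns["DDR"]
    print(f"{dev} is ~{ratio:.0f}x slower than DDR")

# Even an optimistic future SSD latency of 10 us keeps a
# two-orders-of-magnitude gap from main memory.
future_ssd_ns = 10_000
print(future_ssd_ns / latency_ns["DDR"])  # 100.0
```

<p>A single mis-scheduled HDD access therefore costs as much as roughly one hundred thousand main-memory accesses, which is why disk arbitration dominates task delays.</p>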
      <p>
        A third motivation is related to the abundant presence
of I/O scheduling research in cloud and server virtualized
scenarios. The major concerns in these fields are
performance and fairness, rather than real-time constraints. Also,
the concept of fairness is mainly applied to CPU scheduling,
rather than on the access to shared resources. Consider the
widely known Xen hypervisor [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Xen allows the system
administrator to specify the policies that regulate how guests
are scheduled on the various cores. For instance, one VM
can be scheduled with the RTDS scheduler while another VM,
pinned to a different CPU, uses the Credit scheduler. By doing so,
we are not arbitrating access to the CPU (as the VMs are
scheduled on different cores), but we are specifying real-time
requirements for the first VM and a non-real-time domain for
the Credit-scheduled VM.
However, a Credit-scheduled virtual machine that runs an
I/O-intensive task can cause a priority inversion towards other
RTDS-scheduled machines, even if the latter require much
less I/O bandwidth. In order to further prove the
validity of this motivation, we reproduced this priority inversion
with a simple experiment in a Xen virtualized environment.
The experiment involved a workstation managed with Xen
4.5.0 and equipped with a quad-core Intel i7 processor,
with hyperthreading disabled. We set up two virtualized disk
partitions using LVM (Logical Volume Manager) on a rotational
HDD. The read peak rate of the HDD was 130 MB/s. The
device was paravirtualized. We created two guests, pv1 and
pv2, pinned to two different cores, each accessing one of the
two partitions. The workload executed by these two virtual
machines is as follows:
<list list-type="bullet">
<list-item><p>pv1 is a Credit-scheduled virtual machine, associated
with the Xen default scheduling weight (see Section 3
for a brief introduction to the Xen Credit scheduler
and the description of its parameters). pv1 executes a
non-critical, non-real-time, I/O-intensive application,
sequentially reading a single 1 GB file. Such an
application acts as an interfering workload for real-time
tasks in a different domain.</p></list-item>
<list-item><p>pv2 is an RTDS-scheduled virtual machine that runs
a single task with a 50 ms period. The end of the
period is assumed to coincide with its relative deadline.
Within its period, this guest has to read a
memory-page-sized (4 KB) chunk of data, randomly chosen out
of a 1 GB file in its partition. This setup allows
reproducing the worst-case HDD access latency, which, for
random reads, has a bandwidth of 0.63 MB/s,
corresponding to an average latency of around 5 ms for a
4 KB memory-page read.</p></list-item>
</list>
      </p>
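<p>The pv2 workload can be sketched in a few lines of Python (a minimal illustration of the periodic random-read task, not the authors' actual benchmark; file path and bookkeeping details are assumptions):</p>

```python
import os
import random
import time

PERIOD_S = 0.050      # 50 ms period; deadline at the end of each period
CHUNK = 4096          # one 4 KB memory page per period
FILE_SIZE = 1 << 30   # nominal 1 GB file size used to draw random offsets

def run_periodic_reader(path, iterations):
    """Issue one random 4 KB read per period and count deadline misses.

    Reads past the end of a smaller file simply return fewer bytes,
    so the sketch also runs against small test files.
    """
    misses = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(iterations):
            start = time.monotonic()
            offset = random.randrange(0, FILE_SIZE - CHUNK)
            os.pread(fd, CHUNK, offset)          # the periodic I/O job
            elapsed = time.monotonic() - start
            if elapsed > PERIOD_S:
                misses += 1                      # job overran its deadline
            else:
                time.sleep(PERIOD_S - elapsed)   # wait for the next release
    finally:
        os.close(fd)
    return misses
```

<p>On an idle disk each read finishes well within the period; under pv1's sequential-read interference, the same loop accumulates the deadline misses reported in Figure 1.</p>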
<p>In order to rule out any performance bottlenecks in
the privileged domain, we assigned the remaining memory and
cores to Dom0. Despite the large slack available to the RTDS guest
(pv2) to complete its memory read, and its higher priority,
the I/O interference causes pv2 to experience a large
number of deadline misses. Figure 1 shows the results of the
experiment, sampling 40 periodic I/O read accesses (x axis)
by the RTDS domain. The y axis represents the time taken
by each request in µs. Each bar represents the actual I/O
request time. The horizontal black line indicates the period
between subsequent requests, while the vertical green line
corresponds to the instant when the interference generated
by the Credit-scheduled domain (pv1) is over. As can be
easily seen, pv2's requests starve during the read process of
the Credit guest, which almost monopolizes the access to
the HDD. In contrast, when the interference created by pv1
is over, pv2 does not experience any deadline miss. In
retrospect, this behavior is not surprising, as the higher priority
given to an RTDS guest affects only the scheduling on the
different CPUs, but has no effect on I/O scheduling. In
other words, the priority is not transferred from CPU to
I/O.</p>
    </sec>
<sec id="sec-3">
      <title>3. I/O SCHEDULING IN VIRTUALIZED ENVIRONMENTS</title>
<p>A significant number of contributions have addressed I/O
virtualization issues by modifying the existing Xen Credit
scheduler. The Credit scheduler is the default CPU scheduler,
and it works by distributing credits among virtual machines
in proportion to their weights. Weights can be freely set
at guest creation. A virtual machine consumes its credits
while running on a physical CPU, and is in an over or
under priority status depending on whether or not it has exceeded
its share of the CPU resource within a considered time
window. Credits are redistributed to each virtual machine
by a specifically designed system-wide thread.</p>
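<p>The weight-proportional accounting just described can be sketched as follows (an illustrative Python model; the real scheduler's constants, tick granularity and bookkeeping differ):</p>

```python
# Minimal sketch of Xen-Credit-style accounting (illustrative only).
CREDITS_PER_WINDOW = 300   # credits distributed per window (assumed value)

def redistribute(vms):
    """Give each VM credits in proportion to its weight."""
    total_weight = sum(vm["weight"] for vm in vms)
    for vm in vms:
        vm["credits"] += CREDITS_PER_WINDOW * vm["weight"] // total_weight
        # A VM with a positive balance is UNDER (scheduled first);
        # one that has overdrawn its share is OVER.
        vm["state"] = "UNDER" if vm["credits"] > 0 else "OVER"

vms = [
    {"name": "pv1", "weight": 256, "credits": 0},
    {"name": "pv2", "weight": 512, "credits": -50},  # already overdrawn
]
redistribute(vms)
print([(vm["name"], vm["credits"], vm["state"]) for vm in vms])
```

<p>With weights 256 and 512, pv2 receives twice as many credits per window as pv1, which is exactly the proportional-share behavior the weights encode.</p>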
      <p>
The part of the Credit scheduler that relates to I/O
scheduling is connected to what is known as the boosting mechanism,
i.e., an additional boost priority status that allows
performance improvements for I/O-intensive guests [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. A very
demanding I/O task running on a guest causes the
virtual machine to get blocked often, leading to a very
limited credit consumption, with the guest always in the under
state. When the guest wakes up after completing an I/O request,
the VMM grants it the boost priority, allowing it
to preempt other running virtual machines to process the
requested data.
      </p>
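<p>The effect of the boost state on run-queue ordering can be sketched as follows (an illustrative ordering only; the real scheduler keeps per-CPU queues and additional rules):</p>

```python
# Illustrative priority ordering used when picking the next vCPU:
# BOOST runs before UNDER, which runs before OVER.
PRIO = {"BOOST": 0, "UNDER": 1, "OVER": 2}

def pick_next(runqueue):
    """Pick the highest-priority runnable vCPU."""
    return min(runqueue, key=lambda vcpu: PRIO[vcpu["state"]])

def wake_after_io(vcpu):
    """A guest waking up after I/O completion gets the boost priority,
    letting it preempt currently running UNDER/OVER guests."""
    if vcpu["state"] == "UNDER":   # only non-overdrawn guests are boosted
        vcpu["state"] = "BOOST"
    return vcpu

rq = [{"name": "a", "state": "OVER"}, {"name": "b", "state": "UNDER"}]
io_guest = wake_after_io({"name": "pv1", "state": "UNDER"})
rq.append(io_guest)
print(pick_next(rq)["name"])  # pv1
```

<p>This is precisely why an I/O-intensive guest such as pv1, which blocks often and rarely leaves the under state, keeps jumping ahead of its competitors.</p>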
      <p>
Different works have tried to improve I/O scheduling in
virtualized environments by acting on the mechanisms used
by Xen to assign priority statuses to guests and on the
above-described boosting mechanism. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
], the authors
developed a solution that extends the mechanisms of the
Xen Credit scheduler. They introduced the notion of
task-aware (I/O) scheduling, arguing that a task-aware model is
beneficial for scheduling purposes, especially in situations
where mixed resource usage and I/O-intensive tasks are
concentrated in specific domains. Once the VMM knows
which guest has higher I/O bandwidth requirements
or specific latency-related constraints, the hypervisor will
use this information to decide how and when to assign the
boost priority to those I/O-bound guests. In [
        <xref ref-type="bibr" rid="ref7">7</xref>
], the authors
followed a somewhat similar, but more complex, approach.
They developed a technique for speeding up I/O
virtualization using direct I/O with a hardware IOMMU. To support
real-time responsiveness for high-quality I/O virtualization, a new
REAL_TIME priority state, supporting preemption, is added
to the Xen Credit scheduler. Consider a latency-sensitive
application running inside a guest to which an associated
latency-critical pass-through device is assigned. Whenever
the pass-through device fires an interrupt, the associated
virtual machine is automatically promoted to the REAL_TIME
state, triggering a preemption of any non-REAL_TIME guest
to schedule that particular machine right away. While the
first contribution [
        <xref ref-type="bibr" rid="ref12">12</xref>
] mainly focuses on achieving fair
behavior among domains, the latter [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] shows promising results
in terms of I/O throughput and latencies. Due to the low
latency values obtained, the authors in [
        <xref ref-type="bibr" rid="ref7">7</xref>
] claim to have
designed a real-time virtualized environment, but no
experimental or analytical evidence has been provided to support
these claims using classic real-time metrics, such as
schedulability ratio, worst-case response times, or deadline misses.
      </p>
      <p>
        Another interesting approach is presented in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
Both works focus on adaptive time-slice sizing in Xen.
In the first contribution, the authors modified the Xen Credit
scheduler to guarantee Quality of Service (QoS)
requirements for streaming audio applications in virtualized
environments. They designed an adaptive modifier of the Xen
Credit scheduler that allows flexible time slices and real-time
priority flags to be dynamically assigned to guests.
According to the presented results, the authors were able to
improve the responsiveness of latency-sensitive applications,
achieving some kind of soft real-time guarantee. They tested
their implementation by pinning multiple virtual CPUs
(vCPUs) to the same physical core, hence testing concurrent I/O
requests rather than parallel I/O operations. In [
        <xref ref-type="bibr" rid="ref11">11</xref>
], the
authors adopt a similar mechanism for an on-the-fly
adaptation of the time slices within the I/O scheduling policies
(mainly CFQ and Anticipatory) of the Linux kernels
executing within the Xen unprivileged domains. Here,
parallel HDD requests are explicitly considered, showing
improved latency. However, even if improving latencies is
an important step towards predictability, a system that
dynamically adapts scheduling parameters, such as time slices,
makes it very difficult to identify the worst-case scheduling
settings on which to build a tight timing and schedulability
analysis.
      </p>
    </sec>
<sec id="sec-5">
      <title>4. MULTI-CORE PARTITIONING AND VIRTUALIZATION</title>
      <p>
Another promising direction to obtain predictable
behavior is to exploit the multi-core nature of today's CPUs,
assigning specific I/O handling functions to specific cores.
This can be accomplished with CPU hardware extensions
and/or a different virtualization paradigm using
partitioning-based hypervisors. The work in [
        <xref ref-type="bibr" rid="ref9">9</xref>
] proposes a Xen
implementation that monitors runtime information about the bandwidth
requirements of the different guests. Specific functions
related to I/O handling are pinned to specific cores, e.g., one
core is used for driver-related aspects, another to handle
I/O events, another for generic computations.
Performance improvements are claimed in terms of bandwidth and
latencies, with a slight drop in the performance of
compute-intensive tasks.
      </p>
      <p>
        Another interesting contribution that relates to core
specialization is presented in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. A VMM based on hardware
resource partitioning is taken into account, proposing a
hypervisor (SplitX) that resembles the operating mechanisms
of Jailhouse2. Specialized cores can handle I/O related
interrupt and hypervisor instructions. The authors claim that
I/O level performance is expected to reach near bare-metal
performance, by means of hardware extensions to allow
directed inter-core signals for events noti cation but also for
managing resources belonging to other cores. An example of
this latter feature may allow a core to assign speci c values
to the registers of a di erent core. Unfortunately, this latter
batch of related works mainly deals with performance
improvements. Even if a considerable drop on latency values is
a promising start for achieving real-time guarantees, these
approaches are not concerned with obtaining worst-case
delay bounds and a tight timing analysis.
      </p>
      <p>
        In a recent publication [
        <xref ref-type="bibr" rid="ref20">20</xref>
], a scratchpad-centric
non-virtualized architecture is presented, in which real-time
requirements are explicitly taken into consideration.
Similarly to the other approaches presented in this section, a
specific core is delegated to I/O operations, exploiting a
dedicated I/O bus. Task executions are decoupled from
instruction and data loading using a Time Division Multiplexing
(TDM) approach. I/O operations are included in the same
time slice used for task loading/unloading. While this
latter contribution presents a very rigorous and sound timing
analysis, it does not explore the intrinsic threats that I/O poses
to predictability in virtualized environments, nor does it address
the problem of multiple I/O tasks hogging the dedicated
core.
      </p>
    </sec>
<sec id="sec-7">
      <title>5. PERFORMANCE AND SECURITY ISSUES INTRODUCED BY I/O VIRTUALIZATION</title>
<p>It is straightforward to observe that a hypervisor
allowing its guests to run entire operating systems can easily
introduce noticeable overheads due to the local CPU and
I/O schedulers. Virtualized platforms, such as Xen, have
their own CPU arbitration policies for scheduling guests;
moreover, privileged domains have to go through their own
block layer, while each guest runs its own kernel with
different local scheduling policies for both CPU and I/O, hence
adding a further level of complexity when accessing the
storage device. This hierarchical structure is known to cause
performance drops compared to bare-metal systems, but it
also exposes a more complex architecture that dramatically
increases the difficulty of deriving a sound timing analysis.</p>
      <p>
The performance overhead issue has been studied in
different works [
        <xref ref-type="bibr" rid="ref16 ref26 ref5">26, 16, 5</xref>
        ]. In a recent paper [
        <xref ref-type="bibr" rid="ref25">25</xref>
] the authors
measured the overhead of I/O stack duplication between the host
and virtual machines running KVM as the VMM. The paper also
provides a very complete survey of previous tests on different
VMMs that eliminate a layer of the I/O scheduler. A simple
QEMU + virtIO solution is shown to outperform almost all
tested scenarios.
      </p>
      <p>
The hierarchical organization of these kinds of hypervisors
also poses significant security threats. In [
        <xref ref-type="bibr" rid="ref24">24</xref>
], an untamed
I/O-intensive task running within a compromised/malicious
guest is used to slow down and interfere with other
supposedly separated domains. For this reason, the authors
recommend adopting a separate and unique I/O scheduler
for virtualized environments.
      </p>
<p>We believe that such an I/O scheduler should be designed
with the same guidelines considered when implementing
efficient CPU real-time schedulers, ensuring proper isolation
among tasks that require disk access, while allowing them
to complete their workload within given deadlines. On this
latter consideration, it has to be pointed out that the Linux
kernel provides features such as control groups (cgroups)
that can be used to isolate, limit and control the disk
(rotational or SSD storage device) accesses of sets of processes.
For example, cgroups can be set within privileged domains
to limit resource usage by unprivileged guests. However,
this feature does not provide specific scheduling policies;
rather, it relies on the underlying I/O scheduler, and on its
policy, for enforcement. In this respect, current Linux I/O
schedulers implement policies that are too coarse for typical real-time
requirements.</p>
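<p>As a concrete illustration, the cgroup v1 blkio controller lets a privileged domain cap the read bandwidth of a group of processes by writing a "major:minor bytes_per_second" tuple to blkio.throttle.read_bps_device. The sketch below only builds the configuration strings and paths (the device numbers, group name and mount point are assumptions, and actually applying them requires root):</p>

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup/blkio"   # typical cgroup v1 mount (assumed)

def blkio_throttle_entry(major, minor, bytes_per_sec):
    """Format a blkio.throttle.read_bps_device entry: 'MAJ:MIN BPS'."""
    return f"{major}:{minor} {bytes_per_sec}"

def throttle_paths(group):
    """Paths one would write to in order to cap a group's disk reads."""
    base = os.path.join(CGROUP_ROOT, group)
    return {
        "read_bps": os.path.join(base, "blkio.throttle.read_bps_device"),
        "tasks": os.path.join(base, "tasks"),
    }

# Cap a hypothetical "guests" group to 1 MB/s of reads from device 8:0.
entry = blkio_throttle_entry(8, 0, 1_048_576)
print(entry)                              # "8:0 1048576"
print(throttle_paths("guests")["read_bps"])
```

<p>Note that, as observed above, such throttling only caps bandwidth: the ordering of the surviving requests is still delegated to the underlying I/O scheduler.</p>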
    </sec>
    <sec id="sec-9">
<title>6. DEADLINE-AWARE I/O SCHEDULING</title>
      <p>
The need to provide tighter real-time guarantees to tasks
accessing disk I/O has been addressed since the
early '90s. In [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], Reddy et al. presented a scheduling
algorithm called SCAN-EDF that combines the Earliest
Deadline First (EDF) [
        <xref ref-type="bibr" rid="ref15">15</xref>
] and SCAN algorithms to minimize
request latency on HDDs while serving deadline-constrained
tasks. Over the years, this algorithm has also been
modified and improved by means of heuristics, such as batching
and delaying requests, or aggregating multiple queues of
requests. In [
        <xref ref-type="bibr" rid="ref10 ref14">10, 14</xref>
        ] and [
        <xref ref-type="bibr" rid="ref4">4</xref>
], the Xen I/O architecture has
been modified to include deadline-based scheduling for
storage in a virtualized environment. In [
        <xref ref-type="bibr" rid="ref10 ref14">10, 14</xref>
], a two-level
scheduler called Flubber is introduced. The first level
defines the throughput using a credit-rate controller to ensure
performance isolation, while the second level applies Batch
and Delay EDF (BD-EDF) to manage the request queues
from the different guests. Even if the authors claim that
Flubber improves Xen performance and allows the system
administrator to specify deadlines, no results are provided
to evaluate the worst-case delays and blocking times needed
to establish a sound timing analysis. In [
        <xref ref-type="bibr" rid="ref4">4</xref>
], a similar
approach is used, where the first level accumulates the amount
of I/O requests in a fixed time slice while analyzing the
disk bandwidth, and the second level exploits the
deadline-modified SCAN algorithm, reordering the requests
according to the deadline group and to their location on the disk.
While there is a performance enhancement for I/O-intensive
workloads, this work does not appear to lend itself to the
analytical guarantees required in a real-time setting either.
      </p>
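<p>SCAN-EDF's core ordering rule can be sketched as follows (an illustrative reconstruction: requests are served in EDF order, and ties on the deadline are broken in seek order from the current head position, so the arm sweeps instead of thrashing):</p>

```python
def scan_edf_order(requests, head_pos):
    """Order disk requests by (deadline, SCAN position).

    Primary key: EDF (earliest deadline first). Among requests with
    equal deadlines, serve tracks at or ahead of the head in ascending
    order, then the remaining tracks on the way back.
    `requests` is a list of (deadline, track) pairs.
    """
    def key(req):
        deadline, track = req
        ahead = 0 if track >= head_pos else 1
        # Ahead-of-head tracks ascending; behind-head tracks descending.
        return (deadline, ahead, track if ahead == 0 else -track)
    return sorted(requests, key=key)

reqs = [(100, 50), (100, 10), (40, 90), (100, 30), (40, 5)]
print(scan_edf_order(reqs, head_pos=20))
```

<p>Requests with deadline 40 are served first regardless of track position; among the deadline-100 requests, tracks 30 and 50 are swept through before the arm returns to track 10.</p>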
    </sec>
    <sec id="sec-10">
<title>7. REAL-TIME ISSUES IN SSDS</title>
      <p>
Solid State Drive (SSD) based storage devices deliver from 5
to 10 times the bandwidth of an HDD, while maintaining
a low power consumption and a much stronger resistance to
shocks and vibrations. These features make SSDs the
primary choice for applications in the automotive and
avionics sectors, in which embedded platforms have to sustain
prolonged vibrations while still delivering high performance.
This makes it particularly interesting to understand whether
the previous considerations coming from experiments
executed on HDDs equivalently hold for guests sharing access
to an SSD. In this respect, it has to be pointed out that
the intrinsic operating mechanisms of SSDs pose significant
problems for the design of predictable hard real-time
systems. This is due to the fact that flash memories are
a write-once, bulk-erase medium, which implies that a
flash translation layer (FTL) and a Garbage Collection (GC)
mechanism are needed to provide applications with a
transparent storage service. A naive best-effort GC policy can
unpredictably start segmentation operations, causing tasks to
wait for potentially long blocking times. The authors in
[
        <xref ref-type="bibr" rid="ref2">2</xref>
] focused on providing hard real-time guarantees for the
GC phase in small NAND flash devices by proposing a
token-based garbage collection system. The presented results
showed that the implemented system is predictable and
robust to interference introduced by non-real-time tasks. A
prototype was tested on a 16 MB NAND-flash drive with two
real-time tasks and one non-real-time task in a
manufacturing-system scenario, with no deadline violations until high
CPU utilization. A more recent contribution [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] observed
that the previous solution does not scale well, making it
impossible to apply to modern SSDs, which have a much larger
storage capacity. An FTL implementation (KAST) is then
proposed to allow the user to control the worst-case blocking
time by tuning some GC parameters.
      </p>
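      <p>The intuition behind the token-based scheme of [2] can be conveyed with a toy model (our own sketch; the token-per-write ratio and erase cost are purely illustrative assumptions, not figures from the paper): GC may erase a block only after accumulating enough tokens donated by serviced writes, so the GC work charged to any single write stays bounded.</p>

```python
# Toy model of a token-based GC policy in the spirit of [2]: garbage
# collection may only run an erase once it has fully "paid" for it with
# tokens earned from serviced writes. The constants below (tokens per
# write, erase cost) are illustrative assumptions.
class TokenGC:
    def __init__(self, tokens_per_write=1, erase_cost=4):
        self.tokens = 0
        self.tokens_per_write = tokens_per_write
        self.erase_cost = erase_cost
        self.erases = 0

    def on_write(self):
        """Each serviced write donates tokens toward future GC work."""
        self.tokens += self.tokens_per_write
        # GC runs only when fully paid for, so the GC work triggered by
        # any single write is bounded: never an unbounded reclamation burst.
        while self.tokens >= self.erase_cost:
            self.tokens -= self.erase_cost
            self.erases += 1

gc = TokenGC()
for _ in range(10):
    gc.on_write()
print(gc.erases, gc.tokens)  # -> 2 2 (10 tokens earned: 2 erases, 2 left)
```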
    </sec>
    <sec id="sec-11">
      <title>CONCLUSIONS</title>
      <p>Hypervisors represent a possible solution to bypass
the unpredictable scheduling policies enforced by off-the-shelf
arbiters for the access to shared hardware resources. By
making informed decisions on the scheduling of the different
requests coming from multiple tasks, a hypervisor may provide
stronger timing guarantees to real-time tasks, predictably
limiting the delays due to interfering requests on the shared
devices. Taking I/O scheduling as a representative case of
resource sharing, we highlighted the main results concerned
with bounding the delays due to competing accesses to
storage devices in virtualized environments. We showed how I/O
intensive tasks within non-critical virtual machines can
easily cause more critical partitions to experience long blocking
delays, leading to repeated deadline misses. This was the
case with the Xen hypervisor, whose critical partitions are
"privileged" only when assigning processing bandwidth, but
not when arbitrating the access to other shared resources.
We argued that smarter scheduling policies are needed that
take into account the timing requirements of the different
tasks/partitions also when arbitrating the access to shared
devices.</p>
      <p>We showed that existing mechanisms to reduce
blocking delays are mainly tailored to obtain better average
performance or achieve a fairer behavior, but cannot be
leveraged to develop a sound timing and schedulability analysis.
While it would be possible to manually tune the bandwidth
allocated to each partition when accessing an I/O device,
e.g., by playing with cgroups parameters in Xen, such a
solution has clear limits in terms of flexibility, efficiency and
responsiveness, preventing a tight timing analysis.
Moreover, we pointed out that hypervisors like Xen and KVM
add further layers of complexity to guest operating systems,
with repeated scheduling and block layers coupled with
paravirtualized driver architectures, making it very difficult to
formalize the I/O scheduling model. Partitioned
hypervisors seem more suitable in this sense, especially when the
number of cores increases and each domain can be statically
assigned to one or more dedicated cores. Still, most of the
available partitioned VMMs do not allow for predictable
and concurrent access to shared devices: they either
exclusively pin each resource to a selected domain, preventing
tasks running on other partitions from accessing it, or they
implement para-virtualized schemes that are not aware of the
different real-time requirements.</p>
      <p>To conclude, we believe that guaranteeing hard real-time
requirements within embedded virtualized platforms requires
the hypervisors to be made aware of the I/O requirements of
their guests. Performance-oriented considerations aiming at
improving average latencies need to be sacrificed to achieve
a more predictable behavior. We hope this paper may
stimulate the research on predictable I/O scheduling policies for
virtualized environments, paving the way towards simpler
and tighter timing analysis.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Barham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dragovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Fraser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Neugebauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Pratt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Warfield</surname>
          </string-name>
          .
          <article-title>Xen and the art of virtualization</article-title>
          .
          <source>ACM SIGOPS Operating Systems Review</source>
          ,
          <volume>37</volume>
          (
          <issue>5</issue>
          ):
          <fpage>164</fpage>
          –
          <lpage>177</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.-P.</given-names>
            <surname>Chang</surname>
          </string-name>
          , T.-W. Kuo, and
          <string-name>
            <given-names>S.-W.</given-names>
            <surname>Lo</surname>
          </string-name>
          .
          <article-title>Real-time garbage collection for flash-memory storage systems of real-time embedded systems</article-title>
          .
          <source>ACM Trans. Embed. Comput. Syst.</source>
          ,
          <volume>3</volume>
          (
          <issue>4</issue>
          ):
          <fpage>837</fpage>
          –
          <lpage>863</lpage>
          ,
          Nov.
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Yuan</surname>
          </string-name>
          .
          <article-title>Adaptive audio-aware scheduling in xen virtual environment</article-title>
          .
          <source>In Proceedings of the ACS/IEEE International Conference on Computer Systems and Applications - AICCSA</source>
          <year>2010</year>
          , AICCSA '
          <volume>10</volume>
          , pages
          <fpage>1</fpage>
          –
          <lpage>8</lpage>
          , Washington, DC, USA,
          <year>2010</year>
          . IEEE Computer Society.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          , H.-W. Wei,
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Chen</surname>
          </string-name>
          , W.-K. Shih, and T.-s. Hsu.
          <article-title>Integrating deadline-modi cation scan algorithm to xen-based cloud platform</article-title>
          .
          <source>In Cluster Computing (CLUSTER)</source>
          ,
          <source>2013 IEEE International Conference on, pages 1–4</source>
          . IEEE,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cherkasova</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Gardner</surname>
          </string-name>
          .
          <article-title>Measuring cpu overhead for i/o processing in the xen virtual machine monitor</article-title>
          .
          <source>In Proceedings of the Annual Conference on USENIX Annual Technical Conference, ATEC '05</source>
          , pages
          <fpage>24</fpage>
          –
          <lpage>24</lpage>
          , Berkeley, CA, USA,
          <year>2005</year>
          . USENIX Association.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shin</surname>
          </string-name>
          , and
          <string-name>
            <surname>Y. I. Eom.</surname>
          </string-name>
          <article-title>KAST: K-associative sector translation for NAND flash memory in real-time systems</article-title>
          .
          <source>In Proceedings of the Conference on Design, Automation and Test in Europe, DATE '09</source>
          , pages
          <fpage>507</fpage>
          –
          <lpage>512</lpage>
          , 3001 Leuven, Belgium, Belgium,
          <year>2009</year>
          . European Design and Automation Association
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tian</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiang</surname>
          </string-name>
          .
          <article-title>Towards high-quality i/o virtualization</article-title>
          .
          <source>In Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference, page 12. ACM</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhao</surname>
          </string-name>
          .
          <article-title>A state-of-the-art survey on real-time issues in embedded systems virtualization</article-title>
          .
          <source>Journal of Software Engineering and Applications</source>
          ,
          <volume>5</volume>
          (
          <issue>4</issue>
          ):
          <fpage>227</fpage>
          –
          <lpage>290</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>He</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Xia</surname>
          </string-name>
          .
          <article-title>I/o scheduling model of virtual machine based on multi-core dynamic partitioning</article-title>
          .
          <source>In Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, HPDC '10</source>
          , pages
          <fpage>142</fpage>
          –
          <lpage>154</lpage>
          , New York, NY, USA,
          <year>2010</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ibrahim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Antoniu</surname>
          </string-name>
          .
          <article-title>Flubber: Two-level disk scheduling in virtualized environment</article-title>
          .
          <source>Future Generation Computer Systems</source>
          ,
          <volume>29</volume>
          (
          <issue>8</issue>
          ):
          <fpage>2222</fpage>
          –
          <lpage>2238</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kesavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gavrilovska</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Schwan</surname>
          </string-name>
          .
          <article-title>On disk i/o scheduling in virtual machines</article-title>
          .
          <source>In Proceedings of the 2nd conference on I/O virtualization, pages 6–6. USENIX Association</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jeong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>Task-aware virtual machine scheduling for i/o performance</article-title>
          .
          <source>In Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE '09</source>
          , pages
          <fpage>101</fpage>
          –
          <lpage>110</lpage>
          , New York, NY, USA,
          <year>2009</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Landau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ben-Yehuda</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Gordon</surname>
          </string-name>
          .
          <article-title>SplitX: Split guest/hypervisor execution on multi-core</article-title>
          .
          <source>In WIOV</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>X.</given-names>
            <surname>Ling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ibrahim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Cao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Wu</surname>
          </string-name>
          .
          <article-title>Efficient disk i/o scheduling with qos guarantee for xen-based hosting platforms</article-title>
          .
          <source>In Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (Ccgrid</source>
          <year>2012</year>
          ),
          <source>CCGRID '12</source>
          , pages
          <fpage>81</fpage>
          –
          <lpage>89</lpage>
          , Washington, DC, USA,
          <year>2012</year>
          . IEEE Computer Society.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Liu</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Layland</surname>
          </string-name>
          .
          <article-title>Scheduling algorithms for multiprogramming in a hard-real-time environment</article-title>
          .
          <source>Journal of the ACM (JACM)</source>
          ,
          <volume>20</volume>
          (
          <issue>1</issue>
          ):
          <fpage>46</fpage>
          –
          <lpage>61</lpage>
          ,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ongaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Cox</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Rixner</surname>
          </string-name>
          .
          <article-title>Scheduling i/o in virtual machine monitors</article-title>
          .
          <source>In Proceedings of the Fourth ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE '08</source>
          , pages
          <fpage>1</fpage>
          –
          <lpage>10</lpage>
          , New York, NY, USA,
          <year>2008</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Reddy</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Wyllie</surname>
          </string-name>
          .
          <article-title>Disk scheduling in a multimedia i/o system</article-title>
          .
          <source>In Proceedings of the first ACM international conference on Multimedia</source>
          , pages
          <fpage>225</fpage>
          –
          <lpage>233</lpage>
          . ACM,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sinitsyn</surname>
          </string-name>
          . Jailhouse.
          <source>Linux Journal</source>
          ,
          <year>2015</year>
          (
          <volume>252</volume>
          ):
          <fpage>2</fpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>U.</given-names>
            <surname>Steinberg</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kauer</surname>
          </string-name>
          .
          <article-title>Nova: A microhypervisor-based secure virtualization architecture</article-title>
          .
          <source>In Proceedings of the 5th European Conference on Computer Systems</source>
          , EuroSys '
          <volume>10</volume>
          , pages
          <fpage>209</fpage>
          –
          <lpage>222</lpage>
          , New York, NY, USA,
          <year>2010</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tabish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mancuso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wasly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alhammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Phatak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pellizzoni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Caccamo</surname>
          </string-name>
          .
          <article-title>A real-time scratchpad-centric os for multi-core embedded systems</article-title>
          .
          <source>In 2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)</source>
          , pages
          <fpage>1</fpage>
          –
          <lpage>11</lpage>
          . IEEE,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>G.</given-names>
            <surname>Taccari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Taccari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fioravanti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Spalazzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Claudi</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. B.</given-names>
            <surname>SA.</surname>
          </string-name>
          <article-title>Embedded real-time virtualization: State of the art and research challenges</article-title>
          .
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>P.</given-names>
            <surname>Valente</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Checconi</surname>
          </string-name>
          .
          <article-title>High throughput disk scheduling with fair bandwidth distribution</article-title>
          .
          <source>IEEE Transactions on Computers</source>
          ,
          <volume>59</volume>
          (
          <issue>9</issue>
          ):
          <fpage>1172</fpage>
          –
          <lpage>1186</lpage>
          ,
          Sept.
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>S.</given-names>
            <surname>Xi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. T. X.</given-names>
            <surname>Phan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sokolsky</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>Real-time multi-core virtual machine scheduling in xen</article-title>
          .
          <source>In Proceedings of the 14th International Conference on Embedded Software, EMSOFT '14</source>
          , pages
          <fpage>27:1</fpage>
          –
          <lpage>27:10</lpage>
          , New York, NY, USA,
          <year>2014</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Huang</surname>
          </string-name>
          .
          <article-title>Understanding the effects of hypervisor i/o scheduling for virtual machine performance interference</article-title>
          .
          <source>In Cloud Computing Technology and Science (CloudCom)</source>
          ,
          <year>2012</year>
          IEEE 4th International Conference on, pages
          <fpage>34</fpage>
          –
          <lpage>41</lpage>
          ,
          Dec.
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kim</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y. I.</given-names>
            <surname>Eom</surname>
          </string-name>
          .
          <article-title>Performance analyses of duplicated i/o stack in virtualization environment</article-title>
          .
          <source>In Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication, page 26. ACM</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhao</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Tan</surname>
          </string-name>
          .
          <article-title>Evaluating i/o scheduling in virtual machines based on application load</article-title>
          .
          <source>Engineering Journal</source>
          ,
          <volume>17</volume>
          (
          <issue>3</issue>
          ):
          <fpage>105</fpage>
          –
          <lpage>112</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>