<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Supporting Virtualization Standard for Network Devices in RTEMS Real-Time Operating System</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jin-Hyun Kim</string-name>
          <email>jinhyun@konkuk.ac.kr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sang-Hun Lee</string-name>
          <email>shunlee@konkuk.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hyun-Wook Jin</string-name>
          <email>jinh@konkuk.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Department of Computer Science and Engineering, Konkuk University</institution>
          ,
          <addr-line>Seoul 143-701</addr-line>
          ,
          <country country="KR">Korea</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Department of Smart ICT Convergence, Konkuk University</institution>
          ,
          <addr-line>Seoul 143-701</addr-line>
          ,
          <country country="KR">Korea</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <abstract>
<p>Virtualization is attractive for modern embedded systems in that it not only can implement resource partitioning ideally but also can provide transparent software development environments. Although the hardware emulation overheads of virtualization have been reduced significantly, the network I/O performance in virtual machines is still not satisfactory. It is critical to minimize the virtualization overheads especially in real-time embedded systems, because the overheads can change the timing behavior of real-time applications. To resolve this issue, we aim to design and implement the device driver of RTEMS for the standardized virtual network device called virtio. Our virtio device driver is portable across different Virtual Machine Monitors (VMMs) because our implementation is compliant with the standard. The measurement results clearly show that our virtio can achieve performance comparable to the virtio implemented in Linux while reducing memory consumption for network buffers.</p>
      </abstract>
      <kwd-group>
<kwd>Network virtualization</kwd>
        <kwd>RTEMS</kwd>
        <kwd>Real-time operating system</kwd>
        <kwd>virtio</kwd>
        <kwd>Virtualization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
The virtualization technology provides multiple virtual
machines on a single device, each of which can run its own
operating system and applications over emulated hardware in an
isolated manner [
        <xref ref-type="bibr" rid="ref19">19</xref>
]. Virtualization has been applied to
large-scale server systems to securely consolidate different
services with high system utilization and low power
consumption. As modern complex embedded systems also
face size, weight, and power (SWaP) issues, researchers
are trying to utilize the virtualization technology for
temporal and spatial partitioning [
        <xref ref-type="bibr" rid="ref12 ref21 ref5">5, 21, 12</xref>
]. In partitioned
systems, a partition provides an isolated run-time
environment with respect to processor and memory resources; thus,
virtual machines can be exploited to efficiently implement
partitions. Moreover, virtualization can provide a
transparent and efficient development environment for embedded
software [
        <xref ref-type="bibr" rid="ref10">10</xref>
]. For example, if the number of target
hardware platforms is smaller than the number of software
developers, the developers can work with virtual machines
that emulate the target hardware system.
      </p>
      <p>
A drawback of virtualization, however, is the overhead of
hardware emulation, which increases software execution
time. Although the emulation performance of instruction
sets has been significantly improved, the network I/O
performance in virtual machines is still far from the ideal
performance [
        <xref ref-type="bibr" rid="ref13">13</xref>
]. It is critical to minimize the virtualization
overheads especially in real-time embedded systems, because
the overheads can increase the worst-case execution time and
jitter, thus changing the timing behavior of real-time
applications. A few approaches to improve the performance of
network I/O virtualization in the context of embedded
systems have been suggested, but these are either proprietary
or hardware-dependent [
        <xref ref-type="bibr" rid="ref6 ref7">7, 6</xref>
        ].
      </p>
<p>In order to improve the network I/O performance, a
paravirtualized abstraction layer is usually exposed to the device
driver running in the virtual machine. The device
driver then explicitly uses this abstraction layer instead of
accessing the original I/O space. This sacrifices transparency,
in the sense that the software knows whether it runs on a real
machine or a virtual machine, but can improve the network I/O
performance by avoiding hardware emulation. It is desirable to
use a standardized abstraction layer to guarantee
portability and reliability; otherwise, we would have to modify
or newly implement the device driver for different Virtual
Machine Monitors (VMMs) and manage different
versions of the device driver.</p>
      <p>
        In this paper, we aim to design and implement the
virtio driver for RTEMS [
        <xref ref-type="bibr" rid="ref2">2</xref>
], a Real-Time Operating System
(RTOS) used in spacecraft and satellites. virtio [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is the
standardized abstraction layer for paravirtualized I/O
devices and is supported by several well-known VMMs, such as
KVM [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and VirtualBox [
        <xref ref-type="bibr" rid="ref1">1</xref>
]. To the best of our knowledge,
this is the first work that presents the detailed design issues
of the virtio front-end driver for an RTOS. Thus, our study
can provide insight into design choices of virtio for RTOS.
The measurement results clearly show that our virtio can
achieve performance comparable to the virtio implemented
in Linux. We also demonstrate that our implementation can
reduce memory consumption without sacrificing network
bandwidth.
      </p>
      <p>The rest of the paper is organized as follows: In Section
2, we give an overview of virtualization and virtio. We also
discuss related work in this section. In Section 3, we describe
our design and implementation of virtio for RTEMS. The
performance evaluation is done in Section 4. Finally, we
conclude this paper in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. BACKGROUND</title>
      <p>In this section, we give an overview of virtualization and
describe virtio, the virtualization standard for I/O devices.
In addition, we discuss the state-of-the-art for network I/O
virtualization.</p>
    </sec>
    <sec id="sec-3">
      <title>2.1 Overview of Virtualization and virtio</title>
      <p>
        The software that creates and runs the virtual machines is
called VMM or hypervisor. The virtualization technology is
generally classified into full-virtualization and
paravirtualization. Full-virtualization allows a legacy operating
system to run in a virtual machine without any modifications.
To do this, VMMs for full-virtualization usually perform
binary translation and emulate every detail of the physical
hardware platform. KVM and VirtualBox are examples of
full-virtualization VMMs. On the other hand, VMMs for
paravirtualization provide guest operating systems with
programming interfaces, which are similar to the interfaces provided
by hardware platforms but much simpler and lighter. Thus,
paravirtualization requires modifications of guest
operating systems and can present better performance than
full-virtualization. Xen [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and XtratuM [
        <xref ref-type="bibr" rid="ref12">12</xref>
] are examples of
paravirtualization VMMs.
      </p>
      <p>
        virtio is the standard for virtual I/O devices. It was initially
suggested by IBM [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and recently became an OASIS
standard [
        <xref ref-type="bibr" rid="ref18">18</xref>
]. The virtio standard defines paravirtualized
interfaces between front-end and back-end drivers, as shown in
Fig. 1. The paravirtualized interfaces include two virtqueues
that store send and receive descriptors. Because virtqueues are
located in memory shared between the front-end and back-end
drivers, the guest operating system and VMM can directly
communicate with each other without hardware emulation. Many
VMMs, such as KVM, VirtualBox, and XtratuM, support
virtio or a modification of it. General-purpose operating
systems, such as Linux and Windows, implement the virtio
front-end driver.
      </p>
    </sec>
    <sec id="sec-4">
      <title>2.2 Related Work</title>
      <p>
There has been significant research on network I/O
virtualization. Most existing investigations, however,
focus on performance optimization for general-purpose
operating systems [
        <xref ref-type="bibr" rid="ref15 ref16 ref20 ref22 ref4">20, 15, 22, 16, 4</xref>
]. In particular, the
approaches that require support from network devices are not
suitable for embedded systems, because their network
controllers are not equipped with sufficient hardware resources
to implement multiple virtual network devices. Though
there is architectural research on efficient network I/O
virtualization in the context of embedded systems [
        <xref ref-type="bibr" rid="ref6">6</xref>
], it
also depends heavily on assistance from the network controller.
The software-based approach for embedded systems has been
studied only in a very limited scope that does not consider
standardized interfaces for network I/O virtualization [
        <xref ref-type="bibr" rid="ref7">7</xref>
].
Compared to existing research, virtio is differentiated
in that it does not require hardware support and is
more portable [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
]. Studies of virtio have mainly
dealt with the back-end driver [
        <xref ref-type="bibr" rid="ref11 ref14">11, 14</xref>
]. However, there are
several additional issues for the front-end driver on an RTOS
due to the inherent structural characteristics of RTOSs and the
resource constraints of embedded systems. In this paper, we
focus on the design and implementation issues of the virtio
front-end driver for RTOS.
      </p>
    </sec>
    <sec id="sec-5">
      <title>3. VIRTIO FOR RTEMS</title>
<p>In this section, we present the design of the virtio front-end
driver for RTEMS. Our design can efficiently handle
hardware events generated by the back-end driver and mitigate
memory consumption for network buffers. We have
implemented the suggested design on an experimental system
that runs RTEMS (version 4.10.2) over the KVM
hypervisor, as described in Section 4.1, but the design is
general enough to apply to other system setups.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1 Initialization</title>
<p>The virtio network device is implemented as a PCI device.
Thus, the front-end driver obtains the information of the
virtual network device through the PCI configuration space. Once
the registers of the virtio device are found in the
configuration space, the driver can access the I/O memory of the
virtio device by using the Base Address Register (BAR). The
virtio header, which has the layout shown in Fig. 2, is located
in that I/O memory region and is used for initialization.
Our front-end driver initializes the virtio device through the
virtio header as specified in the standard. For example,
the driver decides the size of the virtqueues by reading the
value in the Queue Size region. Then the driver allocates the
virtqueues in the guest memory area and lets the back-end
driver know the base addresses of the virtqueues by writing
these to the Queue Address region. Thus, both front-end
and back-end drivers can directly access the virtqueues by
means of memory referencing without expensive hardware
emulation.</p>
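<p>To make the sizing step concrete, the legacy virtio layout determines how much guest memory a virtqueue of a given Queue Size occupies: a descriptor table, an available ring, and a page-aligned used ring. The following is a minimal sketch of that calculation; the helper name vring_size is ours, not code from the RTEMS driver:</p>
      <preformat>
```c
/* Guest memory needed for one legacy-layout virtqueue of 'num' entries:
 * a 16-byte descriptor per entry, the available ring (flags, idx,
 * 'num' entries, and used_event, all 16-bit fields), then the used ring
 * (three 16-bit fields plus an 8-byte element per entry) starting on
 * an 'align' boundary (typically the 4096-byte page size). */
unsigned long vring_size(unsigned long num, unsigned long align)
{
    unsigned long size = 16u * num          /* descriptor table   */
                       + 2u * (3u + num);   /* available ring     */
    size = ((size + align - 1u) / align) * align; /* align used ring */
    return size + 2u * 3u + 8u * num;       /* used ring          */
}
```
      </preformat>
<p>For the default Queue Size of 256 this yields 10246 bytes per virtqueue, while a queue of 8 entries needs only 4166 bytes, which is one reason a smaller RX virtqueue (Section 3.3) also shrinks the ring itself.</p>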
<p>The front-end driver also initializes the function pointers of
the general network driver layer of RTEMS with the actual
network I/O functions implemented by the front-end driver.
For example, the if_start pointer is initialized with the
function that transmits a message through the virtio device.
This function adds a send descriptor to the TX virtqueue
and notifies the back-end driver. If the TX virtqueue
is full, this function temporarily queues the descriptor in
the interface queue described in Section 3.2.</p>
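<p>The transmit-path decision can be sketched as follows; the structure and names are hypothetical simplifications for illustration, not the actual driver code:</p>
      <preformat>
```c
/* Sketch of the if_start transmit path: place the send descriptor in
 * the TX virtqueue when a slot is free, otherwise defer it to the
 * RTEMS interface queue (IF_ENQUEUE in the real driver). */
struct tx_state {
    unsigned vq_free;   /* free slots in the TX virtqueue            */
    unsigned vq_count;  /* descriptors handed to the back-end        */
    unsigned ifq_count; /* descriptors parked in the interface queue */
};

/* Returns 1 if the descriptor reached the virtqueue, 0 if deferred. */
int virtio_if_start(struct tx_state *s)
{
    if (s->vq_free > 0u) {
        s->vq_free -= 1u;
        s->vq_count += 1u;  /* ...then notify the back-end driver    */
        return 1;
    }
    s->ifq_count += 1u;     /* retried later from the IRQ handler    */
    return 0;
}
```
      </preformat>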
    </sec>
    <sec id="sec-7">
      <title>3.2 Event Handling</title>
<p>The interrupt handler is responsible for hardware events.
However, since the interrupt handler is expected to finish
immediately, relinquishing the CPU as soon as
possible, the actual processing of hardware events usually takes
place later. In general-purpose operating systems, such
delayed event handling is performed by the bottom half, which
executes in interrupt context with a lower priority than the
interrupt handler. In regard to network I/O,
demultiplexing of incoming messages and handling of acknowledgment
packets are examples of what the bottom half performs.
However, RTOSs usually do not implement a framework for
the bottom half; thus, we have to use a high-priority thread as
a bottom half. The interrupt handler sends a signal to this
thread to request the actual event handling, where there
is a tradeoff between signaling overhead and the size of the
interrupt handler. If the bottom half thread handles every
hardware event, aiming for a small interrupt handler, the
signaling overhead can increase in proportion to the number of
interrupts. For example, it takes more than 70 µs per
interrupt in RTEMS for signaling and scheduling between a
thread and an interrupt handler on our experimental system
described in Section 4.1. On the other hand, if the interrupt
handler takes care of most events to reduce the signaling
overhead, the system throughput can be degraded, because
interrupt handlers usually disable interrupts during their
execution.</p>
<p>In our design, the interrupt handler is only responsible for
moving the send/receive descriptors between the interface queues
and the virtqueues when the state of the virtqueues changes. Fig. 3
shows the sequence of event handling, where the interface
queues are provided by RTEMS and used to pass network
messages between the device driver and upper-layer
protocols. When a hardware interrupt is triggered by the
back-end driver, the interrupt handler first checks whether the TX
virtqueue has available slots for more requests, and moves over any
send descriptors stored in the interface queue waiting
for the TX virtqueue to become available (steps 1 and 2 in
Fig. 3). Then the interrupt handler sees whether the RX
virtqueue has used descriptors for incoming messages, and
moves these to the interface queue (steps 3 and 4 in Fig. 3).
Finally, the interrupt handler sends a signal to the bottom
half thread (step 5 in Fig. 3) so that the received messages
can be processed later (step 6 in Fig. 3).
It is noteworthy that the interrupt handler handles multiple
descriptors at a time to reduce the number of signals. In
addition, we suppress interrupts with the aid of the
back-end driver.</p>
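<p>Steps 1 through 5 of this sequence can be summarized in a small model; counters stand in for the queues and rings, and all names are illustrative only:</p>
      <preformat>
```c
/* Illustrative model of the interrupt handler sequence in Fig. 3. */
struct irq_state {
    unsigned tx_vq_free;  /* free TX virtqueue slots (step 1)         */
    unsigned tx_ifq;      /* deferred send descriptors (step 2)       */
    unsigned rx_used;     /* used RX descriptors, i.e. arrivals (3)   */
    unsigned rx_ifq;      /* received messages handed upward (4)      */
    unsigned signals;     /* wakeups sent to the bottom half (5)      */
};

void virtio_irq_handler(struct irq_state *s)
{
    /* Steps 1-2: drain deferred sends into the freed TX slots. */
    unsigned moved = s->tx_ifq;
    if (moved > s->tx_vq_free)
        moved = s->tx_vq_free;
    s->tx_ifq -= moved;
    s->tx_vq_free -= moved;

    /* Steps 3-4: move every used RX descriptor to the interface queue. */
    s->rx_ifq += s->rx_used;
    s->rx_used = 0u;

    /* Step 5: a single signal wakes the bottom-half thread, which
     * later performs the protocol processing (step 6). Batching the
     * descriptors above keeps this at one signal per interrupt. */
    s->signals += 1u;
}
```
      </preformat>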
    </sec>
    <sec id="sec-8">
      <title>3.3 Network Buffer Allocation</title>
<p>On the sender side, network messages are
buffered temporarily in the kernel due to the TCP congestion and
flow controls. As the TCP window moves, the buffered
messages are sent as far as the TCP window allows. Thus, a
larger number of buffered messages can easily fill the
window and can achieve higher bandwidth. On the receiver
side, received messages are also buffered in the kernel
until the destination task becomes ready. A larger memory
space to keep the received messages can also enhance the
bandwidth, because it increases the advertised window size
in flow control. Although a larger TCP buffer size is
beneficial for network bandwidth, the operating system limits the
total size of messages buffered in the kernel to prevent the
messages from exhausting memory resources. However, we
have observed that the default TCP buffer size of 16 KByte
in RTEMS is not sufficient to fully utilize the bandwidth
provided by Gigabit Ethernet. Therefore, in Section 4.2,
we heuristically search for the optimal size of the TCP buffer
that promises high bandwidth without excessively wasting
memory resources.</p>
<p>Moreover, we control the number of preallocated receive
buffers (i.e., mbufs). The virtio front-end driver is supposed
to preallocate a number of receive buffers matching the
RX virtqueue, each of which occupies 2 KByte of memory.
The descriptors of the preallocated buffers are enqueued at
the initialization phase so that the back-end driver can
directly place incoming messages in those buffers. This can
improve the bandwidth by reducing the number of
interrupts, but the reserved memory areas can waste memory
resources. Therefore, it is desirable to size the RX virtqueue
based on the throughput of the front-end and back-end drivers.
If the front-end driver can process more messages than the
back-end driver, we do not need a large number of
preallocated receive buffers. However, we need a sufficient number
of buffers if the front-end driver is slower than the back-end
driver. The back-end driver of KVM requires 256
preallocated receive buffers, but we have discovered that 256 buffers
are excessive for Gigabit Ethernet, as discussed in
Section 4.2.</p>
<p>Fig. 4 shows how we control the number of preallocated
receive buffers. The typical RX virtqueue has the used ring
and available ring areas, which store descriptors for used
and unused preallocated buffers, respectively. Our front-end
driver introduces an empty ring area to the RX virtqueue
in order to limit the number of preallocated buffers. At the
initialization phase, we fill the descriptors with preallocated
buffers until desc_head_idx reaches the threshold defined
as sizeof(virtqueue) - sizeof(empty ring). Then, whenever
the interrupt handler is invoked, it enqueues as many descriptors
of new preallocated buffers as vring_used.idx -
used_cons_idx (i.e., the size of the used ring). The descriptors in
the used ring are retrieved by the interrupt handler as
mentioned in Section 3.2. Thus, the size of the empty ring is
constant. We show a detailed analysis of the tradeoff between the
RX virtqueue size and network bandwidth in Section 4.2.</p>
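<p>A sketch of this refill arithmetic follows; the unsigned 16-bit index subtraction makes it robust to wraparound, and the helper names are ours rather than names from the driver:</p>
      <preformat>
```c
/* How many new preallocated buffers to enqueue on each interrupt:
 * the size of the used ring, i.e. vring_used.idx - used_cons_idx.
 * Unsigned 16-bit subtraction handles index wraparound at 65536. */
unsigned rx_refill_count(unsigned short used_idx,
                         unsigned short used_cons_idx)
{
    return (unsigned short)(used_idx - used_cons_idx);
}

/* desc_head_idx stops here at initialization, leaving the empty ring:
 * only sizeof(virtqueue) - sizeof(empty ring) buffers are ever
 * preallocated. */
unsigned rx_prealloc_threshold(unsigned vq_size, unsigned empty_ring)
{
    return vq_size - empty_ring;
}
```
      </preformat>
<p>For example, with a virtqueue of 256 entries and an empty ring of 248, only 8 receive buffers are preallocated, matching the result of Section 4.2.</p>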
    </sec>
    <sec id="sec-9">
      <title>4. PERFORMANCE EVALUATION</title>
      <p>In this section, we analyze the performance of our design
suggested in the previous section. First, we measure the
impact of network buffer size on network bandwidth. Then, we
compare the bandwidth and latency of our implementation
with those of Linux.</p>
    </sec>
    <sec id="sec-10">
      <title>4.1 Experimental System Setup</title>
<p>We implemented the suggested design in RTEMS version
4.10.2. The mbuf space was configured in huge mode so
that it is capable of preallocating 256 mbufs for the RX
virtqueue. We measured the performance of the virtio on
two nodes that were equipped with Intel i5 and i3
processors, respectively, as shown in Fig. 5. Linux version
3.13.0 was installed on the former, and we installed Linux
version 3.16.6 on the other node. The two nodes were
connected directly through Gigabit Ethernet.</p>
      <p>
We ran the ttcp benchmarking tool to measure the network
bandwidth with 1448-Byte messages, which is the maximum
user message size that can fit into one TCP segment over
Ethernet. We separately measured the send bandwidth and
receive bandwidth of virtio by running a virtual machine
only on the i5 node with KVM. We reported the bandwidth
on the i3 node, which ran Linux without virtualization. The
intention behind this experimental setup is to measure the
performance with the real timer, because the virtual timer
in virtual machines is not accurate [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-11">
      <title>4.2 Impact of Network Buffer Size</title>
<p>As mentioned in Section 3.3, we analyzed the impact of
the network buffer size on bandwidth. Fig. 6 shows the variation
of send bandwidth with different sizes of the TCP buffer.
We can observe that the bandwidth increases as the kernel
buffer size increases. However, the bandwidth does not
increase any further beyond a TCP buffer size of 68 KByte, because
68 KByte is sufficient to fill the network pipe of Gigabit
Ethernet. Fig. 7 shows the experimental results for receive
bandwidth, which also suggest 68 KByte as the minimum
buffer size for the maximum receive bandwidth. Thus, we
increased the TCP buffer size from 16 KByte to 68 KByte
for our virtio.</p>
<p>We also measured the network bandwidth while varying the size
of the RX virtqueue, as shown in Fig. 8. This figure shows
that the network bandwidth increases only until the virtqueue
size reaches 8. Thus, we do not need more than 8
preallocated buffers for Gigabit Ethernet, though the default
virtqueue size is 256.</p>
<p>In summary, we increased the in-kernel send and receive
TCP buffer sizes from 16 KByte to 68 KByte, which requires
104 KByte (= (68 - 16) × 2) of additional memory
for higher network bandwidth. However, we reduced the
number of preallocated receive buffers from 256 to 8 without
sacrificing the network bandwidth, which saved 496 KByte
(= 256 × 2 - 8 × 2) of memory, where the size of an mbuf is
2 KByte as mentioned in Section 3.3. Thus, we saved 392
KByte (= 496 - 104) in total while achieving the maximum
available bandwidth over Gigabit Ethernet.</p>
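<p>This accounting can be restated as a small computation; all constants come directly from the measurements above, and all values are in KByte:</p>
      <preformat>
```c
/* Memory cost of growing the TCP buffers and the saving from
 * shrinking the preallocated mbuf pool (all values in KByte). */
int tcp_buffer_cost(void)
{
    return (68 - 16) * 2;      /* send and receive buffers: 104     */
}

int mbuf_pool_saving(void)
{
    return 256 * 2 - 8 * 2;    /* 2-KByte mbufs, 256 down to 8: 496 */
}

int net_memory_saving(void)
{
    return mbuf_pool_saving() - tcp_buffer_cost();  /* 392 */
}
```
      </preformat>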
    </sec>
    <sec id="sec-12">
      <title>4.3 Comparison with Linux</title>
<p>We compared the performance of RTEMS-virtio with that of
Linux-virtio to see whether our virtio can achieve performance
comparable to an implementation optimized for a general-purpose
operating system. As shown in Fig. 9, the unidirectional bandwidth
of RTEMS-virtio is almost the same as that of
Linux-virtio, which is near the maximum bandwidth the physical
hardware can provide in one direction. Thus, these results
show that our implementation provides quite reasonable
performance with respect to bandwidth.</p>
<p>We also measured the round-trip latency by having the two
nodes send and receive messages of the same size in a
ping-pong manner repeatedly for a given number of iterations.
We report the average, maximum, and minimum
latencies over 10,000 iterations. Fig. 10 shows the measurement
results for 1-Byte and 1448-Byte messages. As we can see in
the figure, the average and minimum latencies of
RTEMS-virtio are comparable to those of Linux-virtio. However, the
maximum latency of RTEMS-virtio is significantly smaller
than that of Linux-virtio (the y-axis is log scale), meaning that
RTEMS-virtio has lower jitter. We always observed the
maximum latency in the first iteration of every
measurement for both RTEMS and Linux. Thus, we presume that
the lower maximum latency of RTEMS is due to its smaller
working set.</p>
    </sec>
    <sec id="sec-13">
      <title>5. CONCLUSIONS AND FUTURE WORK</title>
<p>In this paper, we have presented the design of the virtio
front-end driver for RTEMS. The suggested device driver
is portable across different Virtual Machine Monitors
(VMMs) because our implementation is compliant with the
virtio standard. The suggested design can efficiently
handle hardware events generated by the back-end driver and
can reduce memory consumption for network buffers, while
achieving the maximum available network bandwidth over
Gigabit Ethernet. The measurement results have shown
that our implementation can save 392 KByte of memory
and can achieve performance comparable to the virtio
implemented in Linux. Our implementation also exhibits lower
latency jitter than Linux thanks to the smaller working set of
RTEMS. In conclusion, this study can provide insights into
virtio from the viewpoint of the RTOS. Furthermore, the
discussions in this paper can be extended to other
RTOSs running in virtual machines to improve network
performance and portability.</p>
<p>As future work, we plan to measure the performance of our
virtio on different VMMs, such as VirtualBox, to show
that our implementation is portable across different VMMs.
Then we will release the source code. In addition, we
intend to extend our design for dynamic network buffer sizing
and to measure the performance on real-time Ethernet, such
as AVB.</p>
    </sec>
    <sec id="sec-14">
      <title>6. ACKNOWLEDGMENTS</title>
      <p>This research was partly supported by the National Space
Lab Program (#2011-0020905) funded by the National
Research Foundation (NRF), Korea, and the Education
Program for Creative and Industrial Convergence (#N0000717)
funded by the Ministry of Trade, Industry and Energy
(MOTIE), Korea.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
Oracle VM VirtualBox. http://www.virtualbox.org.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
RTEMS Real-Time Operating System (RTOS). http://www.rtems.org.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Barham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dragovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Fraser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Neugebauer</surname>
          </string-name>
          ,
<string-name>
            <given-names>I.</given-names>
            <surname>Pratt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Warfield</surname>
          </string-name>
          .
          <article-title>Xen and the art of virtualization</article-title>
          .
          <source>ACM SIGOPS Operating Systems Review</source>
          ,
          <volume>37</volume>
          (
          <issue>5</issue>
          ):
<fpage>164</fpage>-<lpage>177</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tian</surname>
          </string-name>
          , and
<string-name>
            <given-names>H.</given-names>
            <surname>Guan</surname>
          </string-name>
          .
<article-title>High performance network virtualization with SR-IOV</article-title>
          .
          <source>Journal of Parallel and Distributed Computing</source>
          ,
          <volume>72</volume>
          (
          <issue>11</issue>
          ):
<fpage>1471</fpage>-<lpage>1480</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
<string-name>
            <given-names>S.</given-names>
            <surname>Han</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.-W.</given-names>
            <surname>Jin</surname>
          </string-name>
          .
          <article-title>Resource partitioning for integrated modular avionics: comparative study of implementation alternatives</article-title>
          .
          <source>Software: Practice and Experience</source>
          ,
          <volume>44</volume>
          (
          <issue>12</issue>
          ):
<fpage>1441</fpage>-<lpage>1466</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Herber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Richter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wild</surname>
          </string-name>
, and
          <string-name>
            <given-names>A.</given-names>
            <surname>Herkersdorf</surname>
          </string-name>
          .
<article-title>A network virtualization approach for performance isolation in controller area network (CAN)</article-title>
          .
          <source>In 20th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.-S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.-W.</given-names>
            <surname>Jin</surname>
          </string-name>
          .
          <article-title>Fieldbus virtualization for integrated modular avionics</article-title>
          .
          <source>In 16th IEEE Conference on Emerging Technologies &amp; Factory Automation (ETFA)</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kivity</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kamay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Laor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Lublin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Liguori</surname>
          </string-name>
          .
          <article-title>kvm: the Linux virtual machine monitor</article-title>
          .
          <source>In Linux Symposium</source>
          , pages
          <fpage>225</fpage>
          -
          <lpage>230</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-S.</given-names>
            <surname>Seok</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.-W.</given-names>
            <surname>Jin</surname>
          </string-name>
          .
          <article-title>Barriers to real-time network I/O virtualization: Observations on a legacy hypervisor</article-title>
          .
          <source>In International Symposium on Embedded Technology (ISET)</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Magnusson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Christensson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Eskilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Forsgren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hallberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hogberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Larsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moestedt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Werner</surname>
          </string-name>
          .
          <article-title>Simics: A full system simulation platform</article-title>
          .
          <source>IEEE Computer</source>
          ,
          <volume>35</volume>
          (
          <issue>2</issue>
          ):
          <fpage>50</fpage>
          -
          <lpage>58</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Masmano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Peiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sanchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Simo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Crespo</surname>
          </string-name>
          .
          <article-title>I/O virtualisation in a partitioned system</article-title>
          .
          <source>In 6th Embedded Real Time Software and Systems Congress</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Masmano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ripoll</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Crespo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Metge</surname>
          </string-name>
          .
          <article-title>XtratuM: a hypervisor for safety critical embedded systems</article-title>
          .
          <source>In 11th Real-Time Linux Workshop</source>
          , pages
          <fpage>263</fpage>
          -
          <lpage>272</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Menon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Cox</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Zwaenepoel</surname>
          </string-name>
          .
          <article-title>Optimizing network virtualization in Xen</article-title>
          .
          <source>In USENIX Annual Technical Conference</source>
          , pages
          <fpage>15</fpage>
          -
          <lpage>28</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G.</given-names>
            <surname>Motika</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Weiss</surname>
          </string-name>
          .
          <article-title>Virtio network paravirtualization driver: Implementation and performance of a de-facto standard</article-title>
          .
          <source>Computer Standards &amp; Interfaces</source>
          ,
          <volume>34</volume>
          (
          <issue>1</issue>
          ):
          <fpage>36</fpage>
          -
          <lpage>47</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H.</given-names>
            <surname>Raj</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Schwan</surname>
          </string-name>
          .
          <article-title>High performance and scalable I/O virtualization via self-virtualized devices</article-title>
          .
          <source>In ACM Symposium on High-Performance Parallel and Distributed Computing</source>
          ,
          <year>June 2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Turner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Cox</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Rixner</surname>
          </string-name>
          .
          <article-title>Achieving 10 Gb/s using safe and transparent network interface virtualization</article-title>
          .
          <source>In ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE)</source>
          ,
          <year>March 2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Russell</surname>
          </string-name>
          .
          <article-title>virtio: towards a de-facto standard for virtual I/O devices</article-title>
          .
          <source>ACM SIGOPS Operating Systems Review</source>
          ,
          <volume>42</volume>
          (
          <issue>5</issue>
          ):
          <fpage>95</fpage>
          -
          <lpage>103</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Tsirkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Huck</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Moll</surname>
          </string-name>
          .
          <article-title>Virtual I/O Device (VIRTIO) Version 1.0</article-title>
          . OASIS,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Seawright</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. A.</given-names>
            <surname>MacKinnon</surname>
          </string-name>
          .
          <article-title>VM/370 - a study of multiplicity and usefulness</article-title>
          .
          <source>IBM Systems Journal</source>
          ,
          <volume>18</volume>
          (
          <issue>1</issue>
          ):
          <fpage>4</fpage>
          -
          <lpage>17</lpage>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sugerman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Venkitachalam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.-H.</given-names>
            <surname>Lim</surname>
          </string-name>
          .
          <article-title>Virtualizing I/O devices on VMware Workstation's hosted virtual machine monitor</article-title>
          .
          <source>In USENIX Annual Technical Conference</source>
          ,
          <year>June 2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S. H.</given-names>
            <surname>VanderLeest</surname>
          </string-name>
          .
          <article-title>ARINC 653 hypervisor</article-title>
          .
          <source>In 29th IEEE/AIAA Digital Avionics Systems Conference (DASC)</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lange</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Dinda</surname>
          </string-name>
          .
          <article-title>Towards virtual passthrough I/O on commodity devices</article-title>
          .
          <source>In Workshop on I/O Virtualization (WIOV)</source>
          ,
          <year>December 2008</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>