=Paper=
{{Paper
|id=Vol-3776/paper13
|storemode=property
|title=Kubernetes edge/cloud continuum task offloading framework for vehicular computing
|pdfUrl=https://ceur-ws.org/Vol-3776/shortpaper13.pdf
|volume=Vol-3776
|authors=Alireza Bakhshi Zadi Mahmoodi,Ella Peltonen
|dblpUrl=https://dblp.org/rec/conf/tktp/MahmoodiP24
}}
==Kubernetes edge/cloud continuum task offloading framework for vehicular computing==
Alireza Bakhshi Zadi Mahmoodi, Ella Peltonen
Empirical Software Engineering in Software, Systems, and Services, University of Oulu, Finland
Abstract
Cars have been transformed significantly, to the point of driving autonomously in complex situations by sensing their
surroundings and inferring insights from sensor inputs. Even though smart cars can process the vast majority of the data
coming from various in-vehicle-installed sensors such as radars, LiDAR, cameras, and so on, the amount of data processing
required is ever-growing, along with the demand for more real-time services for novel driving applications. In addition,
from sustainability and battery-longevity perspectives, it is desirable for the computation on vehicle-sensor data to be offloaded to the
edge-cloud continuum. This article introduces a Kubernetes-based framework that can be utilized on cloud/edge servers to
facilitate various task and computation offloading for smart vehicles. The work is ongoing, and we present preliminary
results on the framework's validity by employing an object recognition task on both edge and cloud computing servers to
showcase the proposed architecture's feasibility. An incidental finding regarding latency is also presented in the experiment.
Moreover, we discuss development challenges related to implementing an edge-cloud continuum for vehicular computing.
Keywords
Vehicular Computing, Task Offloading, Kubernetes, Edge-Cloud Continuum, Microservices, Software-Defined Vehicles (SDVs)
TKTP 2024: Annual Doctoral Symposium of Computer Science, 10.–11.6.2024, Vaasa, Finland.
Alireza.BakhshiZadiMahmoodi@oulu.fi (A. B. Z. Mahmoodi); Ella.Peltonen@oulu.fi (E. Peltonen)
ORCID: 0009-0004-3192-7711 (A. B. Z. Mahmoodi); 0000-0002-3374-671X (E. Peltonen)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

1. Introduction

A lot has changed since the invention of the three-wheeled gasoline-powered Benz Patent-Motorwagen by Carl Friedrich Benz in 1885. Today's cutting-edge smart vehicles are capable of a multitude of autonomous navigation and manoeuvring behaviours: they can perceive and understand their surroundings, plan and execute safe routes, and handle complex driving situations such as navigating busy intersections and roundabouts, or even merging onto highways seamlessly. Advancements in communication and computing technologies have significantly shaped how cars have morphed into such an autonomous state. Information and Communication Technology (ICT)-enhanced vehicles, which include autonomous vehicles, connected vehicles, and the Internet of Vehicles (IoV), continue to appear increasingly on the horizon [1]. While contemporary smart cars can provide such a degree of autonomy, some consideration should also be given to sustainability and energy efficiency, especially for battery-operated vehicles. Constant data is provided by myriad sensors, such as LiDAR, radar, cameras, GPS, Inertial Measurement Units (IMUs), microphones, and ultrasonic sensors. The aggregate data can reach as high as 20 TiB [2] and requires constant computation to acquire the level of knowledge that is essential for autonomy.

Smart cars are empowered to handle the computational needs for such data via their internal systems, which include high-performance processors like GPUs and CPUs, Field-Programmable Gate Arrays (FPGAs), memory, communication interfaces, etc. However, processing via the internal systems comes at the expense of the limited capacity of the battery that provides the required energy for the entire car, not to mention computation-generated heat that requires cooling, which in turn adds to the overall energy consumption. That is where computation offloading comes into the picture; it can be defined as transferring a task from a resource-constrained device to a more robust system that can handle it efficiently [3]. By leveraging the transfer, the resource-constrained device can conserve resources to improve its performance and battery longevity [4, 5]. Smart vehicles can also benefit from computation offloading by delegating the required real-time computation to cloud or edge servers, which are more powerful and have "unlimited" capacities, to save battery, spare storage, and access more computation.

The cloud is one of the candidates that smart vehicles can employ for computation offloading. However, some impediments are associated with the cloud, namely latency and network congestion. Edge computing, particularly Multi-access Edge Computing (MEC), is a paradigm considered for vehicular computing environments to reduce latency and improve performance for latency-sensitive applications [6]. MEC servers are typically deployed at the cellular network's edge, giving them low latency and high bandwidth. This
makes them ideal for offloading tasks from autonomous
vehicles, such as path planning, object recognition, and
high-definition map processing, which are essential
for automated driving [7], particularly when considering
the design of the Software-Defined Vehicle paradigm as a whole [8].
However, consensus must be reached for scheduling
and optimising the offloaded computational tasks from
autonomous vehicles to edge/cloud servers.
Motivation: deciding what data to send, how to split tasks, which servers to use, and how to manage everything efficiently are all complex challenges in the task-offloading realm, especially when considering a multitude of vehicles underway in flowing traffic. Figuring out the best communication methods, infrastructure setup, and technologies like virtualization, containers, microservices, and orchestration all add to the complexity that must be considered and evaluated in simulations and in practice with real-world scenarios. Not much work has been done to tackle the concerns laid out so far, highlighting the research gap for our work.

Figure 1: Driving the university testbed's vehicle at Oulu during winter 2023.¹

Research question: how can task offloading be optimized to improve efficiency and performance, considering the complexities of data transmission, task distribution, server utilization, and the integration of technologies such as virtualization, containers, microservices, and orchestration in real-world scenarios, while considering all the concerns mentioned in the motivation section?

Contribution: our final goal is to implement, validate, and benchmark a real-life vehicle-edge-cloud continuum with a real-life vehicle testbed (Figure 1) based on microservices technology orchestrated by Kubernetes (k8s for short; pronounced /keɪts/). This article presents a work-in-progress Kubernetes-based framework, a step towards that final goal, that can be employed on cloud and edge servers to facilitate task offloading from smart vehicles. In addition, we discuss the main lessons learned so far: 1) we showcase the feasibility of a framework that can run through the whole vehicle-edge-cloud continuum, enabling dynamic task offloading; 2) we underline that much of the promised potential of the vehicular edge has still not been shown in practice with real-world test cases; and 3) we call for considering and utilising such real-world testbeds instead of naive simulations and synthetic data.

2. Literature Review

Advancements in automotive technology, particularly connected and self-driving vehicles, have endowed cars with increased computing, storage, and sensor capabilities [9]. Today's cutting-edge cars can control the steering wheel to change lanes, accelerate and decelerate to adjust the speed, assist when parking the car, etc. They can function with greater efficiency compared to vehicles driven by humans, smoothly accelerating and decelerating while keeping a safe following distance [10]. This heightened efficiency can result in shorter travel durations and decreased fuel usage, alleviating traffic congestion and reducing greenhouse gas emissions [11]. The Internet of Vehicles (IoV) is envisioned as a decentralized transportation network that can autonomously determine how to transport passengers to their intended destinations efficiently [12].

Autonomous self-driving vehicles can retrieve extensive image and map data from the cloud, eliminating the need to store or process this data locally [13]. While both Intelligent Transportation Systems and connected vehicles can benefit from the cloud's scalability and on-demand nature, restrictions in the network infrastructure connecting to cloud servers can hinder certain services, particularly those that demand ultra-low latency or high bandwidth [14]. Despite the many benefits coming from the cloud, some challenges crucial for certain applications in autonomous vehicles still need to be tackled. These applications can be categorized as interactive, real-time, or auxiliary; an example of an auxiliary application is a system diagnostic for error prediction [15].

Since the connection to the cloud goes through the Internet, latency and network congestion become worrying factors for latency-sensitive applications, because most autonomous driving applications are delay-sensitive and are required to return their results within a short period [16]; an example is Simultaneous Localisation And Mapping (SLAM), which requires the result to be ready within five milliseconds [17]. Conventional vehicle-cloud offloading makes such tasks challenging to complete in time [16]. Furthermore, wide-area network (WAN) delays render it unfeasible to widely

¹ Original image by Mikko Törmänen. © 2023 Mikko Törmänen.
introduce advanced services like augmented reality and virtual reality [18].

Tang et al. [16] provide a container-based offloading framework composed of an offloading decision module, an offloading scheduler module (aka node coordinator), and edge offloading middleware (aka offloading service middleware). The offloading decision module resides in the vehicle and is responsible for deciding whether to offload a service to the edge server or not. This component checks three criteria: Is there enough computing power in the edge server to handle the offloaded application? Is the energy consumption of the offloaded task less than that of the task executed in the vehicle? Is enough memory available at the edge server to handle the offloaded task? The offloading scheduler module manages multiple edge servers within a valid scope via the service management module responsible for monitoring edge servers. The edge offloading middleware resides in the edge servers and provides the requested services by launching containers. The authors use a greedy algorithm to maximize the utility of the Multiple Multidimensional Knapsack Problem (MMKP)-modelled problem and show that millisecond-level offloading is possible on the edge. However, their evaluation is based on simulated and lab-generated data for the suggested MMKP-modelled problem, and many complicated intermediary modules exist on both the client (vehicle) and server side. In contrast, we try to eliminate the single point of failure in Tang et al.'s architecture by leveraging the k8s cluster, which can manage an "unlimited" number of compute resources under its orchestration control and provide High Availability (HA) in case any control node fails. In other words, instead of having one server control the other servers, multiple servers can be configured to coordinate them. In addition, in the proposed architecture, smart cars are ignorant of the framework's intricacies; they do not need to communicate back and forth between various edge servers and keep track of their capabilities in order to offload their tasks. Smart cars only offload their tasks to the k8s cluster residing in the edge/cloud, and the cluster serves the offloading requests immediately, without the need to find the right and capable worker node to handle the task. This instantaneous serving is possible via the ready-to-serve microservices running in the cluster. This rapid availability of the services through containerization eliminates the overhead imposed by related works in which a car tries to keep track of various conditions and servers before offloading its tasks, which adds to the overall complexity and intermediary levels.

Blieninger et al. [19] describe a management approach for real-time Kubernetes clusters in the automotive mobile edge cloud. They discuss the challenges — like limited computational capabilities, as well as energy, space, and weight constraints — and requirements of future autonomous driving systems and the role of sensor-equipped MECs in preparing driving tasks. They introduce a management prototype to show the approach's feasibility and provide a timing analysis to investigate the introduced overhead. The focus is on the tool chain's potential to extend and enhance computational capabilities for real-time tasks in vehicles. They also discuss the potential challenges in offloading tasks to the MEC, including time delays and connection failures. The proposed approach aims to enable the complete self-sufficiency of both the vehicle and the MEC, ensuring the safety and predictability of task behaviour. It emphasizes using a decentralized MEC designed as a stationary twin of the car, focusing on task reusability and scheduling. Overall, Blieninger et al.'s work is more about managing different clusters, and they present no specific information about how offloading tasks are to be handled. Compared to their design, our proposed framework eliminates any need for middle-man-like components such as the "prototype gateway" presented in [19], which checks for some criteria before enabling offloading and can thus add to the overall latency and unnecessary complexity. As an additional difference, in our proposed framework each region under the coverage of a Radio Access Network (RAN) hosts one cluster, composed of multiple powerful compute nodes (servers), that handles any offload request from nearby vehicles in that region without the need to communicate with other clusters. Simply put, we see no point in communication between clusters: as a car moves from one RAN-covered region to the next, it can directly communicate with the local cluster without an intermediary. This is unlike the design presented by Blieninger et al., where multiple clusters communicate with each other through the middle-man-like "central status aggregation."

3. Method

Kubernetes orchestrates containerized applications for scalable, automated deployment and management across the infrastructure. It is considered the operating system of the cloud, composed of different components like etcd, the API server, the scheduler, and the controller manager, all of which reside on the control plane and are responsible for managing the other nodes (worker nodes). The other components, such as the kubelet, kube-proxy, and the container runtime, are part of the worker nodes (compute nodes). Kubernetes, k8s for short, has a modular structure, making it resilient and scalable by allowing the dynamic addition or removal of servers as either control-plane nodes or worker nodes. The combination of multiple worker nodes and control-plane nodes makes up a k8s cluster, which is under the management of the control-plane nodes.
3.1. Our Proposed Kubernetes Framework
for Computation Offloading
Our presented framework takes advantage of k8s’
benefits to provide a structure for computation offloading
to cloud and edge servers by smart vehicles. The
proposed framework supports the containerization of
various applications, scalability, automatic deployment
management, and robustness of the back-end services
provided by k8s. Figure 2 provides a general view of the
offered framework. The back-end servers that form a
cluster are all under the orchestration of k8s as either a
control plane or a worker node. The control plane is like
the brain of the cluster and is responsible for monitoring,
managing, and deploying containerized applications
on the worker nodes. On the other hand, the worker
nodes carry the burden of executing the offloaded tasks
requested by smart cars.
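As an illustration of how such an offloadable service could be declared to the cluster, a minimal Deployment manifest is sketched below; the service name, container image, and port are hypothetical placeholders, not artifacts from our testbed:

```yaml
# Hypothetical sketch: declares five always-on replicas of an object
# recognition service, so any incoming offloading request can be served
# by a ready-to-serve replica without per-request scheduling by the car.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: object-recognition          # hypothetical service name
spec:
  replicas: 5                       # five concurrent requests can be served
  selector:
    matchLabels:
      app: object-recognition
  template:
    metadata:
      labels:
        app: object-recognition
    spec:
      containers:
        - name: detector
          image: example.org/yolo-service:latest   # hypothetical image
          ports:
            - containerPort: 8080                  # hypothetical HTTP port
```

A Service or Ingress resource in front of such a Deployment would then give vehicles a single stable endpoint, keeping them ignorant of which worker node executes the task.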
As depicted in Figure 2, various services needed by smart vehicles, such as object recognition, path planning, blind spot detection, traffic sign recognition, parking assistance, etc., can already be available in a ready-to-serve state in the form of containerized applications running on worker nodes, to be employed by any vehicle that wants to receive such a service via task offloading. For example, suppose car A wants to receive an object recognition service from the nearby edge server. Car A sends its request to the edge server, managed by k8s in our framework, and receives the requested service, already available in one of the worker nodes. At the same time, any other nearby vehicle can receive the same or different services, with all offloaded tasks running in isolated execution environments on the cluster of the proposed framework.

Figure 2: Kubernetes framework for computation offloading at the edge. Multiple servers can be orchestrated as a whole entity — known as a cluster — by operating as either control plane or worker node. Multiple control planes eliminate a single point of failure by becoming highly available, and multiple worker nodes add to the total computation power of the whole cluster. Multiple cars can request their services to be run on any nearby cluster through a wireless connection to the connected Radio Access Network (RAN).

Multiple advantages can be attributed to the framework empowered by k8s. First, the cars are ignorant of the intricacies involved in how any service is made available for computation offloading. Smart vehicles can request any service at any time and provide the required data for the service to be processed by the containerized app running on the k8s cluster in one of the worker nodes. The result of the requested task is returned to the requesting vehicle after its execution completes in the cluster's worker node. This ignorance of details is highly important compared to other architectures, such as the one presented in Figure 3 by [16]. There, the car gains access to the nearby edge server through the Node Coordinator, which is a single point of failure and can cause lethal problems, as explained in the following. Whenever the vehicle offloads a task that is unmet by the already-established edge server, it needs to find a new edge server through the Node Coordinator again. These communications require extra time, which could be detrimental to some applications, such as object recognition, which is necessary for the safety of the passengers inside the vehicle. None of these unnecessary, time-consuming intermediaries is required in our proposed k8s framework. Secondly, the framework is highly scalable and reliable, since multiple servers can be added to the cluster as either compute or control-plane nodes, eliminating the single point of failure present in Figure 3. The third advantage is that the running services in the cluster are instantaneously ready to serve smart cars as soon as the required data for processing is provided over the network. Fourth, the number of running services kept on the cluster in the ready-to-serve state does not add to the overall computation usage of the worker nodes, as illustrated in a work by [16] using Docker technologies. Fifth, the offloaded tasks by smart cars
run in an isolated environment inside the compute node operating in the cluster — a feature that comes naturally with a microservice execution environment. This means that none of the various services running on the same compute node can access each other's resources, such as data, memory, processes, file descriptors, network sockets, etc., which is a positive point from the security perspective.

Figure 3: The classic Node Coordinator architecture relies on a single point of failure, the Node Coordinator, to connect to nearby edge servers. If the established edge server cannot execute the requested task, the vehicle needs to establish a new connection to another edge server via the Node Coordinator. The communication overhead between the vehicle and the Node Coordinator can negatively affect critical tasks like real-time object recognition essential for passenger safety.

3.2. The Experimental Setup

For the preliminary evaluation of the feasibility of the framework, two identical environments were set up. The first one, in the main building of the University of Oulu's 5GTN test network², acted as the edge server. The server is located in the same building, with k8s installed to form the required cluster. The cluster provides the necessary environment to run multiple containerized applications, leveraging the server's processing power. Currently, we have one service, YOLOv3 [20], which identifies objects in pictures offloaded via the layer-7 HTTP protocol and returns the results as recognized objects in the image. For this particular setup, five identical instances of the service run simultaneously to handle five concurrent offloading requests; five suffices for our small-scale needs. The second environment, the OKD/Kubernetes container cloud called Rahti³, served as a cloud located 200 km away from Oulu in Kajaani, Finland, and provides the same service as our edge server with the same settings and configurations. A client computer, located in Oulu and connected to the University of Oulu's WiFi, acts as a vehicle and sends offloading requests for images to the local edge server and the Rahti cloud. The choice to substitute a computer for a car is due to the lack of some devices in our testbed for the time being. However, the experiment is preliminary, and future works will explore more realistic scenarios.

The preliminary task example: Both environments provide an object recognition containerized service implemented via the YOLOv3 [20] algorithm. The example was chosen due to its many possible application areas in autonomous driving and extended service capabilities, as image processing is widely utilised in vehicular computing. The service is employed under the control of the Deployment resource from k8s. Through the Deployment resource, five replicas of the object recognition service are declared to always be available in the cluster for instant availability — meaning that at any given time, five object recognition offloading requests can be executed simultaneously in the cluster.

Whenever an object recognition service request is sent along with the to-be-processed data — in this case, images — to the server residing in the edge or cloud, the requested service is provided via one of the already-available services in the edge/cloud. While the employed service in the edge/cloud server is busy computing the requested task, the other available services can provide the same functionality to new requesters.

In the experiment, the connectivity to the edge server is established through WLAN, again because of the lack of some devices in the testbed. Smart vehicles should be connected via cellular networks such as 5G, which aims for 1-millisecond latency [21], and future cellular generations with even less latency and higher bandwidth than previous generations. A comparison between different network technologies is, as such, again on the agenda of our future works, as this paper focuses on the feasibility of the proposed framework.

² https://5gtn.fi/
³ https://rahti.csc.fi/
⁴ Original image by Mikko Törmänen. © 2023 Mikko Törmänen.

4. Results

The end result for the feasibility of the framework is shown in Figure 4 after requesting an offloading task, object recognition in this case, from the framework and receiving the computed result. An offloading request for object recognition takes about 30 seconds, more or less, for an image to be processed by either the edge or cloud servers. This long object recognition time is mainly due to the service being run on a CPU of the worker node instead of a GPU. It has been noted that GPU computation of YOLOv3 can be done within 20 ms [20], which will be experimented with in our future works.
Figure 4: Recognized objects detected by the ready-to-serve object recognition service in the cloud/edge.⁴
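The client side of this round trip can be sketched as follows, assuming (hypothetically) that the containerized service accepts an image as an HTTP POST body and replies with JSON; since the real endpoint is not public, a tiny local mock stands in for the edge/cloud service:

```python
# Sketch of the offloading round trip: POST an image, time the request,
# parse the JSON reply. The mock service and its reply format are
# hypothetical stand-ins for the containerized YOLOv3 service.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockRecognitionService(BaseHTTPRequestHandler):
    """Stands in for one ready-to-serve object recognition replica."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        # A real replica would run the detector here; the mock echoes
        # a fixed detection list plus the payload size it received.
        reply = json.dumps({"objects": ["car", "person"], "bytes": len(body)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

def offload_image(url, image_bytes):
    """POST an image to the service; return (result dict, latency in ms)."""
    start = time.perf_counter()
    req = urllib.request.Request(url, data=image_bytes, method="POST")
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
    return result, (time.perf_counter() - start) * 1000.0

server = HTTPServer(("127.0.0.1", 0), MockRecognitionService)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/recognize"
payload = b"\x89PNG fake image bytes"
result, latency_ms = offload_image(url, payload)
print(result["objects"], f"{latency_ms:.1f} ms")
server.shutdown()
```

Because all five replicas sit behind one cluster endpoint, the same call works unchanged whether the request lands on the edge or the cloud environment.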
Regarding the incidental finding, Figure 5 shows a scatter plot of the latency of each request (below 500 ms) sent to the edge server (the green points) and the Rahti cloud (the blue points). The x-axis represents time, and the y-axis represents latency in milliseconds. From the plot, we observe that even though the edge server is within the same vicinity, about 100 meters, as the computer requesting the service, most of the time the latency lies between 45 ms and 75 ms. In contrast, the latency for the cloud lies between 20 ms and 50 ms, even though the Rahti cloud is about 200 km away from the requester of the service in the experiment. Standard deviations for the cloud and the edge are 23.66 ms and 14.17 ms, respectively.
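The dispersion figures quoted above can be reproduced directly from the per-request latency series; a minimal sketch using the population standard deviation (the sample values below are invented for illustration, not our measured data):

```python
# Summarize per-request latencies (ms) as mean and standard deviation,
# as reported in Section 4. Sample values are invented for illustration.
from statistics import mean, pstdev

cloud_ms = [22.0, 31.5, 28.4, 47.9, 25.3, 39.1]  # hypothetical Rahti samples
edge_ms = [46.2, 58.7, 55.1, 61.4, 49.8, 73.0]   # hypothetical edge samples

for name, samples in (("cloud", cloud_ms), ("edge", edge_ms)):
    print(f"{name}: mean={mean(samples):.2f} ms, sd={pstdev(samples):.2f} ms")
```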
Figure 6 shows the latency grouped into various 5 ms
intervals for the edge server. It can be observed that most
of the time (38 %), the latency lies within intervals of 55
to 60 ms. This insight is counter-intuitive compared to
the pie chart of the Rahti cloud shown in Figure 7, with
the dominant latency interval (33 %) lying between 25 to
30 ms. The non-obviously higher latency observed at the edge server reveals that even a requester close to the edge server can still suffer from latency if the underlying network infrastructure fails to meet the high expectations of the required connectivity.

Figure 5: Scatter plot of the latency (below 500 ms) for both the Rahti cloud and the edge server, presented in blue and green, respectively. Latency for the Rahti cloud mostly lies between 20 and 50 ms, while for the edge it is split between 15 to 20 ms and 45 to 75 ms; it appears as if there is a white line cutting through these two intervals for the edge server latency.
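The grouping behind Figures 6 and 7 amounts to histogramming the latencies into 5 ms bins and reporting each bin's share of all requests; a short sketch with invented samples:

```python
# Group latencies (ms) into 5 ms intervals and report each interval's
# share of requests, mirroring the pie charts of Figures 6 and 7.
# Sample values are invented for illustration.
from collections import Counter

latencies_ms = [57.0, 58.9, 56.2, 61.3, 52.8, 59.4, 47.1, 55.0]

# Map each latency to the lower bound of its 5 ms interval and count.
bins = Counter(5 * int(lat // 5) for lat in latencies_ms)
for lower in sorted(bins):
    share = 100.0 * bins[lower] / len(latencies_ms)
    print(f"{lower}-{lower + 5} ms: {share:.1f} %")
```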
5. Discussion & Conclusions
So, earlier on, we highlighted the research gap as the complex challenges in task-offloading for vehicular computing, such as deciding what data to send, how to split tasks, which servers to use, and how to manage these processes efficiently. We then mentioned the escalated complexity that arises from selecting appropriate communication methods, infrastructure setups, and technologies like virtualization, containers, microservices, and orchestration. Then, we identified the research gap concerning optimizing task-offloading for efficiency enhancement and performance improvement while considering all these complexities.

In this paper, we presented a Kubernetes framework architecture for task offloading by self-driving cars to overcome some of these challenges. The framework requires the edge/cloud servers to have various replicas of containerized services for multiple applications running in a ready-to-serve state, prepared to be employed by smart vehicles. Hence, vehicles offload their computational tasks, such as object recognition, path planning, traffic sign recognition, blind spot detection, etc., to the edge/cloud and receive their required results once the data is provided by the smart vehicles to the containerized services.

Based on the literature, numerous advantages can be claimed for the k8s framework, like scalability, offloading simplicity from the vehicle's side, constant availability of the containerized services, low overhead on the edge/cloud servers for running ready-to-serve containerized services [16], and the isolation of the various services running on the same cluster. In our future works, we will focus on these aspects individually to evaluate whether the Kubernetes-based architecture is right for vehicular task offloading scenarios or if more underlying system design should be studied. Compared to many previous use cases, such as robotics, IoT, and manufacturing, smart vehicle mobility and latency demands are significantly higher. Special considerations should be given to the risk and safety management of the system: a failure over task orchestration cannot, under any circumstances, lead to a fatal failure in the vehicle's operations. In such a case, fatal can become lethal.

Figure 6: Pie chart of the latency for the edge server. 38 % of the time, latency lies between 55 and 60 ms; 78 % of the time, latency lies between 50 and 65 ms — the three exploded wedges of the pie chart. Counter-intuitively, the latency is usually greater than at the edge server's counterpart, the Rahti cloud.

Experimental environments for edge and cloud with the same settings were set up as a means of validity assessment of the framework. Thus, we can agree that the step towards a vehicular edge-cloud continuum architecture is reasonable to consider with similar settings, allowing for more flexibility of dynamic decision-making on the continuum, which was foreseen mainly in visions but less in practical results [22]. At the same time, our results reveal some non-obvious longer delays with geographically nearby edge servers compared to the cloud. We highlight that, for future work of ours and others, the network specifications and settings should be carefully considered to make a comparison between edge and cloud more reliable. We agree that this is, indeed, easier to control in a simulated environment than in the real-world case. However, we wish to highlight that real-world problems cannot be diminished forever and should be addressed in a timely manner now that we see more and more autonomous cars becoming ubiquitous. In this paper, we could "fix" the network problems as researchers by running optimised test cases for optimal latency. In reality, the wide variety of different network configurations (bad and good ones) of the real world should be considered as a design feature.

Figure 7: Pie chart of the latency for the Rahti cloud. Most of the time (33 %), latency lies between 25 and 30 ms; 84 % of the time, latency lies between 20 and 50 ms. Non-obviously, the latency is mostly less than at the Rahti cloud's compeer, the edge server.

Acknowledgments

The work has been supported by the EU HORIZON project CHIPS-JU CIA FEDERATE (grant number 101139749), the Business Finland project 6G Visible (grant number 10743/31/2022), and the Finnish Research Council project Northern Utility Vehicle Laboratory Consortium GO!-RI (grant number 352726).

References

[1] C.-M. Huang, M.-S. Chiang, D.-T. Dao, W.-L. Su, S. Xu, H. Zhou, V2V data offloading for cellular network based on the software defined network (SDN) inside mobile edge computing (MEC) architecture, IEEE Access 6 (2018) 17741–17755.
[2] D. Katare, D. Perino, J. Nurmi, M. Warnier, M. Janssen, A. Y. Ding, A survey on approximate edge AI for energy efficient autonomous driving services, IEEE Communications Surveys & Tutorials 25 (2023) 2714–2754.
[3] A. Islam, A. Debnath, M. Ghose, S. Chakraborty, A survey on task offloading in multi-access edge computing, Journal of Systems Architecture 118 (2021) 102225.
[4] J. Yang, A. A. Shah, D. Pezaros, A survey of energy optimization approaches for computational task offloading and resource allocation in MEC networks, Electronics 12 (2023).
[5] P. K. Nandi, M. R. I. Reaj, S. Sarker, M. A. Razzaque, M. M. or Rashid, P. Roy, Task offloading to edge cloud balancing utility and cost for energy harvesting internet of things, Journal of Network and Computer Applications 221 (2024) 103766.
[6] C. Chen, Y. Zeng, H. Li, Y. Liu, S. Wan, A multihop task offloading decision model in MEC-enabled internet of vehicles, IEEE Internet of Things Journal 10 (2022) 3215–3230.
[7] Q. Yuan, H. Zhou, J. Li, Z. Liu, F. Yang, X. S. Shen, Toward efficient content delivery for automated driving services: An edge computing solution, IEEE Network 32 (2018) 80–86.
[8] E. Peltonen, A. Sojan, T. Päivärinta, Towards real-time learning for edge-cloud continuum with vehicular computing, in: IEEE World Forum on Internet of Things (WF-IoT), IEEE, 2021.
[9] F. Sun, F. Hou, N. Cheng, M. Wang, H. Zhou, L. Gui, X. Shen, Cooperative task scheduling for computation offloading in vehicular cloud, IEEE Transactions on Vehicular Technology 67 (2018) 11049–11061.
[10] L. Hernandez, M. Hassan, V. P. Shukla, Applications of cloud computing in intelligent vehicles: A survey, Journal of Artificial Intelligence and Machine Learning in Management 7 (2023) 10–24.
[11] C. R. Charles, J. Savier, An overview on hybrid energy storage systems for electric vehicles, International Journal of Electric and Hybrid Vehicles 14 (2022) 56–64.
[12] M. Gerla, E.-K. Lee, G. Pau, U. Lee, Internet of vehicles: From intelligent grid to autonomous cars and vehicular clouds, in: 2014 IEEE World Forum on Internet of Things (WF-IoT), IEEE, 2014, pp. 241–246.
[13] K. Goldberg, B. Kehoe, Cloud robotics and automation: A survey of related work, EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2013-5 (2013) 13–5.
[14] P. Arthurs, L. Gillam, P. Krause, N. Wang, K. Halder, A. Mouzakitis, A taxonomy and survey of edge cloud computing for intelligent transportation systems and connected vehicles, IEEE Transactions on Intelligent Transportation Systems 23 (2022) 6206–6221. doi:10.1109/tits.2021.3084396.
[15] Y. Wang, S. Liu, X. Wu, W. Shi, CAVBench: A benchmark suite for connected and autonomous vehicles, in: 2018 IEEE/ACM Symposium on Edge Computing (SEC), IEEE, 2018, pp. 30–42.
[16] J. Tang, R. Yu, S. Liu, J.-L. Gaudiot, A container based edge offloading framework for autonomous driving, IEEE Access 8 (2020) 33713–33726. doi:10.1109/access.2020.2973457.
[17] J. Van Brummelen, M. O'Brien, D. Gruyer, H. Najjaran, Autonomous vehicle perception: The technology of today and tomorrow, Transportation Research Part C: Emerging Technologies 89 (2018) 384–406.
[18] H. Li, G. Shou, Y. Hu, Z. Guo, Mobile edge computing: Progress and challenges, in: 2016 4th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), IEEE, 2016, pp. 83–84.
[19] B. Blieninger, A. Dietz, U. Baumgarten, Mark8s - a management approach for automotive real-time kubernetes containers in the mobile edge cloud, RAGE 2022 (2022) 10.
[20] J. Redmon, A. Farhadi, YOLOv3: An incremental improvement (2018).
[21] G. A. Akpakwu, B. J. Silva, G. P. Hancke, A. M. Abu-Mahfouz, A survey on 5G networks for the internet of things: Communication technologies and challenges, IEEE Access 6 (2018) 3619–3647.
[22] A. Y. Ding, E. Peltonen, T. Meuser, A. Aral, C. Becker, S. Dustdar, T. Hiessl, D. Kranzlmüller, M. Liyanage, S. Maghsudi, N. Mohan, J. Ott, J. S. Rellermeyer, S. Schulte, H. Schulzrinne, G. Solmaz, S. Tarkoma, B. Varghese, L. Wolf, Roadmap for edge AI: A Dagstuhl perspective, SIGCOMM Comput. Commun. Rev. 52 (2022) 28–33. doi:10.1145/3523230.3523235.