=Paper=
{{Paper
|id=Vol-2530/paper5
|storemode=property
|title=SLA-aware Approach for IoT Workflow Activities Placement based on Collaboration between Cloud and Edge
|pdfUrl=https://ceur-ws.org/Vol-2530/paper5.pdf
|volume=Vol-2530
|authors=Awatif Alqahtani,Devki Nandan Jha,Pankesh Patel,Ellis Solaiman,Rajiv Ranjan
|dblpUrl=https://dblp.org/rec/conf/iot/AlqahtaniJPSR19
}}
==SLA-aware Approach for IoT Workflow Activities Placement based on Collaboration between Cloud and Edge==
Awatif Alqahtani (School of Computing, Newcastle University, Newcastle, UK) awa.alqahtani1@gmail.com
Devki Nandan Jha (School of Computing, Newcastle University, Newcastle, UK) i.dnjha@gmail.com
Pankesh Patel (Pandit Deendayal Petroleum University, Gandhinagar, India) dr.pankesh.patel@gmail.com
Ellis Solaiman (School of Computing, Newcastle University, Newcastle, UK) ellis.solaiman@ncl.ac.uk
Rajiv Ranjan (School of Computing, Newcastle University, Newcastle, UK) raj.ranjan@ncl.ac.uk

ABSTRACT
In the Internet of Things (IoT) era, various nodes generate vast quantities of records, and data processing solutions consist of a number of activities/tasks that can be executed at the Edge of the network or on the Cloud. Managing them at the Edge of the network may limit the time required to complete responses and return the final result/analytic to end users or applications. Also, IoT nodes can perform only a restricted amount of processing over the contextual information they gather, owing to their limited computational and resource capacities. Whether tasks are assigned to an Edge or a Cloud resource depends on a number of factors such as task constraints, the load of nodes, and energy capacity. We propose a greedy heuristic algorithm to allocate tasks among the available resources while minimizing the execution time. The allocation algorithm considers factors such as the deadline associated with each task, location, and a budget constraint. We evaluate the proposed work using iFogSim with two use case studies. The performance analysis shows that the proposed algorithm minimizes cost, execution time, control loop delay, networking, and Cloud energy consumption compared to the Cloud-only approach.

KEYWORDS
IoT, Allocation Algorithm, Placement Algorithm, SLA

1st Workshop on Cyber-Physical Social Systems (CPSS2019), October 22, 2019, Bilbao, Spain. Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
1 INTRODUCTION
Cyber-Physical Systems (CPSs) are integrated systems that aim to bring physical entities together with the all-present computing and networking systems [2]. An important concept in close association with CPS is the Internet of Things (IoT), which promotes the use of emerging technologies and architectures for large-scale applications to define and virtualize physical objects [2]. The IoT is a revolutionary technology that provides a completely linked "intelligent" universe, accelerating the 4th industrial revolution in which millions of things are linked with each other around the globe [4]. These items share information and services to define, track, and handle the physical environment, enabling different applications, e.g. smart cities, agriculture, and health care applications, to transform our lifestyle and enhance both quality of life and human civilization. However, the large-scale realization of IoT services is restricted by the constraints of IoT devices (integrated with everyday objects such as cars, utility parts, and sensors) and their limitations in computing resources, memory capacity, power, and bandwidth. Many of these problems could be solved by using Cloud-Assisted Things or Cloud of Things (CoT) technology, as it provides large-scale and on-demand computing resources for managing, storing, processing, and sharing IoT information and services [4][5]. However, with the increasing number of applications and their time-sensitive nature, Cloud-based solutions are not enough and mostly suffer from latency due to the centralized nature of Cloud data centers, which are usually located far from the data sources. Therefore, utilizing Edge resources while benefiting from the Cloud whenever it is required is essential for time-sensitive applications and for overcoming the problems associated with centralized control. However, there are a number of challenges to consider, such as maximizing the utilization of Edge resources while respecting the limitations of their computation capabilities. Some tasks can also be time-sensitive, so it is crucial that they are allocated and executed immediately. Executing forthcoming tasks requires proper task allocation and scheduling that satisfies the requirements of all the tasks while maintaining the SLA (Service Level Agreement).

There are certain challenges that need to be considered while making the allocation and scheduling decisions. The main challenges are given below.

(1) The uncertainty of the data-in rate: The IoT captures data from the physical environment, which is dynamic in nature. For example, in camera-based applications that capture people passing by a certain point, the number of pictures taken varies for many reasons (e.g., rush hours vs. off-peak hours, weekdays vs. weekends, or whether there are certain events nearby). Therefore, it is essential to rerun the algorithm to reschedule the activities whenever the data-in rate exceeds the predefined data-in rate.

(2) The conflict between objectives: To minimize the cost, it is better to deploy the activities on Edge resources; however, in some cases deploying activities on Edge resources affects the throughput of an application, and it will suffer from resource bottlenecks due to the limited processing capabilities.

To address these challenges while making the allocation, we propose a layered-based algorithm that identifies and minimizes the global bottleneck, i.e., minimizing the processing time and the cost as well as maximizing the utilization of computation resources at the Edge layer. The main contributions of this work are summarized as follows:

• We consider a task placement approach based on cooperation between the Edge and the Cloud that supports service decentralization, leveraging context-aware information such as location and the available computing and storage capacities of Edge resources. The placement approach considers deadline constraints associated with each task as a way to emphasize utilizing Edge resources, and a budget constraint at application level. This maximizes the utilization of Edge resources and minimizes latency, energy consumption, and cost.
• We conduct a performance analysis with various case studies using iFogSim (an open-source toolkit for modeling and simulating resource management approaches for IoT, Edge and Fog computing; https://github.com/Cloudslab/iFogSim) to reveal the effectiveness of the proposed approach in terms of maximizing the utilization of Fog devices while reducing latency, energy consumption, and network load.

2 SLA- AND CONTEXT-AWARE APPROACH FOR IOT ACTIVITY PLACEMENT ACROSS CLOUD AND EDGE
In this approach, we consider IoT applications, which mostly consist of a set of activities, some of which require high bandwidth and low computation. These can be performed on one of the resources at the Edge of the network, while an activity that requires more computation can be offloaded to the Cloud. Where there is more than one activity, selecting which one to deploy at the Edge or the Cloud level can be based on different criteria (e.g., cost, distance, location, processing time). Our aim is to provide a solution that distributes the workflow activities in an order that optimizes consumer requirements and utilizes computation capabilities at the Edge, while aiming to avoid any SLA violation.

2.1 Problem Definition and Modeling
In IoT applications, task/activity placement is the problem of allocating tasks/activities to a set of processors (resources) that are distributed across layers (Edge and Cloud). The input to the task placement controller is an activity graph and a processor graph, and the output is a placement plan that maps each activity to a suitable processor/resource. Whether the resources are located at the Edge or the Cloud is based on each task/activity's computation and communication requirements.

2.1.1 Multi-layer Computing Architecture for IoT. This section presents the proposed multi-layer architecture for IoT. It also describes the task placement information and the necessary concepts related to the proposed scheme. Figure 1 denotes the three main layers, described below:

• IoT devices: this layer consists of devices with sensing and actuating purposes. It also has internet connection capability for transferring the data to the Edge or the Cloud.
• Edge Computing layer: this includes the upper layer of the IoT devices, which provides lightweight computation. Resources in this layer are distributed to cover different regions. In other words, resources can be clustered by the region that covers the IoT devices within that region to connect and transfer data.
• Cloud Layer: the centralized layer to which heavy computation/storage processes can be delegated.

[Figure 1: Tasks dependencies example for an IoT application]

2.1.2 Task Graph. A task graph is represented by a directed acyclic graph (DAG), TG = (T, E), where the set of nodes T = {t_1, t_2, t_3, ..., t_n} represents n tasks. Between tasks there is a set of edges E which represents the data dependencies between nodes. For example, tasks t_1 and t_2 are connected by e_{1,2}. In other words, between any dependent t_i and t_j there is an edge e_{i,j} in E. Thus, we can define edges and tasks as follows:

∀ (t_i ∧ t_j ∈ T), ∃ e_{i,j} = (t_i, t_j) ∈ E

A task is defined as T = {Tid, ReqPCapcty, deadline, region, level}, where Tid is the task id, ReqPCapcty is the requested processing capacity, deadline is the deadline constraint of a single task, region is the region/location where this task is preferably deployed, and level denotes how many hops the task is from the starting point, which in our case is the sensing events. (Within the text, we use tasks, workflow activities, and intermodules interchangeably.)

Each e_{i,j} is defined as e_{i,j} = (SrcID, DisID, dataTransfereRate, TupleProcessingReq, TupleLength), where SrcID is the source task id (node t_i), DisID is the destination task id (node t_j), dataTransfereRate expresses the data transfer rate between t_i and t_j, TupleProcessingReq is the processing requirement of incoming tuples, and TupleLength is the total length of the tuple.

Each task has one or more predecessors unless it is a start task, and one or more successors unless it is a finish task (see Figure 2, which depicts start and finish tasks). Any task starts only after all its predecessor tasks have completed, so its earliest start time is equal to the maximum finish time of all of its predecessors.
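To make the notation concrete, the sketch below models TG = (T, E) and PG = (P, D) as plain Python data classes. The field names mirror the tuples defined above; the class layout, the tier/region fields on processors, and the helper methods are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative data model for TG = (T, E) and PG = (P, D); field names follow
# Sections 2.1.2-2.1.3, the class layout itself is assumed for this sketch.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Task:                      # t_i = (Tid, ReqPCapcty, deadline, region, level)
    tid: int
    req_p_capcty: float          # requested processing capacity (MIPS)
    deadline: float              # per-task deadline constraint
    region: int                  # preferred deployment region
    level: int                   # hops from the sensing events

@dataclass
class Edge:                      # e_ij = (SrcID, DisID, dataTransfereRate, TupleProcessingReq, TupleLength)
    src_id: int
    dis_id: int
    data_transfer_rate: float
    tuple_processing_req: float
    tuple_length: float

@dataclass
class Processor:                 # p_i = (pCapcty, upLnkLatency, pmLoad) plus fields used by the algorithm
    pid: int
    p_capcty: float              # processing capacity (MIPS)
    up_lnk_latency: float
    pm_load: float
    region: int
    tier: int                    # assumed encoding: 0 = Edge, 1 = Cloud

@dataclass
class TaskGraph:                 # TG = (T, E)
    tasks: Dict[int, Task] = field(default_factory=dict)
    edges: List[Edge] = field(default_factory=list)

    def predecessors(self, tid: int) -> List[int]:
        """Tasks whose edges have this task as their destination (DisID)."""
        return [e.src_id for e in self.edges if e.dis_id == tid]

@dataclass
class ProcessorGraph:            # PG = (P, D); link delays d_ij kept in a dict
    processors: Dict[int, Processor] = field(default_factory=dict)
    delays: Dict[Tuple[int, int], float] = field(default_factory=dict)
```

Any equivalent representation (for example, adjacency lists keyed by DisID) would serve the placement algorithm equally well.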
[Figure 2: Tasks dependencies example for an IoT application]

2.1.3 Processor Graph. Consider the topology presented in Figure 3. A processor graph is represented by a DAG, PG = (P, D), where the set of nodes P = {p_1, p_2, p_3, ..., p_n} represents n processors; a processor in P can be a Cloud or an Edge resource. Between processors there is a link distance d that connects them; for example, processors p_1 and p_2 are connected by d_{1,2}. Thus there is a set of links d_{i,j} between any p_i and p_j in P, with d_{i,j} in D. We can define the distance links and processors as follows:

∀ p_i ∧ p_j ∈ P, ∃ d_{i,j} = (p_i, p_j) ∈ D

[Figure 3: Tasks dependencies example for an IoT application; the depicted topology comprises a Cloud resource p_9, gateways p_7 (GW1, GW2), Edge processors p_1, p_2, p_3 serving devices dev1-dev6, and links d_{1,7}, d_{3,7}, d_{7,9}]

Each processor p_i is defined as p_i = (pCapcty_i, upLnkLatency_i, pmLoad_i), where pCapcty_i holds the processing capacity of p_i, upLnkLatency_i is the communication bandwidth (uplink latency), and pmLoad_i is the current PM load.

2.1.4 Objectives. The objective is to propose a placement mechanism which aims to maximize the utilization of Edge resources as well as minimize the cost and latency of an IoT application. In this work, the main information considered for task/workflow activity placement is as follows:

• Network Topology: available resources and their computation capabilities.
• Location of initiated requests or consumed services.
• Service Type: data storing and data filtering are services that require different computation/storage capabilities; therefore, the approximate size of the data is considered (e.g., the Million Instructions Per Second (MIPS) required by an activity/task).
• Level of activity: the number of hops that separate a task/activity from its starting point, in our case from the IoT devices that generate the data. It is essential to denote the dependency between tasks: this helps to avoid assigning a task to a resource on a level lower than its predecessor, and allows parallel processing for tasks that are within the same level.
• Quality of Service: knowing in advance the constraints on the offered services plays a role in selecting the type and layer of resource. In our work we consider minimizing the end-to-end response time by considering the deadline constraint for each involved task/activity.

To express our objective of maximizing the utilization of Edge resources: each task t_i can be deployed on R_Cloud or on R_Edge, and deploying t_i on a Cloud or an Edge resource is a binary variable. If t_i is deployed on the Cloud, then 1 is assigned to Ti_RCloud and 0 to Ti_REdge, and vice versa. Each task is processed on either Edge or Cloud resources; thus, we try to maximize the number of tasks assigned to Edge resources whenever it is appropriate, which we represent mathematically as:

Maximize \sum_{i=0}^{n} Ti_REdge    (1)

The processing time of a task t_i on a processor p_j is calculated as given in Equation 2:

T_Exec(t_i, p'_j) = ( \alpha_{t_i}^{p_j} \cdot f_{t_i}(z_i) ) / p'_j + upLnkLatency_{p_j}    (2)

Here, f_{t_i}(z_i) represents the computation requirement of task t_i, z_i represents the data coming into task t_i, and \alpha_{t_i}^{p_j} represents the number of modules currently running concurrently with t_i on node p'_j.

The CPU requirement for upcoming data/tuples of task t_i is given in Equation 3, computed over each edge that has t_i as its destination (i.e., the DisID of edge e is t_i), where DTR represents the data transfer rate of edge e and TPR represents the required processing capacity for each tuple transferred over edge e:

\sum_{e=0}^{y} DTR_e \times TPR_e,  ∀ edge e with DisID = t_i    (3)
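For illustration, Equations 2 and 3 translate directly into two small functions. The reading of p'_j in the denominator as the processing capacity of p_j (in MIPS) is an assumption of this sketch, and the numbers in the example calls are made up.

```python
# Equations 2 and 3 as plain functions. Interpreting p'_j as the processing
# capacity of p_j (MIPS) is an assumption made for this sketch.
from typing import Iterable, Tuple

def execution_time(alpha_ij: int, f_ti_zi: float, p_capcty_j: float,
                   up_lnk_latency_j: float) -> float:
    """Eq. (2): T_Exec(t_i, p_j) = alpha * f_ti(z_i) / p'_j + upLnkLatency_pj.
    alpha_ij         -- modules currently running concurrently with t_i on p_j
    f_ti_zi          -- computation requirement of t_i for its input data z_i (MI)
    p_capcty_j       -- processing capacity of p_j (MIPS), read as p'_j
    up_lnk_latency_j -- uplink latency of p_j
    """
    return (alpha_ij * f_ti_zi) / p_capcty_j + up_lnk_latency_j

def required_cpu(incoming_edges: Iterable[Tuple[float, float]]) -> float:
    """Eq. (3): sum of DTR x TPR over every edge whose destination (DisID) is t_i.
    incoming_edges -- iterable of (data_transfer_rate, tuple_processing_req) pairs
    """
    return sum(dtr * tpr for dtr, tpr in incoming_edges)

# Tiny illustrative numbers (not taken from the paper):
print(execution_time(alpha_ij=2, f_ti_zi=2500, p_capcty_j=1600, up_lnk_latency_j=2))
print(required_cpu([(500, 2.0), (1000, 1.5)]))
```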
The total cost of running t_i on node p_j is the sum of the memory cost memCost_Ti, communication cost commCost_Ti, storage cost storgCost_Ti, and node cost nodeCost_Ti, as given in Equation 4. Each of these costs is defined in Equations 5-8 respectively.

Cost(T_i) = memCost_Ti + commCost_Ti + storgCost_Ti + nodeCost_Ti    (4)
memCost_Ti = ( f_{t_i}(z_i) / p_i ) \cdot memoryCostUnit    (5)
commCost_Ti = sizeDataIn \cdot commCostUnit    (6)
storgCost_Ti = DataToStore \cdot StorageCostUnit    (7)
nodeCost_Ti = Ti_Exec \cdot NodeCost    (8)

2.1.5 Proposed Algorithm. In the following we present the proposed algorithm for placing intermodules among the available resources while considering the following objectives and constraints.

Offline Integer Programming Formulation. Here we aim to minimize the cost of deploying intermodules on the available resources and the end-to-end response time. For the offline version of the intermodule placement problem, integer programming formulations are derived. These formulations are used to devise limits on the suggested approach. The main objective is to minimize the execution time while considering other constraints such as the cost/budget constraint. Table 1 summarizes the notations used in our formulation. Our main decision variable, denoted x_ij, indicates whether task t_i is placed on resource p_j (see constraint 10a). The formulation is as follows:

obj = Minimize \sum_{i=0}^{n} Time_{task_i}    (9)

subject to

x_ij ∈ {0, 1},  ∀ i = 0, 1, ..., n;  ∀ j = 0, 1, ..., m    (10a)
\sum_{i=0}^{n} Cost_Ti <= CostConstraint    (10b)
\sum_{i=0}^{n} Ti_REdge > \sum_{i=0}^{n} Ti_RCloud    (10c)
\sum_{i, level=l} t_ij <= 1    (10d)
\sum_{k=0, k != i,j}^{predecessorListSize} t_kj^tier <= t_ij^tier    (10e)

Table 1: Notations for the offline integer formulations and symbols used in the algorithm
Notation | Meaning
t_i | task/activity/intermodule i
p_j | a resource with processor p_j
e_ij | dependency edge between two tasks t_i and t_j
l_ij | link between two resources P_i and P_j
bw_ij | bandwidth of link l_ij
z_ij | data size transferred over edge dependency e_ij
dis_ij | distance between P_i and P_j
d_ij | link delay of link l_ij between P_i and P_j
p_i^capacity | computation capacity of resource P_i
ReqPCapcty | requested processing capacity
level | how many hops the task is from the starting point
SrcID | source id
DisID | destination id
dataTransfereRate | data transfer rate between t_i and t_j
TupleProcessingReq | processing requirement of the tuples
TupleLength | length of tuples
pCapcty_i | processing capacity of p_i in MIPS
upLnkLatency_i | uplink latency of p_i
pLoad_i | current CPU load of p_i
Ti_REdge | task t_i is running on an Edge resource
Ti_RCloud | task t_i is running on a Cloud resource
TimeConstraint_tij | time constraint of running t_i
Time_tij | execution time of running t_i on resource p_j
t_ij^tier | tier of the resource p_j that runs t_i
predecessorListSize | number of predecessors of t_i with ReqPCapcty less than t_i's
SortedResorces | resources sorted by their execution time of task t_i in ascending order

Constraint (10a) enforces the binary nature of x_ij. Constraint (10b) ensures that the cost of the deployment plan does not exceed the cost/budget limit. Constraint (10c) ensures that the number of intermodules assigned to Edge resources is greater than the number assigned to Cloud resources. Constraint (10d) ensures that no two intermodules from the same level are assigned to the same resource. Constraint (10e), applied over the predecessor list of t_i (see predecessorListSize in Table 1), ensures that no intermodule t_i is assigned to a resource located in a tier lower than that of any of its predecessors.
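For readers who want to experiment with the offline formulation, the sketch below encodes Equations 9 and 10a-10e using the PuLP modelling library. PuLP, the toy task/resource data, the explicit one-resource-per-task constraint, and the per-predecessor reading of constraint (10e) are all assumptions made for illustration; the paper itself does not prescribe a solver.

```python
# Minimal PuLP sketch of the offline integer program (Eq. 9, 10a-10e).
# All numbers below are illustrative, not taken from the paper.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

n_tasks, n_res = 4, 3
exec_time = [[5, 3, 1], [4, 2, 1], [6, 3, 2], [8, 4, 2]]   # Time_tij for task i on resource j
cost      = [[1, 2, 6], [1, 2, 5], [2, 3, 8], [2, 3, 9]]   # Cost_Ti on resource j
is_edge   = [1, 1, 0]              # resource j is Edge (1) or Cloud (0)
level     = [0, 1, 1, 2]           # hop level of each task
tier      = [0, 0, 1]              # assumed tier encoding: Edge=0, Cloud=1
preds     = {1: [0], 2: [0], 3: [1, 2]}   # predecessor lists per task
BUDGET    = 20

prob = LpProblem("intermodule_placement", LpMinimize)
x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n_res)] for i in range(n_tasks)]

# (9) minimize total execution time
prob += lpSum(exec_time[i][j] * x[i][j] for i in range(n_tasks) for j in range(n_res))
# each task placed on exactly one resource (implicit in the paper's model, stated here)
for i in range(n_tasks):
    prob += lpSum(x[i][j] for j in range(n_res)) == 1
# (10b) budget limit
prob += lpSum(cost[i][j] * x[i][j] for i in range(n_tasks) for j in range(n_res)) <= BUDGET
# (10c) strictly more tasks on Edge than on Cloud (>= +1 since counts are integral)
prob += lpSum(is_edge[j] * x[i][j] for i in range(n_tasks) for j in range(n_res)) >= \
        lpSum((1 - is_edge[j]) * x[i][j] for i in range(n_tasks) for j in range(n_res)) + 1
# (10d) at most one task of a given level per resource
for l in set(level):
    for j in range(n_res):
        prob += lpSum(x[i][j] for i in range(n_tasks) if level[i] == l) <= 1
# (10e) a task's tier is never lower than the tier of any of its predecessors
for i, ps in preds.items():
    for k in ps:
        prob += lpSum(tier[j] * x[k][j] for j in range(n_res)) <= \
                lpSum(tier[j] * x[i][j] for j in range(n_res))

prob.solve(PULP_CBC_CMD(msg=False))
for i in range(n_tasks):
    for j in range(n_res):
        if x[i][j].value() > 0.5:
            print(f"task {i} -> resource {j}")
```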
In Algorithm 1, the task and resource graphs are the inputs. Lines 6, 7 and 8 define the associated level of each intermodule, which is calculated based on the number of hops between the intermodule and the source of the captured event of interest. Lines 10 to 12 calculate the execution time of deploying t_i on each resource p_j and then sort all resources by their execution time. Starting with the resource with the least execution time, the constraint checking in Lines 14 to 28 proceeds as follows: it checks whether t_i has predecessors, in order to verify that no predecessor of an intermodule t_i is assigned to a resource allocated in a layer higher than the currently checked p_j. If there is a resource p_j on which one of t_i's predecessors is deployed, or no predecessor of t_i is deployed on a resource in a layer above the current p_j, then the constraints associated with the intermodule t_i are checked by calling checkConstraintsConsistencyFunction (Line 21). If there is no resource p_j that matches the requirements and the considered constraints, then t_i is assigned to a Cloud resource.

checkConstraintsConsistencyFunction checks that a resource p_j can sustain the intermodule within its deadline constraint without exceeding the budget, that p_j is not running other tasks that have the same level as the incoming intermodule, and that it is within the same region. If searching all resources within the same region does not satisfy the constraints, the other regions are checked; if none satisfies them, false is returned. If false is returned, the task is assigned to a Cloud resource, provided the cost constraint is not violated.
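The description of checkConstraintsConsistencyFunction above can be condensed into a small helper. The sketch below assumes tasks and resources are plain dictionaries carrying the fields introduced in Section 2.1 (region, level, deadline, requested and free capacity) and reuses Equation 2 for the execution time; it illustrates the checks rather than reproducing the authors' code.

```python
# Sketch of checkConstraintsConsistencyFunction (Algorithm 2) using plain dicts.
# Field names follow Section 2.1; the dict-based representation is assumed.
def exec_time(task, res):
    """Eq. (2), with p'_j read as the resource's processing capacity (assumption)."""
    return (res["concurrent"] * task["mi"]) / res["capacity"] + res["uplink_latency"]

def check_constraints_consistency(task, res, assigned_levels, same_region_only=True):
    """True if res can host task: same region (optionally relaxed when other regions
    are searched), no co-located task of the same level, enough free CPU, and an
    execution time within the task's deadline."""
    if same_region_only and task["region"] != res["region"]:
        return False                              # wrong region
    if task["level"] in assigned_levels.get(res["id"], set()):
        return False                              # a task of the same level already runs here
    if task["req_capacity"] > res["free_capacity"]:
        return False                              # not enough CPU left
    return exec_time(task, res) <= task["deadline"]

# Illustrative call (toy values, not from the paper):
task = {"id": 1, "mi": 2500, "req_capacity": 500, "deadline": 4.0, "region": 0, "level": 1}
res  = {"id": 7, "capacity": 3000, "free_capacity": 1200, "uplink_latency": 2,
        "concurrent": 1, "region": 0}
print(check_constraints_consistency(task, res, assigned_levels={7: {0}}))
```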
Algorithm 1: SLA-aware algorithm for application module placement across layers
1  Input: TG = (T, E); PG = (P, D)
2  region = -1
3  found = false
4  Output: application modules mapped to available resources
5  Objectives: minimize cost and minimize application latency
6  foreach t_i in TG do
7      define a deadline constraint d
8      assign a level value to t_i
9      foreach resource p_j in PG do
10         calculate Time_tij
11         add(SortedResorces, p_j)
12     end
13     foreach resource p_j in SortedResorces do
14         if t_i has a predecessor then
15             foreach t_k in the predecessor list of t_i do
16                 if t_k is already assigned to a PG resource then
17                     p_l = the PG resource that t_k is assigned to
18                     if p_l's level is greater than p_j's level then
19                         break
20                 else
21                     call checkConstraintsConsistencyFunction; if it returns true then
22                         assign t_i to p_j; update the list of assigned modules of p_j; found = true
23                     else
24                         continue
25                 end
26             end
27         end
28     end
29     if not found then
30         calculate the cost of executing t_i on the Cloud as in Eq. 4
31         if TotalCost plus the cost of executing t_i on a Cloud resource is less than the budget then
32             update TotalCost
33             assign t_i to a Cloud resource
34         else
35             log that the cost exceeds the allowed budget, and break
36         end
37     end

Algorithm 2: Checking budget constraints consistency after mapping tasks to resources
1  checkConstraintsConsistencyFunction
2  if region == -1 then
3      if t_i's region == p_j's region then
4          if p_j does not have tasks from the same level as t_i then
5              calculate the execution time of t_i on p_j by applying Equation 2
6              if the requested CPU is less than the available CPU then
7                  // check that the resource can sustain the placed module: if the time is less than or equal to the deadline associated with t_i then
8                      return true
9      else
10         change the region value from -1 to another region, different from the region of t_i; return false
11     end
12 else
13     go to line 4
14 end

2.2 Time Complexity Analysis
We solve this problem as a context-aware approach. If an intermodule has no predecessors, then in the best case the first search attempt returns a resource that matches the requirements of each intermodule, so the time complexity is θ(n), where n is the number of intermodules. In the worst case, finding a resource that matches an intermodule's constraints, for each intermodule t_i, requires checking all m resources in the resource list; therefore the time complexity is θ(nm), where m is the number of IoT devices (e.g., mobiles) among the available resources in PG. Cases where an intermodule t_i has predecessors require more time, to avoid assigning an intermodule to a resource in a layer lower than that of its predecessors' resources. Thus we perform a checking step that iterates over all of an intermodule's predecessors. As a result, in the best case, when an intermodule has only one predecessor and a resource matching the intermodule's constraints is found on the first attempt, the time complexity is θ(n)·θ(1), and considering only the upper bound this equals θ(n). In the worst case, when an intermodule t_i has k predecessors, finding a matching resource leads to checking all m available resources; in this case the time complexity is θ(nmk), so the worst-case time complexity is θ(nm) + θ(nmk). However, since k, the number of predecessors of an intermodule, is less than the total number of intermodules n, the time complexity is θ(nm).
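Before moving to the evaluation, the listing below condenses the main loop of Algorithm 1 into Python: resources are ranked by the execution time of Equation 2, resources below the layer of an already-placed predecessor are skipped, Algorithm 2 is applied as a pluggable check, and unplaced tasks fall back to the Cloud while the budget holds. The dictionary-based data layout and the toy demo values are assumptions for illustration, not the authors' implementation.

```python
# Condensed sketch of Algorithm 1: greedy, SLA-aware placement across Edge and Cloud.
# Tasks/resources are plain dicts; exec_time and check stand in for Eq. 2 and Algorithm 2.
def place_modules(tasks, resources, budget, exec_time, check, cloud_cost):
    """tasks: topologically ordered list of task dicts ('id', 'level', 'preds');
    resources: list of resource dicts ('id', 'tier': 0=Edge, 1=Cloud);
    exec_time(t, r) -> float, check(t, r, placement) -> bool, cloud_cost(t) -> float."""
    placement, total_cost = {}, 0.0
    by_id = {r["id"]: r for r in resources}
    for t in tasks:                                            # lines 6-8: level/deadline precomputed
        ranked = sorted(resources, key=lambda r: exec_time(t, r))   # lines 10-12
        chosen = None
        for r in ranked:                                       # lines 13-28
            # never place t below the layer of an already-placed predecessor
            pred_tiers = [by_id[placement[p]]["tier"] for p in t["preds"] if p in placement]
            if pred_tiers and r["tier"] < max(pred_tiers):
                continue
            if check(t, r, placement):                         # line 21: Algorithm 2
                chosen = r
                break
        if chosen is None:                                     # lines 29-36: Cloud fall-back
            cost = cloud_cost(t)                               # Eq. 4
            if total_cost + cost > budget:
                raise RuntimeError("cost exceeds the allowed budget")
            total_cost += cost
            chosen = next(r for r in resources if r["tier"] == 1)
        placement[t["id"]] = chosen["id"]
    return placement

# Minimal demo with toy data (illustrative only):
if __name__ == "__main__":
    res = [{"id": 0, "tier": 0, "speed": 1600}, {"id": 9, "tier": 1, "speed": 4000}]
    tsk = [{"id": 1, "level": 0, "preds": []}, {"id": 2, "level": 1, "preds": [1]}]
    plan = place_modules(
        tsk, res, budget=10.0,
        exec_time=lambda t, r: 1000.0 / r["speed"],
        check=lambda t, r, placement: r["tier"] == 0,   # pretend only Edge passes Algorithm 2
        cloud_cost=lambda t: 1.0)
    print(plan)   # e.g. {1: 0, 2: 0}
```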
3 EVALUATION
To evaluate our proposed algorithm we run it using the iFogSim simulator, which simulates IoT applications and enables application modules to be allocated and scheduled among Fog and Cloud resources. There are a number of simulators for IoT; however, we chose iFogSim because it is built on CloudSim, which is popular among researchers for testing various strategies/algorithms. Furthermore, iFogSim has been used by a considerable number of published works, such as [7], [6] and [1].

3.1 Use Case Studies
For evaluation purposes, we consider the following use case scenarios.

3.1.1 Remote Health Monitoring Service (RHMS) Case Study 1. Consider a Remote Health Monitoring Service (RHMS) where patients (elderly people, people with long-term treatments, ...) can subscribe to be monitored on a daily basis. Data is collected and filtered, and if there is an interesting pattern or an event that matches a threshold value, the data can be analysed on a small scale if it relates to a specific patient. However, cases that require comparing incoming data with historical data, or where the same events (such as fever signs) come from different subscribers, can be considered a large-scale data analysis task that needs high computational power. The most interesting analysis results are then stored. In this use case, the following workflow activities are considered as tasks t_1, t_2, t_3, t_4, t_5: data collection, data filtering, small-scale real-time data analysis, large-scale real-time data analysis, and storing data. Sensors are attached to patients (hand/wrist sensors, camera video for some patients, and mobile accelerometers to capture activity patterns). They are connected to a smart phone acting as a gateway, which is then connected to a WiFi gateway. The WiFi gateway is connected to an Internet Service Provider, which in turn is connected to a Cloud datacenter. A description of the intermodule edges of this case study is given in Table 2.

Table 2: Description of intermodule edges of Case Study 1
Tuple Type       | CPU Length [MIPS] | N/W Length
Sensor           | 2500              | 500
COLLECTED_DATA   | 2500              | 500
FILTERED_DATA    | 1000              | 1000
ANALYSED_RESULT1 | 5000              | 2000
ANALYSED_RESULT2 | 10000             | 2000
COMMANDS         | 28                | 100
3.1.2 Intelligent Surveillance Case Study 2. The Intelligent Surveillance application comprises five main processing modules: Motion Detector, Object Detector, Object Tracker, PTZ Control, and User Interface. This case study is one of the case studies mentioned in [3]; we use it because we plan to compare our results with the Edge-ward module placement algorithm presented in [3]. For more details of the case study and the algorithm, readers are advised to refer to [3]. A description of the intermodules of this case study is depicted in Table 5.

3.2 Physical network
For the case studies, we consider a physical topology with different types of Fog devices. Table 3 and Table 4 present the configuration of the used topology. This configuration is the same for both case studies, except for the number of Fog devices: Case Study 1 consists of four areas and each area has four Fog devices; Case Study 2 consists of two areas and each area has four Fog devices.

Table 3: Associated latency of network links
Source       | Destination      | Latency [ms]
IoT device   | Smart Phone      | 1
Smart phone  | WiFi Gateway     | 2
WiFi Gateway | ISP Gateway      | 2
ISP Gateway  | Cloud datacenter | 100

Table 4: Configuration description of Fog devices
Device Type  | CPU [GHz] | RAM [GB] | Power [W]
Smart Phone  | 1.6       | 1        | 87.53 (M) / 82.44 (I)
WiFi Gateway | 3         | 4        | 107.339 (M) / 83.433 (I)
ISP Gateway  | 3         | 4        | 107.339 (M) / 83.433 (I)
Cloud VM     | 3         | 4        | 107.339 (M) / 83.433 (I)

3.3 Performance Evaluation Results
3.3.1 Analysis of Case Study 1. We applied the proposed algorithm to Case Study 1 and compared the performance results with placing the intermodules on the Cloud. Execution time, energy consumption, network usage and cost are the metrics captured by simulating the application and applying the proposed approach to place its intermodules using iFogSim. The following paragraphs compare the results of the proposed approach with the Cloud-only approach under the configurations listed above.

Execution Time. The overall execution time for Case Study 1 is lower when applying the proposed approach for task placement than with the Cloud-only approach (Figure 4).

[Figure 4: Time Execution of Case Study 1; execution time (ms): Cloud 5640, Proposed 1345]

Networking Usage. As can be noted in Figure 5, there is not much difference in network usage; however, the proposed approach reflects slightly more network usage at the Edge, probably because it allocates most intermodules to Edge resources.

[Figure 5: Networking Usage of Case Study 1; network usage (KB): Cloud approach: 0.099 on Cloud, 163.076 on Edge, 3.1984 on Mobile; Proposed approach: 0.099 on Cloud, 173.5073 on Edge, 3.1984 on Mobile]

Energy Consumption. In general, the Cloud-only approach, as depicted in Figure 6, recorded a higher level of energy consumption on both the Cloud and mobile layers compared with the proposed approach; on the Edge layer there is only a slight difference between the two approaches, which may have the same cause: the proposed approach allocates more intermodules to the Edge rather than the Cloud.

[Figure 6: Energy Consumption of Case Study 1; energy (MJ): Cloud approach: 15.13206392 on Cloud, 4.171665 on Edge, 13.62222746 on Mobile; Proposed approach: 13.53368436 on Cloud, 4.230683148 on Edge, 13.21738922 on Mobile]

Cloud Cost. Figure 7 depicts the cost of implementing Case Study 1 when applying the Cloud-only approach and our proposed approach. Cloud cost is higher with the Cloud-only approach, while our approach is five times less costly than the Cloud-only approach because it only allocates "storing data" to the Cloud, while the rest of the deployed tasks are allocated to Edge resources.

[Figure 7: Cloud Cost of Case Study 1; cost on Cloud: Cloud-only 25690.02014, Proposed 3029.449143]
3.3.2 Analysis of Case Study 2. We applied the proposed algorithm to Case Study 2 and compared the performance results with placing the intermodules on the Cloud only, as well as with the Edge-ward placement algorithm proposed in [3]. Execution time, control loop latency, energy consumption, network usage and cost are the metrics that have been captured. The following paragraphs describe the comparison results.

Execution Time. The execution times of all three approaches (Cloud-only, Edge-ward placement, and our proposed placement approach) are shown in Figure 8. Edge-ward placement has the highest execution time and the proposed approach has the lowest.

[Figure 8: Time Execution of Case Study 2; execution time (ms): Cloud 2528, Edge-ward 2930, Proposed 868]

Networking Usage. Networking usage is described for the three tiers: IoT devices (mobiles), Edge resources (WiFi gateways) and Cloud. Edge-ward placement reflects the least network usage among all approaches on the Edge and mobile tiers. The proposed approach recorded no network usage on the Cloud, but recorded a high level of network usage on the Edge tier. This seems to be because intermodules are placed on different Edge resources, since the approach considers parallel processing for independent tasks; in addition, following a greedy approach might affect the overall optimality of the resource allocation mechanism.

[Figure 9: Networking Usage of Case Study 2; network usage (KB): Cloud approach: 0.099 on Cloud, 326.0964 on Edge, 6.3968 on Mobile; Edge-ward: 0 on Cloud, 16.4832 on Edge, 0.3232 on Mobile; Proposed: 0 on Cloud, 1038.61192 on Edge, 56.7716 on Mobile]

Energy Consumption. Figure 10 shows the energy consumption for the three approaches; the Cloud-only approach demonstrates the highest energy consumption on the Cloud tier, whereas on the Edge and mobile/IoT-device tiers all approaches reflect a similar level of energy consumption, with a slightly lower level for the proposed approach on the mobile layer.

[Figure 10: Energy Consumption of Case Study 2; energy (MJ): Cloud approach: 14.28120236 on Cloud, 2.502999 on Edge, 6.68893349 on Mobile; Edge-ward: 13.34970988 on Cloud, 2.502999 on Edge, 6.945115755 on Mobile; Proposed: 13.32 on Cloud, 2.563966048 on Edge, 6.606916686 on Mobile]

Cloud Cost. Figure 11 shows the Cloud cost. Since the proposed approach placed all tasks on the Edge, its Cloud cost is zero, while the Edge-ward placement approach reflects a lower cost than the Cloud-only approach.

[Figure 11: Cloud Cost of Case Study 2; cost on Cloud: Cloud-only 13627.17267, Edge-ward 421.2034006, Proposed 0]
4 DISCUSSION
We have applied our heuristic algorithm to decentralize task placement in a cooperative way between Edge and Cloud resources. We considered an RHMS as Case Study 1 and compared it with the Cloud-only approach (an approach where tasks are placed only on the Cloud). The proposed approach demonstrates lower execution time, control loop delay, cost, network usage and energy consumption. Furthermore, we compared our approach with a built-in use case in iFogSim: we compared the results with the Edge-ward placement algorithm applied to Case Study 2. In general, our proposed approach shows better results, as presented in the previous section.

5 RELATED WORK
Ref [7] offers a new, multi-layered, IoT-based Fog computing architecture. In particular, it develops a service placement mechanism that optimizes service decentralization in the Fog landscape by using context-aware information such as location, response time and service resource consumption. The approach is used to increase the efficiency of IoT services in terms of response time, energy and cost reduction. However, this work considers tasks to be independent, and deadline constraints to be at the application level only (i.e., there are no deadline constraints for each task involved). Ref [6], iFogStor, seeks to take advantage of Fog nodes' heterogeneity and location to reduce the overall latency of storage and data retrieval in the Fog. The authors formulate the data placement problem as a GAP (Generalized Assignment Problem) and propose two solutions: 1) an exact solution using integer programming and 2) a geographically based solution to decrease the solving time. However, its focus is on storing data at the Edge to ease data retrieval. In Ref [1] the authors propose an infrastructure and IoT application model as well as a placement approach that takes into account the power consumption of the system and minimizes delay violations using a Discrete Particle Swarm Optimization (DPSO) algorithm; the iFogSim simulator is used to evaluate the proposed approach. However, the authors consider the effect of their algorithm on energy consumption and delay only. Ref [4] suggests a smart decision-making system to assign tasks locally; the remaining tasks are transferred to peer nodes in the network or to the Fog/Cloud. However, their approach only allows task processing to be executed in sequential order (there is no parallel execution capability).

6 CONCLUSION AND FUTURE WORK
Due to the limited computational and resource capabilities of IoT nodes, tasks can be allocated to Edge or Cloud resources taking into account a number of factors such as task constraints, node load and computing capability. We suggested a heuristic algorithm for allocating tasks among the available resources. The allocation algorithm takes into account factors such as the time limit related to each task, location, and budget constraints. We utilized iFogSim to evaluate the proposed approach on two use case studies. The performance analysis demonstrates that the suggested algorithm minimizes cost, execution time, control loop delay, networking and Cloud power usage compared to the Cloud-only and Edge-ward placement methods. For future work, we will carry out an evaluation of the proposed approach on real systems.
An Extension by using context-aware information like location, response time to iFogSim to Enable the Design of Data Placement Strategies. In 2018 IEEE 2nd International Conference on Fog and Edge Computing (ICFEC). 1–8. https://doi.org/ and service resource consumption. The approach is being used in 10.1109/CFEC.2018.8358724 an optimal way to increase the efficiency of IoT services in terms [7] Minh Quang Tran, Duy Tai Nguyen, Van An Le, Hai Nguyen, and Tran Vu Pham. of response time, energy and cost reduction. However, this work 2019. Task Placement on Fog Computing Made Efficient for IoT Application Provision. Wireless Communications and Mobile Computing 2019 (01 2019), 1–17. considers tasks to be independent, and deadline constraints to be https://doi.org/10.1155/2019/6215454 at application level only (i.e., there is no deadline constraints for 37