Journal of Network and Computer Applications

Fog computing has emerged as a complementary solution to address the issues faced in cloud computing. While fog computing allows us to better handle time/delay-sensitive Internet of Everything (IoE) applications (e.g. smart grids and adversarial environments), there are a number of operational challenges. For example, the resource-constrained nature of fog nodes and the heterogeneity of IoE jobs complicate efforts to schedule tasks efficiently. Thus, to better streamline time/delay-sensitive and varied IoE requests, we contribute a smart layer between IoE devices and fog nodes that incorporates an intelligent and adaptive learning-based task scheduling technique. Specifically, our approach analyzes the service type of each IoE request and presents an optimal strategy to allocate the most suitable available fog resource accordingly. We rigorously evaluate the performance of the proposed approach using simulation, as well as its correctness using formal verification. The evaluation findings are promising, both in terms of energy consumption and Quality of Service (QoS).


Introduction
In the emerging Internet of Everything (IoE) paradigm, billions of devices are being connected to the Internet. The number, types and nature of Internet-connected devices will continue to increase for the foreseeable future. For example, it was reported that more than 50 billion devices will be linked to the Internet by 2020 (Mohan and Kangasharju, 2017), with an estimated market worth of $7.1 trillion (Wortmann and Flüchter, 2015). Quality of Experience (QoE) is one of several key metrics for IoE users (Mahmud et al., 2019). To deal with limitations inherent in a cloud computing environment (e.g. privacy of users sourcing data to the cloud, particularly when hosted in an overseas jurisdiction, and performance issues such as latency), there have been attempts to 'push' computing to the edge of the network via fog nodes (Choo et al., 2018; Osanaiye et al., 2017). Hence, the fog computing paradigm has emerged to handle time/delay-sensitive IoE applications amid the increase in the number of IoT devices while achieving a satisfactory QoE.
Existing techniques in fog computing lack adaptive and intelligent behavior. They only consider direct communication of end devices, preferably with the closest fog nodes. However, this type of architecture only supports task scheduling at the level of individual node queues. Further, it is quite complex to implement adaptive and intelligent learning-based task scheduling in such an architecture. Consequently, we cannot take full advantage of the heterogeneity of the fog nodes surrounding a set of IoT devices. In order to facilitate intelligent and adaptive task scheduling policies in this dynamic and heterogeneous environment, we need a smart layer (Aazam and Huh, 2014) between end devices and fog nodes with three main capabilities: 1) the ability to decide whether an incoming request should be served by a cloud or a fog node, 2) the capability to schedule the incoming task to the most appropriate fog node among the available fog devices, and 3) the functionality to extensively implement adaptive and intelligent learning-based task scheduling. In this paper, we propose an intelligent and adaptive learning-based technique for optimal task scheduling in a fog computing paradigm. The key contributions of the paper are summarized as follows:
• We propose and implement a smart layer between IoE/IoT devices and Fog nodes.
• We propose a QoS-aware approach (hereafter referred to as the Learning Repository Fog-Cloud, LRFC). The proposed service is provisioned at multiple geographically distributed gateways deployed in the proximity of IoE devices and their corresponding Fog nodes. This makes our scheme highly scalable, relieving potential performance bottlenecks. Moreover, the proposed deployment model suits nearly all existing and future environments (i.e., IoT, IoE, smart X (smart city, smart grid, smart building, smart forest, etc.)).
• For a comprehensive and unbiased comparison, we implement the state-of-the-art task scheduling policies on our proposed smart layer.
• We thoroughly evaluate the performance of the proposed approach using simulation, and prove its correctness through formal verification. The proposed approach shows promising results in terms of processing delay, overall network propagation time and power consumption.
The remainder of this paper is organized as follows. In the next section, we present the background of the proposed approach and relevant works. Section 3 describes the complete methodology (i.e., system model and algorithms) of the proposed approach. The formal verification of our proposed scheme is detailed in Section 4. Performance evaluation is comprehensively elaborated in Section 5. Finally, Section 6 concludes the paper along with thoughts for future work.

Related work
This section briefly explains the background and related research.

Learning based approaches
Rule-based learning is the simplest form of artificial learning (Núñez et al., 2006). In general, rules are expressed as IF-THEN conditions, which can be simple or multi-conditional IF-ELSE statements (Ligêza, 2006) (Keshtkar et al., 2014). Rule-based learning has been applied in a wide range of applications, such as reducing and predicting the gap between predicted and actual energy consumption in buildings (Yuce and Rezgui, 2017), big data classification problems (Elkano et al., 2017), user behavior classification (Alrashed, 2017), and decision support systems for risk assessment and management strategies in distributed software development (Aslam et al., 2017).
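To make the IF-THEN notion concrete in the context of this paper, the following is a minimal illustrative sketch (our own, not taken from any of the cited systems) of rule-based classification of an incoming IoE task; the field names and thresholds are assumptions for illustration only.

```python
# Hedged sketch: a minimal rule-based classifier for IoE tasks.
# Field names ("deadline_ms", "mips") and thresholds are illustrative.

def classify_task(task):
    """Apply simple IF-THEN rules to decide where a task should run."""
    # IF the task is delay-sensitive THEN prefer a nearby fog node.
    if task["deadline_ms"] < 100:
        return "fog"
    # ELSE IF the task is compute-heavy THEN offload to the cloud.
    elif task["mips"] > 10_000:
        return "cloud"
    # Default rule: serve at the fog layer.
    else:
        return "fog"

print(classify_task({"deadline_ms": 50, "mips": 200}))      # fog
print(classify_task({"deadline_ms": 500, "mips": 20_000}))  # cloud
```

Such rules are cheap to evaluate but static; the hybrid approach proposed later combines them with case-based reasoning to adapt over time.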
Case-Based Reasoning (CBR), a lazy learning approach, works well where rich structured knowledge is not available in advance (e.g. in autonomic computing) (Aha, 1991) (Khan et al., 2011). For learning, it processes a set of cases to train the system and predicts values from these cases (and prior knowledge). In (Amin, 2017), for example, the authors proposed an architecture for sharing of experience using an agent-based system architecture layout (SEASALT), which works with diverse data repositories to maintain, retrieve, adapt and retain cases. Similarly, in (Brown et al., 2017), the authors proposed a temporal CBR for diabetes insulin dosing by looking at prior events such as blood pressure level, physical activity and carbohydrate ingestion. Additionally, similarity metrics such as Euclidean distance have also been used with CBR for pattern classification.
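The retrieval step of CBR with a Euclidean similarity metric can be sketched as follows; this is our own minimal illustration (the case features and solutions are invented), not the cited diabetes system.

```python
# Hedged sketch: CBR case retrieval using Euclidean distance over
# numeric case features. Cases and features are purely illustrative.
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_nearest(case_base, query):
    """Return the stored case most similar to the query features."""
    return min(case_base, key=lambda c: euclidean(c["features"], query))

cases = [
    {"features": (120, 0.8), "solution": "insulin_low"},
    {"features": (180, 0.2), "solution": "insulin_high"},
]
best = retrieve_nearest(cases, (175, 0.3))
print(best["solution"])  # insulin_high
```

The retrieved case's solution is then adapted to the new situation and the outcome retained, completing the classic retrieve-reuse-revise-retain cycle.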
Hybrid approaches can be more effective than either a rule- or case-based learning approach for solving complex problems (Van Den Bossche et al., 2010), (Prentzas and Hatzilygeroudis, 2003). In (Kumar et al., 2009), the authors used a hybrid approach for domain-independent clinical decision support in a hospital's intensive care unit. The potential of hybrid approaches was also demonstrated in medical diagnosis systems in (Sharaf-El-Deen et al., 2014) and (Tung et al., 2010).

Task scheduling in fog-cloud environment
Efficient task scheduling in fog computing, aimed at maximizing resource utilization, has been one of several research focuses in recent years (Mouradian et al., 2017; Bitam et al., 2018). We classify task scheduling approaches in Fog-Cloud environments into two main categories.

QoS-aware task scheduling in fog-cloud environment
In (Bitam et al., 2017), the authors proposed a bio-inspired optimization approach (i.e., the Bees Life Algorithm (BLA)) to address the job scheduling challenge in a fog computing environment. The approach optimizes the distribution of a set of tasks among fog nodes to deal with users' excessive requests for computational resources, while also seeking to minimize energy consumption and CPU execution time. In a different work, the authors of (Intharawijitr et al., 2016) proposed three different strategies for optimal resource utilization. First, a random methodology is used to select the fog nodes that execute tasks upon arrival. Second, the lowest-latency fog devices are preferred. Finally, the fog resources with the maximum available capacity are considered first. The three proposed policies were then evaluated using a mathematical model. The authors in (Zeng et al., 2016) introduced a joint task scheduling and image placement algorithm designed to minimize the overall task completion time for a better user experience. The first sub-problem investigated is balancing the workload on both client devices and computational servers; the second is the placement of task images on storage servers; and the final sub-problem is balancing input-output interrupt requests among the storage servers. The research in (Bittencourt et al., 2017) discusses application scheduling in fog computing and focuses on the influence of user mobility on application performance. Different scheduling policies decide, as application requests arrive, whether to execute them on a cloudlet or in the cloud. These include the concurrent policy, in which all requests received at a cloudlet are allocated to it without measuring resource usage.
The First-Come-First-Served (FCFS) policy works in a traditional way, serving requests on arrival until all resources are consumed. The third, the delay-priority policy, schedules the requests requiring the lowest delay first. In (Deng et al., 2016), the authors proposed a workload allocation framework to balance computational latency and power consumption in a fog-cloud environment. Like (Deng et al., 2016), the authors in (Zeng et al., 2016) studied the trade-off between power consumption and computational latency in fog. Their approach builds on convex optimization techniques such as the interior-point method (He et al., 2014), generalized Benders decomposition for the mixed-integer nonlinear programming problem (Li and Sun, 2006), and the Hungarian method (Kuhn, 2010). Furthermore, A. Toor et al. (2019b) proposed and evaluated an energy- and performance-aware fog computing scheme using the Dynamic Voltage Frequency Scaling (DVFS) technique while utilizing green renewable energy resources. Similarly, effective resource utilization and computing services for delay-sensitive applications were considered in (Song et al., 2016). That study proposed a graph-based system representation and a task-oriented dynamic load balancing algorithm that maps physical resources to virtualized resources. Each resource is represented by a node with a certain capacity. On the arrival of a new fog node, the algorithm reallocates the load in its neighborhood to maintain balance, accounting for the task distribution degree and the links among nodes. A reverse strategy removes edges that do not have sufficient resources.

QoE-aware task scheduling in fog-cloud environment
QoE refers to the user's experience of various service aspects; it considers user needs, perceptions and intentions regarding the provided services (Mahmud et al., 2019). Our published work (Mahmud et al., 2019) mainly focuses on QoE-aware task scheduling in a Fog computing environment. The article uses a fuzzy-logic-based learning approach to enhance QoE in hierarchical, distributed and heterogeneous Fog-IoT environments; the technique performs prioritized application placement on suitable Fog servers using fuzzy logic models. Similar to (Deng et al., 2016), the authors in (Oueis et al., 2015) also studied load balancing while focusing on quality of experience (QoE). Their algorithm uses clustering to meet computation demands and minimize power consumption; a first-in-first-out (FIFO) mechanism is used for task scheduling and an earliest-deadline-first (EDF) policy for resource allocation. The authors in (Aazam, 2015) and (Aazam and Huh, 2015) considered multiple factors and formulated resource management based on the customer's changing relinquish probability, service type and service price. In (Aazam and Huh, 2015), however, resources were accounted for based on the nature of the devices. A loyalty-based task scheduling model, a service-oriented resource management model for efficient and fair management of resources in IoT deployments, was also proposed; it incorporates the user's history of resource usage to increase fairness and efficiency when resources are actually consumed. Table 1 summarizes the literature discussed in this section. Today's dynamic and heterogeneous environment necessitates a smart layer between end devices and fog nodes to enable intelligent and adaptive task scheduling approaches.
The main capabilities of the smart layer should include: a) the capacity to decide whether an incoming request should be served by a cloud or a fog node; b) the ability to assign the incoming task to the most fitting fog node among those accessible in geographical proximity; and c) the extensive implementation of adaptiveness and learning-based intelligence for varied task scheduling processes.
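The three capabilities above can be sketched as a single dispatcher; this is a hedged illustration under our own assumptions (the class name, node fields and selection criterion are invented for exposition), not the paper's implementation.

```python
# Hedged sketch of the three smart-layer capabilities (a)-(c).
# All names (SmartLayer, "free_mips", "km") are our own illustration.

class SmartLayer:
    def __init__(self, fog_nodes):
        self.fog_nodes = fog_nodes  # dicts with free capacity and distance

    def target_tier(self, task):
        # Capability (a): decide cloud vs. fog for the incoming request.
        if any(n["free_mips"] >= task["mips"] for n in self.fog_nodes):
            return "fog"
        return "cloud"

    def pick_fog(self, task):
        # Capability (b): choose the most fitting fog node, here the
        # nearest one with sufficient free capacity.
        eligible = [n for n in self.fog_nodes if n["free_mips"] >= task["mips"]]
        return min(eligible, key=lambda n: n["km"]) if eligible else None

    def record(self, task, node, latency_ms):
        # Capability (c): feed observed outcomes back into a repository
        # so later scheduling decisions can adapt.
        node.setdefault("history", []).append((task["type"], latency_ms))

layer = SmartLayer([{"free_mips": 500, "km": 2}, {"free_mips": 50, "km": 1}])
print(layer.target_tier({"mips": 100, "type": "text"}))  # fog
```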

Methodology
The methodology of the proposed scheme is detailed below.

System model
In this paper, we have designed the complete architecture of the Fog-Cloud system, as shown in Fig. 2. The architecture contains three layers. The first layer comprises several IoE devices generating N requests. The second layer includes geographically distributed gateways deployed in the proximity of IoE devices and their corresponding Fog nodes. Provisioning our proposed service at these distributed gateways makes the technique highly scalable and avoids performance bottlenecks. The proposed deployment model suits existing and future environments (i.e., IoT, IoE, smart X (smart city, smart building, smart forest, etc.)).

Learning repository fog-cloud
To ensure effective job scheduling, we propose an intelligent and adaptive approach, named Learning Repository Fog-Cloud (LRFC), a software solution deployed at the various gateways in the second layer. The basic sequence and operations of the proposed system are shown in Fig. 3. The sequence starts with the generation of asynchronous tuples to the LRFC layer. Jobs are decomposed into tasks in the second step. The Learning Repository then creates its metadata in step three. In step four, the best-fitted fog servers are selected to serve the jobs. In step five, the LRFC schedules each task either for Fog or Cloud (i.e., if no suitable fog server is found, or all fog servers are completely occupied, the task is sent to the cloud for further processing). In step six, the Fog executes the task and returns the response details to the LRFC layer, whereas in step seven the cloud executes the received requests and returns its response to the LRFC layer. The LRFC receives the response and updates its information iteratively in step eight. Finally, the results are generated accordingly.

An adaptive and intelligent task scheduling approach
We propose a hybrid approach based on both rule-based learning and case-based reasoning to produce efficient results. For this purpose, a learning repository is created that stores the particulars of each incoming task, such as the tuple identification (ID), tuple type, and information about the resource where the task is served. It also maintains information such as the propagation time, execution time and energy consumption of the tuple at a specific resource. Tasks are scheduled to the available resources based on the information stored in the learning repository. Additionally, the selection of the service type (i.e., Fog or Cloud) is also made using the learning repository information. The learning repository is regularly updated, and scheduling decisions are made accordingly. We have tp_total tuples in total and fg_total fog servers. Initially, we create the learning repository: five percent (5%) of tp_total is used for training our proposed approach, and the remaining ninety-five percent (95%) is used for testing. Algorithms 1 and 2 provide a detailed and self-explanatory description of our proposed approach. The learning repository storages F_S and C_S are created for Fog and Cloud, respectively. The initial tuples used for training are abbreviated as tp_ini, and the tuples used for testing as tp_final. SR stands for Storage Repository, while D_T represents the type of the IoT job. The Internal Processing Time (ITP) is the time that a tuple/job takes for processing at a Fog or Cloud resource. The link propagation time is abbreviated as PT_L. In the LR_exe process, the selection of the fog server is based on the distance between the location where the request is generated and the location where the server is physically placed. The nearest server is selected with its server id fg_id, and the fg_id of the serving server is saved in the LR storage as well.
In the case of cooperation, if the selected server fg_i is busy, the tuple either waits in the queue or is sent to the Cloud. Algorithm 1 (Learning Repository Fog-Cloud Approach) first creates the storage repositories for fog and cloud, then takes the initial 5% of tuples for the training execution.
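Since the full pseudocode of Algorithms 1 and 2 is not reproduced here, the following is a hedged sketch of the train/test flow described above, under our own assumptions: a 5% training split, a repository keyed by job type that records the best-observed server, and a cloud fallback for unknown types. Field names and the timing model are illustrative.

```python
# Hedged sketch of the LRFC flow (not the paper's exact Algorithm 1):
# 5% of tuples train the learning repository; the remaining 95% are
# scheduled by looking up the best-performing fog server per job type,
# falling back to the cloud for unknown types.

def run_lrfc(tuples, fog_servers):
    split = max(1, len(tuples) * 5 // 100)
    repo = {}  # job type -> (fog id, best observed processing time)

    # Training phase (tp_ini): serve on the nearest server, record timings.
    for tp in tuples[:split]:
        fg = min(fog_servers, key=lambda f: f["km"])
        t = tp["mips"] / fg["mips"]  # illustrative processing-time model
        if tp["type"] not in repo or t < repo[tp["type"]][1]:
            repo[tp["type"]] = (fg["id"], t)

    # Testing phase (tp_final): schedule from the repository.
    placements = []
    for tp in tuples[split:]:
        entry = repo.get(tp["type"])
        placements.append(entry[0] if entry else "cloud")
    return placements
```

A fuller version would also update the repository after every served tuple, as step eight of the sequence in Fig. 3 describes.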

Formal verification of the proposed scheme
In this section, we analyze the efficiency of our scheduling algorithm in a formal way. To achieve this goal, we used Uppaal timed automata to describe the behavior of IoT devices and the scheduling algorithm, then used the underlying model checker to verify a set of derived properties. Given the rich expressiveness of the Uppaal formalism, the behavior of IoT devices, fogs and the algorithm can be constrained with time attributes such as the response time of the scheduler to acquire a request, the minimum/maximum time interval between two requests from the same IoT device, the time to communicate with a fog, etc.
Request types describe the combinations of resource types that an IoT device can request. A request type can include a single resource (e.g. a computation resource) or a combination of different resources (e.g. computation and storage resources). We use ReqT to denote a request type. A request R targets a set of resources, each with a different amount, e.g. R = (Compute = 500, RAM = 12). We use the notation |R_i| to refer to the amount of an individual resource of the request R; for example, |R_1| = (Compute, 500). Fig. 4 shows a parameterizable model of IoT devices. A device is initially at location Init, waiting for a certain time interval before issuing a request. The request type and budget are randomly generated using function Fresh(). While performing a request, the IoT device synchronizes with the scheduler using event Demand!. The IoT device then waits to be scheduled to a fog at location Ready. Whenever scheduled, the device waits until the request is fully satisfied, upon event Terminate[]?, after which it can start another request. Function Expired(Req) calculates when a request is satisfied, given the performance characteristics of the fog and the budget requested by the IoT device.
To simulate the learning process, we use dynamic priorities for the allocation of fogs to different request types. The priority of allocating a fog F to a request type changes on the fly according to the dynamics of the system. Initially all priorities are set to zero (Prio(ReqT, F, 0) = 0), meaning that there is no preference in allocating a given fog to a given request type. For a given request type ReqT, if a fog F has recently been used many times to satisfy requests of type ReqT, then the priority of allocating F to that request type increases over time. Otherwise, the priority is decreased on each time interval of length delta if F has not been allocated to any request of the concerned type during the last interval. Fig. 5 shows the learning model, where for each time interval delta, function All_Prio(delta) calls the aforementioned function Prio() over all potential fogs and request types.
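The priority update described above can be sketched as follows; this is our own simplified rendering (unit increments and a floor at zero are assumptions), not the actual Uppaal function.

```python
# Hedged sketch of one All_Prio(delta) step: priorities grow for
# (request type, fog) pairs served in the last interval and decay
# otherwise. Unit step sizes and the zero floor are our assumptions.

def update_priorities(prio, served, fogs, req_types):
    """prio: dict (ReqT, F) -> int; served: pairs allocated last interval."""
    for rt in req_types:
        for f in fogs:
            if (rt, f) in served:
                prio[(rt, f)] = prio.get((rt, f), 0) + 1
            else:
                prio[(rt, f)] = max(0, prio.get((rt, f), 0) - 1)
    return prio

p = update_priorities({}, {("compute", "F1")}, ["F1", "F2"], ["compute"])
print(p)  # {('compute', 'F1'): 1, ('compute', 'F2'): 0}
```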
When receiving a request from an IoT device, our scheduler first considers fogs having high priorities for the allocation to that request type. If two fogs have the same priority to serve a request and both have a sufficient available resource budget, then the scheduler considers the neighborhood attribute, where the closer fog gains the allocation. The allocation also considers resource availability at the fog level: for each fog fg, the scheduler checks whether fg has sufficient power and resources for the requested tuple. The following function checks whether a fog F satisfies a given request R issued at time instant t.

Satisfies(F, R, t) = True if budget_F(X, t) ≥ budget_R(X, t) for every resource type X requested by R; False otherwise.
Function budget(X, t) returns the amount requested by R for a resource type X at time instant t. We overload this function to also return the available amount of a given resource type (|R_i|) in a fog at time instant t. The allocation of a fog to satisfy a request is performed via the following function:

Allocate(F, R, t) = True if Satisfies(F, R, t) holds and F is the preferred fog for R (highest priority for the request type, with ties broken by Near(F, IoT(R))); False otherwise.
Function IoT(R) returns the actual IoT device that performed request R, so that we can compute the neighborhood Near(F_i, IoT(R)) between the request issuer and the fog. Fig. 6 shows our scheduler model. Whenever a device issues a request, it asks the scheduler to reserve a fog upon the event Demand[]?. If there exists a fog satisfying function Allocate() for the given request, that fog is immediately reserved using event Reserve[]! from location Scheduling. If such a fog does not exist, at location Not_Optimal the scheduler searches for a fog that satisfies the request while being the one most used recently to serve such a request type. Otherwise, the scheduler searches for the fog nearest to the IoT device. If none of these options exists, the scheduler mediates the request to the cloud.
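The scheduler's fallback chain can be sketched as a single lexicographic selection; this is a hedged simplification under our own assumptions (the budget check mirrors Satisfies(), and priority, recent usage and proximity are folded into one sort key), not the Uppaal model itself.

```python
# Hedged sketch of the scheduler: priority first, then recent usage for
# the request type, then proximity; else the cloud. All field names
# ("avail", "budget", "km") are our own illustration.

def satisfies(fog, req):
    """True when the fog's available budget covers every resource in req."""
    return all(fog["avail"].get(x, 0) >= amt for x, amt in req["budget"].items())

def schedule(req, fogs, prio, usage):
    """Pick a fog id by (priority, recent usage, proximity); else 'cloud'."""
    ok = [f for f in fogs if satisfies(f, req)]
    if not ok:
        return "cloud"  # no fog can serve the request
    rt = req["type"]
    best = max(ok, key=lambda f: (prio.get((rt, f["id"]), 0),
                                  usage.get(f["id"], 0),
                                  -f["km"]))
    return best["id"]
```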
When a request's processing terminates, the corresponding fog resources are released and become available. We omit describing the termination function as it is trivial. Fig. 7 shows the fog model. It is simple and consists of receiving a Reserve[] event from the scheduler and synchronizing with the requesting IoT device to start supplying resources. The fog updates the availability of its resources according to the current request budget, both when it starts and when it terminates. The efficiency property we have formally analyzed using model checking is the following: Property 1. Each request from an IoT device is always satisfied by the nearest fog having sufficient budget and experience in satisfying such a type of request. Formally, we write:

Performance evaluation
The simulation environment is set up using CloudSim (Calheiros et al., 2011) and iFogSim (Gupta et al., 2017). The arguments for using CloudSim and iFogSim in fog/edge computing environments are detailed in (Ficco et al., 2017). The modeling of the environment is inspired by the Azure Cloud Service (Chappell et al., 2008) and the Amazon S3 service (Palankar et al., 2008). The analysis is performed based on relevant parameters such as data generation from IoT devices (Chandio et al., 2014), data type, internal processing time, total processing time, queue delay, propagation delay, power consumption, available resources and the resources required by a tuple, cooperation of fog nodes, and the distance in kilometers between nodes. The cloud services in the simulation setup are presented in Table 2, and the cloud data centers considered in the simulation environment are shown in Table 3. Details about the dataset, the deployment of fog and IoE devices, the specifications of the fog servers and the performance evaluation metrics are given below.

Dataset
Fig. 8 presents the categories of the synthetic dataset used in the simulation (Iot-compute-dataset and http, 2019). We considered 30 and 50 thousand tuples to perform the experiments. The x-axis shows the different types of tuples and the y-axis presents the number of tuples. This dataset is used to evaluate the performance of the proposed scheme.

Dataset characteristics
The dataset consists of multiple tuples. These tuples contain various properties, such as size, bandwidth, MIPS, and memory. The tuple size refers to the required size of the tuple; it matters for memory consumption at the fog server, and processing is performed based on the tuple size. The bandwidth of a tuple defines the bandwidth it requires to reach its destination fog server; every tuple has specific bandwidth requirements and should be handled accordingly. The tuple MIPS refers to the processing requirements of a tuple to be executed at a fog server. The memory of a tuple is the memory (in megabytes) it requires at a fog server. The dataset contains jobs of a heterogeneous nature with the following information: name of the tuple, tuple ID, tuple size, MIPS (required by the tuple), bandwidth, location (coordinates), type of IoE job (for instance, small textual tuples, medical tuples, abrupt tuples, etc.), IsReversed (if not served by any resource), IsServed (a boolean value defining whether the tuple is served or not by any available cloud/fog resource), IsServed-byCloud (whether the tuple is served by the cloud resource), type of the device (sensor, mobile, actuator, etc.), queue delay (the time a task rests in the queue), processing time, etc. The dataset can be used in varied computing environments.
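A single dataset record, as described above, could be modeled as follows; this is a hedged sketch using a subset of the listed fields, and the exact field names and types in the released dataset may differ.

```python
# Hedged sketch of one dataset record using fields listed in the text.
# Field names/types are illustrative, not the dataset's exact schema.
from dataclasses import dataclass

@dataclass
class Tuple:
    tuple_id: int
    name: str
    size_mb: float         # memory footprint at the fog server
    mips: int              # processing requirement
    bandwidth_mbps: float  # bandwidth needed to reach the fog server
    location: tuple        # (latitude, longitude) of the source device
    job_type: str          # e.g. "small text", "medical", "abrupt"
    device_type: str       # sensor, mobile, actuator, ...
    is_served: bool = False
    is_served_by_cloud: bool = False
    queue_delay_ms: float = 0.0

t = Tuple(1, "t1", 0.5, 300, 2.0, (33.6, 73.0), "small text", "sensor")
print(t.is_served)  # False
```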

Geographic deployment
In the simulation setup, fog servers and IoE devices are deployed randomly at varied latitudes and longitudes within the cities of Rawalpindi and Islamabad. The specifications of the fog servers are presented in Table 4. Cloud data centers are deployed in Singapore and the United States of America (USA).

Evaluation metrics
The following evaluation metrics have been considered to evaluate the performance of the proposed scheme.
Processing Time: The time taken by a tuple/job for processing at a fog server is termed the processing time. We compute the total processing time T_p using Equation (1), where P_i(t) is the power of the i-th tuple and CP stands for the current power of a fog server fs.
Response Time: The round-trip time of a tuple from its generation at the source until it is returned after completion. Equation (2) computes the response time as RT = T_prop + T_proc + T_q, where RT stands for the response time, T_prop is the propagation time, T_proc is the processing time and T_q is the queue delay.
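As a worked example of the response-time decomposition (Equation (2)), with illustrative values in milliseconds:

```python
# Worked example of Equation (2): response time as the sum of
# propagation time, processing time and queue delay. Values are
# illustrative, not taken from the experiments.

def response_time(t_prop, t_proc, t_queue):
    """RT = T_prop + T_proc + T_q, all in milliseconds."""
    return t_prop + t_proc + t_queue

print(response_time(t_prop=12.0, t_proc=30.5, t_queue=7.5))  # 50.0
```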

Policies
Here, the following IoE data generation policies are considered to evaluate the proposed scheme. Random policy: this policy sends tuples to fog servers arbitrarily, without following their order of generation. First-Come-First-Served (FCFS): this refers to the synchronous forwarding of tuples to fog servers (Bittencourt et al., 2017) according to their order of generation from the IoE devices. Shortest-Job-First (SJF): this policy sends small tuples to fog servers at a higher priority than other tuples.
Fog Servers Utilization: Fig. 9 shows the utilization of Fog resources. Fig. 9a shows the simulation time (in seconds) on the x-axis, while the y-axis represents the number of tuples a Fog server is serving. Each Fog server has a specific computing capacity in terms of MIPS, and each incoming job likewise has a specific size in MIPS. The job size, mean inter-arrival time and the change in the number of tuples directly affect the utilization of a Fog device. Fig. 9b shows the utilization of a Fog server over 100 s of simulation time. Fig. 9c shows the utilization of 7 Fog servers, with fog devices on the x-axis and their percent utilization on the y-axis. In the case of LRFC, the utilization of the Fog servers ranges from 86.295% at minimum to 90.435% at maximum. Server utilization is a key metric for determining resource utilization in a Fog environment and hence plays a main role in resource management. The utilization of Fog resources has a direct impact on their energy consumption: under-utilization of Fog servers results in a wastage of resources, so the efficient utilization of distributed resources is crucial to reduce energy consumption and enhance performance in a fog environment.
Power and Energy Consumption: Fig. 10a and 10b exhibit the power and energy consumption of fog resources, respectively. Two types of power consumption occur in fog servers: static power consumption and dynamic power consumption.
When a fog server is powered on with no load, it consumes a constant amount of power required by its hardware and software for basic functions; this is called static power. The rest of its power consumption is proportional to its utilization and is known as dynamic power consumption. In the experiments, both types of power consumption are computed. Fig. 10a presents the average power consumed by fog resources under the different task scheduling policies considered in our simulation environment. The proposed policy exhibits the lowest power consumption in comparison to the rest of the policies. This is because LRFC allocates resources according to the requirements of the jobs, which ultimately impacts the overall power consumption of the fog resources. Efficient task scheduling results in reduced power consumption in a distributed environment (Lee and Zomaya, 2010). Similarly, the energy consumption (the power consumed over a specific time period) is also considerably decreased when resources are allocated efficiently. Most state-of-the-art policies concentrate on load balancing, which improves the quality of service (QoS) but somewhat compromises energy consumption. Moreover, allocating resources without considering the nature of the jobs results in inefficient resource utilization, which ultimately increases power and energy consumption; likewise, the overall computation time increases when inappropriate resources are allocated. The LRFC policy selects the resources that perform best for specific types of jobs, which decreases the computation time and consequently minimizes the energy consumed by the fog servers. The FCFS policy exhibits the highest energy consumption, as it simply follows the flow of traffic and fails to select the best resources. The shortest-job-first policy performs better than FCFS, as it serves the smaller jobs first.
Consequently, the smaller jobs do not wait and are served in a timely manner. In FCFS, smaller and bigger jobs are served together, and the resources occupied by bigger jobs increase the overall computation time and hence the energy consumption. The random policy balances load across the overall system and performs better than FCFS and SJF. LRFC shows the best results because of its desirable task scheduling according to the job requirements. Fig. 11a shows the average propagation delay in cloud-only and fog-cloud environments. The fog-cloud environment is the clear winner in terms of propagation delay: the remote deployment of cloud resources creates a large latency for jobs generated far from the cloud infrastructure, whereas Fog sits closer to the IoE devices and significantly reduces the delay (Mahmud et al., 2018).

Fog Servers Utilization:
Fig. 11b and c show a comparison of the end-to-end delay of IoE jobs served in cloud-only and fog-cloud environments. The y-axis of both figures presents the end-to-end delay in milliseconds, reflecting the usage of network and processing resources by IoE jobs. Since the cloud is remote from the end devices, jobs traverse the entire network and consume all underlying network resources and bandwidth, whereas fog is closer to the devices where jobs are generated, resulting in lower utilization of network resources. In the given scenario, the majority of requests are served by fog, which not only results in lower latency but also reduces the traffic burden on the cloud.
Fig. 12a presents the processing delay incurred at the fog servers while serving various IoE requests under different task scheduling policies. The x-axis presents the task scheduling policies, whereas the y-axis shows the processing time in milliseconds. The strategy for allocating available fog resources to incoming jobs has an important impact on the overall processing delay: if jobs are not placed optimally on the fog resources, resource contention arises and increases the processing delay, whereas selecting the best resource according to the job requirements improves system performance by decreasing the delay. Fig. 12a shows that when resources are allocated dynamically and intelligently, the server-level processing delay is reduced. Hence, LRFC outperforms the other policies considered in our simulations in terms of overall average processing delay.
Fig. 12b presents the average propagation time taken by all IoE jobs while traveling through the network. It includes the transmission time on links, at routers and switches, and in fog-to-fog communication when fog servers cooperate. Fig. 12b clearly depicts that the LRFC policy yields better results than Random and FCFS; however, a slight increase is noted compared to the SJF policy. Initially, LRFC takes some time to mature, and jobs are assigned randomly to the fog resources; it becomes smarter over time. This warm-up period may have a negative impact on performance in the initial stages.
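The "random at first, smarter over time" behaviour described above resembles an epsilon-greedy learning scheme. The sketch below is one plausible realisation under that assumption; the node names, delay values and reward model are hypothetical, and LRFC's actual learning rule is not reproduced here.

```python
import random

# Epsilon-greedy sketch of a learn-then-exploit scheduler: early picks
# are mostly random (exploration); as epsilon decays, the scheduler
# exploits the node with the lowest observed average delay.
# Node names and delays are hypothetical, not the paper's model.

def pick_node(avg_delay, epsilon):
    """Explore with probability epsilon, else pick the lowest-delay node."""
    if random.random() < epsilon:
        return random.choice(list(avg_delay))
    return min(avg_delay, key=avg_delay.get)

random.seed(0)
true_delay = {"fog1": 5.0, "fog2": 12.0, "fog3": 9.0}  # hidden ground truth
avg_delay = {n: 0.0 for n in true_delay}               # running estimates
counts = {n: 0 for n in true_delay}

for step in range(200):
    eps = max(0.05, 1.0 - step / 100)         # decay: random early, greedy later
    node = pick_node(avg_delay, eps)
    observed = true_delay[node] + random.uniform(-1.0, 1.0)  # noisy feedback
    counts[node] += 1
    avg_delay[node] += (observed - avg_delay[node]) / counts[node]

best = min(avg_delay, key=avg_delay.get)
print(best)   # converges to "fog1", the truly fastest node
```

The early random assignments correspond to LRFC's warm-up phase; once the estimates stabilise, the scheduler consistently routes jobs to the best node.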
The resulting end-to-end delay depends on the efficiency of the task scheduling algorithm (Mahmud et al., 2018). Fig. 12c shows the performance of the different task scheduling algorithms in a fog-cloud environment in terms of end-to-end delay. The x-axis shows the policies considered in our simulation and the y-axis presents their corresponding end-to-end delay in milliseconds. The proposed LRFC policy exhibits the lowest end-to-end delay compared to the FCFS, Random and SJF policies. As described earlier, efficient task scheduling has a crucial impact on the end-to-end delay. When resources are allocated according to the job requirements, taking into account other important factors such as the availability, capacity and proximity of resources (as done in LRFC), the average end-to-end time is reduced. Consequently, the QoE of the system is improved. Additionally, assigning the best resource to each incoming job also reduces extra delays such as queuing and migration delays.
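The idea of electing a resource by job requirements together with availability, capacity and proximity can be sketched as a weighted score. The weights, field names and node attributes below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical weighted-score resource selector: a node must have enough
# free capacity for the job, and among feasible nodes the one with the
# best mix of availability, spare capacity and proximity wins.
# Weights and node data are illustrative assumptions.

def score(node, job_size, w_avail=0.4, w_cap=0.4, w_prox=0.2):
    if node["free_mips"] < job_size:        # infeasible: cannot host the job
        return float("-inf")
    return (w_avail * node["availability"]
            + w_cap * node["free_mips"] / node["total_mips"]
            + w_prox * (1.0 / (1.0 + node["distance_ms"])))

nodes = [
    {"name": "fog1", "availability": 0.90, "free_mips": 500, "total_mips": 1000, "distance_ms": 2},
    {"name": "fog2", "availability": 0.60, "free_mips": 900, "total_mips": 1000, "distance_ms": 10},
    {"name": "fog3", "availability": 0.95, "free_mips": 100, "total_mips": 1000, "distance_ms": 1},
]
best = max(nodes, key=lambda n: score(n, job_size=300))
print(best["name"])   # "fog1": feasible, available, and close by
```

Note that fog3, despite being closest and most available, is excluded outright because it lacks the capacity for the job, mirroring how LRFC avoids placements that would cause resource contention.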
Finally, a comparative analysis of the considered evaluation metrics is performed using 30k and 50k IoT jobs, as shown in Fig. 13. It can be observed that the proposed policy exhibits only a slight difference across the various performance evaluation metrics used in the paper, even with a considerable increase in IoT jobs. Consequently, the system shows normal behaviour, which confirms the scalability of the proposed approach in terms of the number of IoE jobs.
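This scalability argument amounts to checking that each metric changes only slightly as the workload grows from 30k to 50k jobs. A minimal sketch of that check follows; the metric names and values are placeholders, not the paper's measurements.

```python
# Hypothetical scalability check: the system scales well if every
# metric grows by only a small fraction when the load increases from
# 30k to 50k jobs. All numbers are illustrative placeholders.

def relative_change(at_30k, at_50k):
    """Fractional change of a metric as the workload grows."""
    return (at_50k - at_30k) / at_30k

metrics = {  # (value at 30k jobs, value at 50k jobs)
    "avg_processing_delay_ms": (12.0, 12.9),
    "avg_propagation_delay_ms": (4.0, 4.2),
    "energy_kwh": (30.0, 31.5),
}
scalable = all(relative_change(a, b) < 0.10 for a, b in metrics.values())
print(scalable)   # True: every metric grew by less than 10%
```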

Conclusion and future work
This paper has explored task scheduling in fog computing environments in depth. An adaptive and intelligent task scheduling technique, Learning Repository Fog-Cloud (LRFC), has been proposed to improve QoS (i.e., response time and processing time of tuples) and energy consumption (i.e., power consumption of fog devices). The authors have proposed a smart soft layer between IoE/IoT devices and fog nodes that can be extended to implement various types of learning-based policies. The proposed deployment model exhibits scalability and thus avoids performance bottlenecks. The proposed approach has been thoroughly evaluated using extensive simulations and verified formally. Comparison of the proposed approach with the current state of the art shows promising results in terms of both energy efficiency and QoS. Our future work includes using the proposed smart layer to experiment with various intelligent learning-based techniques, in combination with varied state-of-the-art scheduling policies, mainly to assess large-scale distributed computational paradigms (i.e., Edge, Fog, and Cloud).

Declaration of competing interest
The authors declare that there is no conflict of interest.