Solving the Multi-Objective Problem of IoT Service Placement in Fog Computing Using Reinforcement Learning Approaches
The data generated in the Internet of Things (IoT) ecosystem requires continuous and timely processing. Transferring this data to cloud data centers is costly and unsuitable for real-time applications. To increase the speed of service delivery, resources should be placed as close as possible to the user, i.e., at the edge of the network. A paradigm called fog computing was introduced and added as a layer in the IoT architecture to meet this challenge. Fog computing processes and stores IoT data locally, in the vicinity of IoT devices, rather than in the cloud, and can therefore offer lower latency and better quality of service for real-time applications than cloud computing. Although the theoretical foundations of fog computing are in place, the placement of IoT services on fog nodes remains a challenge and has attracted a great deal of research.
In this research, a conceptual computing framework based on cloud-fog control software is proposed to place IoT services optimally. The problem is formulated as an autonomous planning model for managing service requests under resource constraints, taking into account the heterogeneity of applications and resources. To solve the IoT service placement problem, an autonomous evolutionary approach based on reinforcement learning is proposed, with the aim of maximizing the use of fog resources and improving quality of service. An advantage actor-critic algorithm is used as the reinforcement learning method, aiming to maximize the long-term cumulative reward.
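To make the advantage actor-critic idea concrete, the following is a minimal toy sketch (not the paper's actual implementation) of a one-step advantage actor-critic agent that places each incoming IoT service on one of several fog nodes. The node capacities, the load-based feature, and the reward shape (reward spare capacity, penalize overload) are all illustrative assumptions; the structure of the update (critic baseline, advantage-weighted policy gradient) is the standard actor-critic scheme.

```python
import math
import random

random.seed(0)

N_FOG = 4                      # number of fog nodes (illustrative)
CAPACITY = [4, 4, 2, 2]        # per-node capacity in concurrent services (assumed)

theta = [0.0] * N_FOG          # actor: one preference weight per node
value = 0.0                    # critic: scalar baseline estimate of reward
A_LR, C_LR = 0.1, 0.05         # actor / critic learning rates

def softmax(prefs):
    """Turn node preferences into a placement probability distribution."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def step(load):
    """Place one service, observe a reward, do a one-step actor-critic update."""
    global value
    # Preference = learned weight minus current relative load (a cheap feature).
    prefs = [theta[i] - load[i] / CAPACITY[i] for i in range(N_FOG)]
    probs = softmax(prefs)
    node = random.choices(range(N_FOG), weights=probs)[0]

    # Toy reward: +1 if the chosen node has spare capacity (low delay),
    # -1 if it is already saturated.
    reward = 1.0 if load[node] < CAPACITY[node] else -1.0

    # Advantage = reward - baseline; critic moves toward the observed reward,
    # actor moves along the policy-gradient direction weighted by the advantage.
    advantage = reward - value
    value += C_LR * advantage
    for i in range(N_FOG):
        grad = (1.0 if i == node else 0.0) - probs[i]
        theta[i] += A_LR * advantage * grad
    return node, reward

# Simulate a burst of service requests against a static background load;
# node 2 is saturated (load == capacity), so the agent should learn to avoid it.
load = [3, 1, 2, 0]
rewards = [step(load)[1] for _ in range(500)]
print("mean reward over last 100 placements:", sum(rewards[-100:]) / 100)
```

In this sketch the agent quickly drives the placement probability of the saturated node toward zero, which mirrors the paper's objective of maximizing long-term cumulative reward rather than the immediate payoff of a single placement.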
The comparisons show that the proposed reinforcement-learning-enabled framework outperforms state-of-the-art methods from the literature, with improvements of 4.6%, 2.4%, 3.4%, and 1.1% over the FSP-ODMA, SPP-GWO, CSA-FSPP, and GA-FSP methods, respectively.
Experimental studies were performed in a simulated environment using various metrics, including fog utilization, number of executed services, response time, and service delay. The proposed reinforcement-learning-enabled framework outperforms previous works and shows better scalability.

Analyzing parallel heuristic algorithms to find more accurate placements than evolutionary approaches is one direction for future work. We intend to consider new reinforcement learning approaches, such as the Asynchronous Advantage Actor-Critic (A3C) algorithm, together with a long-term cumulative-reward maximization policy for service placement. Future efforts will also explore reinforcement learning approaches for failure recovery in the Cloud-Fog-IoT architecture, where the parallel processing of IoT services can be considered in the placement process.