Program
Computer Engineering
College
Engineering
Student Level
Doctoral
Start Date
7-11-2019 2:00 PM
End Date
7-11-2019 3:45 PM
Abstract
The advances introduced by Unmanned Aerial Vehicles (UAVs) are manifold and have paved the way for the full integration of UAVs, as intelligent objects, into the Internet of Things (IoT). This paper brings artificial intelligence into the UAVs' data offloading process in a multi-server Mobile Edge Computing (MEC) environment by adopting principles and concepts from game theory and reinforcement learning. Initially, the UAVs autonomously select MEC servers for partial data offloading, based on the theory of stochastic learning automata. A non-cooperative game among the UAVs is then formulated to determine the amount of data each UAV offloads to its selected MEC server, and the existence of at least one Nash Equilibrium (NE) is proven by exploiting the power of submodular games. A best response dynamics framework and two alternative reinforcement learning algorithms that converge to an NE are introduced, and their tradeoffs are discussed. The efficiency and effectiveness of the overall framework are evaluated via modeling and simulation under different operation approaches and scenarios.
Artificial Intelligence Empowered UAVs Data Offloading in Mobile Edge Computing
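To make the first step of the framework concrete, the following is a minimal sketch of a Linear Reward-Inaction (L_R-I) stochastic learning automaton that a single UAV could use for autonomous MEC server selection. It assumes each UAV maintains a probability vector over candidate servers and receives a normalized reward in [0, 1] after each offloading round; the class name, learning rate, and reward model are illustrative and not taken from the paper.

```python
import random

class ServerSelectionAutomaton:
    """Hypothetical L_R-I automaton for a UAV choosing among MEC servers."""

    def __init__(self, num_servers, learning_rate=0.1):
        # Start with a uniform action probability vector over the servers.
        self.probs = [1.0 / num_servers] * num_servers
        self.learning_rate = learning_rate

    def choose_server(self):
        # Sample a server index according to the current probabilities.
        r, cumulative = random.random(), 0.0
        for idx, p in enumerate(self.probs):
            cumulative += p
            if r <= cumulative:
                return idx
        return len(self.probs) - 1

    def update(self, chosen, reward):
        # L_R-I update: reinforce the chosen server proportionally to the
        # reward; the probabilities of the other servers shrink accordingly.
        for idx in range(len(self.probs)):
            if idx == chosen:
                self.probs[idx] += self.learning_rate * reward * (1.0 - self.probs[idx])
            else:
                self.probs[idx] -= self.learning_rate * reward * self.probs[idx]

# Example use with a placeholder environment feedback signal.
automaton = ServerSelectionAutomaton(num_servers=3)
for _ in range(100):
    server = automaton.choose_server()
    reward = random.random()  # stand-in for the UAV's normalized utility
    automaton.update(server, reward)
```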
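The second algorithmic step, best response dynamics over the offloading game, can likewise be sketched as an iterative procedure. The utility below (a logarithmic offloading benefit minus a congestion-dependent cost) and all parameters are assumptions chosen only to show the structure of the iteration; the actual game formulation and convergence guarantees rely on the submodularity properties established in the paper.

```python
import math

def best_response(others_load, capacity, data_size, price):
    # Hypothetical concave utility: log(1 + offloaded data) benefit minus a
    # congestion-dependent cost; maximized here by a simple grid search.
    best_x, best_u = 0.0, -math.inf
    for step in range(101):
        x = data_size * step / 100.0
        congestion = (others_load + x) / capacity
        utility = math.log(1.0 + x) - price * congestion * x
        if utility > best_u:
            best_x, best_u = x, utility
    return best_x

def run_best_response_dynamics(data_sizes, capacity=100.0, price=0.05,
                               tol=1e-3, max_rounds=200):
    # Each UAV repeatedly plays its best response to the others' offloaded
    # loads until no strategy changes by more than the tolerance, i.e. an
    # approximate Nash equilibrium of this illustrative game is reached.
    offloads = [0.0] * len(data_sizes)
    for _ in range(max_rounds):
        max_change = 0.0
        for i, d in enumerate(data_sizes):
            others = sum(offloads) - offloads[i]
            new_x = best_response(others, capacity, d, price)
            max_change = max(max_change, abs(new_x - offloads[i]))
            offloads[i] = new_x
        if max_change < tol:
            break
    return offloads

print(run_best_response_dynamics([10.0, 20.0, 15.0]))
```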