Electrical and Computer Engineering ETDs

Publication Date

Fall 12-29-2022


With the advent of Artificial Intelligence (AI), the notion of autonomy, in the sense of acting and thinking based on personal experience and judgment, has paved the way toward an autonomous decision-making future. This future can address the complex domain of interdependent computing systems, whose main challenge is that their mutual interactions produce unpredictable and often unstable outcomes. It is crucial to envision and design this AI-driven autonomy for interdependent computing systems, which cover a variety of use cases ranging from the Internet of Things (IoT) to cybersecurity. This can be achieved by cloning the human decision-making process: before humans decide how to act, they sense their unknown and stochastic environment, perform actions, and finally assess the perceived feedback. Each human subjectively evaluates the feedback as satisfactory or not based on his or her personal behavioral profile and reasoning. The repeated iteration of these steps constitutes the human learning process. Consequently, the core idea is to inject human cognizance into interdependent computing systems, transforming them into AI-enabled decision-making agents that mimic the rational behavioral attributes of humans and autonomously optimize their subjective criteria.

The rapid growth of interdependent computing systems, such as Unmanned Aerial Vehicles (UAVs) or Multi-Access Edge Computing (MEC) servers, results in huge amounts of data and strict Quality of Service (QoS) requirements. When these systems act autonomously, they exhibit competitive behavior, since each system selfishly aims at optimizing its own subjective criteria. This introduces the concept of interactive decision-making in non-cooperative environments, where the feedback each system receives depends on the potentially conflicting actions of the rest.
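The sense-act-assess loop described above can be sketched as a minimal trial-and-error agent. This is an illustrative sketch only: the `Agent` class, its action names, and the satisfaction threshold standing in for a subjective behavioral profile are all hypothetical, not taken from the dissertation.

```python
import random

class Agent:
    """Hypothetical agent mimicking the sense-act-assess loop."""
    def __init__(self, actions, satisfaction_threshold=0.5):
        self.actions = actions
        self.estimates = {a: 0.0 for a in actions}  # perceived value of each action
        self.counts = {a: 0 for a in actions}
        self.threshold = satisfaction_threshold  # stand-in for a behavioral profile

    def act(self, epsilon=0.1):
        # Explore the stochastic environment occasionally, otherwise exploit.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.estimates[a])

    def assess(self, action, feedback):
        # Subjective evaluation: satisfactory iff feedback exceeds the threshold.
        satisfied = feedback >= self.threshold
        self.counts[action] += 1
        # Incremental sample-average update of the perceived value.
        self.estimates[action] += (feedback - self.estimates[action]) / self.counts[action]
        return satisfied

random.seed(0)
agent = Agent(actions=["a0", "a1"])
for _ in range(500):
    a = agent.act()
    # Stochastic environment: action "a1" yields higher expected feedback.
    fb = random.gauss(0.8 if a == "a1" else 0.3, 0.1)
    agent.assess(a, fb)

best = max(agent.actions, key=lambda a: agent.estimates[a])
print(best)
```

Repeating the loop drives the agent's perceived values toward the true expected feedback of each action, which is the learning process the abstract attributes to humans.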
Therefore, we utilize Game Theory to efficiently capture these strategic interactions among the interdependent computing systems within non-cooperative environments, and we prove that stable solutions, i.e., Equilibrium points, exist. The Equilibrium points are considered stable because no system has a strategic incentive to change its own action unilaterally. To determine these Equilibria in a distributed manner, we deploy Reinforcement Learning (RL), which enables the autonomous interdependent computing systems to act intelligently and learn in a stochastic environment by trial and error, using the feedback from their own actions and experiences. Furthermore, the traditional RL methodology is enriched with the technique of reward reshaping to account for the Labor Economics-like arrangements among the autonomous interdependent computing systems via Contract Theory, as well as for their behavioral profiles via a Bayesian belief model. The concurrent utilization of Game Theory and Reinforcement Learning with reward reshaping is a step towards Self-Aware Artificial Intelligence (SAAI). We demonstrate that it has great potential to serve as the main component for building AI-based autonomous decision-making interdependent computing systems and can be effectively utilized in various application domains.
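As an illustration of the stability notion above, a minimal sketch, assuming a hypothetical two-player Prisoner's Dilemma-style payoff matrix (not one of the games studied in the dissertation), that finds Equilibrium points by checking that no player benefits from a unilateral deviation:

```python
import itertools

# Payoff matrices for a hypothetical 2-player non-cooperative game
# (Prisoner's Dilemma-style): payoffs[player][a0][a1] is that player's
# payoff when player 0 plays a0 and player 1 plays a1 (0 = cooperate, 1 = defect).
payoffs = [
    [[3, 0], [5, 1]],  # player 0
    [[3, 5], [0, 1]],  # player 1
]

def is_equilibrium(a0, a1):
    """Stable iff no player can improve by changing its own action unilaterally."""
    best0 = max(payoffs[0][d][a1] for d in (0, 1))  # player 0's best deviation
    best1 = max(payoffs[1][a0][d] for d in (0, 1))  # player 1's best deviation
    return payoffs[0][a0][a1] >= best0 and payoffs[1][a0][a1] >= best1

equilibria = [p for p in itertools.product((0, 1), repeat=2) if is_equilibrium(*p)]
print(equilibria)
```

For these payoffs, mutual defection (1, 1) is the unique pure-strategy Equilibrium: a distributed RL procedure of the kind described above would be expected to converge to such a point, since it is the only action profile from which neither system wants to deviate.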

Document Type




Degree Name

Computer Engineering

Level of Degree


Department Name

Electrical and Computer Engineering

First Committee Member (Chair)

Eirini Eleni Tsiropoulou

Second Committee Member

Mark Gilmore

Third Committee Member

Jim Plusquellic

Fourth Committee Member

Symeon Papavassiliou