Electrical and Computer Engineering ETDs

Publication Date

Summer 6-10-2021


Artificially intelligent autonomous systems are becoming increasingly ubiquitous in daily life. Mobile devices, for example, provide machine-generated intelligent support to humans with various degrees of autonomy, and are a key part of the recent autonomous revolution. Autonomous intelligent systems aim to understand and interact with their users in a timely manner, while many of them operate under constrained resources. At the same time, the average person does not act in a formulaic, risk-neutral manner, but instead exhibits risk-aware attitudes when performing a task that involves sources of uncertainty. When humans make decisions, they explore their surroundings, understand the emerging risks, perform actions, and evaluate the perceived outcomes. What a person considers a satisfactory outcome is subject to her own reasoning, behavior, and risk capacity. Therefore, an autonomous intelligent system should be enriched with human awareness: it should account for, and sometimes mimic, its owner's cognitive behavior and behavioral patterns, so that the owner's subjective satisfaction is optimized and personalized service is provided. Furthermore, the proliferation of autonomous systems, e.g., mobile or wearable devices, boosts data volume and service demand. Each autonomous system aims to optimize its owner's experience in a self-centric manner, and in several application domains its actions affect the other users' experience and decision-making. To this end, the users' subjective goals generate conflicts, and the autonomous intelligent systems are expected to make decisions in non-cooperative environments. In this thesis, we investigate and introduce distributed autonomous decision-making frameworks, focusing on motivating application domains that exhibit the aforementioned challenges.
We utilize Game Theory to study the strategic interaction of the autonomous intelligent systems in non-cooperative environments and to address the need for non-centralized, scalable solutions. We build autonomous intelligent decision-making agents through Reinforcement Learning (RL), a popular statistical Artificial Intelligence (AI) technique for controlling unknown environments under partial and incomplete information. RL introduces the concept of an agent that learns to interact with an unknown environment by performing actions driven by particular observations and by evaluating the resulting feedback. We extend the standard RL setting through reward reshaping to capture the risk-aware characteristics that users exhibit in real life. We incorporate Prospect Theory, a model from behavioral economics that describes how individuals choose between probabilistic alternatives when risk is involved and the probabilities of the different outcomes are unknown. In the considered non-cooperative environments, we seek distributed solutions, i.e., Equilibrium points, where no autonomous intelligent agent has an incentive to change its decision unilaterally. Our investigation leads to autonomous intelligent decision-making frameworks that could serve as a step towards Artificial General Intelligence (AGI), where computing systems learn to perform a task in a human-centric manner, i.e., similarly to how a person would complete the task in real life.
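To make the prospect-theoretic reward reshaping idea concrete, the sketch below applies the classic Kahneman-Tversky value function to a raw RL reward before the agent learns from it. This is a minimal illustration under assumed parameter values (alpha, beta, lambda from the prospect-theory literature), not the exact model used in the thesis; the function and variable names are hypothetical.

```python
def prospect_value(outcome, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theoretic value function: concave for gains, convex and
    steeper for losses (loss aversion), measured against a reference point.
    Parameter values are illustrative assumptions, not the thesis's model."""
    x = outcome - reference
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)     # losses loom larger than equal gains

# A risk-aware agent would reshape its raw environment reward before
# updating its policy or value estimates:
raw_reward = -1.0                    # e.g., feedback from a failed action
shaped_reward = prospect_value(raw_reward)   # amplified by loss aversion
```

Reshaping the reward this way leaves the learning algorithm itself unchanged: the agent still maximizes expected (shaped) return, but its behavior now reflects the user's subjective, risk-aware valuation of outcomes rather than a risk-neutral one.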


Decision-Making, Reinforcement Learning, Game Theory, Prospect Theory

Document Type




Degree Name

Computer Engineering

Level of Degree


Department Name

Electrical and Computer Engineering

First Committee Member (Chair)

Eirini Eleni Tsiropoulou

Second Committee Member

Michael Devetsikiotis

Third Committee Member

Mark Gilmore

Fourth Committee Member

Symeon Papavassiliou