Date on Master's Thesis/Doctoral Dissertation

12-2024

Document Type

Doctoral Dissertation

Degree Name

Ph.D.

Department

Electrical and Computer Engineering

Degree Program

Electrical Engineering, PhD

Committee Chair

Li, Hongxiang

Committee Member

Zurada, Jacek

Committee Member

Faul, Andre

Committee Member

Baidya, Sabur

Author's Keywords

communications; spectrum; artificial intelligence; reinforcement learning; aviation

Abstract

As aviation operations expand and new participants enter the National Airspace System (NAS), demand for aeronautical communications will rise significantly. This surge is driven by increased air travel and the emergence of Urban Air Mobility (UAM) operations, a subset of Advanced Air Mobility (AAM). UAM aims to provide intra-city transportation of people and cargo using remotely piloted aircraft capable of electric vertical takeoff and landing. The growing dependence on efficient wireless communication underscores the importance of intelligent spectrum allocation and effective airspace management for safe, seamless, and technologically advanced air operations. However, current frequency allocations for NAS operations have become increasingly scarce and inadequate to accommodate the projected growth in demand. Spectrum management techniques and new radio technologies have therefore evolved over time to address the escalating demands of expanding airspace activity. While traditional methods, such as reducing channel bandwidth, have offered some relief, a scalable and sustainable solution is still needed. To address this challenge, this research investigates a novel approach to spectrum management within the NAS that leverages Artificial Intelligence. It employs deep reinforcement learning to optimize airspace operations, with the primary objectives of reducing mission completion time and enhancing safety while respecting constraints on airspace and frequency resources.
The proposed multi-agent deep reinforcement learning system optimizes channel utilization, flight duration, and departure wait times by jointly managing spectrum allocation, vehicle scheduling, and flight speeds. To minimize mission completion time, a Markov Decision Process (MDP) is formulated over variables such as frequency channel availability, signal-to-interference-plus-noise power ratio, aircraft positioning, and flight status. Furthermore, we explore Deep Q-Networks (DQN) with Value Decomposition Networks (VDN) and Variant Distributed Deep Q-Networks (V-D3QN) to improve the system's decision-making capabilities. These techniques enhance the system's ability to learn complex patterns and interactions among agents, yielding improved efficiency, robustness, and performance in dynamic airspace environments.
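To illustrate the value-decomposition idea mentioned above, the following is a minimal tabular sketch, not the dissertation's implementation: the agent count, state/action space sizes, learning rate, and reward are all hypothetical placeholders. Each agent keeps its own Q-values, the team value is their sum, and a single shared temporal-difference error on the team reward updates every agent's chosen entry.

```python
import numpy as np

N_AGENTS = 3    # hypothetical number of aircraft agents
N_STATES = 4    # coarse per-agent channel-availability states (illustrative)
N_ACTIONS = 5   # candidate frequency channels (illustrative)
ALPHA, GAMMA = 0.1, 0.9

# One Q-table per agent; VDN factorizes the team value as their sum.
q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def joint_q(state, actions):
    """Q_tot(s, a) = sum_i Q_i(s_i, a_i) -- the VDN factorization."""
    return sum(q[i][state[i], actions[i]] for i in range(N_AGENTS))

def greedy_actions(state):
    # Because Q_tot is additive, its joint argmax decomposes per agent,
    # so each agent can act greedily on its own table.
    return [int(np.argmax(q[i][state[i]])) for i in range(N_AGENTS)]

def vdn_update(state, actions, team_reward, next_state):
    """Tabular analogue of the VDN TD update on the summed value."""
    target = team_reward + GAMMA * joint_q(next_state, greedy_actions(next_state))
    td_error = target - joint_q(state, actions)
    # The shared TD error is credited to every agent's chosen entry.
    for i in range(N_AGENTS):
        q[i][state[i], actions[i]] += ALPHA * td_error / N_AGENTS

# Demo: a single team-reward update raises the joint value estimate.
s, a = [0, 1, 2], [0, 1, 2]
vdn_update(s, a, team_reward=1.0, next_state=s)
print(round(joint_q(s, a), 3))  # 0.1
```

The additive factorization is what lets each aircraft agent select its channel independently at execution time while still being trained on a single shared objective; the dissertation's DQN/VDN variant replaces these tables with neural networks over the richer state described above.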