Date on Master's Thesis/Doctoral Dissertation
8-2024
Document Type
Doctoral Dissertation
Degree Name
Ph.D.
Department
Electrical and Computer Engineering
Degree Program
Electrical Engineering, PhD
Committee Chair
Li, Hongxiang
Committee Member
Zurada, Jacek
Committee Member
Inanc, Tamer
Committee Member
Hu, Changbing
Author's Keywords
Advanced air mobility; urban air mobility; reinforcement learning; deep learning; wireless communication
Abstract
Advanced air mobility (AAM), which envisages a safe and efficient aviation transportation system, has drawn significant attention as a means of supporting the increasing mobility demand in metropolitan areas. Reliable communication services for AAM aerial vehicles (AVs) are crucial for ensuring flight safety. This dissertation explores three research topics on communication resource allocation problems in AAM applications.

The first topic, addressed in Chapter II, investigates the joint velocity selection and spectrum allocation problem for AAM applications to enhance spectrum utilization efficiency (SUE). In the AAM scenario, multiple AVs travel along predefined paths for passenger and cargo deliveries. Because AAM aims to provide fast and safe deliveries, SUE is defined as the number of completed missions per unit of time per spectrum channel. The joint optimization problem is formulated as a multi-agent Markov game that minimizes the total travel time of all AVs. To solve it, we propose VD3QN, a multi-agent deep reinforcement learning algorithm for discrete actions, alongside heuristic greedy and orthogonal multiple access solutions as baselines. Extensive simulation results demonstrate that VD3QN achieves superior performance in minimizing mission completion time.

Building on the first topic, Chapter III addresses the joint communication resource allocation (spectrum and transmit power) and velocity selection problem in AAM applications. The objective is to minimize the AVs' mission completion time and communication outage duration, subject to safety and resource constraints. This optimization problem is challenging due to its non-convex, multi-stage combinatorial nature. To address it, we formulate the problem as a Markov game and propose a multi-agent reinforcement learning algorithm that combines the value decomposition network (VDN) and the parameterized deep Q-network (P-DQN) to learn optimal hybrid discrete-continuous actions. Extensive simulation results validate the effectiveness of the proposed solution.

Recently, cellular-connected AAM (cAAM) has been proposed as a promising way to provide communication services for urban air transportation systems, integrating each AV into the cellular system as an aerial user that shares the spectrum with existing terrestrial users (TUs). The third topic, discussed in Chapter IV, examines the dynamic spectrum allocation problem in cAAM applications, where multiple AVs coexist with multiple TUs. Given the distinct objectives of AVs and TUs, we define a utility function as the weighted sum of the AVs' mission completion time, the TUs' bidirectional achievable rate, and their communication outage duration. The goal is to maximize this utility by jointly optimizing the spectrum allocation of both user types and the velocity selection of the AVs. We propose a multi-agent AVDQN algorithm, which uses a dueling double deep Q-network (D3QN) as the decision engine for each agent and incorporates an attention mechanism and VDN to learn optimal joint actions. Extensive simulations show the effectiveness of AVDQN in addressing this complex problem.
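The abstract defines two scalar objectives in words: the spectrum utilization efficiency of Chapter II and the utility function of Chapter IV. A minimal notational sketch of both is given below; the symbols, index sets, and sign convention on the weights are assumptions made here for illustration and are not taken from the dissertation itself.

```latex
% SUE (Chapter II): completed missions per unit of time per spectrum channel
\mathrm{SUE} \;=\; \frac{M_{\mathrm{completed}}}{T_{\mathrm{total}} \cdot N_{\mathrm{ch}}}

% Utility (Chapter IV): weighted sum over AVs (set A) and TUs (set T); the
% time-like terms are assumed to enter with negative sign so that maximizing U
% shortens mission completion time and outage duration while raising TU rates.
U \;=\; -\,w_{1}\sum_{i \in \mathcal{A}} T^{\mathrm{mis}}_{i}
      \;+\; w_{2}\sum_{j \in \mathcal{T}} R^{\mathrm{bi}}_{j}
      \;-\; w_{3}\sum_{k \in \mathcal{A}\cup\mathcal{T}} T^{\mathrm{out}}_{k}
```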
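A recurring ingredient in the three proposed algorithms (VD3QN, the VDN + P-DQN hybrid, and AVDQN) is value decomposition: each agent keeps its own Q-network and the team's joint Q-value is the sum of the agents' individual Q-values. The sketch below illustrates that idea with per-agent dueling Q-networks and a VDN mixer in PyTorch. It is a minimal sketch under assumed observation and action dimensions and layer sizes, not the dissertation's actual architecture or training code.

```python
# Minimal VDN-over-dueling-DQN sketch; dimensions and layer sizes are illustrative.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Per-agent dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        adv = self.advantage(h)
        return self.value(h) + adv - adv.mean(dim=-1, keepdim=True)


class VDNMixer(nn.Module):
    """VDN mixer: the joint Q-value is the sum of the agents' chosen Q-values."""

    def forward(self, agent_qs: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) Q-values of the actions each agent took
        return agent_qs.sum(dim=-1)


if __name__ == "__main__":
    n_agents, obs_dim, n_actions = 3, 8, 5
    nets = [DuelingQNet(obs_dim, n_actions) for _ in range(n_agents)]
    mixer = VDNMixer()

    obs = torch.randn(4, n_agents, obs_dim)                  # batch of joint observations
    actions = torch.randint(0, n_actions, (4, n_agents))     # each agent's chosen action
    per_agent_q = torch.stack(
        [nets[i](obs[:, i]).gather(1, actions[:, i:i + 1]).squeeze(1)
         for i in range(n_agents)],
        dim=-1,
    )
    q_total = mixer(per_agent_q)   # trained against a shared team TD target
    print(q_total.shape)           # torch.Size([4])
```

The additive mixer lets each agent act greedily on its own Q-values at execution time while the team is still trained on a single shared reward signal, which is why the same decomposition can sit on top of discrete (D3QN) or hybrid (P-DQN) per-agent decision engines.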
Recommended Citation
Han, Ruixuan, "Reinforcement learning assisted communication resources optimization in advanced air mobility." (2024). Electronic Theses and Dissertations. Paper 4425.
Retrieved from https://ir.library.louisville.edu/etd/4425