
Abstract

Urban air mobility is a new mode of transportation that aims to provide fast and safe travel by utilizing the low-altitude airspace. This goal cannot be achieved without new flight regulations that ensure the safe and efficient allocation of flight paths to a large number of vertical takeoff and landing aerial vehicles. Such rules should also allow the effective capacity of the low-altitude airspace to be estimated for planning purposes. Path planning is a vital subject in urban air mobility, as it could enable a large number of UAVs to fly simultaneously in the airspace without the risk of collision. Since urban air mobility is a novel concept, authorities are still drafting new flight rules applicable to it. In this study, an autonomous UAV path planning framework is proposed using a deep reinforcement learning approach based on the deep deterministic policy gradient (DDPG) algorithm. The objective is for a self-trained UAV to reach its destination in the shortest possible time in an arbitrary environment by adjusting its acceleration, while avoiding collisions with dynamic or static obstacles and avoiding prior-permission zones along its path. The reward function is the determinant factor in the training process; thus, two different reward function compositions are compared, and the chosen composition is used to train the UAV by implementing the RL algorithm in Python. Finally, numerical simulations investigate the success rate of UAVs in different scenarios, providing an estimate of the effective airspace capacity.
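As a rough illustration of the kind of reward composition the abstract describes (progress toward the destination, penalties for collisions and for entering prior-permission zones, and a per-step cost that favors short flight times), a minimal Python sketch is given below. The function names, weights, and penalty values are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical reward sketch for a DDPG-trained UAV whose action is an
# acceleration command. Weights and penalties are assumed for illustration;
# the paper compares two reward compositions that are not reproduced here.

GOAL_REWARD = 100.0          # bonus for reaching the destination
COLLISION_PENALTY = -100.0   # penalty for hitting a static or dynamic obstacle
ZONE_PENALTY = -50.0         # penalty for entering a prior-permission zone
TIME_PENALTY = -0.1          # small per-step cost to encourage short flight times
PROGRESS_WEIGHT = 1.0        # weight on the reduction in distance to the goal


def reward(prev_pos, pos, goal, collided, in_permission_zone, goal_radius=1.0):
    """Compute a shaped reward for one simulation step (illustrative only)."""
    if collided:
        return COLLISION_PENALTY
    if in_permission_zone:
        return ZONE_PENALTY
    dist_prev = np.linalg.norm(goal - prev_pos)
    dist_now = np.linalg.norm(goal - pos)
    if dist_now < goal_radius:
        return GOAL_REWARD
    # Reward shaping: positive when the UAV moves closer to its destination.
    return PROGRESS_WEIGHT * (dist_prev - dist_now) + TIME_PENALTY


# Example: one step moving toward a goal at (10, 10, 5) with no violations.
r = reward(prev_pos=np.array([0.0, 0.0, 5.0]),
           pos=np.array([0.5, 0.5, 5.0]),
           goal=np.array([10.0, 10.0, 5.0]),
           collided=False,
           in_permission_zone=False)
print(f"step reward: {r:.3f}")
```

A dense, shaped reward of this general form is a common choice for continuous-control agents such as DDPG, since sparse terminal rewards alone make exploration in large 3D airspaces difficult; the actual trade-off between the terms is what the paper's comparison of reward compositions evaluates.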
