Abstract:
The deployment of multiple Unmanned Aerial Vehicles (UAVs) in constrained environments poses several challenges for trajectory optimization, including target reachability and collision avoidance. In this paper, we formulate multi-UAV navigation in constrained environments as a multi-agent learning problem. We then propose a reinforcement learning-based Safe-MADDPG method to learn safe and cooperative multi-UAV navigation policies in a constrained environment. Safety constraints that handle inter-UAV collisions during navigation are enforced through an additional safety layer that corrects the actions produced by the learned autonomous navigation policies. We implement the proposed approach in the Webots simulator and provide a detailed analysis of the solution. The results demonstrate that the proposed Safe-MADDPG approach is effective in learning safe actions for multi-UAV navigation in constrained environments.