Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3900
Full metadata record
DC Field | Value | Language
dc.contributor.author | Singh, A. | -
dc.contributor.author | Jha, S.S. | -
dc.date.accessioned | 2022-08-25T15:13:36Z | -
dc.date.available | 2022-08-25T15:13:36Z | -
dc.date.issued | 2022-08-25 | -
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/3900 | -
dc.description.abstract | Deploying multiple Unmanned Aerial Vehicles (UAVs) in constrained environments poses several challenges concerning trajectory optimization, target reachability, and collision avoidance. In this paper, we formulate multi-UAV navigation in constrained environments as a multi-agent learning problem. We further propose a reinforcement learning-based Safe-MADDPG method to learn safe and cooperative multi-UAV navigation policies in a constrained environment. The safety constraints that handle inter-UAV collisions during navigation are modeled through action corrections of the learned autonomous navigation policies using an additional safety layer. We implemented the proposed approach in the Webots simulator and provide a detailed analysis of the proposed solution. The results demonstrate that the proposed Safe-MADDPG approach is effective in learning safe actions for multi-UAV navigation in constrained environments. | en_US
dc.language.iso | en_US | en_US
dc.subject | Multi-agent system | en_US
dc.subject | Policy gradient | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Safe navigation | en_US
dc.subject | UAV | en_US
dc.subject | Webots | en_US
dc.title | Learning safe cooperative policies in autonomous multi-UAV navigation | en_US
dc.type | Article | en_US
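
The abstract above describes safety constraints enforced through action corrections of the learned policies by an additional safety layer. As a rough, hypothetical illustration (not the authors' implementation), the sketch below post-processes the velocity commands proposed by per-UAV policies so that no pair of UAVs is predicted to come closer than an assumed minimum separation; the names and parameters (safe_correct, D_MIN, DT) are illustrative assumptions.

```python
# Hypothetical sketch of a safety layer that post-processes per-UAV actions.
# All names and constants here are illustrative, not taken from the paper.
import numpy as np

D_MIN = 2.0   # assumed minimum allowed inter-UAV separation (metres)
DT = 0.1      # assumed control time step (seconds)

def safe_correct(positions, actions, max_iters=10):
    """Nudge velocity actions so that no pair of UAVs is predicted to
    violate the D_MIN separation after one step of duration DT.

    positions: (n, 2) current UAV positions
    actions:   (n, 2) velocity commands proposed by the learned policies
    """
    actions = actions.copy()
    for _ in range(max_iters):
        next_pos = positions + DT * actions
        violated = False
        n = len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                diff = next_pos[i] - next_pos[j]
                dist = np.linalg.norm(diff)
                if dist < D_MIN:
                    violated = True
                    # Push the two predicted positions apart along their
                    # separation direction and convert back into actions.
                    direction = diff / (dist + 1e-8)
                    push = 0.5 * (D_MIN - dist) * direction
                    actions[i] += push / DT
                    actions[j] -= push / DT
        if not violated:
            break
    return actions

# Toy usage: two UAVs on a head-on course get their commands corrected.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
act = np.array([[5.0, 0.0], [-5.0, 0.0]])
print(safe_correct(pos, act))
```

This sketch only mimics the interface implied by the abstract: the MADDPG-style policies propose actions and a safety layer adjusts them before execution; how the correction is actually derived in the paper is not reproduced here.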
Appears in Collections: Year-2021

Files in This Item:
File | Description | Size | Format
Full Text.pdf | | 620.1 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.