dc.contributor.author | Kaushik, P. |
dc.contributor.author | Garg, A. |
dc.contributor.author | Jha, S.S. |
dc.date.accessioned | 2022-08-25T15:25:10Z |
dc.date.available | 2022-08-25T15:25:10Z |
dc.date.issued | 2022-08-25 |
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/3902 |
dc.description.abstract | Autonomous navigation and formation control of multi-UAV systems pose a significant challenge for robotic systems operating in partially observable, dynamic, and continuous environments. This paper addresses the problem of multi-UAV formation control while cooperatively tracking a set of moving objects. The objective of the multi-UAV system is to keep the moving objects under its joint coverage while aligning the UAVs in an optimal formation that maximizes the overall area coverage. We develop a multi-agent reinforcement learning model to learn a cooperative multi-UAV policy for multi-object tracking and formation control. We design a reward function that encodes the objectives of tracking, formation, and collision avoidance into the model. The proposed deep reinforcement learning model is deployed and tested against a baseline controller in the Gazebo simulator. The results indicate that the proposed model is robust to tracking and alignment errors and outperforms the baseline controller. | en_US
dc.language.iso | en_US | en_US
dc.subject | Active tracking | en_US
dc.subject | Deep reinforcement learning | en_US
dc.subject | Formation control | en_US
dc.subject | Gazebo simulator | en_US
dc.subject | Unmanned aerial vehicles (UAVs) | en_US
dc.title | On learning multi-UAV policy for multi-object tracking and formation control | en_US
dc.type | Article | en_US
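
The abstract describes a reward function that encodes tracking, formation, and collision-avoidance objectives into a multi-agent reinforcement learning model. The snippet below is a minimal illustrative sketch of such a composite team reward; the function name, weights, radii, and specific penalty terms are assumptions made for illustration, not the reward formulation used in the paper.

```python
# Hypothetical sketch of a composite multi-UAV team reward combining tracking,
# formation, and collision-avoidance terms, as the abstract describes at a high
# level. All names, weights, and thresholds here are illustrative assumptions.
import numpy as np


def composite_reward(uav_positions, target_positions, desired_spacing,
                     coverage_radius=10.0, collision_radius=1.0,
                     w_track=1.0, w_form=0.5, w_collide=5.0):
    """Return a scalar team reward for one time step.

    uav_positions    : (N, 2) array of UAV xy positions
    target_positions : (M, 2) array of tracked-object xy positions
    desired_spacing  : desired inter-UAV distance for the formation
    """
    uav_positions = np.asarray(uav_positions, dtype=float)
    target_positions = np.asarray(target_positions, dtype=float)

    # Tracking term: fraction of targets covered by at least one UAV.
    d_ut = np.linalg.norm(
        uav_positions[:, None, :] - target_positions[None, :, :], axis=-1)
    covered = (d_ut.min(axis=0) <= coverage_radius).mean()

    # Pairwise UAV distances (upper triangle, each pair counted once).
    d_uu = np.linalg.norm(
        uav_positions[:, None, :] - uav_positions[None, :, :], axis=-1)
    iu = np.triu_indices(len(uav_positions), k=1)
    pair_dists = d_uu[iu]

    if pair_dists.size > 0:
        # Formation term: penalize deviation from the desired spacing
        # (a simple proxy for maintaining an area-maximizing formation).
        formation_error = np.abs(pair_dists - desired_spacing).mean()
        # Collision term: count UAV pairs closer than the safety radius.
        collisions = (pair_dists < collision_radius).sum()
    else:
        formation_error, collisions = 0.0, 0

    return w_track * covered - w_form * formation_error - w_collide * collisions


if __name__ == "__main__":
    uavs = [[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]]
    targets = [[1.0, 1.0], [3.0, 2.0]]
    print(composite_reward(uavs, targets, desired_spacing=4.0))
```

The weighted-sum structure mirrors a common way of trading off coverage reward against formation and safety penalties; the paper's actual reward may shape or weight these terms differently.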