Abstract
The growing demand for autonomy of autonomous underwater vehicles (AUVs) in complex unknown environments has driven research into intelligent motion planning methods. Deep reinforcement learning (DRL) algorithms with actor-critic structures provide optimal adaptive solutions that can be computed online for completely unknown systems. The present study proposes an adaptive, DRL-based motion planning and obstacle avoidance technique for an AUV. The approach employs the twin-delayed deep deterministic policy gradient (TD3) algorithm, which is suited to Markov decision processes with continuous action spaces. The vehicle's navigation sensor measurements serve as environmental observations, and motion planning is carried out without any prior knowledge of the environment. A comprehensive reward function is developed for the control objectives. The proposed system is robust to disturbances caused by ocean currents. Simulation results show that the motion planning system precisely guides an AUV with six-degree-of-freedom dynamics towards the target, and the trained agent exhibits good generalization capability.
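The abstract names the TD3 algorithm but, as a listing page, gives no implementation details. The sketch below is only an illustrative TD3-style actor-critic update in PyTorch; the `Actor`/`Critic` architectures, hyperparameters, and state/action dimensions are placeholder assumptions for a generic continuous-action agent, not values from the paper.

```python
# Minimal TD3-style actor-critic update (illustrative sketch, not the paper's code).
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, max_action):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)

class Critic(nn.Module):
    """Twin Q-networks; taking the minimum of the two reduces overestimation bias."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        def q_net():
            return nn.Sequential(
                nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )
        self.q1, self.q2 = q_net(), q_net()

    def forward(self, state, action):
        sa = torch.cat([state, action], dim=-1)
        return self.q1(sa), self.q2(sa)

def td3_update(actor, critic, actor_t, critic_t, actor_opt, critic_opt, batch, step,
               gamma=0.99, tau=0.005, policy_noise=0.2, noise_clip=0.5,
               policy_delay=2, max_action=1.0):
    state, action, reward, next_state, done = batch  # tensors of shape [B, ...]
    with torch.no_grad():
        # Target policy smoothing: clipped Gaussian noise on the target action.
        noise = (torch.randn_like(action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (actor_t(next_state) + noise).clamp(-max_action, max_action)
        q1_t, q2_t = critic_t(next_state, next_action)
        target_q = reward + gamma * (1 - done) * torch.min(q1_t, q2_t)
    q1, q2 = critic(state, action)
    critic_loss = nn.functional.mse_loss(q1, target_q) + nn.functional.mse_loss(q2, target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Delayed policy and soft target-network updates.
    if step % policy_delay == 0:
        actor_loss = -critic(state, actor(state))[0].mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        for p, p_t in zip(actor.parameters(), actor_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
        for p, p_t in zip(critic.parameters(), critic_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)

# Illustrative usage with random data; dimensions are placeholders only.
state_dim, action_dim, max_action = 17, 6, 1.0
actor, critic = Actor(state_dim, action_dim, max_action), Critic(state_dim, action_dim)
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)
batch = (torch.randn(64, state_dim), torch.rand(64, action_dim) * 2 - 1,
         torch.randn(64, 1), torch.randn(64, state_dim), torch.zeros(64, 1))
td3_update(actor, critic, actor_t, critic_t, actor_opt, critic_opt, batch, step=0)
```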
| Original language | English |
| --- | --- |
| Article number | 103326 |
| Number of pages | 14 |
| Journal | Applied Ocean Research |
| Volume | 129 |
| Early online date | 1 Nov 2022 |
| DOIs | |
| Publication status | E-pub ahead of print - 1 Nov 2022 |