
Abstract

Underwater vehicles are employed in the exploration of dynamic environments, where tuning a specific controller for each task would be time-consuming and unreliable, because the controller depends on mathematical coefficients calculated under idealised conditions. In such cases, learning a task from experience can be a useful alternative. This paper explores the capability of probabilistic inference learning to control autonomous underwater vehicles (AUVs) that can be used for different tasks without re-programming the controller. Probabilistic inference learning uses a Gaussian process model of the real vehicle to learn the correct policy from a small number of real field experiments. The use of probabilistic reinforcement learning aims at a simple implementation of controllers without the burden of coefficient calculation, controller tuning or system identification. A series of computational simulations were employed to test the applicability of model-based reinforcement learning in underwater vehicles. Three simulation scenarios were evaluated: waypoint tracking, depth control and 3D path-tracking control. The 3D path tracking is achieved by coupling a line-of-sight (LOS) law with probabilistic inference for learning control (PILCO). A comparison study shows that the LOS-PILCO algorithm can perform better than a robust LOS-PID controller. The results show that probabilistic model-based reinforcement learning is a possible solution to the motion control of underactuated AUVs, as it can generate capable policies from a minimal number of episodes.
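For concreteness, below is a minimal sketch of the LOS-PILCO coupling the abstract describes, assuming the standard lookahead-based LOS steering law and the usual PILCO episodic loop. The helper names (`run_episode`, `fit_gp`, `improve_policy`), the lookahead distance and the episode count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def los_desired_heading(pos, wp_prev, wp_next, lookahead=5.0):
    """Standard lookahead-based LOS law: heading that drives the
    cross-track error to zero along the segment wp_prev -> wp_next.
    The lookahead distance (Delta) is an assumed tuning value."""
    d = wp_next - wp_prev
    alpha = np.arctan2(d[1], d[0])                    # path-tangential angle
    e = (-(pos[0] - wp_prev[0]) * np.sin(alpha)
         + (pos[1] - wp_prev[1]) * np.cos(alpha))     # cross-track error
    return alpha + np.arctan2(-e, lookahead)          # desired heading

def pilco_loop(run_episode, fit_gp, improve_policy, policy, n_episodes=10):
    """PILCO-style episodic loop. The stubs passed in stand for GP
    dynamics regression and gradient-based policy optimisation, which
    the paper adopts from PILCO; only the loop structure is shown."""
    data = []
    for _ in range(n_episodes):
        data.extend(run_episode(policy))        # 1. run policy on vehicle/sim
        model = fit_gp(data)                    # 2. learn GP dynamics model
        policy = improve_policy(model, policy)  # 3. optimise policy on model
    return policy

# Example: vehicle at (2, 1) tracking the segment from (0, 0) to (10, 0);
# the LOS law steers it back toward the path (negative heading here).
psi_d = los_desired_heading(np.array([2.0, 1.0]),
                            np.array([0.0, 0.0]),
                            np.array([10.0, 0.0]))
```

In this coupling, the LOS law converts the 3D path-tracking problem into a heading (and pitch) reference, and the learned policy is only responsible for tracking that reference, which keeps the learning problem low-dimensional.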
