Revealing principles of autonomous thermal soaring in windy conditions using vulture-inspired deep reinforcement-learning.

Yoav Flato, Roi Harel, Aviv Tamar, Ran Nathan, Tsevi Beatus
Published in: Nature Communications (2024)
Thermal soaring, a technique used by birds and gliders to exploit updrafts of hot air, is an appealing model-problem for studying motion control and how it is learned by animals and engineered autonomous systems. Thermal soaring has rich dynamics and nontrivial constraints, yet it uses few control parameters and is becoming experimentally accessible. Following recent developments in applying reinforcement-learning methods to train deep neural-network (deep-RL) models to soar autonomously, both in simulation and in real gliders, here we develop a simulation-based deep-RL system to study the learning process of thermal soaring. We find that this process has learning bottlenecks; we define a new efficiency metric and use it to characterize learning robustness; we compare the learned policy to data from soaring vultures; and we find that the neurons of the trained network divide into function clusters that evolve during learning. These results pose thermal soaring as a rich yet tractable model-problem for the learning of motion control.
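
As a rough, hedged illustration of the kind of setup the abstract describes (a simulated glider learning to exploit an updraft through reinforcement learning), the sketch below pairs a toy Gaussian-thermal environment with a tabular Q-learning loop. The dynamics, state discretization, constants, and the choice of tabular Q-learning are assumptions made for this example only; they are not the authors' simulator, network architecture, or training procedure.

# Illustrative sketch only: toy thermal-soaring environment + tabular Q-learning.
# All constants, discretizations, and reward choices are assumptions for the example,
# not the paper's actual deep-RL system.
import numpy as np

class ToyThermalEnv:
    """Glider flying near a Gaussian updraft; state = (distance bin, climbing flag)."""
    def __init__(self, thermal_radius=50.0, dt=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.thermal_radius = thermal_radius
        self.dt = dt
        self.reset()

    def reset(self):
        # Start at a random horizontal offset from the thermal core.
        self.pos = self.rng.uniform(-100.0, 100.0, size=2)
        self.heading = self.rng.uniform(0.0, 2.0 * np.pi)
        self.speed = 10.0          # m/s, assumed constant airspeed
        self.sink = 1.0            # m/s, assumed glider sink rate
        self.prev_climb = 0.0
        return self._state()

    def _updraft(self):
        # Gaussian updraft, assumed 3 m/s peak at the core.
        r2 = float(np.dot(self.pos, self.pos))
        return 3.0 * np.exp(-r2 / self.thermal_radius**2)

    def _state(self):
        dist_bin = min(int(np.linalg.norm(self.pos) // 25), 5)   # 6 distance bins
        climb_bin = int(self.prev_climb > 0.0)                   # climbing vs. sinking
        return dist_bin * 2 + climb_bin                          # 12 discrete states

    def step(self, action):
        # action in {0, 1, 2}: turn left, fly straight, turn right (fixed turn rate).
        turn_rate = (action - 1) * 0.3                           # rad/s, assumed
        self.heading += turn_rate * self.dt
        self.pos += self.speed * self.dt * np.array(
            [np.cos(self.heading), np.sin(self.heading)])
        climb = self._updraft() - self.sink
        self.prev_climb = climb
        return self._state(), climb * self.dt                    # reward = altitude gained

# Tabular Q-learning as a minimal stand-in for the deep-RL agent.
env = ToyThermalEnv()
Q = np.zeros((12, 3))
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(200):
    s = env.reset()
    for t in range(300):
        a = int(env.rng.integers(3)) if env.rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = env.step(a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
print("Greedy action per discrete state:", np.argmax(Q, axis=1))

In this toy version the reward is simply altitude gained per step, so the learned greedy policy tends to turn toward and circle within the updraft; the paper's system replaces the hand-made discretization and table with a deep network trained in a richer, wind-affected simulation.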
Keyphrases
  • thermal soaring
  • reinforcement learning
  • neural network
  • motion control
  • autonomous systems
  • gliders