
Learning agile soccer skills for a bipedal robot with deep reinforcement learning.

Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H Huang, Dhruva Tirumala, Jan Humplik, Markus Wulfmeier, Saran Tunyasuvunakool, Noah Y Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess
Published in: Science Robotics (2024)
We investigated whether deep reinforcement learning (deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies. We used deep RL to train a humanoid robot to play a simplified one-versus-one soccer game. The resulting agent exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth and efficient manner. It also learned to anticipate ball movements and block opponent shots. The agent's tactical behavior adapts to specific game contexts in a way that would be impractical to manually design. Our agent was trained in simulation and transferred to real robots zero-shot. A combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training enabled good-quality transfer. In experiments, the agent walked 181% faster, turned 302% faster, took 63% less time to get up, and kicked a ball 34% faster than a scripted baseline.
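The abstract credits sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training for the zero-shot transfer from simulation to real robots. The Python/NumPy sketch below illustrates what such a randomized, perturbed training episode loop could look like in a generic simulator; the parameter names, randomization ranges, and the commented-out env/policy calls are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def sample_dynamics():
        # Draw one set of randomized physical parameters per training episode (assumed ranges).
        return {
            "mass_scale": rng.uniform(0.9, 1.1),               # +/-10% body-mass variation
            "friction_scale": rng.uniform(0.5, 1.5),            # joint/ground friction multiplier
            "actuation_delay_steps": int(rng.integers(0, 3)),   # emulated motor latency
        }

    def random_push(prob=0.01, std_newtons=5.0):
        # Occasionally return a random external force used to perturb the robot during training.
        if rng.random() < prob:
            return rng.normal(0.0, std_newtons, size=3)
        return np.zeros(3)

    for episode in range(3):
        dynamics = sample_dynamics()
        # env.reset(**dynamics)                 # hypothetical: configure the simulator with sampled dynamics
        for step in range(1000):
            force = random_push()
            # env.apply_external_force(force)   # hypothetical simulator call
            # action = policy(observation)      # query the learned policy each control step
            # observation, reward, done = env.step(action)
            pass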
Keyphrases
  • high frequency
  • low cost
  • high speed