How to train your quadrotor: a framework for consistently smooth and responsive flight control via reinforcement learning
Date
2020
Authors
Mysore, Siddharth
Mabsout, Bassel
Saenko, Kate
Mancuso, Renato
Version
First author draft
Citation
Siddharth Mysore, Bassel Mabsout, Kate Saenko, Renato Mancuso. 2020. "How to Train your Quadrotor: A Framework for Consistently Smooth and Responsive Flight Control via Reinforcement Learning." CoRR, Volume abs/2012.06656, https://arxiv.org/abs/2012.06656
Abstract
We focus on the problem of reliably training Reinforcement Learning (RL) models (agents) for stable low-level
control in embedded systems and test our methods on a high-performance, custom-built quadrotor platform.
A common but often under-studied problem in developing RL agents for continuous control is that the learned
control policies are not always smooth. This lack of smoothness is a major concern for learned controllers,
as it can result in control instability and hardware failure.
Issues of noisy control are further accentuated when training RL agents in simulation, because simulators
are ultimately imperfect representations of reality, a discrepancy known as the reality gap. To combat issues
of instability in RL agents, we propose a systematic framework, ‘REinforcement-based transferable Agents
through Learning’ (RE+AL), for designing simulated training environments which preserve the quality of
trained agents when transferred to real platforms. RE+AL is an evolution of the Neuroflight infrastructure
detailed in technical reports prepared by members of our research group. Neuroflight is a state-of-the-art
framework for training RL agents for low-level attitude control. RE+AL improves and completes Neuroflight
by addressing a number of important limitations that hindered its deployment to real hardware.
We benchmark RE+AL on the NF1 racing quadrotor developed as part of Neuroflight. We demonstrate that
RE+AL significantly mitigates the previously observed smoothness issues in RL agents. Additionally, RE+AL
is shown to consistently train flight-capable agents with minimal degradation in controller quality
upon transfer. RE+AL agents also learn to perform better than a tuned PID controller, with lower tracking
error, smoother control, and reduced power consumption. To the best of our knowledge, RE+AL agents are
the first RL-based controllers trained in simulation to outperform a well-tuned PID controller on a real-world
controls problem that is solvable with classical control.
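As an illustrative aside (not taken from the paper itself), the control smoothness discussed in the abstract is often quantified by how much successive actuator commands change from one timestep to the next. The minimal Python sketch below shows one such hypothetical metric; the function name and the assumption that a policy emits normalized motor commands are illustrative choices, not the exact measure used by RE+AL.

```python
import numpy as np

def control_smoothness(actions):
    """Hypothetical smoothness metric: mean absolute change between
    successive actuator commands. Lower values indicate smoother,
    less oscillatory control. This is an illustrative assumption,
    not the specific metric defined in the RE+AL paper."""
    actions = np.asarray(actions, dtype=float)  # shape: (timesteps, n_motors)
    deltas = np.abs(np.diff(actions, axis=0))   # per-step command changes
    return deltas.mean()

# Example: a noisy policy produces larger step-to-step changes than a smooth one.
rng = np.random.default_rng(0)
smooth_cmds = np.clip(np.linspace(0.4, 0.6, 200)[:, None]
                      + 0.01 * rng.standard_normal((200, 4)), 0.0, 1.0)
noisy_cmds = np.clip(0.5 + 0.2 * rng.standard_normal((200, 4)), 0.0, 1.0)
print(control_smoothness(smooth_cmds))  # small value
print(control_smoothness(noisy_cmds))   # noticeably larger value
```

A metric of this kind makes the abstract's claim concrete: a policy whose motor commands jump erratically between timesteps scores poorly, which on real hardware translates to oscillation, overheating motors, and potential failure.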