Show simple item record

dc.contributor.author    Mysore, Siddharth    en_US
dc.contributor.author    Mabsout, Bassel    en_US
dc.contributor.author    Saenko, Kate    en_US
dc.contributor.author    Mancuso, Renato    en_US
dc.date.accessioned    2021-09-17T13:28:43Z
dc.date.available    2021-09-17T13:28:43Z
dc.date.issued    2020
dc.identifier.citation    Siddharth Mysore, Bassel Mabsout, Kate Saenko, Renato Mancuso. 2020. "How to Train your Quadrotor: A Framework for Consistently Smooth and Responsive Flight Control via Reinforcement Learning." CoRR, Volume abs/2012.06656, https://arxiv.org/abs/2012.06656    en_US
dc.identifier.uri    https://hdl.handle.net/2144/43028
dc.description.abstract    We focus on the problem of reliably training Reinforcement Learning (RL) models (agents) for stable low-level control in embedded systems, and test our methods on a high-performance, custom-built quadrotor platform. A common but often under-studied problem in developing RL agents for continuous control is that the learned control policies are not always smooth. This lack of smoothness can be a major problem when learning controllers, as it can result in control instability and hardware failure. Issues of noisy control are further accentuated when training RL agents in simulation, because simulators are ultimately imperfect representations of reality (the so-called reality gap). To combat instability in RL agents, we propose a systematic framework, 'REinforcement-based transferable Agents through Learning' (RE+AL), for designing simulated training environments that preserve the quality of trained agents when they are transferred to real platforms. RE+AL is an evolution of the Neuroflight infrastructure detailed in technical reports prepared by members of our research group. Neuroflight is a state-of-the-art framework for training RL agents for low-level attitude control. RE+AL improves and completes Neuroflight by addressing a number of important limitations that hindered its deployment on real hardware. We benchmark RE+AL on the NF1 racing quadrotor developed as part of Neuroflight, and demonstrate that RE+AL significantly mitigates the previously observed smoothness issues in RL agents. Additionally, RE+AL consistently trains agents that are flight-capable, with minimal degradation in controller quality upon transfer. RE+AL agents also learn to outperform a tuned PID controller, with lower tracking error, smoother control, and reduced power consumption.
To the best of our knowledge, RE+AL agents are the first RL-based controllers trained in simulation to outperform a well-tuned PID controller on a real-world controls problem that is solvable with classical control.    en_US
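For context on the PID baseline the abstract compares against, the sketch below shows a generic single-axis PID rate controller driving a toy first-order plant. All gains, the plant model, and the class name are illustrative assumptions for this record; they are not the tuned controller or the NF1 platform from the paper.

```python
# Generic single-axis PID attitude-rate controller (illustrative sketch only;
# gains and the toy plant below are assumptions, not the paper's tuned values).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error for the I term
        self.prev_error = 0.0    # last error, for the D term

    def step(self, setpoint, measurement):
        """Compute one control output from the tracking error."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant's angular rate toward 1.0 rad/s.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
rate = 0.0
for _ in range(2000):                # 20 s of simulated time
    u = pid.step(1.0, rate)
    rate += (u - rate) * 0.01        # toy first-order plant response
```

The integral term removes steady-state tracking error, which is why a well-tuned PID is a strong baseline on problems like attitude control that are solvable with classical methods.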
dc.language.iso    en_US
dc.relation.ispartof    CoRR
dc.subject    Neural networks    en_US
dc.subject    Continuous control    en_US
dc.subject    Quadrotor    en_US
dc.subject    Embedded systems    en_US
dc.subject    Robotics    en_US
dc.title    How to train your quadrotor: a framework for consistently smooth and responsive flight control via reinforcement learning    en_US
dc.type    Article    en_US
dc.description.version    First author draft    en_US
pubs.elements-source    dblp    en_US
pubs.notes    Embargo: No embargo    en_US
pubs.organisational-group    Boston University    en_US
pubs.organisational-group    Boston University, College of Arts & Sciences    en_US
pubs.organisational-group    Boston University, College of Arts & Sciences, Department of Computer Science    en_US
dc.identifier.mycv    576679

