Nonholonomic Yaw Control of an Underactuated Flying Robot with Model-based Reinforcement Learning

Download the paper! | Read on ArXiv! | Run the code! | Video available!
ABSTRACT:
Nonholonomic control is a candidate for controlling nonlinear systems with path-dependent states. We investigate an underactuated flying micro-aerial vehicle, the ionocraft, that requires nonholonomic control in the yaw direction for complete attitude control. Deploying an analytical control law involves substantial engineering design and is sensitive to inaccuracy in the system model. With specific assumptions on assembly and system dynamics, we derive a Lie bracket for yaw control of the ionocraft. As a comparison to the significant engineering effort required for an analytic control law, we implement a data-driven model-based reinforcement learning yaw controller in a simulated flight task. We demonstrate that a simple model-based reinforcement learning framework can match the derived Lie bracket control – in yaw rate and chosen actions – using only a few minutes of flight data, without a pre-defined dynamics function. This paper shows that learning-based approaches are useful as a tool for the synthesis of nonlinear control laws previously addressable only through expert-based design.
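For context, the yaw maneuver relies on the standard Lie bracket construction for driftless systems; the formula below is the generic version, and the ionocraft-specific input vector fields are derived in the paper. For a two-input system $\dot{x} = g_1(x)\,u_1 + g_2(x)\,u_2$, the bracket

$$[g_1, g_2](x) = \frac{\partial g_2}{\partial x}\, g_1(x) - \frac{\partial g_1}{\partial x}\, g_2(x)$$

defines a new motion direction: cyclically alternating the two inputs yields net displacement along $[g_1, g_2]$, which is how yaw motion can be produced without a dedicated yaw actuator.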

What you need to know:

  1. The ionocraft can be controlled with closed-form nonholonomic controllers.
  2. Model-based RL very closely imitates these nonlinear control laws while learning only from data (a minimal sketch of the recipe follows below).
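
To make item 2 concrete, here is a minimal sketch of the model-based RL recipe in PyTorch: fit a small neural network dynamics model to logged flight data, then choose actions with random-shooting model-predictive control. The state/action dimensions, network size, yaw index, and cost are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of a model-based RL yaw controller: fit a dynamics model
# to flight data, then plan actions with random-shooting MPC.
# Assumptions: dimensions, network sizes, horizons, and the yaw cost below
# are illustrative only and do not reproduce the paper's implementation.
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 6, 4   # e.g. attitude + rates, four thruster commands (assumed)

# Small MLP that predicts the change in state from (state, action).
model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, STATE_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_dynamics(states, actions, next_states, epochs=200):
    """Fit the model to predict state deltas from logged flight data."""
    x = torch.tensor(np.hstack([states, actions]), dtype=torch.float32)
    y = torch.tensor(next_states - states, dtype=torch.float32)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()

def plan_action(state, yaw_target, horizon=10, n_samples=500):
    """Random-shooting MPC: sample action sequences, roll them out through
    the learned model, and return the first action of the best sequence."""
    actions = torch.rand(n_samples, horizon, ACTION_DIM)          # candidate sequences
    s = torch.tensor(state, dtype=torch.float32).repeat(n_samples, 1)
    returns = torch.zeros(n_samples)
    with torch.no_grad():
        for t in range(horizon):
            s = s + model(torch.cat([s, actions[:, t]], dim=1))   # predicted next state
            returns -= (s[:, 2] - yaw_target) ** 2                # penalize yaw error (index assumed)
    best = torch.argmax(returns)
    return actions[best, 0].numpy()

The appeal, as in the paper, is that the controller emerges from a few minutes of data rather than from an expert-derived dynamics model.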

Citation

@article{lambert2020nonholonomic,
  title={Nonholonomic Yaw Control of an Underactuated Flying Robot with Model-based Reinforcement Learning},
  author={Lambert, Nathan and Schindler, Craig and Drew, Daniel S and Pister, Kristofer SJ},
  journal={IEEE Robotics and Automation Letters},
  year={2020},
  publisher={IEEE}
}