The Challenges of Exploration for Offline Reinforcement Learning

ABSTRACT:
Offline Reinforcement Learning (ORL) enables us to separately study the two interlinked processes of reinforcement learning: collecting informative experience and inferring optimal behaviour. The second step has been widely studied in the offline setting, but just as critical to data-efficient RL is the collection of informative data. The task-agnostic setting for data collection, where the task is not known a priori, is of particular interest due to the possibility of collecting a single dataset and using it to solve several downstream tasks as they arise. We investigate this setting via curiosity-based intrinsic motivation, a family of exploration methods which encourage the agent to explore those states or transitions it has not yet learned to model. With Explore2Offline, we propose to evaluate the quality of collected data by transferring the collected data and inferring policies with reward relabelling and standard offline RL algorithms. We evaluate a wide variety of data collection strategies, including a new exploration agent, Intrinsic Model Predictive Control (IMPC), using this scheme and demonstrate their performance on various tasks. We use this decoupled framework to strengthen intuitions about exploration and the data prerequisites for effective offline RL.
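
To make the recipe in the abstract concrete, here is a minimal sketch of the Explore2Offline idea: collect task-agnostic transitions with a curiosity-driven policy, relabel them with a downstream task's reward, then hand the relabelled dataset to a standard offline RL learner. This is not the paper's code; the dynamics model, reward function, dataset layout, and all names below are hypothetical stand-ins used only to illustrate the pipeline.

# Sketch of the Explore2Offline recipe (hypothetical, not the paper's code).
from dataclasses import dataclass
import numpy as np

@dataclass
class Transition:
    obs: np.ndarray
    action: np.ndarray
    next_obs: np.ndarray
    reward: float = 0.0   # unknown at collection time in the task-agnostic setting

def curiosity_bonus(model, obs, action, next_obs):
    # Prediction error of a learned dynamics model: a common curiosity signal
    # that favours transitions the agent has not yet learned to model.
    return float(np.mean((model(obs, action) - next_obs) ** 2))

def relabel(dataset, reward_fn):
    # Reward relabelling: overwrite stored rewards with the downstream task's reward.
    return [Transition(t.obs, t.action, t.next_obs,
                       reward_fn(t.obs, t.action, t.next_obs)) for t in dataset]

# Toy usage -------------------------------------------------------------------
rng = np.random.default_rng(0)
dynamics_model = lambda obs, act: obs + act        # hypothetical learned model
dataset = [Transition(rng.normal(size=2), rng.normal(size=2), rng.normal(size=2))
           for _ in range(1000)]                    # task-agnostic exploration data

# The bonus would drive action selection during collection (e.g. inside an MPC loop).
print(curiosity_bonus(dynamics_model, dataset[0].obs, dataset[0].action, dataset[0].next_obs))

# Downstream task, defined only after collection: reach the origin.
reach_origin = lambda obs, act, next_obs: -float(np.linalg.norm(next_obs))
labelled = relabel(dataset, reach_origin)
# `labelled` would now be passed to any standard offline RL algorithm for policy inference.

Because the reward is applied after the fact, the same exploration dataset can be relabelled for several downstream tasks, which is what makes the task-agnostic collection setting attractive.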

What you need to know:

Citation

@misc{lambert2022challenges,
    title={The Challenges of Exploration for Offline Reinforcement Learning},
    author={Nathan Lambert and Markus Wulfmeier and William Whitney and Arunkumar Byravan and Michael Bloesch and Vibhavari Dasagi and Tim Hertweck and Martin Riedmiller},
    year={2022},
    eprint={2201.11861},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}