arxiv:2005.01643

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems

Published on May 4, 2020
Authors: Sergey Levine, Aviral Kumar, George Tucker, Justin Fu

Abstract

In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision-making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications and a discussion of perspectives on open problems in the field.
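
As a concrete illustration of "learning from previously collected data, without additional online data collection," the sketch below applies a plain tabular Q-learning update to a fixed log of transitions. This is a minimal, hypothetical example, not code from the article; the toy dataset, problem size, and hyperparameters are invented for illustration.

```python
# Minimal offline-RL sketch (illustrative only, not from the article):
# tabular Q-learning trained purely on a fixed dataset of logged transitions.
import numpy as np

n_states, n_actions = 5, 2          # hypothetical toy problem size
gamma, lr, epochs = 0.99, 0.1, 200  # hypothetical hyperparameters

# Previously collected (state, action, reward, next_state, done) transitions,
# e.g. logged by some behavior policy. No environment is queried below.
dataset = [
    (0, 1, 0.0, 1, False),
    (1, 0, 0.0, 2, False),
    (2, 1, 1.0, 3, True),
    (3, 0, 0.0, 4, False),
]

Q = np.zeros((n_states, n_actions))
for _ in range(epochs):
    for s, a, r, s_next, done in dataset:
        # Standard Q-learning target computed only from logged data;
        # there is no additional online data collection.
        target = r if done else r + gamma * Q[s_next].max()
        Q[s, a] += lr * (target - Q[s, a])

# Extract a greedy policy from the learned Q-values.
policy = Q.argmax(axis=1)
print(policy)
```

Even in this toy setting, the bootstrapped max over Q[s_next] can rely on state-action pairs the logged data never covers, which is one way the challenges mentioned in the abstract arise in practice and why the article surveys solutions that mitigate them.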

Models citing this paper: 13

Datasets citing this paper: 0

Spaces citing this paper: 14

Collections including this paper: 0
