Entangled Preferences: The History and Risks of Reinforcement Learning and Human Feedback

ABSTRACT:
Reinforcement learning from human feedback (RLHF) has emerged as a powerful technique to make large language models (LLMs) easier to use and more effective. A core piece of the RLHF process is the training and utilization of a model of human preferences that acts as a reward function for optimization. This approach, which operates at the intersection of many stakeholders and academic disciplines, remains poorly understood. RLHF reward models are often cited as being central to achieving performance, yet very few descriptors of capabilities, evaluations, training methods, or open-source models exist. Given this lack of information, further study and transparency are needed for learned RLHF reward models. In this paper, we illustrate the complex history of optimizing preferences, and articulate lines of inquiry to understand the sociotechnical context of reward models. In particular, we highlight the ontological differences between costs, rewards, and preferences at stake in RLHF's foundations, related methodological tensions, and possible research directions to improve general understanding of how reward models function.
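
For readers new to the topic, here is a minimal sketch of the reward-model setup the abstract refers to, using the Bradley-Terry style formulation common in the RLHF literature rather than notation specific to this paper. A reward model r_θ is fit to pairwise human preference labels, and the language model policy π is then optimized against r_θ, typically with a KL penalty toward a reference model π_ref:

\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\Big[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\Big]

\max_{\pi}\;\; \mathbb{E}_{x\sim\mathcal{D},\; y\sim\pi(\cdot\mid x)}\big[r_\theta(x, y)\big] \;-\; \beta\,\mathrm{KL}\big(\pi(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)

Here y_w and y_l are the preferred and rejected completions for a prompt x, σ is the logistic function, and β controls how far the optimized policy may drift from the reference model. Note that this compresses human judgments into a single learned scalar, which is where the questions about costs, rewards, and preferences raised in the abstract come into play.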

What you need to know:

- RLHF depends on a learned reward model of human preferences, which serves as the reward function the language model is optimized against.
- These reward models are widely credited as central to RLHF's performance, yet few descriptions of their capabilities, evaluations, or training methods exist, and few are open source.
- The paper traces the history of optimizing preferences, examines the ontological differences between costs, rewards, and preferences, and proposes research directions for understanding how reward models function.

Citation

@misc{lambert2023entangled,
     title={Entangled Preferences: The History and Risks of Reinforcement Learning and Human Feedback},
     author={Nathan Lambert and Thomas Krendl Gilbert and Tom Zick},
     year={2023},
     eprint={2310.13595},
     archivePrefix={arXiv},
     primaryClass={cs.CY}
}