The Alignment Ceiling: Objective Mismatch in Reinforcement Learning from Human Feedback
Oct 31, 2023 | Nathan Lambert, Roberto Calandra
Tags: Reinforcement Learning
Download the paper!
Read on ArXiv!
Run the code!
Video available!
Abstract:
What you need to know:
Citation