Published July 24, 2023 | Version v1
Presentation

Reinforcement Learning from Human Feedback: A Tutorial at ICML 2023

Affiliations: Hugging Face; Toloka; Yandex


Reinforcement learning from human feedback (RLHF) has dramatically improved the real-world performance and user experience of large machine learning models. Still, this approach has primarily been applied at a scale of compute and data curation that puts it out of reach for most academic researchers. In this tutorial, we describe the general framework of RLHF and explain the technical procedures required to apply it. The tutorial begins with a detailed conceptual overview and continues with an explanation of the human-in-the-loop data collection procedures used when scaling state-of-the-art systems.
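At the core of the RLHF framework described above is a reward model trained on human preference comparisons. As a minimal illustrative sketch (not taken from the tutorial materials), the commonly used pairwise Bradley-Terry objective penalizes the model when the human-preferred response does not receive a higher scalar reward than the rejected one:

```python
import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss often used to train RLHF reward models:
    -log sigmoid(r_chosen - r_rejected). The loss is small when the reward
    of the human-preferred response exceeds that of the rejected response.
    Illustrative scalar version; real systems compute this over batches of
    rewards produced by a neural reward model."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the preferred response's reward pulls ahead:
print(reward_model_loss(2.0, 0.0))  # small loss: ranking is correct
print(reward_model_loss(0.0, 2.0))  # large loss: ranking is inverted
```

In practice, `r_chosen` and `r_rejected` are the outputs of a learned reward model on two candidate responses to the same prompt, and the trained reward model then supplies the scalar signal optimized by the RL step (e.g. PPO).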



Files (21.4 MB)

