Claudia Shi

CS PhD student at Columbia University


Curriculum vitae


Claudia.j.shi AT gmail.com



Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback


Journal article


Stephen Casper, Xander Davies, Claudia Shi, T. Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, P. Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, J. Pfau, Dmitrii Krasheninnikov, Xin Chen, L. Langosco, Peter Hase, Erdem Biyik, A. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
TMLR 2023

Semantic Scholar ArXiv DBLP DOI
Cite

APA
Casper, S., Davies, X., Shi, C., Gilbert, T., Scheurer, J., Rando, J., … Hadfield-Menell, D. (2023). Open problems and fundamental limitations of reinforcement learning from human feedback. Transactions on Machine Learning Research.


Chicago/Turabian
Casper, Stephen, Xander Davies, Claudia Shi, T. Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, et al. “Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback.” Transactions on Machine Learning Research (2023).


MLA
Casper, Stephen, et al. “Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback.” Transactions on Machine Learning Research, 2023.


BibTeX

@article{casper2023open,
  title = {Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback},
  journal = {Transactions on Machine Learning Research},
  year = {2023},
  author = {Casper, Stephen and Davies, Xander and Shi, Claudia and Gilbert, T. and Scheurer, Jérémy and Rando, Javier and Freedman, Rachel and Korbak, Tomasz and Lindner, David and Freire, P. and Wang, Tony and Marks, Samuel and Ségerie, Charbel-Raphaël and Carroll, Micah and Peng, Andi and Christoffersen, Phillip J. K. and Damani, Mehul and Slocum, Stewart and Anwar, Usman and Siththaranjan, Anand and Nadeau, Max and Michaud, Eric J. and Pfau, J. and Krasheninnikov, Dmitrii and Chen, Xin and Langosco, L. and Hase, Peter and Biyik, Erdem and Dragan, A. and Krueger, David and Sadigh, Dorsa and Hadfield-Menell, Dylan}
}

Abstract

Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
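For readers unfamiliar with the method being critiqued, the standard RLHF pipeline (a generic two-stage sketch, not a formulation taken from this paper) first fits a reward model r_θ to pairwise human preferences and then fine-tunes the language-model policy π against that learned reward, with a KL penalty keeping it close to the pretrained reference model π_ref:

\mathcal{L}_{\mathrm{RM}}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( r_\theta(x, y_w) - r_\theta(x, y_l) \right) \right]

\max_{\pi}\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x)} \left[ r_\theta(x, y) \right] \;-\; \beta\, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\Vert\, \pi_{\mathrm{ref}}(\cdot \mid x) \right)

Here y_w and y_l are the preferred and dispreferred responses to prompt x, σ is the logistic function, and β controls how far the policy may drift from the reference model; many of the open problems surveyed in the paper concern failures in one of these two stages (feedback collection, reward modeling, or policy optimization).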

