An Invariant Learning Characterization of Controlled Text Generation
Carolina Zheng*, Claudia Shi*, Keyon Vafa, Amir Feder, David Blei (*equal contribution)
Association for Computational Linguistics, 2023
Evaluating the Moral Beliefs Encoded in LLMs
Nino Scherrer*, Claudia Shi*, Amir Feder, David Blei (*equal contribution)
Neural Information Processing Systems (Spotlight), 2023
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jeremy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al.
arXiv:2307.15217, 2023
On the Misspecification of Linear Assumptions in Synthetic Control
Achille Nazaret, Claudia Shi, David M. Blei
arXiv:2302.12777, 2023
Causal-structure Driven Augmentations for Text OOD Generalization
Amir Feder, Yoav Wald, Claudia Shi, Suchi Saria, David Blei
Neural Information Processing Systems, 2023
On the Assumptions of Synthetic Control Methods
Claudia Shi, Dhanya Sridhar, Vishal Misra, David M. Blei
International Conference on Artificial Intelligence and Statistics (Oral), 2022
Conformal Sensitivity Analysis for Individual Treatment Effects
Mingzhang Yin, Claudia Shi, Yixin Wang, David M. Blei
Journal of the American Statistical Association, 2022
Invariant Representation Learning for Treatment Effect Estimation
Claudia Shi, Victor Veitch, David M. Blei
Uncertainty in Artificial Intelligence (Long talk), 2021
Adapting Neural Networks for the Estimation of Treatment Effects
Claudia Shi, David M. Blei, Victor Veitch
Neural Information Processing Systems, 2019