Revealing Life Preferences Through LLMs

Authors: Omar Abdel Haq (Harvard Business School), Amitabh Chandra (Harvard Business School and Harvard Kennedy School), Tomáš Jagelka (University of Bonn), Erzo F.P. Luttmer (Dartmouth College), Joshua Schwartzstein (Harvard Business School)
Posted: 11 May 2026

Abstract

Large Language Models (LLMs) are trained on a prodigious corpus of human writing and may reveal human preferences over characteristics of life courses, such as income, longevity, and working conditions. We present OpenAI's GPT-5.4 and a broadly representative sample of Americans with pairs of life stories and ask them to choose the life they would prefer for themselves. A person's choice is better predicted by the LLM's choice than by another person's choice over the same stories, and LLM valuations of several life attributes are similar to those derived from human responses. Our results suggest that LLM responses offer a scalable and cost-effective complement to existing methods for studying human preferences.
JEL codes: D0, H0, I0
Keywords: Generative AI, preference estimation methods, choice experiments, survey validation