Rohan Shah: Optionality and Career Choice

I. Optimizing for Optionality

The Optimizer’s Curse is a relatively straightforward problem in Bayesian decision theory. James Smith and Robert Winkler first sketched it out in 2006, and the phenomenon is as follows: a constrained optimizer estimates the expected value of each available choice. Those estimates are noisy, because we routinely over- or underestimate expected values. The optimizer then selects whichever choice has the highest estimated value, and because selection favors exactly the choices whose estimates erred on the high side, the expected value of the chosen option is systematically overestimated. The proof is pretty compelling, and most of the proposed solutions are essentially applications of Bayes’s rule: treat each estimate as noisy evidence of the true value, acknowledge the uncertainty, and update accordingly. It’s a much harder problem to solve when we lack the data needed to form those distributions in the first place (comprehensive data on impact, goals, or success), so it’s particularly difficult when we personally assign qualitative credence to the expected value of our choices, or, on a larger scale, when philanthropists assess the impact and cost-effectiveness of their programs. I’m interested in this, but more so in its implications for career choice.
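
To make the mechanism concrete, here is a minimal simulation sketch with toy numbers of my own choosing (a standard normal prior over true values, equally noisy estimates, five alternatives; none of these figures come from Smith and Winkler). Picking whichever option has the highest raw estimate leaves you holding an option whose value you have overestimated on average; shrinking each estimate toward the prior before choosing removes that bias.

```python
# A minimal Monte Carlo sketch of the Optimizer's Curse and its Bayesian fix.
# All numbers here (the N(0, 1) prior, the noise scale, five alternatives) are
# toy assumptions for illustration, not anything from Smith and Winkler.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_choices = 100_000, 5
prior_var, noise_var = 1.0, 1.0

# True values of each alternative, and the noisy estimates we actually see.
true_values = rng.normal(0.0, np.sqrt(prior_var), (n_trials, n_choices))
estimates = true_values + rng.normal(0.0, np.sqrt(noise_var), (n_trials, n_choices))
rows = np.arange(n_trials)

# Naive optimizer: pick the alternative with the highest raw estimate.
picked = estimates.argmax(axis=1)
naive_bias = (estimates[rows, picked] - true_values[rows, picked]).mean()
print(f"avg overestimate of chosen option (naive):  {naive_bias:.3f}")  # well above 0

# Bayesian fix: shrink each estimate toward the prior mean before choosing.
shrinkage = prior_var / (prior_var + noise_var)
posterior = shrinkage * estimates
picked_b = posterior.argmax(axis=1)
bayes_bias = (posterior[rows, picked_b] - true_values[rows, picked_b]).mean()
print(f"avg overestimate of chosen option (shrunk): {bayes_bias:.3f}")  # roughly 0
```

The shrinkage factor prior_var / (prior_var + noise_var) is just the standard posterior-mean formula for a normal prior with normal noise: the noisier the estimate, the harder it gets pulled back toward the prior.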

II. The Optionality Framework of Career Choice

It’s unthinkably easy for college students to get caught in the trap of optimizing for optionality. Study something technical and generic enough that it lets you fit into any role, at any workplace you want. With your technical, generic degree, you perform technical, generic jobs, and the thinking goes: if I want to work on some important problem or interesting job X, there is a certainty threshold above which I will commit to X. If I am below that threshold, I will do Y things to preserve the ability to do X at some undetermined point in the future. Of course, there are lots of hidden assumptions. The option-preserving Y jobs are generally high-paying and prestigious, and a smaller subset of them are high-paying, prestigious, and demand high levels of intelligence. If someone is on the fence about working on an important but specific problem, the appeal of the Y jobs is enough to outweigh the expected value of working on it immediately, and they defer to Y. This is roughly our current situation, especially coming out of good schools like Harvard, and part of why it is exceptionally difficult to convince Harvard students to pass up high-paying jobs like consulting or investment banking in favor of committing their 80,000 hours to AI alignment research. Optimizing for optionality can look like a useful mental model: what do you really lose?
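
Spelled out, the implicit rule is something like the sketch below; the certainty score and the 0.8 threshold are hypothetical placeholders of mine, not quantities anyone actually measures.

```python
# A sketch of the optionality rule described above. `certainty_about_x` and the
# threshold are hypothetical placeholders; nobody actually has these numbers.
def choose(certainty_about_x: float, threshold: float = 0.8) -> str:
    """Commit to X only above the certainty threshold; otherwise take an
    option-preserving job Y and defer X to some undetermined future point."""
    if certainty_about_x >= threshold:
        return "work on X now"
    return "take option-preserving job Y; keep X open for later"

print(choose(0.9))  # work on X now
print(choose(0.5))  # take option-preserving job Y; keep X open for later
```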

III. The Optimizer’s Curse for Careers

Adopting this model, I think, leads to a perverse set of incentives. Implicit in our expected-value calculation for a generic, high-paying, prestigious job is that it preserves options for later. A job in investment banking may catapult you into upper management at a major company, or enable a lateral shift to private equity. But this is where the Optimizer’s Curse plays a role. We have robust empirical evidence that we systematically overestimate the expected values of the choices we select, and if preserved optionality is a component of those expected values, then we are also overestimating our ability to preserve options. We may be far more restricted than we think. If so, I can think of no reason not to pursue the high-impact thing you want to do right now: it is better to overestimate the expected value of helping others and the marginal cost-effectiveness of your time and money than to overestimate how many options you will preserve by not doing that high-impact thing.
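
As a toy illustration with entirely made-up numbers (the figures below are mine, not data), the structure of the argument looks like this: job Y only beats doing X now because of its estimated option value, and that estimate is exactly the kind the curse says we inflate.

```python
# Toy expected-value comparison with made-up numbers; only the structure matters.
ev_x_now = 100                 # hypothetical impact of doing X immediately
ev_y_direct = 60               # hypothetical direct value of option-preserving job Y
estimated_option_value = 50    # how much we *think* Y keeps X open for later
overestimation_discount = 0.5  # assumption: suppose we overestimate option value ~2x

naive_ev_y = ev_y_direct + estimated_option_value
corrected_ev_y = ev_y_direct + overestimation_discount * estimated_option_value

print(f"naive:     EV(Y) = {naive_ev_y} vs EV(X now) = {ev_x_now}")          # Y looks better
print(f"corrected: EV(Y) = {corrected_ev_y:.0f} vs EV(X now) = {ev_x_now}")  # X now wins
```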