Josh Greene

Joshua Greene is a professor in the Harvard Department of Psychology, where he runs the Moral Cognition Lab. He received his bachelor’s degree from Harvard and then a PhD in philosophy from Princeton, where he was mentored by many bright lights of analytic philosophy, including Peter Singer, who served on his committee. After a post-doc in a cognitive neuroscience lab, Greene returned to Harvard to start his own lab, which studies both descriptive and normative questions in psychology and philosophy.

The best blog on the internet, SlateStarCodex, says of Josh:

"My own table was moderated by a Harvard philosophy professor [in fact, Josh transitioned to psychology] who obviously deserved it. He clearly and fluidly expressed the moral principles behind each of the suggestions, encouraged us to debate them reasonably, and then led us to what seemed in hindsight, the obviously correct answer. I'd already read one of his books, but I'm planning to order more. Meanwhile, according to my girlfriend, other tables did everything short of come to blows."

Recording this episode felt similar. To see Josh’s training in action, jump to the end, where he does a brilliant job of passing the ideological Turing Test.

Below are some notes to set the context of our debate.

Greene’s Take on Consequentialism

Greene’s book Moral Tribes is one of the most accessible treatments of consequentialist morality; I would recommend it, alongside Scott Alexander’s now-archived Consequentialism FAQ, to anyone looking for an introduction. For a shorter overview of Greene’s framework, watch his EA Global talk from 2015.

The basic idea is that dual-process theory, popularized by Kahneman and Tversky’s work on judgment and decision-making, applies to moral cognition as well. Greene uses the metaphor of a camera:

The moral brain is like a dual-mode camera with both automatic settings (such as “portrait” or “landscape”) and a manual mode. Automatic settings are efficient but inflexible. Manual mode is flexible but inefficient. The moral brain’s automatic settings are the moral emotions [...], the gut-level instincts that enable cooperation within personal relationships and small groups. Manual mode, in contrast, is a general capacity for practical reasoning that can be used to solve moral problems, as well as other practical problems.

- Moral Tribes

We have fast, automatic gut reactions such as guilt, compassion, gratitude, or anger. These are handled by our System 1.

But we can also second-guess these reactions with considered judgment. In “manual mode”, we can reason explicitly about the outputs of our fast moral machinery.

If you’re not familiar with dual-process theory, a good place to start is Thinking, Fast and Slow or Rationality: From AI to Zombies. There is also an excellent short (though somewhat more technical) description by Paul Christiano, “The Monkey and the Machine.”