51 Comments
Norman Sandridge, Ph.D.

Very fascinating piece. I will continue to follow. I’m curious about this practice you identify called “template matching”. What in your view determines the kinds of template someone will use in any given situation? I have read, going back to Gordon Allport’s book on the nature of prejudice, that prejudiced people have a harder time reading other people. So I would predict that such a person would develop a template that sees a lot of harm in violating promises and oaths, whereas someone who is better at reading others would be more forgiving of broken promises and oaths because they would still feel like they understood a person and could understand why they did it. This would suggest there is some underlying moral structure that explains why some templates get formed and others do not. I would imagine that some template formation is also cultural.

Kurt Gray

That’s a really great point. All humans are born with a basic harm-based template, but based on our experiences, upbringing, and culture, we elaborate on that template. Different people make different assumptions about who’s vulnerable to harm, who’s capable of perpetrating harm, and what particular acts are harmful.

Norman Sandridge, Ph.D.

When you say that we “elaborate” on our basic template, do you mean that all templates are biologically/neurologically/genetically the same, i.e., that different people aren’t born with different templates? It seems like a good way to test this claim would be with twin studies. Do identical twins raised apart have the same template(s) or not?

Prof. Lanner

Fascinating. I wonder how this aligns with the hypothesis of 'morality as cooperation', or perhaps the research linking political orientations to psychological perceptions of boundaries and hierarchies? I will say, though, I do think it generally true that people are poor at articulating their moral intuitions, as Haidt observed, and that universities tend to make people not divulge their true moral intuitions because they can't necessarily rationalize them in the 'acceptable' ways of doing so. The ideas you present actually slightly align with my thinking regarding hierarchical thinking and morality; while not nearly as well-written or scientific as this article, see it here https://moralstructure.substack.com/p/the-evolutionary-psychology-of-social?r=hnzyk&utm_campaign=post&utm_medium=web

Kurt Gray

Thanks for sending! And yes it connects with MaC. I think cooperation is all about preventing harm, although the authors of that theory (e.g., Curry) usually side more with Haidt and modularity. (I'm not sure why)

Golden Hue

I’ve read Haidt’s MFT work in the popular press. Your theory makes sense. I had not noticed the biases in MFT until you pointed them out. Your theory reminds me of a study of infants who were presented with scenarios involving harm perpetrated on a circle by a triangle—very young babies would react when they saw “bullying.” So we do understand and identify victimization very early in life.

Kurt Gray

Yes exactly! I like that study very much. It’s a perfect demonstration that we are born caring about victimization. Disgust, on the other hand, not so much. Kids put all sorts of gross stuff in their mouths when they’re very young!

Jake Tuber

“Criticize ideas, not the person” is foundational to nearly all groups engaged in productive discourse over any period of time. It’s corporate jargon, it’s taught in the military, but it’s wild to me that it doesn’t seem to be taught in academia.

Kurt Gray

Great point. The idea is taught in academia, and in our classrooms. But the principle can be hard to follow, especially because people get so attached to their own theories!

Jake Tuber

The incentives are not well-aligned, that’s for sure.

Nevin J. Harper

Honest, straightforward, and a solid example of what researchers experience interpersonally. Been there. Thanks for sharing.

Randall Paul

Ah, Kurt, this was a meta-message for all humans that are seeking the whole truth together in a never-ending quest. I like Thomas Sowell’s ‘no final solutions’ message. In your model, could it be that at times positive outcomes like aesthetic or spiritual or interpersonal joy are more important to measure than harm? Conflict over the most, the best, can be potent with very little harm in the equation. Am I on to something, or will you file this under ‘the little harm of missing out’ is still more important than the joy of ‘arguing over the best sunset ever seen!’ Thanks for all you do. Best wishes, Randall Paul

Kurt Gray

First, thanks for your support Randall — and for thinking carefully about the ideas here. You're onto something interesting, and I think it actually helps illustrate the framework. People can argue passionately over the best sunset or the most profound spiritual experience. Those arguments can get heated. But they don't usually feel moral — nobody walks away thinking the other person is a bad person. The moment it does start feeling moral — someone's experience gets dismissed, someone's tradition gets mocked — that's when a victim enters the picture. So the joy cases are actually good evidence for the theory: where there's no perceived victim, there's passion but not outrage. The difference is whether someone seems to be getting hurt. Does that feel reasonable?

Randall Paul

Yes it does. Thx. If you want to see a hilarious movie about how ruining an aesthetic experience can bring violent rage, watch Bullets Over Broadway by Woody Allen. It shows how harm can be viewed very differently depending on the person.

Todd Kashdan

Beautifully detailed article

Kurt Gray

Thanks Todd!

Citizen Stewart

This was like six articles, each one very well done.

Kurt Gray

Thank you!

Christian Waugh

Very well said Kurt! I especially want to highlight the part about science really being decided by those with no dog in the fight as they look for some framework that works for them. I count myself as one of those and after seeing talks and papers by both you and Jon, there was just something that clicked about your framework. Much more consistent with how a mind would have evolved a moral compass across our ancient history. And it genuinely helps me understand others. Keep up the good work!!

Kurt Gray

Thanks Christian!

And yes I agree ;)

Darby Saxbe

Great article and I agree about the way science and theory tend to evolve within psych - but once you're on a main slide at SPSP, you've made it, right? Also, how cool that you know Jesse Graham. Love that guy. He was my psych colleague at USC before he moved over to the biz school at Utah and we were all bummed when he left us!

Kurt Gray

Thanks Darby! And yeah, Jesse would be an amazing colleague!

Jake Tuber

Great piece, btw. Excited to dig into this research.

Peter Gerdes

At least to me, the quote about "a testing garage rigged in [our] favor" feels like an unremarkable argument that your experiment didn't provide the evidence you think it does, not any implication of misconduct. I appreciate it may not have been the most tactful way to phrase the point, but of course whenever we think another scientist is wrong we are implicitly saying they are evaluating the data in a biased way.

I don't mention that as a criticism or to suggest you said anything in conflict. To the contrary, I think it makes for a perfect illustration of how science is a contact sport and egos are going to get hurt. The very fact that it is so easy to interpret remarks differently is a great argument for an ideal where we try as hard as possible to understand the remarks of other scientists as charitably as possible.

And I think this article, even if I have some quibbles with the results, is a great illustration both of why it is hard to challenge a paradigm and why it is so important to do so.

Gene Joy

I've read Haidt's book "The Righteous Mind", and I remember a bit about the "foundations" he discussed and that you mentioned here. However, what I was more interested in when reading his book was the rider/elephant analogy that distinguished between moral intuition (the "elephant") and rational thought (the "rider"). Are these distinctions still meaningful in the dyadic model you discuss in this article?

Kurt Gray

Yes, great point. The distinction between rational versus intuitive ways of thinking is well supported by decades of work in psychology. A harm/victim-based template operates intuitively. That’s how you can continue to feel like something is harmful even if someone tells you it’s “objectively” harmless!

Matt Motyl

Excellent piece, Kurt.

Kurt Gray

Thanks Matt, I thought you’d appreciate it :)

Paul Eastwick

Great post, Kurt - I always learn a ton from your writings!

Kurt Gray

Thanks Paul!

Bry Willis

Kurt, this strikes me as a real advance over Haidt. Recasting political disagreement in terms of assumptions about vulnerability gets much closer to the action than the familiar inventory of 'foundations', not least because it acknowledges that people often perceive the moral landscape differently before any explicit judgment is made. In that sense, your view is much closer to my own sense that disagreement is structured upstream of articulated reasons. Where I still hesitate is in the universalism. You seem to argue that beneath these divergences lies one harm-based moral mind, with conflict arising over who is vulnerable. I suspect the fracture often runs deeper: not just disagreement over who the victim is, but over what counts as harm, what counts as victimhood, and what moral reality even consists in. So I take this as an important correction to Moral Foundations Theory, but not quite a full break from the reconciliation impulse that still seeks one shared substrate beneath genuinely divergent moral grammars. Cheers.

Kurt Gray

Sorry for the delay, Bry! I think you're exactly right that there's more to disagree about than just "who the victim is": there's also who is an agent, and how harm can be caused. There are lots of ontological assumptions surrounding harm. We try to unpack them in our more scholarly work!

And I appreciate the kind words!