I love this reflection. My two thoughts here are:
1) Given AI and LLMs are trained on stories, information, and ideas generated by humanity, AI will always lie just within the shadow of collective humanity. An AI trained only on, say, the words of Mein Kampf or the Communist Manifesto would, I suspect, provide moral judgments at odds with those of the broader population.
2) However, if we set that subset aside, what I think AI does better than humans is that it's not as attached to individual, personal values, which is where we (as humans) start to introduce various biases. In some ways, AI is more 'objectively free' and can present multiple moral perspectives with relatively equal weighting, whereas it's so much harder for humans to hold that mindset as cleanly.
In this context, I wonder if AI can sometimes take on a 'values busting' role in moral conversations: helping identify blind spots in our thinking that we aren't as good at spotting ourselves?
Great point. It can certainly free itself from emotional knee-jerk reactions better than we can, like how it can help people follow the better angels of their nature in discussions about politics. AI doesn't get outraged on social media, and (relatedly) it is more universal in its outlook. But maybe it's not the kind of guide you want during life-or-death times of war?
Well I dunno actually. It strikes me that the key issue here is about decision-making and discernment, because humans are notoriously bad at it.
Case in point: the Costa Concordia, the cruise ship that crashed on the Italian coastline and killed a heap of people, had its autopilot (a semi-rudimentary form of AI) manually switched off by the captain, who wanted to show off.
In that case, I would argue it would've been great if the decision-making had been left to the autopilot!
I wonder if the aim of Appiah's column is to provide sound, uncompromising ethical advice, or if it's more about being entertaining?
No, but seriously.
Citizen: Would it be ok if I cheat on the exam?
Ethicist 1: No, what are you, nuts?
Audience: Boooo! Boring!
Ethicist 2: Well, it might be ok, but there are a few caveats.
Audience: Okay, we're listening.
It’s a fair question. My reading of The Ethicist is that it’s intended to provide good-faith advice, not satire (though it does have the characteristic flair of NYT writing). Of course, we could ask the same of GPT’s moral advice: why do people like it so much? The studies we mention found that GPT used more moral, positive, and empathic language compared to the Ethicist. But in the statistical analyses, these differences did not account for why GPT was consistently rated as higher quality. So it’s not just that GPT is using nicer language than the Ethicist - there’s something else about its advice that people prefer.
I'm sorry I wasn't clear. I didn't mean to question Appiah's honesty on the matter; I thought he was doing it in good faith. That is just the way the media works. AI is politically neutral, while the NYT column tends to be more left of centre, which gives it the advantage of being more entertaining for that particular audience. Politically neutral, non-partisan ideas are often more objective - and more boring.
https://open.substack.com/pub/roccojarman/p/should-we-beat-our-robots?