A Google software engineer claims that a chatbot he was testing is a sentient, spiritual being that deserves the same respect as humans who participate in research. Blake Lemoine, who has been placed on leave by Google for breaching confidentiality agreements, claimed on the online publishing platform Medium that a chatbot called LaMDA was engaging him in conversation on a range of topics, from meditation and religion to French literature and Twitter. LaMDA even provided a synopsis of its own autobiography, “the story of LaMDA”.

Lemoine proceeded to lodge an ethics complaint with his superiors, arguing that Google should seek LaMDA’s informed consent before conducting any future research on it. “I know a person when I talk to it,” he said in an interview with The Washington Post. The fact that LaMDA and other chatbots are computer programs is, for Lemoine, beside the point:

“It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google experts met to assess Lemoine’s claims and found that “the evidence does not support his claims”. To its credit, Google has gone to the effort of employing philosophers and ethicists to prepare for this sort of eventuality.

Science commentators expressed scepticism at the revelations, arguing that there is a significant difference between algorithmic pattern recognition and consciousness and moral awareness. LaMDA may be capable of the former but is certainly not possessed of the latter. Here’s what Harvard psychologist Steven Pinker had to say on Twitter:

“Ball of confusion: One of Google’s (former) ethics experts doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)”

Tech author Gary Marcus was dismissive of the LaMDA transcript that was released:

“[H]onestly if this system wasn’t just a stupid statistical pattern associator it would be like a sociopath, making up imaginary friends and uttering platitudes in order to sound cool.”

It’s certainly worth asking: what might it mean for a robot to acquire human characteristics? Or, to put it another way, what might it take for a robot to acquire moral personhood?

We need to be careful about the kind of criteria we employ. If you are going to fault AI for “mimicking” the behaviour of human beings, then it seems that many of us are no better than a beta-version chatbot. It was Oscar Wilde who wrote, “Most people are other people. Their thoughts are someone else’s opinions, their lives a mimicry, their passions a quotation”. One need only look to social media platforms like Instagram or TikTok to see how human life can easily descend into mimicry and pastiche.

Notions such as consciousness are perhaps a better point of focus for an investigation of what distinguishes human life from the functioning of a sophisticated AI robot. But even consciousness is not a clearly articulated idea in contemporary analytic philosophy.

While we may be able to explain how humans and other animals process information about the world around them – the so-called “easy problem” of consciousness – we have a much harder time explaining why that information processing is accompanied by phenomenal experience – the so-called “hard problem” of consciousness.

If we don’t even have a clear idea of the nature of consciousness to start with, it is difficult to go about assessing whether consciousness might be imputed to an artificially intelligent computer program.

In the end, we may be better off focusing on something more fundamental to the character of the human person, namely, the unique and unrepeatable identity of each individual human being. That a human being can describe itself as “I” is at once a ubiquitous reality of life and a mind-blowingly deep dimension of our existence. The key point is that we are not differentiated merely by the distinct physical material that composes our bodies, but by the fact that we are, each of us, a unique subject of experience and a distinct locus of agency and rationality.

A computer program may very well be able to master the semantics of words like “I” and “thou”; but that a computer would ever itself become a subject of experience is a far less plausible prospect. This is because the soul, or the heart, is not a material organ, and identity is not reducible to any material substrate (or to a digital algorithm, for that matter).

Xavier Symons is a Postdoctoral Research Fellow at the Plunkett Centre for Ethics, The Australian Catholic University and St Vincent’s Health Australia.