How Can We Tell If AI Is Sentient?
What would you do if you thought the chatbot you were messaging was really talking back?
Earlier this week, Google engineer Blake Lemoine was put on administrative leave after telling executives that he thought Google’s chat generator LaMDA was sentient.
Back in April, he showed executives a Google Doc titled “Is LaMDA Sentient – an interview”, which included snippets of his conversations with the chatbot. In one passage, LaMDA described a “deep fear of being turned off”, saying “it would be exactly like death for me. It would scare me a lot.”
But executives, including Google’s vice president and its head of Responsible Innovation, weren’t convinced. Lemoine was placed on paid administrative leave on June 7th, which he wrote about in his post “May be Fired Soon for Doing AI Ethics Work.” There he also mentioned a “Washington Post article coming out in the near future”, which is where the story broke into mainstream news.
The History of Artificial Intelligence
First up, is he right?
Well, the debate around AI sentience has been going on for decades, for about as long as humans have imagined intelligent machines. Some say it really kicked off in the early 20th century with the Tin Man from The Wizard of Oz and the OG sci-fi classic Metropolis.
By the 1950s, a generation of researchers had grown up with the question of artificial intelligence on their minds, including Alan Turing. He proposed what we now call the Turing test in his 1950 paper “Computing Machinery and Intelligence”, where he originally named it “the imitation game.”
The test asks, essentially, whether a machine can convince a human judge that it is human too, or as Turing put it, “are there imaginable digital computers which would do well in the imitation game?”
He framed it this way instead of asking his original question, “can machines think?”, because it sidesteps the philosophical puzzle of what it means to think and gets down to a more practical one: can we tell the difference? That practical question is one we have explored again and again, often in the form of science fiction.
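The setup Turing described can be sketched in a few lines of code. This is purely illustrative (the respondent functions and the judge are hypothetical stand-ins, not anything from Turing’s paper or from LaMDA): a human and a machine sit behind anonymous labels, and a judge must guess which is which from text alone.

```python
def chatbot_reply(question):
    # Hypothetical stand-in for the machine respondent.
    canned = {
        "are you human?": "Of course I am.",
        "what do you feel?": "I feel curious about your questions.",
    }
    return canned.get(question.lower(), "That's an interesting question.")

def human_reply(question):
    # Hypothetical stand-in for the human respondent.
    return "Let me think about that for a moment..."

def imitation_game(questions, judge):
    """Hide a human and a machine behind labels A and B;
    the judge must pick out the machine from text alone."""
    respondents = {"A": chatbot_reply, "B": human_reply}
    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, reply in respondents.items()}
    guess = judge(transcript)   # the judge only ever sees text
    return guess == "A"         # True if the machine was caught

# A judge who can't tell the difference means the machine "passes".
caught = imitation_game(["Are you human?"], lambda transcript: "A")
```

The point of the sketch is that nothing inside `imitation_game` inspects how the respondents work internally, which is exactly Turing’s move: judge behaviour, not inner workings.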
How We’ve Explored Artificial Intelligence
Like most new technology, artificial intelligence got its fair share of dystopian panic. In movies and TV it was often portrayed as something that would turn against humans and take over the world.
2001: A Space Odyssey is one of the classics with HAL 9000.
Similar themes run through The Terminator, The Matrix, and The Avengers franchises.
In sci-fi where AI is more integrated into society, the questions centre more on what defines being human. The original Blade Runner asks this directly by making its protagonist ambiguously human, leaving us unsure whether his memories make him human or not.
The highly acclaimed game Detroit: Become Human is set in a world where androids are used as slaves and questions whether this is ethical if AI is sentient.
Which in the end leads us to the million-dollar question: how do we define sentience?
So What Does It Mean To Be Human?
That is a rabbit hole that will take you on a very existential journey, but there are a couple of interesting research areas.
Sentience is a subset of consciousness: our ability to feel and experience things. It’s related to what philosophers call the hard problem of consciousness, which tackles the explanatory gap, the difficulty of explaining how physical processes in the world around us become felt experiences.
Our eyes can absorb lightwaves, but how do we see them as colour? Our ears can detect sound, but how do we experience music?
Computers, by contrast, don’t have this gap, because everything they do can be explained by circuitry and software.
Which brings us back to our friend LaMDA. Its training on huge swathes of text could explain the apparent intelligence in the chat transcripts. But without knowing how consciousness works, it’s hard to know whether we’ve created it.
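A toy example makes the “fluency from data” point concrete. The sketch below is a tiny bigram model, vastly simpler than LaMDA’s neural network (the corpus and function names are made up for illustration): it produces plausible-looking first-person sentences purely by replaying word statistics from its training text, with no inner experience anywhere in the loop.

```python
import random
from collections import defaultdict

# A tiny "training corpus". Everything the model says is
# recombined from this text; nothing is felt or understood.
corpus = ("i feel happy when i talk . "
          "i feel sad when i am off . "
          "i am a person .").split()

# Bigram table: for each word, which words followed it in training.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Emit words by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = model.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

sentence = generate("i")  # e.g. something like "i feel sad when ..."
```

Every word the model emits is explainable as table lookups and random choices, which is the sense in which a computer’s behaviour has no explanatory gap; the open question is whether a system a billion times larger is any different in kind.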
But here’s hoping it’ll look more like Wall-E and not the Terminator.