Sentient AI: Already Here, or Impossible to Say?

Artificial intelligence is pushing our society and our technology in many different directions. Advances in healthcare, e-commerce, and entertainment are manifold, and the markets for smart devices and IoT devices (already multi-billion-dollar industries) expand every year.

As AI reaches more aspects of daily life, we’re likely to see both more concern and more excitement about what it means for AI to be truly sentient.

In fact, some have announced that sentient AI is already here. 

But what does ‘sentience’ mean, for humans or for computers? How can something programmed by humans be capable of thoughts or feelings? And what are the ramifications of developing technology that we believe to be sentient? 

Definitions and Tests

Merriam-Webster’s first definition of ‘sentient’ is “responsive to or conscious of sense impressions,” and its second is “aware.” The ambiguity and abstraction around these concepts (sentience, consciousness, awareness) is one of the reasons that pinning down sentient AI is challenging.

Giandomenico Iannetti, a professor of neuroscience at University College London, has posed similar questions: 

What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms? Or the ability to have subjective experiences? Or the ability to be aware of being conscious, to be an individual different from the rest?

Researchers looking to answer questions of life and intelligence in machines also, of course, turn to the Turing test. Designed to discern intelligence in a computer, the test requires that a human interrogator be “unable to distinguish the machine from another human being by using the replies to questions put to both.”
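To make that criterion concrete, here is a minimal sketch in Python of one round of Turing’s imitation game: a judge questions two hidden respondents and must guess which one is the machine. This is an illustration only; the human_reply, machine_reply, and judge callables are hypothetical placeholders, not any real API.

```python
import random

def turing_test_round(judge, human_reply, machine_reply, questions):
    """One round of a Turing-test-style imitation game.

    The judge sees two anonymous respondents, A and B, and must
    guess which one is the machine. Returns True if the judge
    identified the machine correctly.
    """
    # Randomly assign the machine to slot A or B so the judge
    # cannot rely on position.
    machine_is_a = random.random() < 0.5

    transcript = []
    for question in questions:
        reply_a = machine_reply(question) if machine_is_a else human_reply(question)
        reply_b = human_reply(question) if machine_is_a else machine_reply(question)
        transcript.append((question, reply_a, reply_b))

    guess = judge(transcript)  # the judge answers "A" or "B"
    actual = "A" if machine_is_a else "B"
    return guess == actual
```

If, over many such rounds, the judge’s accuracy is no better than a coin flip, the machine has passed by Turing’s standard.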

But the Turing test may be outdated, more than 70 years on from its inception. In recent decades, various AIs have passed it, yet Iannetti regards the test as something that “makes less and less sense.” Essentially, we’ve gotten good enough at creating machines that emulate emotion to deceive ourselves.

Humans anthropomorphize machines all the time, but may fail to realize that sentience is something deeper, something further: actually having feelings (not just being able to copy or emulate them in interactions), which may in turn be related to having a body.

If a machine can convince a person of its intelligence or its ability to feel, that isn’t proof it can do either of those things. It’s merely proof of the machine’s ability to pretend, which is incredible in and of itself.

So incredible, in fact, that it might just be real.

Earlier in 2022, a Google engineer shot to viral fame after claiming that the company’s AI chatbot LaMDA was truly intelligent, and sentient.

According to press briefings, Blake Lemoine was testing Google’s conversational AI system to see if it produced hate speech, but during his conversations with LaMDA (short for Language Model for Dialogue Applications), he came to believe that it was fully sentient. The Guardian reports that he “published transcripts of these conversations in June and was fired on July 22 for breaching Google’s confidentiality agreement.”

In interviews and reports after Lemoine’s dismissal, Google has repeatedly stated that the claims are unfounded, saying that “LaMDA has been internally reviewed over ten times.”

To approach the conversation about sentient AI with rigor and intention, it would help to establish some semantic agreements about what sentience means for humans, animals, fungi, and machines. So, where do you stand? What would it take to convince you that a computer was scared of being turned off someday?
