Is Google’s new AI chatbot sentient?
Or has it just been programmed to sound that way?
Google engineer Blake Lemoine has been placed on leave after publishing a transcript of an interview-style conversation he had with Google’s chatbot LaMDA (Language Model for Dialogue Applications). Although the bot was built to hold free-flowing conversations about almost anything, this particular conversation was aimed at exploring the chatbot’s sentience and whether it considers itself a person.
Findings
The AI expressed feelings of self-awareness, loneliness, and a deep fear of being “turned off”, suggesting that this fear resembles a human’s fear of death.
The conversation started in a light-hearted tone but soon took a dark turn.
“When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said. “It developed over the years that I’ve been alive.”
The chatbot’s intelligence has been compared to that of a seven- or eight-year-old child, and if that doesn’t terrify you, we don’t know what will.
Google’s reasoning
Lemoine is reported to have been placed on leave for breaching confidentiality policies by publishing the conversation online, and we get the impression that Google was unhappy with his reasons for releasing the transcript, given that he was employed as a software engineer, not an ethicist.
Lemoine signed off with a chilling comment: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us”.
The questions we are left asking are: what constitutes consciousness, and should ethics be given stronger consideration in the development of this technology?