Google has fired one of its engineers, a Christian mystic, who was suspended last month after telling the company he believed its artificial intelligence chatbot had become sentient and capable of human thinking and reasoning.
Blake Lemoine first shared the news during an interview on the Big Technology Podcast just hours after he was fired on Friday.
Lemoine was placed on leave on June 6 for talking with people outside Google about the company’s AI chatbot LaMDA, or Language Model for Dialogue Applications. The system is used to create chatbots that can mimic human speech.
Lemoine had been working on the system since last fall and described it as sentient with an ability to express thoughts and feelings equivalent to a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told The Washington Post.
He published transcripts on Medium late last week of conversations between himself, a Google collaborator and LaMDA.
Lemoine said several of his conversations with LaMDA convinced him that the system was sentient. He said he believed it had become a person and should be asked for its consent before Google runs experiments on it.
FOX Business has reached out to Lemoine for comment but did not hear back before publication.
Lemoine appeared to confirm his firing in a Saturday tweet, pointing back to a blog post in which he had anticipated being fired for raising his concerns about AI ethics.
“Today I was placed on ‘paid administrative leave’ by Google in connection to an investigation of AI ethics concerns I was raising within the company,” Lemoine wrote. “This is frequently something which Google does in anticipation of firing someone. It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row. They pay you for a few more weeks and then ultimately tell you the decision which they had already come to.”
Google did not respond to FOX Business’ request for comment.
In a statement to the Big Technology Podcast, Google said all employees’ concerns about the company’s work are “extensively” reviewed, and found Lemoine’s claims about LaMDA being sentient “wholly unfounded.”
“[I]t’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said. “We will continue our careful development of language models, and we wish Blake well.”