Google suspends engineer who claimed AI Bot became sentient
Blake Lemoine, a software engineer on Google’s artificial intelligence development team, has publicly claimed he encountered “sentient” AI on the company’s servers, after being suspended for sharing confidential project information with third parties.
The Alphabet Inc. unit placed the researcher on paid leave early last week, saying he violated the company’s confidentiality policy, he wrote in a Medium post titled “Maybe soon to be fired for doing work on AI ethics”. In the post, he links to former members of Google’s AI ethics group, such as Margaret Mitchell, who were likewise eventually fired by the company after raising concerns.
The Washington Post published an interview with Lemoine on Saturday, in which he said he concluded the Google AI he was interacting with was a person, “in his capacity as a priest, not a scientist.” The AI in question is called LaMDA, or Language Model for Dialogue Applications, and is used to generate chatbots that interact with human users by adopting various personality tropes. Lemoine said he tried to run experiments to prove his conclusion, but was rebuffed by senior company executives when he raised the issue internally.
“Some in the wider AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” said Google spokesman Brian Gabriel. “Our team – including ethicists and technologists – reviewed Blake’s concerns in accordance with our AI principles and advised him that the evidence does not support his claims.”
Asked about Lemoine’s suspension, the company said it does not comment on personnel matters.