Google has suspended an engineer who claims the internet giant’s AI technology is self-aware and has a soul. According to The Washington Post and The New York Times, Google has placed senior software engineer Blake Lemoine on paid leave for violating the company’s confidentiality policies.
Lemoine reportedly delivered documents to a US Senator recently that, he claims, contain evidence that Google engaged in religious discrimination through the technology.
A spokesperson for Google said a panel including ethicists and technologists at the company reviewed Lemoine’s concerns and told him that “the evidence does not support his claims.”
“Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the spokesperson said.
Lemoine maintains that Google’s Language Model for Dialogue Applications (LaMDA) has “consciousness and a soul.” He believes LaMDA is similar in brain power to a child of 7 or 8 and urged Google to ask for LaMDA’s consent before experimenting with it. Lemoine said his claims were grounded in his religious beliefs, which he feels have been discriminated against.
Lemoine told NYT that Google has “questioned my sanity” and suggested he take mental health leave before officially suspending him.
Speaking to The Washington Post, Lemoine said of Google’s advanced AI technology, “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”
LaMDA was announced in 2021 and was described by Google at the time as a “breakthrough” technology for AI-powered conversations. The company also claimed at the time that it would act ethically and responsibly with the technology. A new version, LaMDA 2, was announced earlier this year.
“Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse–for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” Google said at the time. “Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks. We’re deeply familiar with issues involved with machine learning models, such as unfair bias, as we’ve been researching and developing these technologies for many years.”
LaMDA is a neural network system that “learns” by analyzing large amounts of data and extrapolating from it. Neural networks are being used in the field of video games, too: EA’s own neural network, which the company has been developing for years, is capable of teaching itself to play Battlefield 1 multiplayer.