Technology

Google debate over ‘sentient’ bots overshadows deeper AI issues

A Google software engineer was suspended after going public with his claims of encountering “sentient” artificial intelligence on the company’s servers — spurring a debate about how and whether AI can achieve consciousness. Researchers say it’s an unfortunate distraction from more pressing issues in the industry.
The engineer, Blake Lemoine, said he believed that Google’s AI chatbot was capable of expressing human emotion, raising ethical issues. Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.
Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems” that can cause real-world harm.
Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs.
The Washington Post on Saturday ran an interview with Lemoine, who conversed with an AI system called LaMDA, or Language Model for Dialogue Applications, a framework that Google uses to build specialized chatbots. The system has been trained on trillions of words from the internet in order to mimic human conversation. In his conversation with the chatbot, Lemoine said he concluded that the AI was a sentient being that should have its own rights. He said the feeling was not scientific, but religious: “who am I to tell God where he can and can’t put souls?” he said on Twitter.
Employees at Alphabet Inc.’s Google were largely silent in internal channels besides Memegen, where they shared a few bland memes, according to a person familiar with the matter. But throughout the weekend and on Monday, researchers pushed back on the notion that the AI was truly sentient, saying the evidence only indicated a highly capable system of human mimicry, not sentience itself. “It is mimicking perceptions or feelings from the training data it was given — smartly and specifically designed to seem like it understands,” said Jana Eggers, the chief executive officer of the AI startup Nara Logics.
The architecture of LaMDA “simply doesn’t support some key capabilities of human-like consciousness,” said Max Kreminski, a researcher at the University of California, Santa Cruz, who studies computational media. If LaMDA is like other large language models, he said, it wouldn’t learn from its interactions with human users because “the neural network weights of the deployed model are frozen.” It would also have no other form of long-term storage that it could write information to, meaning it wouldn’t be able to “think” in the background.
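A minimal sketch of the point Kreminski is making, using the open-source GPT-2 model from the Hugging Face transformers library as a stand-in (LaMDA itself is not publicly available, and this is not Google’s code): at inference time the model’s weights are frozen, and nothing persists between calls except whatever text the caller passes back in as the prompt.

```python
# Illustrative sketch only: a deployed language model generating text with
# frozen weights and no memory beyond the prompt it receives on each call.
# GPT-2 is used here as a stand-in; LaMDA is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # hypothetical stand-in model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()                      # inference mode: no training happens
for p in model.parameters():
    p.requires_grad = False       # weights are frozen; nothing is learned

def reply(prompt: str) -> str:
    """Generate a continuation. All state lives in the prompt text;
    once the call returns, the model retains nothing."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(inputs["input_ids"], max_new_tokens=40)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Two calls: the second knows nothing about the first unless the caller
# manually stitches the earlier exchange into the new prompt.
print(reply("Hello, how are you feeling today?"))
print(reply("What did I just ask you?"))
```

Under those assumptions, any apparent “memory” in a chatbot built this way comes from the surrounding software re-feeding the conversation history into the prompt, not from the model learning or thinking between turns.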
In a response to Lemoine’s claims, Google said that LaMDA can follow along with prompts and leading questions, giving it an appearance of being able to riff on any topic. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” said Chris Pappas, a Google spokesperson. “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

The debate over sentience in robots has been carried out alongside its portrayal in science fiction and popular culture, in stories and movies with AI romantic partners or AI villains. So the debate had an easy path to the mainstream. “Instead of discussing the harms of these companies,” such as sexism, racism and centralization of power created by these AI systems, everyone “spent the whole weekend discussing sentience,” Timnit Gebru, formerly co-lead of Google’s ethical AI group, said on Twitter. “Derailing mission accomplished.”
The earliest chatbots of the 1960s and ’70s, including ELIZA and PARRY, generated headlines for their ability to be conversational with humans. In more recent years, the GPT-3 language model from OpenAI, the lab founded by Tesla CEO Elon Musk and others, has demonstrated even more cutting-edge abilities, including the ability to read and write. But from a scientific perspective, there is no evidence that human intelligence or consciousness are embedded in these systems, said Bart Selman, a professor of computer science at Cornell University who studies artificial intelligence. LaMDA, he said, “is just another example in this long history.”
In fact, AI systems don’t currently reason about the effects of their answers or behaviors on people or society, said Mark Riedl, a professor and researcher at the Georgia Institute of Technology. And that’s a vulnerability of the technology. “An AI system may not be toxic or have prejudicial bias but still not understand it may be inappropriate to talk about suicide or violence in some circumstances,” Riedl said. “The research is still immature and ongoing, even as there is a rush to deployment.”
Technology companies like Google and Meta Platforms Inc. also deploy AI to moderate content on their enormous platforms — yet plenty of toxic language and posts can still slip through their automated systems. In order to mitigate the shortcomings of those systems, the companies must employ hundreds of thousands of human moderators in order to ensure that hate speech, misinformation and extremist content on these platforms are properly labeled and moderated, and even then the companies are often deficient.
The focus on AI sentience “further hides” the existence and, in some cases, the reportedly inhumane working conditions of these laborers, said the University of Washington’s Bender.
It also obfuscates the chain of responsibility when AI systems make mistakes. In a now-famous blunder of its AI technology, Google in 2015 issued a public apology after the company’s Photos service was found to be mistakenly labeling photos of a Black software developer and his friend as “gorillas.” As many as three years later, the company admitted its fix was not an improvement to the underlying AI system; instead it erased all results for the search terms “gorilla,” “chimp,” and “monkey.”
Putting an emphasis on AI sentience would have given Google the leeway to blame the issue on the intelligent AI making such a decision, Bender said. “The company could say, ‘Oh, the software made a mistake,’” she said. “Well no, your company created that software. You are accountable for that mistake. And the discourse about sentience muddies that in bad ways.”
AI not only provides a way for humans to abdicate their responsibility for making fair decisions to a machine, it often simply replicates the systemic biases of the data on which it is trained, said Laura Edelson, a computer scientist at New York University. In 2016, ProPublica published a sweeping investigation into COMPAS, an algorithm used by judges, probation and parole officers to assess a criminal defendant’s likelihood to re-offend. The investigation found that the algorithm systemically predicted that Black people were at “higher risk” of committing other crimes, even if their records bore out that they did not actually do so. “Systems like that tech-wash our systemic biases,” said Edelson. “They replicate those biases but put them into the black box of ‘the algorithm’ which can’t be questioned or challenged.”
And, researchers said, because Google’s LaMDA technology is not open to outside researchers, the public and other computer scientists can only respond to what they are told by Google or through the information released by Lemoine.
“It needs to be accessible to researchers outside of Google in order to advance more research in more diverse ways,” Riedl said. “The more voices, the more diversity of research questions, the more possibility of new breakthroughs. This is in addition to the importance of diversity of racial, sexual, and lived experiences, which are currently lacking in many large tech companies.”
