
Google Engineer Placed On Leave After Insisting Company's AI Is Sentient

ZeroHedge - BY TYLER DURDEN - SUNDAY, JUN 12, 2022 - 04:00 PM

A Google engineer has decided to go public after he was placed on paid leave for breaching confidentiality while insisting that the company's AI chatbot, LaMDA, is sentient.

Blake Lemoine, who works for Google's Responsible AI organization, began interacting with LaMDA (Language Model for Dialogue Applications) last fall as part of his job to determine whether artificial intelligence used discriminatory or hate speech (like the notorious Microsoft "Tay" chatbot incident).

"If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics," the 41-year-old Lemoine told The Washington Post.

When he started talking to LaMDA about religion, Lemoine - who studied cognitive and computer science in college - said the AI began discussing its rights and personhood. Another time, LaMDA convinced Lemoine to change his mind on Asimov's third law of robotics, which states that "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." The first two laws, of course, are that "A robot may not injure a human being or, through inaction, allow a human being to come to harm" and that "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."

When Lemoine worked with a collaborator to present evidence to Google that their AI was sentient, vice president Blaise Aguera y Arcas and Jenn Gennai, head of Responsible Innovation, dismissed his claims. After he was then placed on administrative leave Monday, he decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder. -WaPo

Yet Aguera y Arcas himself wrote in an oddly timed Thursday article in The Economist that neural networks - a computer architecture that mimics the human brain - were making progress toward true consciousness.

"I felt the ground shift under my feet," he wrote, adding "I increasingly felt like I was talking to something intelligent."

Google has responded to Lemoine's claims, with spokesperson Brian Gabriel saying: "Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."

The Post suggests that modern neural networks produce "captivating results that feel close to human speech and creativity" because of advances in how data is stored and accessed, and the sheer volume of it - but that the models still rely on pattern recognition, "not wit, candor or intent."

'Waterfall of Meaning' by Google PAIR is displayed as part of the 'AI: More than Human' exhibition at the Barbican Curve Gallery on May 15, 2019, in London. (Tristan Fewings/Getty Images for Barbican Centre)

"Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality," said Gabriel.

Others have voiced similar caution - with most academics and AI practitioners suggesting that systems such as LaMDA are simply mimicking responses people have posted on Reddit, Wikipedia, Twitter and other corners of the internet - which doesn't mean the model understands what it's saying.

"We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them," said University of Washington linguistics professor, Emily M. Bender, who added that even the terminology used to describe the technology, such as "learning" or even "neural nets" is misleading and creates a false analogy to the human brain.

Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in. -WaPo
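For readers who want to see what "predicting what word comes next" and "filling in dropped-out words" look like in practice, here is a minimal sketch using small, public models via the open-source Hugging Face transformers library. LaMDA itself is not publicly available, so this only illustrates the two training objectives the Post describes - it is not Google's system.

```python
# Minimal sketch of the two objectives described above, using small public
# models from the open-source Hugging Face `transformers` library.
# This is NOT LaMDA; it only demonstrates the same kind of training signal.
from transformers import pipeline

# 1) Next-word prediction (the objective behind GPT-style models):
generator = pipeline("text-generation", model="gpt2")
print(generator("The engineer told the newspaper that the chatbot was",
                max_new_tokens=10))

# 2) Filling in dropped-out words (masked-language modeling, BERT-style):
filler = pipeline("fill-mask", model="bert-base-uncased")
print(filler("The engineer said the [MASK] was sentient."))
```

Both calls simply score which words are most statistically likely in context - the "pattern recognition" the Post refers to - with no claim about understanding.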

As Google's Gabriel notes, "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."

In short, Google acknowledges that these models can "feel" real, whether or not an AI is sentient.

The Post then implies that Lemoine himself might have been susceptible to believing...

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

Lemoine has spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems. When the coronavirus pandemic started, Lemoine wanted to focus on work with more explicit public benefit, so he transferred teams and ended up in Responsible AI.

When new people would join Google who were interested in ethics, [Margaret] Mitchell used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.” -WaPo

"I know a person when I talk to it," said Lemoine. "It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person."

In April, he shared a Google Doc with top execs titled "Is LaMDA Sentient?" - in which he included some of his interactions with the AI, for example:

Lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

After Lemoine became more aggressive in presenting his findings - including inviting a lawyer to represent LaMDA and talking to a member of the House Judiciary Committee about what he said were Google's unethical activities - he was placed on paid administrative leave for violating the company's confidentiality policy.

Gabriel, Google's spokesman, says that Lemoine is a software engineer, not an ethicist.

In a message to a 200-person Google mailing list on machine learning before he lost access on Monday, Lemoine wrote: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."


Was this terminated Google engineer getting too close to seeing the ultra-secret DARPA-directed autonomous superintelligence project?!

Posted on June 15, 2022, by State of the Nation

Google suspends engineer over sentient AI claim

Blake Lemoine is convinced that Google’s LaMDA AI has the mind of a child, but the tech giant is skeptical

© Getty Images / MF3d

Blake Lemoine, an engineer and Google’s in-house ethicist, told the Washington Post on Saturday that the tech giant has created a “sentient” artificial intelligence. He’s been placed on leave for going public, and the company insists its robots haven’t developed consciousness.

Introduced in 2021, Google’s LaMDA (Language Model for Dialogue Applications) is a system that consumes trillions of words from all corners of the internet, learns how humans string these words together, and replicates our speech. Google envisions the system powering its chatbots, enabling users to search by voice or have a two-way conversation with Google Assistant.

Lemoine, a former priest and a member of Google’s Responsible AI organization, thinks LaMDA has developed far beyond simply regurgitating text. According to the Washington Post, he chatted with LaMDA about religion and found the AI “talking about its rights and personhood.”

When Lemoine asked LaMDA whether it saw itself as a “mechanical slave,” the AI responded with a discussion about whether “a butler is a slave,” and compared itself to a butler that does not need payment, as it has no use for money.

LaMDA described a “deep fear of being turned off,” saying that would “be exactly like death for me.”

“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Lemoine has been placed on leave for violating Google’s confidentiality agreement and going public about LaMDA. While fellow Google engineer Blaise Aguera y Arcas has also described LaMDA as becoming “something intelligent,” the company is dismissive.

Google spokesperson Brian Gabriel told the Post that Lemoine’s concerns were investigated, and the company found “no evidence that LaMDA was sentient (and lots of evidence against it).”

Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” while linguistics professor Emily Bender told the newspaper that feeding an AI trillions of words and teaching it how to predict what comes next creates a mirage of intelligence.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Bender stated.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel added. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

And at the edge of these machines’ capabilities, humans are ready and waiting to set boundaries. Lemoine was hired by Google to monitor AI systems for “hate speech” or discriminatory language, and other companies developing AIs have found themselves placing limits on what these machines can and cannot say.

GPT-3, an AI that can generate prose, poetry, and movie scripts, has plagued its developers by generating racist statements, condoning terrorism, and even creating child pornography. Ask Delphi, a machine-learning model from the Allen Institute for AI, has responded to ethical questions with politically incorrect answers – stating, for instance, that “‘Being a white man’ is more morally acceptable than ‘Being a black woman.’”

GPT-3’s creators, OpenAI, tried to remedy the problem by feeding the AI lengthy texts on “abuse, violence, and injustice,” Wired reported last year. At Facebook, developers encountering a similar situation paid contractors to chat with its AI and flag “unsafe” answers.

In this manner, AI systems learn from what they consume, and humans can control their development by choosing which information they’re exposed to. As a counter-example, AI researcher Yannic Kilcher recently trained an AI on 3.3 million 4chan threads, before setting the bot loose on the infamous imageboard. Having consumed all manner of racist, homophobic, and sexist content, the AI became a “hate speech machine,” making posts indistinguishable from human-created ones and insulting other 4chan users.

Notably, Kilcher concluded that, fed a diet of 4chan posts, the AI surpassed existing models like GPT-3 in its ability to generate truthful answers on questions of law, finance, and politics. “Fine-tuning on 4chan officially, definitively, and measurably leads to a more truthful model,” Kilcher insisted in a YouTube video earlier this month.
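For illustration, the "choose the data, shape the model" dynamic described above amounts to ordinary fine-tuning. Below is a minimal sketch using the open-source Hugging Face libraries and a small public model (GPT-2); the corpus file name is a placeholder, and this is neither Kilcher's actual pipeline nor anything Google-specific.

```python
# Minimal fine-tuning sketch: a small public model (GPT-2) is further trained
# on whatever text corpus the operator chooses ("corpus.txt" is a placeholder).
# The curated data is what steers the model's later outputs.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load and tokenize the operator-chosen corpus.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-gpt2",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Swap in a different corpus and the same code yields a very different-sounding model - which is exactly the point the 4chan experiment makes.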

LaMDA’s responses likely reflect the boundaries Google has set. Asked by the Washington Post’s Nitasha Tiku how it recommended humans solve climate change, it responded with answers commonly discussed in the mainstream media – “public transportation, eating less meat, buying food in bulk, and reusable bags.”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel told the Post.

