AI Danger Alarmism Hits Overdrive With Provocateur Eliezer Yudkowsky
By The Gateway Pundit - Paul Serran - Jul. 16, 2023
Hardly a week goes by that some respected voice in the Artificial Intelligence field does not raise the alarm about the impending dangers of this technology.
And this week was no different. AI theorist and provocateur Eliezer Yudkowsky, who has previously called for ‘bombing machine learning data centers’, is speaking out.
Yudkowsky has maintained these views in earnest for decades, and now his doom-mongering has gone into overdrive.
His latest bold prediction is that artificial intelligence will ‘inevitably’ lead to the demise of humanity.
News reported:
“For decades, Yudkowsky has been a staunch believer in the ‘AI apocalypse’ scenario. His views have gained traction in recent years as advancements in AI technology have accelerated, causing even the most prominent computer scientists to question the potential consequences.”
Yudkowsky is worried by the rapidly evolving capability of large language models, such as ChatGPT. He views these models as a significant threat, capable of ‘surpassing human intelligence’ and potentially causing ‘irreparable harm’.
“‘I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die’, Yudkowsky said on an episode of the Bloomberg series AI IRL.
‘The state of affairs is that we approximately have no idea what’s going on in GPT-4’, he continued. ‘We have theories but no ability to actually look at the enormous matrices of fractional numbers being multiplied and added in there, and [what those] numbers mean’.”
This latest alarm follows warnings from many established voices in the field, such as the ‘godfather of artificial intelligence’ Geoffrey Hinton, a British computer scientist best known for his seminal work on neural networks, which went on to form the foundation of today’s machine learning models.
Hinton quit his job at Google to be able to speak freely on the issue.
“‘Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI’, Hinton said. ‘And now I think it may be 20 years or less’.”
For now, omens of AGI are often invoked to hype the capabilities of current models.
“But regardless of the industry bluster hailing its arrival or how long it might really be before AGI dawns on us, Hinton says we should be carefully considering its consequences now — which may include the minor issue of it trying to wipe out humanity.
‘It’s not inconceivable, that’s all I’ll say’, Hinton told CBS.
[…] ‘I think it’s very reasonable for people to be worrying about these issues now, even though it’s not going to happen in the next year or two’, Hinton said in the interview. ‘People should be thinking about those issues’.”
Another established voice warning that the AI industry is sleepwalking us into disaster is Yoshua Bengio, considered one of the three ‘godfathers’ of artificial intelligence.
He is feeling ‘a little blue’ that his life’s work seems about to spiral out of control.
Futurism reported:
“‘You could say I feel lost’, Bengio told the outlet. ‘But you have to keep going and you have to engage, discuss, encourage others to think with you’.”
His most immediate concerns are ‘bad actors’ abusing AI.
“‘It might be military, it might be terrorists, it might be somebody very angry, psychotic’, Bengio told the BBC. ‘And so if it’s easy to program these AI systems to ask them to do something very bad, this could be very dangerous’.”
The AI controversy is poised to stay with us for some time. Let’s hope something is done while we still can act.