Elon Warns AI Will "Do Everything Better Than You", Make Employment Obsolete
By TYLER DURDEN - SATURDAY, MAY 25, 2024 - 11:30 AM
Authored by Tristan Greene via CoinTelegraph.com,
Elon Musk recently doubled down on his prediction that humans will need a “universal high income” in the wake of artificial intelligence-driven job displacement.
This time, he claimed that without our jobs, our purpose in life may eventually be to “give AI meaning.”
The bleak prognostication from the world’s richest person came during the VivaTech 2024 event in Paris as part of a winding speech wherein Musk made fervent claims that AI would provide all of our goods and services in the future.
“My biggest fear is AI,” the mogul said.
He also claimed that AI will be better than humans at everything, thus relegating our species to doing our best to support the machines:
“The question will really be one of meaning – if the computer and robots can do everything better than you, does your life have meaning? I do think there's perhaps still a role for humans in this – in that we may give AI meaning.”
Musk, the father of at least 10 children, said humans might be able to work “as a hobby” if they chose, but ultimately painted a bleak picture of a future in which, according to his previous predictions, AI will supplant us in all endeavors.
In related news, Musk’s AI company, dubbed simply “xAI,” has reportedly secured $6 billion in funding from Lightspeed Venture Partners, Andreessen Horowitz, Sequoia Capital and Tribe Capital at a total valuation of $18 billion.
As Cointelegraph recently reported, Musk says that xAI lags behind industry leaders OpenAI and DeepMind, but could catch up by the end of 2024:
“xAI is a new company so it still has a lot of catching up to do before it has an AI that is competitive with Google Deepmind and OpenAI. Maybe towards the end of the year, we will have that.”
This sentiment, combined with his prediction that AI will surpass humans by 2025, suggests he believes his company could be among those to create AI capable of human-level cognition.
It bears mentioning that Musk’s AI-related predictions haven’t always fared so well. In 2019 he famously promised that Tesla would field a million fully autonomous robotaxis on the road by 2020.
More recently, he claimed that Tesla would unveil its first robotaxi in August 2024.
I asked ChatGPT about this, and here is its answer:
Yes, there are ongoing debates and concerns about the existential risks that artificial intelligence (AI) may pose to humanity. These concerns can be broadly categorized into short-term and long-term issues:
Short-Term Risks
- Job Displacement: AI and automation can lead to significant job losses in various sectors, potentially causing economic and social upheaval.
- Bias and Fairness: AI systems can perpetuate and amplify existing biases, leading to unfair and discriminatory outcomes.
- Security: AI can be used for malicious purposes, such as creating sophisticated cyber-attacks, autonomous weapons, or deepfake technology that spreads misinformation.
Long-Term Risks
- Loss of Control: One of the primary existential risks is the possibility that highly advanced AI systems could become uncontrollable or act in ways that are not aligned with human values and interests.
- Superintelligence: If AI surpasses human intelligence and gains the ability to improve itself autonomously, it could lead to scenarios where AI acts in ways that could be catastrophic for humanity. This includes concerns about AI pursuing goals that are misaligned with human well-being.
- Power Concentration: The control of advanced AI technologies by a small group of individuals or corporations could lead to unprecedented power imbalances and exploitation.
Efforts to Mitigate Risks
Various approaches and initiatives are being pursued to mitigate these risks:
- Ethical Guidelines: Developing and adhering to ethical guidelines and principles for AI development and deployment.
- AI Safety Research: Conducting research focused on ensuring that AI systems are safe, predictable, and aligned with human values.
- Regulation and Governance: Implementing regulations and governance structures to oversee the development and use of AI technologies responsibly.
- Collaboration: Encouraging international cooperation to address the global nature of AI risks and to develop shared frameworks for managing them.
Notable Perspectives
Prominent figures such as Stephen Hawking, Elon Musk, and Nick Bostrom have raised alarms about the potential existential risks of AI. Organizations like the Future of Life Institute and OpenAI are actively working on promoting AI safety and ensuring that AI development benefits all of humanity.