
WEF YUVAL NOAH HARARI WARNING OF AI - AND CHATGPT RESPONSE.

By Harari - ChatGPT - Human Synthesis - 04 October 2023.

WEF YUVAL NOAH HARARI WARNING OF AI.

Yuval Noah Harari talks about signing on to an AI letter with Elon Musk and the dangers of the technology. The Israeli author and historian said a lack of safety measures in new AI tech could cause the West to lose to China. Harari, the historian behind "Sapiens," claimed there is no kill switch for artificial intelligence (AI) and urged the implementation of safety checks and guardrails, warning that without them society risks collapse.

During a March interview with ABC News, OpenAI CEO Sam Altman was asked if ChatGPT had a "kill switch" in the event the AI went rogue. Altman responded with a quick "yes." "What really happens is that any engineer can just say we're going to disable this for now. Or we're going to deploy this new version of the model," he added.

But Harari disagreed. "When you produce these tools, as long as you haven't released them into the public sphere, you can make all kinds of kill switches," Harari said. "But once you release them into the public sphere, people start to depend on them for their livelihood, for their social relations, for their politics. It's too late. You cannot pull the switch, because the switch will cause economic collapse."

"MUSK ON AI REGULATION: 'IT'S NOT FUN TO BE REGULATED' BUT ARTIFICIAL INTELLIGENCE MAY NEED

Israeli historian and "Sapiens" author Yuval Noah Harari warned that the judicial overhaul plan in Israel could turn the country into a dictatorship if passed.  (Fox News)According to Harari, recent technological revolutions have exacerbated social disparities and led to political turmoil. However, he said that anguish is "nothing" compared to what could await society in the next few years."

"If we don't take care of it, then some people will become extremely rich and powerful because they control the new tools, and other people could become part of a new 'useless class.' I use this very strong language of a useless class. I know people feel it's terrible to talk like this," Harari said. "Obviously, people are never useless from the viewpoint of their family or friends or community, but from the viewpoint of the economic system, they could become, we don't need them. They have no skills that we need anymore." Harari described the possibility as a "terrible danger," which underscores the need to protect people before it's too late.

Experts have taken numerous positions on the rapid development of AI. Some have advocated for the technology to continue to evolve and be pushed to consumers in the United States in an effort to stay competitive with other governments, such as the Chinese Communist Party. "Artificial intelligence poses a direct threat to humanity, but it'll become even more of a threat to humanity if China masters it before we do," China expert Gordon Chang recently told Fox News. "We see this, for instance, in the gene editing of humans."

MISINFORMATION MACHINES? AI CHATBOTS CAN SPEW FALSEHOODS, AND EVEN ACCUSE PEOPLE OF CRIMES THEY NEVER COMMITTED.

However, China has enacted new regulations on deep-synthesis technology on AI platforms, rules that have yet to be established in the U.S. In January, the Chinese Cyberspace Administration placed substantial restrictions on AI-generated content, requiring that these forms of media carry unique identifiers, such as watermarks. Elon Musk, Steve Wozniak, Harari, and other tech leaders signed a letter in March asking AI developers to "immediately pause for at least six months the training of AI systems more powerful than GPT-4."

What will cause the West to lose to China if we continue releasing these kinds of powerful tools?
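The labeling requirement described above, that AI-generated media carry a unique identifier, can be sketched in a few lines. This is a minimal illustration only: the function and field names below are invented for the example and are not taken from the Chinese deep-synthesis rules, which specify their own requirements.

```python
import hashlib
import json

def label_ai_content(content: str, model_name: str) -> dict:
    """Attach a hypothetical provenance label to AI-generated text.

    The field names here are illustrative; real labeling schemes
    (watermarks, content credentials) are defined by regulation or
    industry standards, not by this sketch.
    """
    # Derive a short unique identifier from the content itself.
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()[:16]
    return {
        "content": content,
        "ai_generated": True,      # mandatory disclosure flag
        "generator": model_name,   # which system produced it
        "content_id": digest,      # unique identifier for this item
    }

labeled = label_ai_content("a synthetic news blurb", "demo-model")
print(json.dumps(labeled, indent=2))
```

The point of such identifiers is that disclosure travels with the content, so downstream viewers can tell synthetic media from authentic media.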

              ******

HERE IS THE CHATGPT RESPONSE TO THE DANGER

Yuval Noah Harari's concerns about the lack of a "kill switch" for AI revolve around the idea that once powerful AI systems are deployed and integrated into society, it becomes challenging to control or shut them down if they start causing harm.

Here's a more detailed explanation:

1. **The "Kill Switch" Concept:** A "kill switch" in the context of AI refers to a fail-safe mechanism that can deactivate or control AI systems in case they behave unpredictably, dangerously, or maliciously. It's like an emergency stop button for AI.

2. **Initial Control vs. Public Dependence:** Harari points out that during the development phase of AI, creators can implement such kill switches to maintain control. However, once AI systems are widely used and people rely on them for various aspects of their lives, shutting them down abruptly could have severe economic and societal consequences. People depend on AI for their jobs, social interactions, and more.

3. **Economic Collapse:** Harari's concern is that pulling the "kill switch" on AI after it has become deeply integrated into society might lead to economic collapse. For example, if AI-powered systems are driving industries, transportation, healthcare, and more, disabling them abruptly could disrupt these sectors, leading to job losses and financial instability.

4. **Social Disparities:** Harari also highlights how technological revolutions, including AI, can exacerbate social disparities. Some individuals and organizations may gain enormous power and wealth through AI while others may become marginalized or "useless" in economic terms because they lack the skills that AI can perform more efficiently.

5. **Need for Safeguards:** Harari's message underscores the importance of implementing safeguards and regulations in AI development. This includes ethical guidelines, accountability measures, and ongoing monitoring to ensure that AI is developed and deployed in ways that benefit society without causing harm or extreme inequalities.

6. **Global Competition:** The discussion also touches on the competitive aspect, with the concern that nations like China are rapidly advancing in AI technology. This competition can sometimes lead to hasty development and deployment of AI systems without adequate safety precautions.

In summary, Harari's warning emphasizes the need for responsible AI development, with a focus on ensuring that AI technologies benefit humanity while minimizing the risks associated with their unchecked growth. This includes thoughtful consideration of how AI is integrated into society and how we can maintain control without causing undue harm.
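The "kill switch" concept in point 1 above is mechanically trivial, which is exactly Harari's point: the hard part is not the switch but the dependence. A minimal sketch, with all names invented for illustration (real deployments gate traffic with feature flags, load balancers, or orchestration tooling):

```python
import threading

class ModelService:
    """Hypothetical serving wrapper with an operator-controlled kill switch."""

    def __init__(self):
        self._enabled = threading.Event()
        self._enabled.set()  # service starts enabled

    def disable(self):
        # The "kill switch": any operator can flip this to stop serving.
        self._enabled.clear()

    def enable(self):
        self._enabled.set()

    def generate(self, prompt: str) -> str:
        if not self._enabled.is_set():
            raise RuntimeError("model disabled by operator kill switch")
        # Placeholder for the actual model call.
        return f"response to: {prompt}"
```

Flipping `disable()` is a one-line operation; Harari's argument is that once millions of livelihoods depend on `generate()`, flipping it carries the economic cost he describes, so the switch exists in code but not in practice.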

Absolutely, trust is paramount when it comes to embracing and integrating AI into our lives. Trustworthy AI is essential to ensure its benefits are realized without causing harm or undermining societal well-being. Here are some key aspects of trust in AI:

1. **Transparency:** AI systems should be transparent in their operations, meaning their decisions and reasoning can be understood and explained. This transparency helps users and stakeholders trust the system's outputs and behavior.

2. **Accountability:** Developers and organizations responsible for AI systems should be accountable for their actions. This includes taking responsibility for errors, biases, or unintended consequences of AI systems and working to rectify them.

3. **Fairness:** AI systems should be designed and trained to be fair and unbiased, treating all individuals and groups equitably. Fairness is crucial to ensure AI doesn't perpetuate or amplify existing social biases.

4. **Privacy:** AI systems should respect user privacy and data protection regulations. People should have control over their data and understand how it is used by AI systems.

5. **Security:** AI systems must be secure to prevent malicious exploitation or hacking. Ensuring the security of AI systems helps maintain trust in their reliability and safety.

6. **Ethical Guidelines:** The development and deployment of AI should adhere to ethical guidelines that prioritize human well-being, safety, and societal benefit. Ethical considerations should be an integral part of AI development.

7. **Regulation and Oversight:** Governments and regulatory bodies play a role in ensuring that AI is developed and used responsibly. Appropriate regulations and oversight mechanisms can help build trust in AI.

8. **User Education:** Educating users and the general public about AI capabilities, limitations, and potential risks is essential to foster trust. Informed users are better equipped to make decisions about AI usage.

9. **Continuous Improvement:** AI systems should be subject to ongoing monitoring, testing, and improvement. Continuous efforts to enhance AI performance and safety contribute to trust.

10. **Openness and Collaboration:** Collaboration among researchers, organizations, and policymakers can lead to the development of AI technologies that are more trustworthy and aligned with societal values.

Building trust in AI is an ongoing process that requires a collective effort from developers, organizations, policymakers, and the public.
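Several of the points above, transparency, accountability, and continuous improvement in particular, come down to keeping an auditable record of what a system did and when. A minimal sketch, assuming a generic model callable; the wrapper name and log schema are invented for illustration, not a real API:

```python
import datetime

def audited_predict(model_fn, prompt: str, log: list) -> str:
    """Wrap a model call so every decision leaves an auditable trace.

    `model_fn` stands in for any model; production systems would use
    structured logging pipelines with retention policies instead.
    """
    output = model_fn(prompt)
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input": prompt,
        "output": output,
    })
    return output

audit_log = []
result = audited_predict(lambda p: p.upper(), "hello", audit_log)
print(result)          # HELLO
print(len(audit_log))  # 1
```

A record like this is what makes accountability concrete: when an output is later challenged, there is something to inspect and rectify.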

When AI is trusted, it can bring about significant advancements and benefits in various fields, from healthcare and education to transportation and finance, while mitigating potential risks and challenges.