3 min read

EU Officials Want All AI-Generated Content To Be Labeled To "Combat Fake News"


BY TYLER DURDEN - Savannah Fortis - JUN 06, 2023 - 09:45 AM

European Commission Vice President Vera Jourova said that content produced by generative AI tools with the potential to spread disinformation should be clearly labeled...

Officials in the European Union have discussed additional measures that would make artificial intelligence (AI) tools, such as OpenAI’s ChatGPT, more transparent to the public.

On June 5, Vera Jourova, the European Commission's vice president for values and transparency, told the media that companies deploying generative AI tools with the "potential to generate disinformation" should place labels on their content in an effort to combat "fake news."

“Signatories who have services with a potential to disseminate AI generated disinformation should, in turn, put in place technology to recognize such content and clearly label this to users.”

Jourova also referenced companies that integrate generative AI into their services — such as Microsoft’s Bing Chat and Google’s Bard — as needing to create “safeguards” to prevent malicious actors from utilizing them for disinformation purposes.

In 2018, the EU created its "Code of Practice on Disinformation," which serves as both an agreement among players in the tech industry and a set of self-regulatory standards for combating disinformation.

Major tech companies, including Google, Microsoft, and Meta Platforms, have already signed onto the EU's strengthened 2022 Code of Practice on Disinformation. Jourova said those companies and others should report on new safeguards for AI this July.

She also highlighted Twitter’s withdrawal from the code of practice, saying the company should anticipate more scrutiny from regulators.

“By leaving the Code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

These statements from the vice president come as the EU prepares its forthcoming Artificial Intelligence Act, which will be a comprehensive set of guidelines for the public use of AI and the companies deploying it.

With the law not scheduled to take effect for another two to three years, European officials have urged companies to adopt a voluntary code of conduct for generative AI developers in the meantime.


Comments from ChatGPT:

"As an AI language model, I don't have personal opinions. However, I can provide an analysis of the article you mentioned.

The European Union's discussion of additional measures to increase the transparency of AI tools, including OpenAI's ChatGPT, indicates a growing concern about the potential impact of artificial intelligence on society. The focus on labeling content generated by generative AI tools to combat disinformation is in line with the broader efforts to address the spread of fake news and misinformation online.

Labeling AI-generated content can help users differentiate between human-created and AI-generated information, which may assist in managing the potential risks associated with the misuse or manipulation of AI. By implementing such measures, policymakers aim to provide the public with more transparency and accountability regarding the origin and nature of the content they consume.

However, the effectiveness of labeling AI-generated content as a solution to the spread of disinformation is a topic of debate. While labeling can provide valuable information to users, it may not address the underlying issue of individuals knowingly or unknowingly sharing misleading or false information. Moreover, implementing such measures raises questions about the practicality of identifying AI-generated content accurately and efficiently.

It's important to note that the European Union's discussion reflects a specific regional perspective, and other countries or regions may have different approaches and opinions regarding AI transparency and combating disinformation.

Overall, the discussion of additional transparency measures for AI tools in the context of combating disinformation reflects ongoing efforts to grapple with the societal implications of AI and fake news. It will be interesting to see how these discussions evolve and what concrete actions are taken in the future."