Brussels, Berlin. Companies should start flagging content created with artificial intelligence (AI) now. The European Commission wants to prevent so-called “deepfakes” from spreading on the internet: texts, images or videos that appear realistic but are entirely fabricated.
European Commission Vice President Vera Jourova and Internal Market Commissioner Thierry Breton met business representatives in Brussels on Monday to present their plans. Companies that build so-called generative AI into search engines must ensure the technology cannot be misused by malicious actors, Jourova said.
This applies, for example, to Microsoft and Google, whose new search tools Bing Chat and Bard are based on this technology. “Generative AI” refers to software that can independently create text and images from data.
Companies should flag their AI content — starting now, says Jourova. Google boss Sundar Pichai recently assured her that it was technically possible. Users should be able to immediately recognize that the text or video was created by a bot.
Powerful AI applications such as ChatGPT are exacerbating the deepfake problem, as forgeries become increasingly difficult to detect. For now, deepfakes are often harmless internet jokes: in March, for example, a fake photo of Pope Francis in a stylish down jacket caused a stir.
>> Read here: Like the grandchild scam, only with AI – deepfake scammers blackmail companies using fake bosses’ voices
But AI could also do real damage: Stock market speculators could conceivably use fake videos of company bosses to trigger price drops — and profit from them.
New AI applications can create images of events that never happened, Jourova said. Politicians must respond to this.
For now, labeling is to be voluntary. The Commission has simply added a new AI paragraph to the year-old EU code of conduct against disinformation. The code has already been signed by 40 companies and organizations, including Microsoft, Google, Meta and TikTok.
SPD leader Esken calls for legal regulation
In Berlin, the voluntary approach has met with skepticism. SPD leader Saskia Esken told Handelsblatt: “Those who want to use disinformation to disrupt and divide our free and democratic society will comply neither with a voluntary commitment nor with a legal obligation to label AI-generated content.”
“It would be more sensible and more targeted to mark the authenticity of digital media content such as text, audio and images in a tamper-proof way at the source, thereby making reliable information easier to recognize.”
Esken therefore sees the EU initiative only as a first step. Voluntary agreements cannot replace legal requirements, she said. Marit Hansen, data protection officer of the German state of Schleswig-Holstein, also said a legal requirement was needed “quickly”.
Until the AI Act comes into force, however, as an interim measure “it would be very sensible to implement measures such as labelling, as well as impact assessments and safeguards”. Not all companies will take part, of course. “As a result, we will not be able to rely on being spared AI-powered deepfakes and disinformation.”
Bernhard Rohleder, managing director of digital association Bitkom, sees another problem. Almost half of Germans don’t know what artificial intelligence really is, he said. “Without these fundamentals, labeling obligations will be useless.”
>> Read here: EU urges AI companies to make swift voluntary commitments
The Commission itself regards the voluntary commitment only as an interim measure. The advantage, Jourova said, is that codes of conduct can be changed quickly without a legislative process. In the long run, AI labeling is to be enshrined in law.
By international comparison, the European Union is a pioneer in AI regulation. The planned European AI law, known as the AI Act, is still under discussion in the European Parliament, and the Council of member states still needs to agree. It is therefore unlikely to take effect for a few years.
European Commission warns Twitter boss Elon Musk
The Digital Services Act (DSA), by contrast, will apply from August 25. With this law, the EU hopes to curb hate speech and fake news on the internet more generally. By then, all social networks will have to disclose the criteria by which content is displayed to users.
The track record in fighting fake news has been sobering so far. “There’s still so much dangerous disinformation out there,” Jourova said. Large platforms will have to build up more capacity to fact-check content and add caveats. As an example, she cited Russian disinformation about the war in Ukraine.
It’s not about bans, it’s about reducing risk. Thierry Breton, EU Commissioner for the Internal Market and Services
She said the Kremlin was trying to use its propaganda to undermine democracy in the EU. Especially in Eastern European countries, online platforms must scrutinize content more rigorously and weed out such disinformation. The EU suffered a setback a few weeks ago when Twitter, under new owner Elon Musk, pulled out of the voluntary EU code of conduct. From Jourova’s point of view, this was a big mistake.
“Twitter chose confrontation,” said the Commission vice-president. When the DSA takes effect in August, the Commission will keep a close eye on Twitter’s compliance. Signatories to the code of conduct, by contrast, can hope for goodwill from regulators. In June, the Commission wants to stress test Twitter and other platforms to see how far they already comply with the DSA.
Breton heads to Silicon Valley
Commissioner Breton will open an EU liaison office in Silicon Valley within two weeks. Talks are also planned with OpenAI, the company behind ChatGPT, and Nvidia, a leading AI chipmaker. Breton sees artificial intelligence as a “fantastic innovation”, but his message to U.S. companies is clear: anyone who wants to operate in the European market must play by European rules.
“It’s not about bans, it’s about risk reduction.” Breton compared the development of increasingly powerful artificial intelligence systems to the invention of the car — and EU rules to the introduction of seat belts.
Just as seat belts do not prevent accidents, the mix of voluntary codes of conduct and regulation from Brussels will not reduce AI risks to zero. But the commissioner hopes that the risk of harm, which in the case of artificial intelligence affects society as a whole, will drop significantly.
More: ChatGPT – what you should know about OpenAI’s AI