AI Act: Why the EU is looking critically at AI tools like ChatGPT


Author: Jan Tissler

AI offerings have created a lot of buzz in recent months. While their potential is enormous, they also pose real dangers and have drawn justified criticism. With the "AI Act", the EU is planning a law intended to lay down clear rules on the matter. However, general-purpose AIs like ChatGPT are causing politicians headaches.

When it comes to artificial intelligence (AI), the possibilities seem almost endless. Progress in this area has been enormous recently. To give an example: in 2019, it was still considered revolutionary that an AI could generate fictitious human faces. Just four years later, I use software based on Stable Diffusion on my laptop that creates images of all kinds: photos, graphics, illustrations and more. Chatbots, too, were until recently mostly hair-raisingly incompetent, but today the AI assistant ChatGPT enables conversations that are as pleasant as they are helpful.

A law for a growing field

This pace of development is currently also being felt by the EU. In 2021, it started work on an Artificial Intelligence Act. The starting point for this was the idea of regulating AI applications such as automated facial recognition. If such tools were classified as "high-risk", the new law would impose clear conditions for their use or, in case of doubt, ban them altogether. AI applications in areas such as medicine and justice or for important infrastructure would also be affected.

The law was already quite advanced when ChatGPT shook up the field. One key difference of this new tool: While automated facial recognition has a clearly defined field of application, this is no longer the case with an offering like ChatGPT. This assistant can provide ideas and the outline for a speech, develop code for an app or write a poem. It is available for all kinds of questions and almost always has an answer ready - but it can also be completely wrong.

In the end, tools like ChatGPT and Stable Diffusion can deliver impressive results, but they don't understand what they are generating. Instead, they have learned correlations from immensely large datasets.

In ChatGPT's case, these datasets consist of freely available and specially purchased content that serves as training material. But these materials can also include misinformation, prejudices or even conspiracy theories. Moreover, these AI tools have a tendency to "hallucinate" missing information: they invent whatever seems appropriate.

Of course, progress doesn't stop, and today's criticisms may be remedied tomorrow. In the future, offerings like ChatGPT could, for example, back up their statements with verifiable sources or make it clear where they are uncertain or where there is no clear answer.

Tech companies go on the offensive

However, when presenting the latest generation of GPT, OpenAI itself explained that the model's new capabilities lead to new risks:

"GPT-4 poses similar risks as previous models, such as generating harmful advice, buggy code, or inaccurate information. However, the additional capabilities of GPT-4 lead to new risk surfaces. To understand the extent of these risks, we engaged over 50 experts (…) to adversarially test the model. Their findings specifically enabled us to test model behaviour in high-risk areas which require expertise to evaluate."

Such statements are clearly addressed not only to the general public but also to political institutions, as a message that says: we are dealing with the topic responsibly. The Partnership on AI, in which companies such as Adobe participate alongside OpenAI, conveys much the same message.

After all, being classified as "high-risk" by the EU would be a nightmare scenario for AI companies. They might then have to disclose exactly how their AI works and how it has been trained. Not to mention having to put up with the EU telling them how the AI can and cannot respond.

Representatives of tech companies like OpenAI, Microsoft and Google are therefore already highly active in counteracting this, as an investigation by the lobbying transparency group Corporate Europe Observatory shows. For example, they are trying to dissuade politicians from classifying general-purpose AIs like ChatGPT as high-risk across the board.

Closing words

Despite all the justified criticism, it seems clear that progress in the field of artificial intelligence can hardly be stopped. The potential is enormous. It seems very likely that we will find and use AI assistants and similar functions in many places in the future. This is an area in which Europe has no intention of being left behind. As is so often the case, a law like the AI Act is thus a balancing act. On the one hand, it should protect the population from dangers and prevent undesirable developments as early as possible. On the other hand, AI research and development in Europe should remain possible and internationally competitive.
