AI Act: Why the EU is taking a critical look at AI tools like ChatGPT

2023-04-11
Author: Jan Tissler

AI offerings have caused quite a stir in recent months. While their potential is enormous, there are also dangers and justified criticism. With the "AI Act", the EU is planning a law to provide clear rules. However, general-purpose AIs such as ChatGPT are causing headaches for politicians.

When it comes to artificial intelligence (AI), the possibilities seem endless. Progress in this area has recently been enormous. One example: in 2019, it was still considered revolutionary that an AI could generate fictitious human faces. Just four years later, I'm running software based on Stable Diffusion on my laptop that creates images of all kinds: photos, graphics, illustrations and more. Or think of chatbots: until recently, they were mostly embarrassingly incompetent; today, the AI assistant ChatGPT enables conversations that are as pleasant as they are helpful.

A law for a growing field

The EU is now feeling the effects of this pace of development as well. In 2021, it began work on an Artificial Intelligence Act. The starting point was the idea of regulating AI applications such as automated facial recognition. If such tools were classified as "high-risk", the new law would impose clear conditions on them or, where necessary, ban their use altogether. This would also cover AI applications in areas such as medicine and justice, or those used for critical infrastructure.

The law was already quite far along when ChatGPT shook up the field. One major difference with this new tool: while automated facial recognition has a clearly defined field of application, the same cannot be said of an offering like ChatGPT. The assistant can provide ideas and an outline for a speech, develop code for an app or write a poem. It is available for questions of all kinds and almost always has an answer ready - but that answer can also be completely wrong.

Ultimately, tools like ChatGPT and Stable Diffusion may deliver impressive results, but they don't understand what they are generating. Instead, they have learned correlations from immensely large data sets.

In the case of ChatGPT, these data sets consist of freely available and specially purchased content that serves as training material. These materials may contain false information, biases or even conspiracy theories. In addition, these AI tools have a tendency to "hallucinate" missing information: they simply invent whatever seems plausible.

Of course, progress never stops, and today's points of criticism could be resolved tomorrow. In the future, services such as ChatGPT could back up their statements with verifiable sources or make it clear where they are uncertain or where there is no clear answer.

Tech companies go on the offensive

When presenting the latest GPT generation, however, OpenAI itself explained that the model's new capabilities bring new risks:

"GPT-4 poses similar risks to its predecessors, such as generating dangerous advice, faulty code or inaccurate information. The additional capabilities of GPT-4 also introduce new risk factors. To understand the extent of these risks, we engaged over 50 experts (...) to test the model for its suitability. Their insights allowed us to test the model's behavior in high-risk areas that require expertise to assess."

[Original: "GPT-4 poses similar risks as previous models, such as generating harmful advice, buggy code, or inaccurate information. However, the additional capabilities of GPT-4 lead to new risk surfaces. To understand the extent of these risks, we engaged over 50 experts (...) to adversarially test the model. Their findings specifically enabled us to test model behavior in high-risk areas which require expertise to evaluate."]

Such statements are clearly aimed not only at the general public; they are also intended to signal to political institutions: we are handling this topic responsibly. The "Partnership on AI", in which companies such as Adobe participate alongside OpenAI, is also meant to make this clear.

After all, the nightmare scenario for AI companies is being classified as "high-risk" by the EU. They might then have to disclose exactly how their artificial intelligence works and how it was trained. And they might have to accept the EU telling them how the AI may and may not behave.

Representatives of tech companies such as OpenAI, Microsoft and Google are therefore already lobbying hard against this, as an investigation by the lobbying transparency group Corporate Europe Observatory shows. They are trying to dissuade politicians from classifying general-purpose AI such as ChatGPT as high-risk across the board.

Closing words

Despite all the justified criticism, progress in the field of artificial intelligence can hardly be stopped. The potential is enormous, and it seems very likely that we will encounter and use AI assistants and similar functions in many places in the future. Europe does not want to be left behind here.

As is so often the case, a law like the AI Act is a balancing act. On the one hand, it should protect the population from dangers and prevent undesirable developments as early as possible. On the other hand, AI research and development in Europe should remain possible and internationally competitive.