Does Europe remain the place with the greatest guarantees for fundamental rights?

Photo: European Parliament

A few weeks ago, the European Parliament finally adopted the Artificial Intelligence Act, after four years of preparation and debate and an unprecedented 36-hour trilogue between the Council, the Commission and the Parliament. It is fair to say that this is, so far, the world's boldest legal regulation of technologies using generative artificial intelligence.

The AI Act is a source of pride because it is a serious attempt to strike a balance between technological development and the protection of democratic principles and fundamental rights, and it is the first legally binding instrument of its kind.

However, despite the success, many challenges remain:

1. Many of the legal norms remain unclear and open to multiple interpretations. The risk-based approach that was adopted also opens the door to exceptions and to differing ways of analysing risk, and therefore to different measures being taken.

This immediately raises the question of who will carry out this analysis and how, and to what extent national authorities are prepared to do so adequately. It is also essential to ask whether the same approach can be applied in all 27 Member States, and whether it will be effective in countries that have problems with institutional transparency.

2. Every rule has an exception, but in this case we have quite a few:

  • Some of the most dangerous uses of artificial intelligence, including systems that enable mass biometric surveillance and predictive policing systems, are not explicitly and completely banned, which is a retreat from the original version of the Act;
  • It creates a separate regime for the use of technology against one of the most vulnerable groups – people migrating, seeking asylum and/or living without documentation – leaving them with far fewer rights than EU citizens and almost no access to legal redress when those rights are violated;
  • It further extends the national security exemption beyond what the EU treaties allow, letting governments exempt themselves from the obligations of the AI Act in cases they deem important to national security.


3. It is unclear to what extent existing legal systems are fit to adequately regulate emerging AI technologies.

Our legal systems are built on the assessment of human behaviour, including culpable behaviour and therefore responsibility. In this regard, AI presents a serious challenge, because it is far from certain how, when rights are violated through AI technology, fault will be established, compensation determined and justice ensured for the victim.

Artificial intelligence systems can indeed affect civil liberties by targeting, surveilling, harassing, defaming, censoring and framing civil society actors. Another major risk is misinformation and the erosion of the very concept of truth, and when such systems are used to deliver public services, the question of how to ensure non-discrimination remains unresolved.

Therefore, all initiatives – national, regional and global – must ensure that they prevent the negative impact of emerging technologies on democracy and civil liberties and seek to create rights-based frameworks for transparency, accountability and citizen participation.

On 26 March 2024, a discussion was held at the House of Europe in Sofia to examine the various aspects of the adoption and implementation of the AI Act, the next steps for the European institutions and Member States, and the possible measures and best practices that can create stronger safeguards at local level.

Photo: European Parliament

Special guests at the event were Bulgarian MEPs Eva Maydell (EPP) and Petar Vitanov (S&D), both of whom were actively involved in the process of writing and adopting the Act.

The discussion was moderated by journalists Dorothea Dachkova and Silvia Velikova. You can watch the whole event here.

The discussion was organized by the European Parliament Office in Bulgaria and the Bulgarian Center for Not-for-Profit Law.

You can read more about BCNL's work on Digital Democracy here.