“Artificial Intelligence can exacerbate inequality”

The EU Parliament is mulling a proposal to minimize the risks of AI. Steps in this direction are long overdue, but more needs to be done, argues the civil rights federation European Digital Rights.

An interview with Sarah Chander

 

August 2022

Question: Surveillance, policing, data collection: Artificial Intelligence is now part of our everyday lives. Are we all being equally impacted by these new technologies?

In certain areas, such as policing, migration control, and social risk assessment, AI can have discriminatory effects and exacerbate existing inequalities. In the Netherlands, for example, we've seen public authorities using AI to predict whether certain individuals are more likely to pose a fraud risk with regard to benefit claims.

These systems often make mistakes, however, because their purpose is to find patterns on the basis of a set of predefined variables and information. In the case of public welfare, a mistake can mean being wrongly denied benefits or being labelled a benefit fraudster.

In the Netherlands in the 2010s, for instance, thousands of people and families were erroneously asked to pay back thousands of euros. This caused financial hardship for many and plunged some into a psychological crisis. There were even suicides.

Similarly dire cases can arise in the policing context. AI systems are designed to predict the risk of certain people committing crimes and often use facial recognition. These systems don’t work equally well on everyone, however.


 

“In the Netherlands, people were erroneously asked to pay back thousands of euros. This caused financial hardship for many”

 


Black men, for instance, have been wrongfully identified and arrested in the US because of these systems. The issue is that the software is not only making mistakes but also playing directly into existing racist dynamics, which foster heightened suspicion towards marginalized and already overpoliced communities.

The other side of the argument, though, is that even if these systems worked perfectly, they would still contribute to discrimination simply because of how they are used.

They would be used to scrutinize some people rather than others. For example, they are more likely to be deployed in areas where Black people live, where poor people live, where Romani people live.

Question: Have these issues been addressed in the current EU proposal to create a legal framework for the use of AI?

In my view, the proposal is a really complicated piece of legislation. It introduces bans on certain uses of AI, so it recognizes that some uses of technology are incompatible with our democratic society, can be inherently discriminatory and can create a state of mass surveillance.

However, the prohibitions don’t go far enough. The use of facial recognition in public spaces, for example, is a key debate with regard to this piece of legislation. The initial proposal calls for a partial ban on facial recognition and other biometric mass surveillance technologies in public spaces when they are used for law enforcement purposes.


 

“The proposal recognizes that some uses of technology are incompatible with our democratic society”

 


The use of such technology in shopping malls, airports and similar places is not covered by this ban, however, even though these are also public spaces in which you wouldn’t want to be surveilled.

Question: Is that the only loophole?

No, there are many other exemptions, even within the facial recognition ban for law enforcement. Police could still use the technology to find a terrorist or a missing child, for example, or to investigate a long list of other crimes.

So while the legislation appears to ban harmful uses of AI, it in fact enables the police to use facial recognition in public spaces.

Also, none of the rules in the Act would apply to systems that are developed and used exclusively for military purposes. EU governments have been pushing to extend this loophole for alleged national security reasons.

If this happens, none of the demands and regulations that civil rights groups have been putting forward would apply whenever a national government decided to use banned or high-risk AI systems to defend itself against a threat.

But what if that threat is a public protest? In that case, governments could easily argue that they need these “harmful tools” to prevent national security threats, without having to justify why something poses such a threat. This is a big issue.

Question: Aren’t EU authorities like Europol also using AI-driven technologies, for example within the framework of the Eurosur surveillance system? If the current proposal were adopted, would they be able to continue to do so?

According to the AI Act, EU institutions should be subject to all of the requirements it puts forward. However, at the very end of the proposal there is a provision stating that the Act does not apply when the AI system is part of the EU's migration control databases.

And that includes pretty much all of the large-scale IT systems that the EU's migration processes require and are built upon: the types of systems, for example, that process visas and travel authorizations, but also asylum claims.


 

“Basically, the EU is exempting itself from its own laws when it comes to migration management”

 


This means that if an AI system is used as part of any of these databases, the EU itself, as the developer and user of these AI systems, is not subject to scrutiny. So basically, the EU is exempting itself from its own laws when it comes to migration management.

Question: When it comes to the risks posed by the use of AI systems, the framework distinguishes between different risk levels: unacceptable risk, high risk, limited risk and minimal risk applications. Does that categorization make sense in your opinion?

There is some value in having different risk categories, yes. But of course there will always be flaws when it comes to categorizing something as complicated and diverse as the use of AI systems.

So, for me it’s much more about the political questions. And in that regard the question “Who gets to decide how these systems are governed?” is more interesting than the categories themselves.

Is it the people affected, is it civil society, or is it the governments, companies and EU institutions? Currently the EU Commission remains in control of which systems fall under these categories, and there is no possibility of updating the unacceptable-risk category.

Question: Given the swift rate of technological progress, will new AI developments soon outpace the ideas of the proposal – and even make it useless?

The whole point of the AI Act is to ban certain practices and regulate high-risk ones. Those practices are likely to multiply, though. We can already see that more dystopian projects will emerge, and with them the types of practices that need to be banned or treated as high-risk are likely to increase as well.

AI systems that are categorized as high-risk now may soon prove too harmful to be used at all. In a rapidly changing technology market, of course, we cannot always account for such developments in advance.

Thus, there is a need for more flexibility to update the list of banned and high-risk systems in the AI Act.

Question: What could those developments be?

We might see more harmful uses of emotion recognition in public spaces, for example, or of algorithmic management in the workplace and AI systems that track the attention of students.

Currently, in the context of education, a system is only high-risk if it meets specific criteria, one of which is that it is used to assess people in their examinations. So, for example, AI systems that watch students through their webcams and assess whether their attention levels are high fall under “high-risk”.

You could imagine a slightly different system being developed, one that is used in a different way but for the same purposes. Because of the wording of the Act, however, it might not be covered and would therefore not fall under “high-risk”.


 

“Who gets to decide how these systems are governed? Is it the people affected, is it civil society or is it the governments, companies, and EU Institutions?”

 


There are also big gaps in the high-risk list. Emotion recognition systems and biometric categorization systems, for example, are not included.

Question: Which positive amendments have been included so far?

The good news is that the Act is currently being negotiated by the EU Parliament, which has, in the past, put forward a ban on predictive policing, bans on emotion recognition, as well as numerous bans on harmful uses of AI in the migration management context.

So, we can see a certain movement in the direction of more prohibition – and the EU Parliament is usually considered to be a defender of fundamental rights and democratic values.

The concern, however, is that many of those positive developments and amendments might ultimately be rolled back by the Council, which wants to introduce more national security loopholes.

Question: What further amendments would make the Act more useful?

The AI Act includes technical requirements for companies that are developing these AI systems. However, not enough attention is being paid to the obligations of the entities that actually use these systems. And that is the main human rights issue: how these systems are put into use.

There should be governance requirements, including an impact assessment and mandatory publication of its results, before such high-risk systems are put into use. We need to know what systems are being used and how they might be affecting us.

Also, more accountability and transparency from users of AI systems is required. The AI Act lacks a framework of rights for people who have been harmed by these systems. There need to be mechanisms to challenge these questionable deployments at the national level.


 

“The technology companies don’t want to put the resources necessary for human rights compliance into their business model”

 


And there must also be a rigorous mechanism by which we can take the corporations and organizations that deploy AI technology, the users, to court if we find that any of the rules of the AI Act have been breached.

Question: Some argue that overly strict AI regulations may limit innovation.

This argument is often peddled by the technology companies themselves. They don’t want to put the resources necessary for human rights compliance into their business model.

However, EDRi has spoken with many responsible employees of such companies, who said that the requirements put forward by the AI Act are minimal, especially as the list of high-risk AI systems is very small: around 10 to 15 percent of AI on the market, as estimated by the Commission.

So only if you are producing high-risk AI are you required to adhere to the technical requirements. The purpose of these safeguards is to ensure that human rights are not being violated.

If these safeguards are halting innovation, it is only the type of innovation that doesn’t comply with fundamental rights. And that is not the type of innovation we should want to push for anyway.


Sarah Chander is a senior policy advisor at European Digital Rights. The interview was conducted by Atifa Qazi.

 
