On 11 May, the European Parliament voted on the EU’s draft AI Act, reshaping the world’s first rules on Artificial Intelligence.
Two committees of the European Parliament agreed on a text that would ban:
- All use of live facial recognition technology (FRT) in public spaces, with no exceptions;
- Running FRT analysis on recorded footage obtained from public spaces, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only with judicial approval in a specific case.
As much as banning facial recognition in public spaces is a huge step forward, Eticas wants to emphasize that the retrospective use of these technologies still represents a major interference with people’s fundamental rights.
Not only have these systems been shown to be biased and discriminatory, but, when used for law enforcement, facial recognition has the potential to enable mass surveillance.
The Members of the European Parliament also broadened the scope of high-risk areas to encompass potential harm to people’s health, safety, fundamental rights, or the environment. Additionally, they added to the high-risk list AI systems that manipulate voter behavior in political campaigns, as well as the recommender systems employed by social media platforms with over 45 million users (those regulated by the Digital Services Act).
MEPs made significant amendments to the previous draft, incorporating prohibitions on intrusive and discriminatory applications of AI systems, such as:
- The previously mentioned “real-time” remote biometric identification systems in publicly accessible spaces.
- “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization.
- Biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation).
- Predictive policing systems (based on profiling, location, or past criminal behavior).
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions.
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, in violation of human rights and the right to privacy.