Years ago, one of the first ministers of the Generalitat's Department of Social Rights received a family whose application for a subsidy for families caring for a child with a disability had been denied. The family’s income slightly exceeded the eligibility threshold, so the system automatically rejected the application. “The only reason we surpass the income limit is because we work tirelessly to save money that will protect our child when we are no longer here to take care of him,” the parents protested. The conversation ended with the Department overturning the decision and granting the subsidy after considering the family’s unique situation.
This story is still remembered in the halls of the Catalan Data Protection Authority (APDCat), and its message has driven the agency to focus on humanizing algorithms and artificial intelligence. On Tuesday, the APDCat presented a groundbreaking methodology in the Parliament of Catalonia to “prevent discrimination and bias” in cases where artificial intelligence plays a key role in decision-making. “We must supervise AI systems,” insists Meritxell Borràs, president of APDCat.
With the unstoppable rise of AI, the European Union approved the Artificial Intelligence Act last year, a regulation that requires a fundamental rights impact assessment for products and services that use artificial intelligence. However, APDCat argues that “there are no clear guidelines in Europe on how to carry this out.” The Catalan proposal introduces a specific method to quantify this impact through a matrix model.
According to risk theory, the impact of an AI system is assessed along two key dimensions: probability and severity. Their combination yields an AI risk index. Based on this index, technology providers must take mitigation measures until the risk becomes “residual.” “Leaving automated decisions that affect people in the hands of AI raises questions about the logic behind artificial intelligence itself, and it is crucial to understand the side effects of this technology,” APDCat emphasizes.
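The matrix model can be pictured as a simple probability-severity grid. Below is a minimal sketch in Python of how such a grid might be scored; the scales, labels, and thresholds are illustrative assumptions for this example, not the values APDCat actually uses.

```python
from enum import IntEnum

# Illustrative ordinal scales; the real methodology may define different levels.
class Probability(IntEnum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3
    ALMOST_CERTAIN = 4

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MODERATE = 2
    SERIOUS = 3
    CRITICAL = 4

def risk_index(probability: Probability, severity: Severity) -> int:
    """Combine the two dimensions into a single risk index (1 to 16)."""
    return int(probability) * int(severity)

def risk_level(index: int) -> str:
    """Map the index onto qualitative bands; the cut-offs are assumed for illustration."""
    if index <= 2:
        return "residual"
    if index <= 6:
        return "moderate"
    if index <= 9:
        return "high"
    return "very high"

# Example: a likely harm with serious consequences before mitigation...
before = risk_index(Probability.LIKELY, Severity.SERIOUS)   # 9 -> "high"
# ...and the same harm after measures that lower its probability.
after = risk_index(Probability.RARE, Severity.SERIOUS)      # 3 -> "moderate"
print(risk_level(before), risk_level(after))
```

In a scheme of this kind, mitigation measures lower the probability or severity scores, and the provider iterates until the banded level reads “residual,” which is the goal the methodology describes.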
AI restrictions aim to anticipate potential harm to users, an issue that has already prompted several administrations to take action. In 2020, a Dutch court ruled that the country’s government had used an analysis system that “stigmatized and discriminated against” citizens when tracking potential fraud cases. In 2023, New York City (USA) regulated the use of algorithms in hiring processes to prevent “racial or gender biases.” Similarly, a 2021 analysis by MIT Technology Review found that COMPAS, an AI system widely used in the U.S. to inform judges about the risk of recidivism among prisoners, disproportionately discriminates against certain minorities, particularly Black individuals.
Experts also warn about AI systems developed outside of Europe. “Models trained outside of Europe and then introduced into our continent do not offer the best guarantees when deployed here, as they are not designed for a European context,” argues Alessandro Mantelero, a professor at the Polytechnic University of Turin (Italy) and one of the developers of the Catalan methodology.