Professor Karen Yeung and Milla Vidina
One of the most commonly expressed concerns about AI systems is their capacity to produce unfairly discriminatory outputs and decisions. Well-known examples abound. In healthcare, AI-enabled decision support systems have been shown to produce less accurate predictions for women than for men, and for black patients than for white patients. In AI-based recruitment, natural language processing algorithms can produce discriminatory results: Amazon found that its AI-based hiring algorithm favoured candidates who used words like “executed” or “captured”, which appeared more often in men’s resumes than in those submitted by women. Similarly, researchers have found that generative AI systems, when asked to create images of people in specialised professions, depicted both younger and older people, but the older people were always men, reinforcing biases concerning women in the workplace.
Addressing these dangers, and other threats posed to fundamental rights, is one of the stated aims of the EU’s AI Act: the world’s first ‘horizontal’ AI regulation. It requires providers of ‘high-risk’ AI systems to adopt risk-management measures to comply with the Act’s ‘essential requirements’, which are intended to address risks to health, safety and fundamental rights. The Act relies heavily on technical AI standards to be produced by European standards organisations; once approved, these standards will confer on AI providers who certify their adherence a ‘presumption of conformity’ with the AI Act’s essential requirements. Hence AI developers and deployers will have powerful incentives to comply with the resulting technical standards.
But will those standards provide effective protection against AI-enabled discrimination? Despite the explosion of interest by data scientists in mathematical approaches to ‘algorithmic fairness’, little attention has focused on how ‘technical design choices’ embedded into machine learning (ML) models and their implementation implicate equality laws in Europe. This is understandable, given the opacity of these systems and the need for difficult-to-find expertise in both law and data science. But the resulting knowledge deficit creates a serious danger that new regulatory regimes intended to foster ‘trustworthiness’ may compromise equality protection. The opportunity to address this deficit is closing fast, as the drafting of European AI harmonised standards under the EU’s AI Act by European standards organisations (CEN/CENELEC) is already well underway. Moreover, one of the most serious shortcomings inherent in the regulatory architecture of the AI Act is a misplaced assumption that technical standards produced by CEN/CENELEC will offer meaningful protection of equality and other fundamental rights. Yet technical experts typically lack expertise in fundamental rights protection.
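To give a flavour of what these ‘mathematical approaches to algorithmic fairness’ look like in practice, the minimal sketch below computes one widely used metric, the demographic parity difference, over a set of hypothetical shortlisting decisions (the group labels and data are invented purely for illustration). A metric of this kind can flag a disparity between groups, but it cannot by itself determine whether that disparity amounts to unlawful discrimination under European equality law, which is precisely the kind of gap between technical and legal expertise the project is concerned with.

```python
# Illustrative sketch only: compares a model's favourable outcomes across groups.
# The group names and decisions below are hypothetical; a real audit would use
# the AI system's actual predictions and legally relevant protected characteristics.

def selection_rate(decisions):
    """Share of individuals receiving the favourable outcome (e.g. being shortlisted)."""
    return sum(decisions) / len(decisions)

# Hypothetical binary shortlisting decisions (1 = shortlisted), split by group.
decisions_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],
}

rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}

# 'Demographic parity difference': the gap between the highest and lowest selection rate.
# A large gap is a warning sign of potentially discriminatory impact, although whether it
# constitutes discrimination in law requires contextual legal analysis, not just a metric.
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate = {rate:.2f}")
print(f"demographic parity difference = {gap:.2f}")
```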
It is precisely these dangers, and this urgency, that the AI Equality by Design, Deliberation and Oversight Project (AI Equality by DDO) seeks to address. Funded by UKRI’s Responsible AI UK programme, the project team is now working to support Equinet, a network of 48 independent statutory equality authorities (Equality Bodies) from over 30 European countries dedicated to the protection of equality and the prevention of discrimination. Thanks to Equinet’s recently acquired ‘liaison’ status with CEN/CENELEC, Equinet representatives are entitled to participate in the AI standard-setting discussions currently taking place. Hence one major aim of the AI Equality by DDO project is to intervene actively in these discussions, striving to develop and embed ‘equality by design, deliberation and oversight’ principles into European AI standards. But this will not be enough. A proper understanding of what equality protection requires, and a widespread culture of respect for equality and other fundamental rights, are also needed to help ensure that those standards are properly interpreted, understood and applied to provide effective protection against AI-generated discrimination across Europe and the UK. Hence the project also seeks to build the capacity of civil society and public Equality Bodies across Europe to understand the implications of technical standards for the protection of equality.
Led by Prof Karen Yeung, this project entails collaboration between academic researchers, AI ethics professionals and a membership organisation. Its mission is to effect ecosystem change by creating knowledge and supporting network building between public equality defenders, civil society organisations and UK tech developers and firms. Accordingly, Equinet’s participation in AI standards drafting by CEN/CENELEC JTC 21 will be complemented by two project strands.
First, it seeks to equip Equality Bodies and civil society with concrete knowledge concerning the intersection of European AI standards and equality rights. The project will build their understanding of equality-by-design (EbD) safeguards – the methods, practices and principles that can be embedded into AI system design and implementation – through which they can monitor and investigate the discriminatory impacts of AI-enabled services and contribute to the prevention of discrimination by supporting the uptake of EbD principles by digital tech developers, AI deployers and policymakers.
Second, it seeks to provide UK tech firms with online training in equality law to address gaps and misunderstandings in their current knowledge so that equality concerns are considered and meaningfully addressed throughout the AI lifecycle. This project component will assess tech developers’ awareness and understanding of equality law and its application to AI systems, while facilitating their engagement with the Equality and Human Rights Commission of Great Britain (EHRC).
The project is now seeking to recruit a legal project officer for 3.5 days a week (a hybrid position, combining remote work and time in Brussels). This is an exciting and important legal policy opportunity to support Equinet’s work on AI and to build equality rights capabilities in the face of increasingly powerful and unaccountable digital systems. Potential candidates are warmly encouraged to contact Milla Vidina (milla.vidina@equineteurope.org) by email in advance to discuss their suitability for the post.