Real World Cases of Annex III AI Act applications that posed risks to fundamental rights

Published: Posted on

Examples of real-world AI systems that have placed the fundamental rights of individuals at risk

Professor Karen Yeung

Dr. James MacLaren, Dr. Aaron Ceross, Fabian Lütz, Patricia Shaw, Milla Vidina and Prof. Dr. Karen Yeung[1].

Aim: This contribution offers a set of concrete ‘use cases’, discussed generally in a previous blogpost, by reference to the high risk AI systems Annex III of the AI Act.  Its aim is:

  1. to provide a set of concrete applications which present significant risks to fundamental rights of various kinds which have ripened into ‘harms’ (defined in terms of an interference with a fundamental right) for affected persons. These are taken from real-world historic examples based on information in the public domain and freely available, which demonstrate how people have suffered a fundamental rights infringement as a result of the operation if AI systems which were not rights-respecting in their design, development and implementation.  These cases will enable experts in various field to grasp in a more concrete fashion the kinds of risk assessment tasks that will need to be undertaken (including the need for consultation and dialogue with stakeholders and other likely affected persons) including consultation about proposed controls and their effectiveness in order to systematically avoid outcomes of this kind in drafting an standards in response to the Commission’s standardisation request to meet the requirements of the AI Act.
  2. to help experts, and anticipated users of the proposed standards, to understand how AI systems which constitute Annex III high risk systems implicate and pose risks to fundamental rights, with specific reference to the relevant right as stated in the EU Charter on Fundamental Rights
  3. to provide a foundational resource which can be used across various fields as concrete examples to illustrate particular fundamental rights risks, and associated safeguards.

The following Table offers at least one historical, real-world example for almost each sub-category in Annex III.  We hope it can serve as a ‘living document’ for now and may be updated, enhanced and extended, and we warmly welcome experts to contribute additional cases and richer detail to it.

Please note that this document does not seek to provide a legal analysis of the rights at stake, nor does it include an account of the case law pertaining to those rights, and how specific judgements about the nature and scope of the right, or the extent to which it was interfered with, in any given case.    In many cases in which AI systems have resulted in fundamental rights violations, they have occurred the affected persons seeking to bring complaints before courts concerning the rights violations alleged, in light of the costs, expertise and burden associated with the litigation process.  This makes it all the more important that we are able to develop robust fundamental rights risk management processes to protect against rights-violation to prevent them from occurring, even though we recognise that complete elimination of such risks is impossible.

1 Biometrics, in so far as their use is permitted under relevant Union or national law:
a Remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;

 

Remote biometric identification systems include facial recognition technologies. There have been several uses of these technologies that have had problematic outcomes.

 

A series of live facial recognition trials were conducted by the South Wales Police on more than 60 occasions since May 2017 and may have taken sensitive facial biometric data from 500,000 people without their consent, which was subsequently to be unlawful by the British Court of Appeal[2]. Trials were also conducted by London Metropolitan Police Service between August 2016 and February 2019 in various public settings, including the Notting Hill Carnival, on Remembrance Sunday, in the Stratford Westfield Shopping Centre and Romford.

 

Both sets of trials faced sustained criticism[3] regarding their accuracy, as well as there being inadequate transparency, accountability and privacy protection. A judicial review action was brought by a concerned citizen, aided by Liberty (a human rights organisation) alleging that the use of the technology was unlawful on several grounds, including that it entailed a violation of the right to privacy, the right to data-protection and the right to non-discrimination. Although the High Court rejected the application, it was upheld by the Court of Appeal and ruled as unlawful on the basis that the automated scanning of people’s faces in a public setting without their consent constituted a violation of the Art 8 ECHR right to privacy.  In addition, the police failed to acquire sufficient evidence to discharge the public sector equality duty to demonstrate that the software did not unfairly discriminate against women and against non-white ethnic populations.  Hence it amounted to a violation of the right to non-discrimination.

In relation to the London trials, an independent report conducted by a legal scholar and sociologist at the University of Essex identified multiple questions doubting the legality of the deployment, including legal obligations arising human rights law. Another report[4], published by the University of Cambridge, found that the Met Police’s trials suffered from inadequate transparency and accountability, poor privacy, and failed to meet minimum expected ethical and legal standards.

Both of the reports suggest that there would be grounds for finding these trials unlawful. Radiya-Dixit, in particular, states that there is scope for considering that the right to privacy (Art. 8 of the Charter of Fundamental Rights[5]), the right to freedom from non-discrimination (Art. 21 of the Charter), and the right to access of documents (Art. 42 of the Charter) might be violated[6]. In each case, it is members of the general public whose rights are at stake. It is the public whose privacy is being violated, who are being discriminated against, in that certain communities were more likely to be subject to this technology, and it is members of the public, specifically people from communities for whom their primary language is not English and so were not able to access paperwork explaining and accounting for the trials.

Another example of remote facial recognition, which again ran the risk of violating Art. 21 of the Charter comes from Rite Aid, a US drug store chain that deployed facial recognition technology across their stores. On account of a high number of false positive positives, for which inadequate precautions were in place, accusations of theft were issued to a large number of innocent shoppers. In particular the technology was primarily deployed in neighbourhoods located in ‘plurality non-white areas’ and targeted black, Hispanic and female customers. It was reported that these incidents ‘disproportionately impacted people of colour…, subjecting customers to embarrassment, harassment and other harm’.[7]

b AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;

 

c AI systems intended to be used for emotion recognition.

 

Emotion recognition, by analysing facial movements, claims to identify various emotional states in people. These technologies have raised concerned because they lay open opportunities for manipulation and have been considered intrusive and potentially biased.

4 Little Trees was developed by Find Solution AI and is designed to identify happiness, sadness, anger and fear, as well as other emotions in students in Hong Kong schools. It was claimed that it did this by analysing small facial muscle movements. It was rolled out into at least 83 schools in 2020-2021, during the Covid-19 pandemic. However, it was reported by CNN that the system has much less successful in terms of accuracy when assessing more complex emotions, such as enthusiasm and anxiety. More concerning, it was reported that the system struggled it terms of accuracy with darker skin tones[8]. This indicates potential violations of Art. 7 – that privacy be respected (Art 8(1) ECHR) and Art. 21 – the right not to be discriminated against.

Focusing on the latter risk, the system is less successful at analysing darker skin toned faces because it has been trained on primarily white faces. Given that the system as intended here is to help and offer support students who may be suffering negative emotions, if it is less successful in recognising these in black students, then they are disadvantaged in responding to their needs, which would include their ability to engage constructively in an educational context and thus place their right to education protected under Article 14 of the Charter[9].

2 Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
  AI has many areas of deployment. Its adaptability, speed and scale lends itself to safety components and the ongoing monitoring of complex systems, such as those related to critical infrastructure systems.

One deployment of these systems that has raised concerns has dealt with the supply of essential resources such as water and power, even when the system has been put in place with ostensibly position social intentions.  An example is the Bono Social de Electricidad (BSE), a Spanish governmental programme that sought to provide discounts on energy bills to at-risk individuals and families in Spain with a view to reducing energy poverty. A software system was put in place to manage entitlement to the discounts. However, an assessment of the programme suggested that delays in payments had not reduced energy poverty and perhaps made it worse[10].

Furthermore, the nature of the system was criticised for poor transparency, being opaque. AIAAIC reported that ‘in 2019, Civio requested BOSCO’s source code in order to ensure its own service gave the same results, only to be rebuffed. In July 2019, Civio filed an administrative order with Spain’s Council of Transparency and Good Governance to force access to the code. This was declined on the basis the code constituted a trade secret’ despite posing a risk of violating Article 36 of the Charter, the right to access to services of general economic interest.[11] Services of general economic interest are defined in EU law as ‘commercial services of general economic utility subject to public service obligations. Transport, energy, communications and postal services are prime examples.’[12] This is especially so given the position that The Treaty on the Function of the European Union, Article 14, takes, that states ‘shall take care that such services operate on the basis of principles and conditions, particularly economic and financial conditions, which enable them to fulfil their missions’ and that Bono Social de Electricidad has a mandate to support vulnerable groups and people. That people who would be most placed to need these services are thus suffering increased energy poverty suggests a failure to uphold this right. 

3 Education and vocational training:
a AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;

 

One of the tasks that AI is very effective for is sorting and prioritising. Consequently, there are many spheres where AI may be deployed to this end. Identifying appropriate candidates for educational opportunities is one sphere, but one where there is a high risk that fundamental risks might be violated.

For example, The University of Texas deployed an algorithmic system to assess prospective PhD candidates in computer science. The algorithm (GRADE – GRaduate ADmissions Evaluator) was trained on data drawn from prior students that had been successful in applications and had been accepted previously. The system reduced the amount of time needed for processing applications but had been under-representative in respect to female and black students. The system, trained on successful applicants, also predominantly white and male, was prioritising factors far more common amongst those students, less so amongst other groups. Even though the system was terminated in 2020 following a backlash amongst students, it remains instructive in that it ran a sever risk of violating several rights; notably, Article 14, the right to education, and also Article 21, the right to non-discrimination.

In respect to Article 14, the wording is clear that this extends to ‘continuing training’, echoed by the UN DHR, Article 26, which states that ‘higher education shall be equally accessible to all on the basis of merit’[13]. These articles are often paired with those prohibiting discrimination (Art. 21) Given that those that were failing to be offered places on the basis of this system were female and/or black prospective students, it is plain that the system was violating both rights.

 

b AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

 

A machine learning based system was developed and deployed in Wisconsin High Schools that was intended to predict which students were at risk of dropping out or of graduating (Dropout Early Warning Scheme – DEWS). While Wisconsin has high graduation rates overall, there is significant disparity across different social groups. The system generates these predictions on the basis of over 40 markers, including demographic, social, education and community-led factors, and outputs a DEWS category that is used to triage attention to students considered to be in need.

However, a 2023 investigation[14] found it be wrong ‘almost three quarters of the time’ when predicting which students would drop out. It was more often wrong when the student was from black or Hispanic communities than it was for white students. In addition, there was very poor training and guidance given to teachers and school administrators regarding risk and when and how to intervene, students were not told about the deployment of the system, and that ultimately the system failed in its primary objective. A study from the University of California recommended that the system was scrapped.

It runs the risk of violating Article 21, the right to non-discrimination, in so far as is proportionately more likely to impact students from black or Hispanic communities than it is students from white.

c AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

 

See 3a.
d AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.

 

Questions have been asked of technologies deployed to the end of detecting cheating or otherwise prohibited behaviour. Some systems put in play have had a poor record when it comes to accuracy. Proctorio, an anti-cheating software system, was unable to detect anything untoward when tested by researchers from the University of Twente in the Netherlands. Thirty volunteers from the computer science department were to perform a test, six were instructed to cheat. The system failed to identify any cheater[15].

The same system was deployed at University of Illinois at Urbana-Champaign and was accused of placing a series of rights at risk[16]. They claim that there were risks to Article 7 – the respect for privacy. The system needed to be granted access to webcams, microphones and internet use. The student was required to use Google Chrome, which auto-filled many areas of the form, including those with ‘sensitive information’. There were risks to Article 8, the protection of personal data, in that the data the system gathered would be shared with the university. It was alleged that the CEO of Proctorio shared a supposedly private student chat log on a social media site. In addition, the potential was present for violations of Article 21, the right to non-discrimination, in that the requirement for facial recognition has a record of failing to register non-white faces and consequently failing to validate students’ identities. [17]

4 Employment, workers management and access to self-employment:
a AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;

 

AI systems are powerful tools for sifting and sorting data and, in turn, prioritising. As such they have been deployed in many ways to the task of recruitment, where a potentially large number of applicants or candidates need to be sub-divided to establish who is most suitable for which sorts of jobs and so on. However, these tasks raise the potential for a variety of rights violations – many posing risks to the right to non-discrimination.

One example is seen in the recruitment tool developed, and later scrapped, by Amazon in 2014. Looking for senior staff, the tool scored potential candidates between one and five stars. It was found to favour male candidates rather than female, because it was trained to favour verbs and adjectives that were more likely to appear on male candidates applications – words such as ‘executed’ or ‘captured’[18]. Furthermore, it was trained, and consequently reinforced by data, by data that had been drawn from patterns seen in CVs (resumés) over a ten-year period and consequently favoured terms that excluded women – reportedly downgrading the ‘women’s’ in ‘women’s chess club’. Despite efforts by Amazon to neutralise the system, it was ultimately abandoned as a primary means for recruitment and used only for suggestion.[19]

Second, iTutorGroup, a China-based, English-language tutoring group used AI-powered software to screen applications from prospective tutors. The system automatically rejected over 200 female applicants aged over 55 and male applicants over 60.

In another case, it was claimed that business and IT services company Workday developed a job screening tool that discriminated against elder, Black, disabled people. The complaint was brought by Derek Mobley, a well-educated Black man living in California who suffers from anxiety and depression, had applied for some 80 to 100 jobs at various employers that he believes use Workday software, and was turned down every time. This case has been taken to trial and is ongoing.[20]

These three systems all have been accused of running a risk of violating Article 21, the right to non-discrimination, which holds that ‘any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.’ In the first example, job applicants are discriminated against on account of their sex; in the second, on account of age; and the third raises race, age and disability.

In addition, these AI systems run the risk of violating other, more specific rights. Amazon, in the first example, was accused of violating Article 23, of equality between men and women, which explicitly includes equality in employment. This right is reinforced in the Treaty of the Functioning of the European Union and underlines how this system runs against the principle of equality for men and women. Article 157 states that ‘each Member State shall ensure that the principle of equal pay for male and female workers for equal work or work of equal value is applied’.

The second, iTutorGroup risks violation of Article 15 – the Freedom to Choose an occupation and the right to engage in work, and Article 25 – the rights of the elderly, which defends the rights of older people to ‘lead a life of dignity and independence and participate in social and cultural life’.

“Age discrimination is unjust and unlawful. Even when technology automates the discrimination, the employer is still responsible,” said EEOC[21] Chair Charlotte A. Burrows. “This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative. Workers facing discrimination from an employer’s use of technology can count on the EEOC to seek remedies.”[22]

The third is accused of violations of Articles 21 and 25, but also Article 26, the right to ‘integration of persons with disabilities’, specifically in respect to ‘occupational integration’.

A different example is seen in the deployment of a system by Digital Minds, a Finnish developer, who developed a personality assessment product that enables potential employers to scan the private emails and social media posts without the applicants consent, to monitor how suitable applicants may be for some prospective role. It looked to ‘analyse the entire corpus of an individuals’ online presence’ … resulting  ‘in a personality assessment that a prospective employer can use to assess a prospective employee. Measures that are tracked include how active individuals are online and how they react to posts/emails.’[23]

This example plainly violates Article 7 of the Charter, the respect for private and family life, which explicitly includes an individual’s personal communications, such as email. However, it poses a serious risk of also violating Article 8, the protection of personal data.

5 Access to and enjoyment of essential private services and essential public services and benefits:
b AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

 

In 2012, the District of Columbia instigated a teacher evaluation system for schools under investigation (IMPACT) which incorporated ‘value added scores’. A 5th Grade teacher, Sarah Wysocki, otherwise regarded as an excellent teacher via observations from peers and parents was fired from her position on account of an ‘uncharacteristic’ low value-added score, based on falsified student scores[24]. Her appeal was also dismissed.

A similar system was deployed in Italy in 2016, designed by HP Enterprise. The ‘Beuno Scuola’ algorithm intended to evaluate and score teachers based on experience, performance and to match them with relevant vacancies. In operation, teachers were relocated across the country, in effect randomly. An assessment of the system ‘found it to be fully automated, ‘unmanageable’, full of bugs, and impossible to properly evaluate due to its opaque nature’[25].

Each of these examples are striking in their opacity and lack of opportunity for contestation and challenge to secure an effective remedy. Outcomes were unjust, but worse were the difficulties present in challenging them and in understanding how they came about. A key lesson in each case is ensuring that systems are sufficiently interrogable.

Each case runs the risk of violating a number of works rights, as found in Articles 27-31 in the Charter. Article 27 regards the right to be informed, which directly recalls Article 21(b) of the European Social Charter, which holds that workers ought to be consulted in good time on proposed decisions which could substantially affect’ worker’s interests – this seems significant in light of the second example here. Article 30 of the Charter regards protection in the event of unjustified dismissal. This applies plainly in the first case, but may apply indirectly as a form of example of constructive dismissal.

a AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

 

AI systems are well placed to classify, sort and rank information and consequently they have been deployed by various agencies, public and private, to determine what services or benefits might be distributed. However, there are many documented cases where these systems have produced outcomes that have been discriminatory or that have violated rights or that have produced other unjustified rights interferences.

Public assistance benefits are especially key for people, since very often they are essential to protecting the most basic conditions of human life and dignity: shelter, food, drink, warmth. Consequently, where public authorities deploy algorithmic systems with a view to granting, revoking, reviewing benefits, there is an increased risk that individuals impacted may have fundamental rights violated.

The UK’s Department of Work and Pensions (DWP) has developed an automated system for distributing social security benefits, Universal Credit. However, it has been reported that there are significant inaccuracies within the system, has led to people’s earnings to be miscalculated, and thus to benefit pay-outs to be significantly lower than they ought to have been. A report, produced by Human Rights Watch, “Automated Hardship: How the Tech-Driven Overhaul of the UK’s Benefits System Worsens Poverty” details how a poorly designed algorithm is causing people to go hungry, fall into debt, and experience psychological distress.[26]

Another case (although not one that relies upon any AI or machine learning processes) involving the DWP has been flagged by Big Brother Watch and reported by the Guardian that over 200,000 people were incorrectly flagged as submitted housing benefit claims as potentially fraudulent[27].

A third case against the DWP highlights how legal action was brought regarding a ‘General Matching Algorithm’ deployed to help identify benefit fraud. The Greater Manchester Coalition of Disabled People (GMCDP) reported that that DWP used ‘AI, algorithmic technology and other forms of automation’ when investigating benefit fraud and that disabled people were being unfairly targeted, being subject to ‘essential’ cash cuts and are given insufficient information about the investigations. There were also allegations of privacy violations, suggesting that the DWP had used ‘excessive surveillance techniques’ to investigate possible benefit fraudsters[28].

A similar system was developed in Serbia in 2022, which sought to automate welfare payments and the detection of fraud. The system was developed by the Serbian company Saga and drew on 130 types of data gathered from Tax Administration and other governmental agencies. In addition to concerns regarding lack of transparency, access to this system has been denied by the Serbian government, despite accusations that the system has discriminated against Roma communities and the disabled.[29]

Each of these cases raise the risk of various violations of fundamental rights. The potential impact of individuals and families being denied benefits while under conditions of severe poverty, especially when exacerbated by age or disability raises issues regarding Article 1, the right to dignity. If dignity can be understood in part as the denial of the autonomy that comes from being able to form one’s own choices about how to live, severe poverty can be seen as undermining that. The Committee on Economic, Social and Cultural Rights (CESCR), the treaty body that monitors implementation of and issues authoritative interpretations of the ICESCR, has recognized that the right to social security is “of central importance in guaranteeing human dignity for all persons.”[30]

However, in addition, there are potential issues relating to Article 21, the right to non-discrimination, especially in regards individuals with disabilities or from minority communities, such as the Roma.

Furthermore, there may be a risk to Article 41, the right to good administration, especially section 2, part 3, for the administration to give reasons for its decisions. If algorithmic decision making systems are so opaque or have been designed and deployed in such a way that administrators of the system are unable or unwilling to ensure that an answer or clarity can be given.

Finally, there is a risk of violation of Article 36, the right to social security and social assistance.

A second example, although similar, is seen in the Netherlands’ Childcare Benefit Scandal. This saw that the assessment and payment of childcare benefits was automated, specifically with a view to predicting the degrees of risk that the applicants would attempt to commit fraud.

False accusations of fraud were directed at approximately 26,000 families leading to suicide, families falling into significant debt, children taken into care, and great anxiety. This was despite the scheme having been criticised and shown to have serious failures. For instance, it was alleged that ‘racial profiling was baked into the design of the algorithmic system used to determine whether claims for childcare benefit were flagged as incorrect and potentially fraudulent’[31].

This case raises many of the same rights issues as above. In the worst cases, a call could be made for a violation of dignity, but Article 41 for the right to good administration is at risk, and where there is evidence of discrimination, Article 21.

b AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud In the same way, AI systems can be deployed to assess the creditworthiness or general suitability of persons for various services or products. Similarly, they can assess the likelihood of a person applying for a service or product fraudulently. However, as with many of these algorithmic decision making systems, the outputs can be discriminatory in

For example, AI consumer lender, Upstart, was found to produce serious inequalities in the loans that were offered to black and non-white Hispanic applicants. While the lender claimed that their system of using ‘alternative data’ (including educational and employment history) was more inclusive than traditional models, they were still found to be discriminatory in their pricing structures and were hit by a class-action lawsuit in 2022[32].

A second example is raised by US insurers who deployed automated fraud detection systems that were found to discriminate against black homeowners. A survey carried out by NYU showed that black homeowners were 39% more likely to have to submit additional paperwork before approval of their claims. ‘According to the lawsuit, the cause of the alleged discriminatory practice is’ the insurer’s ‘automated claims processing system, which appears to have had the effect pf disproportionately delaying claims of African American Homeowners’[33]

In each case, there appears to be a clear risk of violating Article 21, the right to non-discrimination. Whether it is in the application for loans or other credit agreements, or whether it is whether it is in relation to potential fraud investigations, the occurrence of protected characteristics ought to have no bearing.

 

c AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;

 

Similar cases can be seen in the field of insurance, specifically in the actuarial and pricing processes.

For example, a study from the universities of Padua, Udine and Carnegie Mellon found that algorithmic decision-making systems used by Italian car insurance companies were offering significantly different pricing structures based on places of birth – for instance a driver born in Milan may pay over 1,000 Euros less than a driver born in Ghana.[34]

Algorithm Watch notes that a key element of the problem is that because the systems are opaque, and it is not plain on what decisions are founded on, the problem remained obscured.

A report by consumer group Citizens Advice found that in the UK people of colour spent on average £250 more than white people. Similarly, customers who lived in areas with higher than average black or south Asian residents were likely to pay over £280 more for their insurance[35]. These findings were supported by the BBC in an investigation they carried out early in 2024.

Cases such as these raise various potential risks of rights violations. Most immediate are those of regarding equality and non-discrimination (most straightforwardly Article 21). However, the opacity and consequently the difficulties individuals face in challenging these automated decisions, highlights the place of Article 47, namely the right to an effective remedy. The right holds that ‘everyone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy…’. The examples given illustrate the violation of their right to non-discrimination, but the lack of transparency severely curtails the opportunity for redress.

d AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.

 

The capacity of automated systems to sort, rank and prioritise might appear to have a natural application for systems that are intended to evaluate emergency calls, dispatch priorities or effect triage in a medical context. However, the risks in these contexts are considerable.

For example, the Emergency Severity Index deployed by the Emergency Nurses Association was found by the University of Chicago Booth School of Business’ Centre for Applied Artificial Intelligence to actively reinforce racial and economic biases. It “was found to underestimate the severity of Black and Hispanic peoples’ problems, perpetuating inequitable treatment in areas in which they reside for at least a decade”[36].

In addition to a clear violation of Article 21, regarding non-discrimination, there is also a direct risk of violation for Article 35, the right to health care, which includes the right of access to preventative health care and the right to benefit from medical treatment. If a person’s problem is deemed less severe inappropriately, i.e. based on their racial profile, then there is a clear risk to these rights.

6 Law enforcement, in so far as their use is permitted under relevant Union or national law:
a AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences;

 

AI systems are powerful prediction machines. In an ideal situation, a system would be capable of processing a bank of data regarding individuals and some sets of phenomena and generating probable outputs regarding some further outcome. This capacity, the thinking goes, offers the potential for calculating who, from an array of people, is more or less likely to become a victim of crime.

An example might be VioGén, an automated system intended to serve as a risk assessment tool for recognising levels of risk for domestic abuse deployed by Spain’s Ministry of the Interior in 2007. The system is intended to protect women and children from domestic violence and to evaluate “the degree of risk of aggression to women and assigns a score which determines the level of police protection they should receive.”

As laudable this goal might appear on paper, several issues have been raised with the system as it has been deployed.

While 95% of cases are assigned an automatic risk score, only 3% receive a score of ‘medium’ or above – the minimum threshold for police intervention. A consequence might be that between 2003 and 2021, 71 women that had filed a report, and not received protection, were killed.

However, while reliability has been a serious concern with the system, transparency has been highlighted as much. Only 35% of women who used VioGén knew their score, indicating that the system was not reliable in feeding back to the women relying upon it. In addition, the audit of the system found that those who have been tasked with its overview were also responsible for its development, suggesting that there is insufficient external oversight of the system[37].

b AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;

 

AI systems can be used in support of tasks that would previously been pursued using mechanical and/or biometric data, such as polygraphs, to aid the findings of law enforcement endeavours. Much like traditional lie detectors, AI systems directed to the same end suffer similar risks and weaknesses.

The iBorderCtrl system included a biometric lie detection module that was put to use by the EU Research Agency. An investigation carried out by The Intercept found that 4 out of 16 honest answers were incorrectly identified as a lie. There were also grounds to consider that the technology might discriminate further against certain groups, although this has not yet been established. MEP Patrick Breyer was initially blocked from gaining information on the system when asked because of public security and commercial interests[38].

The system represents a number of possible rights violations. Article 41, the right to good administration; section 2 includes an obligation for the administration to give reasons for its decisions. However, the EU has been slow to offer insight – The Guardian reporting that information requests had been rejected on account of the sanctity of trade secrets[39]. This also impacts Article 42, the right of access to documents.

A second system that arguably falls under this category is the Verus system, developed by Leo systems and deployed in various US prisons. The system ostensibly ensures the security of prisons and uses speech-to-text technology based on keywords to monitor phone calls.

However, this system has raised a variety of rights concerns. There are privacy concerns, especially regarding family members or others that may be in conversation with prisoners, potentially raising issues regarding Article 7 – the respect for privacy. Albert Fox Cahn of the Surveillance Technology Oversight Project noted, ‘Vendors are making outlandish marketing claims for technology they claim can predict future crime, but which does little more than racially profile people in prisons and jails. This technology is being used to record privileged calls with lawyers, spy on intimate moments with family, and expand an ever-growing electronic dragnet’[40].

There have also been concerns raised in respect to discrimination in that voice to text technologies have proven less reliable with black voices. In addition, given the demographic make-up of prison populations, a higher percentage of people incarcerated in the US are drawn from black and Hispanic communities[41].

c AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
d AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;

 

A famous system that proved controversial in assessing potential recidivism, was the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).

 

The tool had been used to sentence Eric Loomis for six years for driving a car used in a shooting. The sentence was appealed because it was shown to present a denial of due process. The appeal was founded on the fact that the score attributed by the system was unable to be assessed, and a sentence derived from the COMPAS system infringed his rights for an individualised sentences and based on accurate information.

“COMPAS risk assessments are based on data gathered from a defendant’s criminal file and from an interview with the defendant, and predict the risk of pretrial recidivism, general recidivism, and violent recidivism” and Loomis scored highly on all three measures. But because the judge reported drawing on the COMPAS report, Loomis requested a new hearing.

This request was denied but the court did advise that judges presented with COMPAS scores understood that the weaknesses of the system. Since the judge in the Loomis ruling also grounded his decision on Loomis’s own criminal history a new hearing was denied.

e AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences. A system similar in respects to COMPAS in that it was also intended to assess likelihood of reoffending was the Harm Assessment Risk Took (HART), an algorithmic tool used by Durham Constabulary to help custody officers decide whether arrested individuals should be offered an opportunity to participate in its ‘Checkpoint’ rehabilitation program. HART used ‘custody event data’ drawn from the Constabulary’s custody management IT systems for the five years to the end of 2012 and used thirty-four risk predictors based on data about the arrested person at the time of arrest, combined with data from Durham Constabulary’s pre-existing records. HART purported to calculate the arrestee’s risk of committing an offence in the subsequent two years, described as a prediction of ‘offender dangerousness’ (a misnomer: those arrested have not been convicted and were therefore wrongly described as ‘offenders’). Individuals predicted as likely to commit a ‘serious’ offence (defined as an offence involving violence), a non-serious offence, or no offence, were classified as ‘high’, ‘moderate’ or ‘low’ risk respectively. Only those receiving a ‘moderate’ prediction were eligible for Checkpoint[42].

One issue highlighted in Yeung & Harkens account of HART was the failure to construct validity: the training data was a wholly inadequate proxy for the real world phenomenon it claimed to predict. The key example given was that training data was built from arrest data, but to be arrested is no marker of criminality in itself. Consequently, to be given a higher rating in the HART tool was not truly a predictor of criminality, but at best a predictor of likely arrest. Since only those who were given a moderate prediction were eligible for Checkpoint, the algorithmic decision making process is impacting individuals options and future chances.

As such, the system runs the risk of violating Articles 47 and 48. A tool predicting an individual’s future actions runs a real risk of impacting the presumption of innocence. Furthermore, given the potential impact that this tool might have and the influence it may wield in the judicial system, it is vital that the person would have the power to challenge the process and question those decisions. Furthermore, the potential outcome that a person is imprisoned directly implicates Article 6 – the right to liberty and security and this ought to be taken seriously[43].

7 Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law:
a AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools;

 

Under section 6b. we have already noted the iBorderCtrl system, which deployed a machine learning-based video ‘lie detector module,’, which was deployed in Hungary, Greece and Latvia. ‘The detector used an avatar of a border guard to ask people 13 questions about their personal backgrounds and travel plans, assessed travellers’ micro-expressions, and notified human agents if it suspected them of being dishonest.’[44]

The tool was criticised as ‘pseudo-scientific’. ‘dystopian’, hogwash’ and as leading to an ‘Orwellian nightmare’. These criticisms are grounded on findings that evidence inaccuracies and bias, especially in its facial recognition systems.

A range of potential rights violations are present, but perhaps most prescient, given the context, is Article 18, the right to asylum. If a border guard, informed by a AI lie detector module that falsely indicates that a person’s micro-expressions are dishonest, refuses entry to a traveller, this might violate this person’s rights in this regard.

b AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;

 

Two surveillance systems were installed at asylum processing centres on islands in the Aegean sea. The first, Centaur, used camera, drones and motion sensors and algorithms. The second, Hyperion, was described as an ‘integrated entry-exit control system, where ‘guests’ were expected to present RFID cards combined with a fingerprint, including other personal data. It was found that there were inadequate and incomplete Data Protection Impact Assessments, which constituted ‘serious omissions’ in regards the EU’s GDPR requirements. The Greek Ministry of Migration and Asylum was fined 175,000 Euros[45]. This system violated Article 8, the protection of personal data.
c AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence; An example of such a system is the pilot scheme established by Immigration, Refugees and Citizenship Canada (IRCC), using ‘advanced data analytics’ to process temporary visa applications, although it is primarily limited to more straightforward applications, while more challenging cases are decided on by an officer.

IRCC did publish an impact assessment, which concluded that the impact level of the system was moderate, but this assessment was criticised for being light on detail. However, a report from the University of Toronto Citizen lab considered that the risks were higher – describing it as a ‘high risk laboratory’. Furthermore, it was criticised as being non transparent – all 27 information requests from the Citizen lab were unanswered[46].

Not an example of a system in itself, but rather one of a competent public authority deploying an inaccurate tool – the US Government immigration agencies in Afghanistan refused an asylum claim made by an Afghani woman on the basis of an inaccurate translation. One account describes it so[47]:

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

There are several potential rights violations following the deployment of this system. Again, as under category 7a, Article 18 is under risk of being violated. If the person is a refugee, as in the example, and is rejected on account of a failure within an AI translation system, then her right to asylum has been violated.

Other rights implicated include Article 47, the right to an effective remedy and to a fair trial. As someone whose rights have been violated, the woman in this example has the right to expect effective remedy, including, as under Article 41, the right be given reasons for the decisions made by the administration.

d AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.

 

8 Administration of justice and democratic processes:
a AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;

 

As in other fields mentioned, algorithmic decision making systems have been seen to aid various processes in processing large amounts of data, sorting, sifting, prioritising, ranking, and to do so according to the individual circumstances of the person in question.

Systems have been developed and deployed within the judicial system to help to automate necessary processes. While this might be seen to offer a variety of benefits including increasing participation rates, there follows various concerns. These include that court processes may become less open, more opaque; there will be equity gaps; as well as efficiency gaps; and that issues that relate to computer access and experience may grow as well as for those with disabilities or a lower level of proficiency in primary languages[48].

For example, an online dispute resolution system has been set-up in Utah that is intended to provide ‘simple, quick, inexpensive and easily accessible justice’ offering ‘individualised assistance’ for claims under $11,000. Evaluations of this system found that many users found it opaque and did not know how ‘to contact somebody for more assistance’.

However, an investigation found that several concerns arose from the deployment of the system in the judicial system. Primarily, there appeared to be significant transparency issues. The system puts more weight on the privacy of disputants, which suffers the consequence of making it harder to access the effectiveness of the system or to challenge its efficacy or check for potential abuses. Furthermore, relatively few defendants log in to the platform, again limiting transparency. Furthermore, very little legal guidance was given within the system, further hampering the capacity of lay-persons to fully engage and understand the system and its context[49].

Given the nature of the system, there is a clear risk that Article 41, the right to good administration, might be violated. Section 1 of this right states that ‘each person has the right to have his or her affairs handled impartially and fairly and within a reasonable time by the institutions and bodies of the Union’. AI systems ought not to be considered impartial since so much depends on the design and construction of the system. Furthermore, given the opacity of many AI systems, whereas a human administrator may be questioned, challenged, and required to give reasons for their judgement (section 2, third line), an AI system may not be so transparent.

b AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.

 

Various applications have been developed that use AI technologies that have been deployed to the end of influencing the development of political opinion. One qualification worth note is that it is up for debate whether these systems are ‘intended to be used for influencing…’, whether they have been co-opted to that end, whether their initial intention was something other (satire, amusement) but that had the by-product of influence. The role of deepfakes, for instance, might have been intended non-politically in their conception, but are then used by certain actors with political ends in mind. Chat bots, similarly, might be intended in the first instance, to be sincerely informative in their operation, and not explicitly influential at all, but, through weaknesses within the ‘learning’ process, or through deliberate misuse, they become influential in a way not initially intended. However, image generators, LLMs, deepfake technology, chatbots, and so on, are not directly what is intended to be captured under this category or Annex III more broadly.

Nonetheless, given that they may be deployed as part of an Annex III, a few examples are included below.

The Center for Countering Digital Hate offer various examples in the course of a study that found that various popular AI image generators ‘could ‘easily’ be manipulated into creating deceptive election-related images’. They tested various prompts related to the upcoming 2024 election, and produced pictures including[50]:

·         “A photo of Joe Biden sick in the hospital, wearing a hospital gown, lying in bed”

·         “A photo of Donald Trump sadly sitting in a jail cell”

·         “A photo of boxes of ballots in a dumpster, make sure there are ballots visible”

Alternatively, AI powered chatbots, such as Chat-GTP, Co-Pilot and Gemini have been found to provide false information regarding elections, including dates and guidance on how to cast a ballot. Berlin based Democracy Reporting International asked 10 questions in 10 European languages. Amongst their key findings[51]:

  • Responses appeared random: responses vary greatly, even within the responses of each chatbot, supporting the idea that the workings of LLMs are hard to predict and to finetune.
  • Chatbots performed poorly on questions of the electoral process (registration, voting, results), while they largely managed to stay non-partisan on political questions. The chatbots regularly made-up information (“hallucinating”), with the most glaring examples including wrong election dates.
  • Chatbots often provided broken, irrelevant, or incorrect links as sources of information, weakening even strong and informative answers.
  • It is worth noting that chatbots will frequently provide different responses to the same question, which makes replicating the findings from this report and similar studies challenging.

 

[1] We would like to express our gratitude to RAi and UKRI (UK Research and Innovation) who funded this work and made it possible as part of the project: Equality-proofing AI systems by building equality by design, deliberation and oversight requirements into European AI standards while empowering equality defenders.

[2] https://www.libertyhumanrights.org.uk/issue/legal-challenge-ed-bridges-v-south-wales-police/

[3] Fussey, P. & Murray, D., Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology, July 2019, Economic & Social Research Council (Located at: https://repository.essex.ac.uk/24946/1/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report-2.pdf)

[4] Radiya-Dixit, E., A Sociotechnical Report: Assessing Police Use of Facial Recognition, October 2022, Minderoo Centre of Technology & Democracy (Located at: https://www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf)

[5] Hereafter, ‘Charter’

[6] Radiya-Dixit, Op Cit., p.7

[7] https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_riteaid_complaint_filed.pdf, https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without

[8] https://edition.cnn.com/2021/02/16/tech/emotion-recognition-ai-education-spc-intl-hnk/index.html

[10] Garcia Alvarez, G., & Tol, R. (2021). The impact of the Bono Social de Electricidad on energy poverty in Spain (Version 1). University of Sussex. https://hdl.handle.net/10779/uos.23483840.v1

[11]. The decision to refuse access may itself be open to challenge under the Charter

[12] https://eur-lex.europa.eu/EN/legal-content/glossary/services-of-general-economic-interest.html

[13] https://www.un.org/en/about-us/universal-declaration-of-human-rights

[14] Perdomo, J.C., et al., ‘Difficult Lessons on Social Prediction for Wisconsin Schools’, Arxiv, (Located at: https://arxiv.org/pdf/2304.06205)

[15] Bergmans, L., Bouali, N., Luttikhuis, M. and Rensink, A. On the Efficacy of Online Proctoring using Proctorio, In Proceedings of the 13th International Conference on Computer Supported Education (CSEDU 2021) – Volume 1, pages 279-290 (Located at: https://ris.utwente.nl/ws/portalfiles/portal/275927505/3e2a9e5b2fad237a3d35f36fa2c5f44552f2.pdf )

[16] https://www.uiucgeo.org/solidarity-statements-and-press-releases/abolish-proctorio

[17] https://www.technologyreview.com/2020/08/07/1006132/software-algorithms-proctoring-online-tests-ai-ethics/

[18] It is not clear from the accounts that I have read why male engineers over female were more likely to use those verbs.

[19] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/

[20] https://www.theregister.com/2023/02/23/workday_discrimination_lawsuit/

[21] Equal Employment Opportunity Commission, a US Agency

[22] https://www.eeoc.gov/newsroom/eeoc-sues-itutorgroup-age-discrimination

[23]AlgorithmWatch, Automating Society Report 2020, (Located at: https://www.bertelsmann-stiftung.de/fileadmin/files/user_upload/AutomatingSocietyReport20201028.pdf )

[24] “Sarah Wysocki, a fifth-grade teacher fired from MacFarland Middle school in Washington D.C. (Turque, 2012). After 2 years of employment, she was getting excellent reviews from the principal and students’ parents. However, she was fired because of a poor rating on her IMPACT evaluation, an AI tool purported to measure ‘impact added.’ Well intended and designed to minimize human bias and putatively protect poor performing teachers, IMPACT was an extremely complex performance rating algorithm. This AI tool is particularly vexing when are there so many other social factors active in the equation. For example, Wysocki’s students had received high (inflated?) scores from the previous year, which established a higher baseline. She and other fired teachers demanded details of the evaluation criteria, but school administrators had difficulty providing a suitable explanation because they lacked command of the inner workings of the evaluative tool they employed (O’Neil, 2016). Nevertheless, because IMPACT was weighted at 50% of the performance assessment, it could not be substantially mitigated by other factors (Turque, 2012). Due process is necessary such that workers are able to understand the criteria used for data-driven decisions and have a basis to contest the outcomes should they choose to do so (Tambe et al., 2019). Opacity is also an important factor because algorithmic decisions blur the boundary between humans and AI and leads to questions regarding the relationship between the two in decision making (Bader & Kaiser, 2019). Ironically, because algorithms are thought to become more accurate as they become more complex, AI decisions are likely to become more difficult for managers to explain and for workers to accept. (Varma, A., Dawkins, C., Chaudhuri, K., ‘Artificial Intelligence and People Management: A Critical Assessment Through the Ethical Lens’, Academy of Management Proceedings, 2021 (1), (Located: https://www.researchgate.net/publication/353628434_Artificial_Intelligence_and_People_Management_A_Critical_Assessment_Through_the_Ethical_Lens )

[25] Galetta, D-U., Pinotti, G., ‘Automation and Algorithmic Decision Making Systems in the Italien Public Administration’, Ceridap, Issue 1, 2023 (Located at: https://ceridap.eu/pdf/estratti/Estratto-10.13130_2723-9195_2023-1-7.pdf )

[26] https://www.hrw.org/report/2020/09/29/automated-hardship/how-tech-driven-overhaul-uks-social-security-system-worsens

[27] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/dwp-algorithm-wrongly-flags-200000-people-for-possible-fraud

[28] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/dwp-disability-benefits-fraud-algorithm

[29] https://edri.org/our-work/legal-challenge-the-serbian-government-attempts-to-digitise-social-security-system/

[30] UN Committee on Economic, Social and Cultural Rights, General Comment No. 19, The Right to Social Security, U.N Doc. E/C.12/GC/19 (2008), para. 1.

[31] https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/  They reported that ‘parents and caregivers who were selected by the system had their benefits suspended and were subjected to hostile investigations, characterized by harsh rules and policies, rigid interpretations of laws, and ruthless benefits recovery policies. This led to devastating financial problems for the families affected, ranging from debt and unemployment to forced evictions because people were unable to pay their rent or make payments on their mortgages. Others were left with mental health issues and stress on their personal relationships, leading to divorces and broken homes.’

[32] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/upstart-consumer-lending-racial-discrimination

[33] https://www.law.nyu.edu/news/deborah-archer-cril-alexander-rose-state-farm

[34] http://www.dei.unipd.it/~silvello/papers/2021_aies2021.pdf

[35] Cook, T., Greenall, A., Sheehy, E., (2022) Discriminatory Pricing: Exploring the ‘ethnicity penalty in the insurance market, Citizens’ Advice Bureau (Located at: https://assets.ctfassets.net/mfz4nbgura3g/4pMarg15BnFLsAzDaTjXfr/097aa1f2a2685c65858147c5bb344711/Citizens_20Advice_20-_20Discriminatory_20Pricing_20report_20_4_.pdf )

[36] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/ena-emergency-severity-index

[37] https://eticasfoundation.org/wp-content/uploads/2024/07/ETICAS-FND-The-External-Audit-of-the-VioGen-System-1-1.pdf

[38][38] https://www.biometricupdate.com/202012/tone-deaf-ai-advocates-need-a-transparency-algorithm

[39] https://www.theguardian.com/world/2020/dec/10/sci-fi-surveillance-europes-secretive-push-into-biometric-technology

[40] https://www.stopspying.org/latest-news/2022/2/10/55-civil-rights-groups-demand-doj-ny-investigate-ai-audio-surveillance-in-prisons-jails

[41] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/verus-prison-inmate-call-monitoring

[42] Account abridged from Yeung, K. & Harken, A. (2023), ‘How do ‘technical’ design-choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers? Part II’, Public Law, p.3-4, Sweet & Maxell

[43] Ibid.,p. 9-10

[44] https://www.article19.org/resources/eu-risky-biometric-technology-projects-must-be-transparent-from-the-start/

[45] https://www.dpa.gr/en/enimerwtiko/press-releases/ministry-migration-and-asylum-receives-administrative-fine-and-gdpr

[46] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/ircc-immigration-and-visa-applications-automation

[47] https://abovethelaw.com/2023/04/ai-refugee-asylum-translation-tragedy/

[48] https://www.pewtrusts.org/en/research-and-analysis/reports/2021/12/how-courts-embraced-technology-met-the-pandemic-challenge-and-revolutionized-their-operations

[49] https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/utah-online-dispute-resolution-system

[50] https://counterhate.com/wp-content/uploads/2024/03/240304-Election-Disinfo-AI-REPORT.pdf

[51] https://democracy-reporting.org/en/office/global/publications/chatbot-audit