The Uses of a Use Case Analysis


In this blog post, Dr James MacLaren and Professor Karen Yeung discuss the relevance of “use cases” for artificial intelligence systems.


Dr. James MacLaren & Prof. Karen Yeung

We would like to express our gratitude to RAi and UKRI (UK Research and Innovation), which funded this work and made it possible as part of the project: Equality-proofing AI systems by building equality by design, deliberation and oversight requirements into European AI standards while empowering equality defenders.

What are ‘use cases’ for AI systems and why are they useful? In this blog, we introduce a selective survey of AI ‘use cases’ based on incidents arising from the real-life deployment of AI systems that have resulted in potential interferences with fundamental rights. Rather than offering a comprehensive catalogue, it is limited to “high-risk” AI systems, as defined in Annex III of the EU AI Act. This set of use cases is important because

  • it reminds us that the deployment of AI systems can and does result in fundamental rights violations. These ‘risks’ are neither fanciful nor hypothetical;
  • it draws attention to the rights that are placed ‘at risk’ by Annex III high-risk systems and demonstrates how those rights may be interfered with by their operation;
  • it provides concrete, practical illustrations to help the general public, as well as technical developers and deployers, understand how fundamental rights interferences might be brought about by the operation of AI systems, and thus, how they might be avoided by deployers and providers.

Risks and Rights

AI technologies are being applied and deployed at an increasing rate, proliferating into all aspects of our lives. We find them in our homes and offices, and increasingly in the operations of many public services, particularly for the purposes of informing decision-making and action, ranging from judicial decision-making to border control decisions to the distribution of welfare benefits and entitlements. Consequently, the potential risks that AI poses to fundamental rights are also on the increase. It is against this backdrop that the EU has drafted and passed the EU AI Act.

The overarching purpose of the AI Act is stated in Article 1(1) as:

‘to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation’.

Article 6 identifies what constitutes a ‘high-risk’ AI system for the purposes of the Act: AI systems referred to in Annex III are high-risk unless they “do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making” (Art. 6(3)). Annex III of the Act lists the following eight broad contexts and purposes:

  1. Remote biometric identification systems
  2. Critical infrastructure
  3. Education and vocational training
  4. Employment, workers management and access to self-employment
  5. Access to and enjoyment of essential private services and essential public services and benefits
  6. Law enforcement, in so far as their use is permitted under relevant Union or national law
  7. Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law; and
  8. Administration of justice and democratic processes

With the exception of AI systems concerned with critical infrastructure, the categories listed are ones which primarily pose threats to fundamental rights. Fundamental rights are universal legal guarantees without which individuals and groups cannot secure their fundamental freedoms and human dignity. Key examples include the right to privacy, freedom of expression, and the right to a fair trial.

Art. 9(5)(a) of the EU AI Act requires providers of high-risk AI systems to put in place a risk management system such that the ‘residual risk’ of the high-risk AI system is “judged to be acceptable”; this includes reducing the risks to fundamental rights to a level judged ‘acceptable’. Given that the AI Act is EU law, the obligations arising under the Act, including the nature, scope and protection of fundamental rights, must be interpreted in accordance with the Charter of Fundamental Rights and the European Convention on Human Rights. Hence, those seeking to develop or deploy high-risk AI systems in the EU will be required, when the Act’s operative provisions enter into force, to comply with these obligations and take steps to protect against unacceptable risks to fundamental rights.

Use Cases

An effective tool for thinking about the kinds of protections for fundamental rights needed in relation to high-risk AI systems and services is to reflect on, and learn from, real-life examples, employing ‘use cases’ – concrete, real-world examples of AI systems that have adversely affected fundamental rights. It is little surprise that in the rush to deploy AI systems in a wide range of contexts, including situations where there are significant and worrying opportunities for harm, there have been many incidents in which fundamental rights have either been placed at considerable risk or have indeed been violated. For example, facial recognition systems deployed by law enforcement agencies have raised privacy concerns, and failures in AI translation have led to people’s right to asylum being violated. Many of these cases result in discriminatory outcomes (Art. 21 of the Charter; see also Arts. 22-26), displaying unlawful biases that discriminate against persons on account of, for example, sex, race or age; but a great number of other fundamental rights are also at stake. The rights to dignity (Art. 1), liberty and security (Art. 6), privacy (Art. 7), education (Art. 14), asylum (Art. 18), protection from unjust dismissal (Art. 30), health care (Art. 35), good administration (Art. 41), an effective remedy (Art. 47) and the presumption of innocence (Art. 48) have all been implicated.

Examining real-life use cases such as these illustrates, in the first place, that these risks are not hypothetical but real, and that real people have had their fundamental freedoms violated. In addition, such cases demonstrate the weaknesses of AI systems in the face of the much-vaunted claims often made for them. AI is often presented as a solution to human weakness – an impartial, rational, ever-reliable replacement for human judgement, representing pure gain with little to no loss. To take the example of discrimination, many AI systems have been shown not to eliminate the risks of inequality but merely to absorb the biases of data engineers or datasets, burying them deep within opaque operational systems and obscuring them behind a ‘computer says no’ mentality.

Use cases also help to illuminate which rights are most vulnerable and which types of system are most frequently implicated, although in relation to the AI Act we are limited to considering those brought within the scope of the Act. We can also investigate specific mechanisms and modes of deployment. Most importantly, perhaps, they may enable us to understand and reflect on where these deployments went wrong and what strategies might have been employed – whether at the design or deployment level – to mitigate these negative outcomes.

AI systems are tools. Like all tools, some are well constructed and others poorly constructed; and even tools developed and used for purposes we might consider legitimate can cause unintentional harm or, worse, be misused or weaponised. An examination of use cases can help us adopt more rights-respecting development and deployment practices. It may help us to identify whether the risk of a rights violation is more likely a matter of poor design or of poor deployment, and thus can direct our attention appropriately when we seek to design, deploy, use and oversee these technologies. Regulators, fundamental rights defenders and those potentially adversely affected can focus attention on vulnerable areas, ensuring that systems are developed and deployed with close scrutiny of issues identified as problematic. Examining the details of use cases offers insight into how the specific legal requirements introduced for high-risk systems in Chapter III of the Act (which include obligations to introduce a risk management system and to adhere to specific requirements for transparency, record-keeping, human oversight and so forth) may need to be devised and put in place to safeguard against risks to fundamental rights.

Finally, use cases are helpful tools for deployers and developers. While AI systems will sometimes be put in place by large enterprises well versed in regulation and standards, they will often be utilised by smaller businesses that lack expertise in law, regulation and fundamental rights, and perhaps even in the systems themselves and their internal mechanisms and workings. Such businesses may struggle to grasp the import of the standards or the requirements they contain; they may fail to appreciate the risks to fundamental rights and the potential human costs that follow. Use cases, then, serve as instructive concrete examples of the ill effects that may attend the deployment of AI systems. They can operate as examples, cautions and reminders of why the requirements of regulations and standards are worth following.

Use Cases Table

To this end, we have put together a table linking use cases with the Annex III categories from the AI Act and demonstrating where these cases also present fundamental rights risks or violations. The use cases have been drawn from a wide range of sources: news outlets, advocacy groups and human rights defenders, as well as organisations committed to monitoring the development and proliferation of AI technologies. This is a dynamic work and remains in development. As new, possibly more impactful cases emerge, additional detail will be added, offering more comprehensive insight into the risks to fundamental rights and how they manifest in AI deployments. We are currently addressing questions of severity and potential mitigation strategies to reduce or prevent these risks. If you are interested in this work and want to contribute, please get in touch.
