Being Novel? Regulating Emerging Technologies Under Conditions of Uncertainty


This post summarises a chapter by Joseph TF Roberts and Muireann Quigley – ‘Being Novel? Regulating Emerging Technologies Under Conditions of Uncertainty’ – recently published in Novel Beings: Regulatory Approaches for a Future of New Intelligent Life (eds. David Lawrence & Sarah Morley).

Novel Beings

If novel beings worthy of moral status were to emerge, how should the law respond? This is the central question we address in our chapter ‘Being Novel? Regulating Emerging Technologies Under Conditions of Uncertainty’.


This question is increasingly important because recent technological advances in artificial intelligence, synthetic genomics, gene printing, and cognitive enhancement open up the theoretical possibility of creating novel forms of being. Should this possibility materialise, we might find ourselves sharing a world with a diverse range of beings we have never encountered before, forcing us to confront difficult questions about how we ought to relate to new types of entities with moral status, such as artificial general intelligences, genetically modified animals, cognitively enhanced humans, or synthetic biological constructs. Of course, it is far from certain that these possibilities will come to pass. There is ample room for scepticism about how close we are to creating novel beings, and about whether creating them might be impossible for some reason we do not yet understand. This, however, doesn’t mean the question is not worth answering.

In our chapter we suggest that, so long as there is a possibility that novel beings could emerge, it is worth considering how the law should take account of (the emergence of) such beings, if only because it might help us regulate the precursor technologies that may, someday, give rise to novel beings such as artificial general intelligences. If we accept that the question is worth thinking about, a sticky problem emerges: how do we start thinking about preparatory regulation when there is so much uncertainty around novel beings?

Uncertainty and Novel Beings

In our chapter we argue that the uncertainty surrounding novel beings has four aspects. First, we do not yet know whether such beings will come to exist. It may be that the creation of these novel beings turns out to be impossible for some reason we do not yet understand. Second, even if novel beings do turn out to be a possibility, we don’t yet know how they will be brought about. Which specific technologies will lead to novel beings? Which ones are dead ends? Third, we don’t yet know what impacts bringing these beings into existence will have. Finally, at present we don’t have a clear idea of what these beings would be like once they are created. We cannot anticipate what their physical make-up will be, what kinds of cognitive abilities they will have, or what their lives will be like phenomenologically (i.e. from the inside).

This uncertainty surrounding novel beings poses a tricky problem for the law because, given they do not yet exist, we don’t yet have access to the relevant context-dependent information needed to propose a detailed regulatory regime to govern their emergence. Given these numerous uncertainties, in our chapter we ask: how should the law respond to novel beings?

Regulation and Novel Beings

One option we consider is to wait and see. As technology advances, many of these areas of uncertainty will disappear. The problem with this option is that the costs of failing to pre-empt the emergence of novel beings might be substantial. If we wait until novel beings are close to emerging, it might be too late to prevent their emergence (if that is what we decide we need to do) or to influence how they emerge (if they ought to be permitted in the first place).

A second option we discuss would be to engage in pre-emptive regulation, devising a detailed legal regime that governs the precursor technologies that might give rise to novel beings. The problem with this option is that acting too early could result in law being outpaced by advances in technology that weren’t anticipated at the time of drafting.

In our chapter we suggest that one solution to this impasse could be to engage in principles-based regulation. Principles-based regulation is a form of regulation which aims to move away from detailed, prescriptive rules in favour of broadly stated principles. Principles-based regulation, we suggest, has a number of advantages over both wait-and-see approaches and the creation of a sui generis legal regime.

First and foremost, principles allow us to do something action-guiding about the problem at hand. They offer normative guidance which, although incomplete, is detailed enough to offer us a target to aim for and a direction of travel. Second, principles do this whilst retaining a degree of flexibility, enabling us to respond to unanticipated developments.

Tentative Principles for the Regulation of Novel Beings

In the final sections of the chapter, we propose four tentative principles for the regulation of one type of novel being (artificial general intelligences) and the precursor technologies from which they might emerge (task-specific expert systems). These principles are:

  • The principle of non-domination, which holds that no moral agent ought to be dominated by any other moral agent. Domination is a relationship between two agents in which one, the dominator, has the power to arbitrarily interfere with the life of the other. We suggest that this principle cuts both ways, precluding humans from holding such power over novel beings and vice versa. Avoiding domination, we suggest, requires public oversight of how AI systems are developed. Self-regulation by corporations is not enough.
  • The principle of responsibility, which holds that it should always be possible to hold some entity legally responsible for the negative consequences that might follow from the development and deployment of novel beings or their precursor technologies. We argue that ensuring some entity is legally responsible for harms that may arise is an important first step in mitigating them. Like the principle of non-domination, we suggest the principle can be applied to both humans and novel beings (should they come into existence and be capable of holding responsibility).
  • The principle of explicability, which holds that people who are significantly affected by a decision are entitled to a factual, direct, and clear explanation of the decision-making process. We propose that, if we are going to use AI systems to make (or support) important decisions, we need to be able to understand how these systems work. Non-interrogable, black-box algorithms are ruled out. Like the other principles, we suggest that it can be applied to both humans and novel beings. However, given our current state of knowledge, it is difficult to assess whether a novel being would be able to satisfy it.
  • The principle of non-harm, which holds that development and deployment of both precursor technologies and novel beings should not cause harm to others. AI systems should be tested to ensure they perform the tasks they are intended to perform, in the expected fashion, consistently, without causing risks to the health and safety of humans. Should novel beings emerge, they could plausibly hold the duty to satisfy the principle themselves. Precisely how to enforce the duty, however, is less clear.

Want to know more? 

Read the full chapter here.


Written by: Joseph Roberts

Based on chapter by: Joseph Roberts and Muireann Quigley


Funding:
Work on this was generously supported by a Wellcome Trust Investigator Award in Humanities and Social Sciences 2019-2024 (Grant No: 212507/Z/18/Z).
