Who gets a say in the future direction of AI?

By Dr Adam Matthews
Senior Research Fellow, School of Education, University of Birmingham

This week’s AI Safety Summit at Bletchley Park brought together politicians, big tech and academics to forge a shared understanding of AI and to create national and international frameworks that ensure frontier AI safety through collaboration in research, evaluation and governance, all with the aim of AI for good. It is a task that is both technical and political.

In his classic 1980 work in technology and society studies, the political theorist Langdon Winner asked: Do Artefacts Have Politics?

Before the summit got underway, Winner’s question of whether technology has politics was being answered in practice, as global political and big tech leaders came together to consider the safety of new technologies that have divided visions of the future between the utopian and the dystopian.

King Charles recorded an opening address to attendees calling for unity in the face of profound challenges that can both advance and harm economic and psychological wellbeing, and urging the summit to work towards securing democracies from harm and ensuring that technology benefits all.

In 2024 there are elections in many of the most powerful countries in the world, including the US and, most likely, the UK. Public understanding of AI, of its opportunities and risks, is vital to these elections and the choices people make. Synthetic media generated by AI is making it easier for misinformation to flood media platforms, and electorates will also be asked to vote for representatives with policies for the regulation of AI and associated technologies.

Education and the media have a key role to play in informing publics. The technical complexity of many of these technologies, and of how they affect society, will be challenging for politicians and electorates to grasp. Such communication should not fall exclusively to science, technology, engineering, and mathematics (STEM) experts and disciplines. The social sciences and humanities have a role to play both in identifying ways of communicating complex information and in identifying its potential social, cultural and political impacts.

As the world grew more complex in the early 20th century, the writer and journalist Walter Lippmann argued that citizens, in their busy lives, had neither the time nor the capacity to understand every political issue. The pragmatist philosopher John Dewey countered that Lippmann’s proposal to leave governing to experts was the antithesis of a democratic society. The Lippmann-Dewey debate provides foundational arguments for the challenge now facing electorates and politicians: understanding complex technologies whose future developments will have profound impacts on society.

If there was any doubt about the question Winner posed in 1980, the past year has answered it: politicians and technologists have called for caution, and some for a pause, on developments in this area because of the risks they pose to society. Speaking ahead of the summit, Rishi Sunak said firms should not ‘mark their own homework’, which signals that regulation and policy will be forthcoming. President Biden this week signed an executive order aimed at tackling AI, and the European Union has been leading on a risk-based approach to AI development. There are those who say regulation will restrict innovation, and others who warn that governments will be regulating the unknown while development is still at an early phase.

AI risks include bias and discrimination, security vulnerabilities, job displacement and misinformation. Moving from dystopia to utopia, the opportunities include enhanced automation and improved data analysis and pattern recognition for more informed decision making.

When Winner asked in 1980 whether artefacts had politics, he used two case studies, one being the parkway bridges of Long Island, New York. The bridges were designed under urban planner Robert Moses, who insisted they be built low, with the result that buses could not pass beneath them. Winner’s analysis showed that the artefact, the bridge, had politics: low-income citizens who relied on buses were cut off from the island’s parks and beaches. The debate continues today as to whether this was intentional.

In his address to the AI Safety Summit, King Charles said that we should avoid unintended consequences. Many have argued that the term unanticipated consequences is more apt, as it keeps responsibility with policymakers and developers.

I am making the case here that these technologies do have politics. A key issue with AI decision making is that algorithms acting on huge amounts of data are making decisions and taking actions, often with little transparency. The recent past offers many examples of computer systems whose analysis of data led to unfair outcomes: the Post Office Horizon IT system incorrectly indicated fraud, leading to the prosecution of 736 Post Office workers, and facial recognition systems have shown racial bias. Through machine learning, AI systems are learning and making decisions beyond the control of humans, both the developers of those systems and the public.
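To make that lack of transparency concrete, below is a minimal sketch, assuming Python with scikit-learn and entirely synthetic data. The applicant record and the accept/reject labels are hypothetical illustrations, not drawn from Horizon or any other system named above; the point is only that a trained model yields a decision without offering a human-readable reason to the individual affected.

```python
# A minimal sketch of algorithmic opacity (hypothetical, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 1,000 synthetic records with 10 numeric features and a binary outcome.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Train an ensemble of 100 decision trees on the data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

applicant = X[:1]                    # one individual's record
decision = model.predict(applicant)  # 0 = reject, 1 = accept (illustrative labels)
print("Decision:", decision[0])

# The model can report which features mattered on average across all data...
print("Feature importances:", model.feature_importances_.round(3))
# ...but it gives no human-readable reason for this individual's outcome,
# which is the transparency gap described above.
```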

A shared understanding of AI and its possibilities is important for democratic engagement. A challenge for many citizens, as well as for the social sciences and humanities, is detachment from the complex technical infrastructure that underpins AI systems. We are already at a point where the owners and developers of Big Tech are as powerful as the factory owners of the early 20th century and the media moguls of the late 20th century. The next step is to harness these developments while ensuring that such power is not misused.

It is important that experts meet to discuss implications for the future, but citizens must not be left behind in understanding the key issues that affect their day-to-day lives and the decisions they make at the ballot box.



The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of the University of Birmingham.
