Navigating through the AI Storm: Upcoming UK Elections



Dr Inci Toral, Associate Professor, Department of Marketing – Birmingham Business School and Dr Jean-Paul de Cros Peronard, Associate Professor – Aarhus University

As the UK gears up for its upcoming general election, Home Secretary James Cleverly’s warning about the potential misuse of “deepfake” technology calls for a broader conversation about the role of artificial intelligence (AI) more generally. The pace of public adoption of AI and the advances in its capabilities are bewildering, offering both transformative opportunities and unprecedented challenges.

We use sensemaking to form a coherent understanding of the world around us. Sensemaking is the cognitive process by which we analyse and synthesise information to make sense of what is going on around us, of our actions, and of who we are. We make this sense in a number of ways: we act (communicate), we compare information from different sources, we reduce the flow of information to a comprehensible level, and we contextualise what we have gathered to understand the big picture. In this process, sensemaking tends to become a self-fulfilling prophecy, insofar as we make sense of situations after we receive information and interpret it in light of previously established meaning. Consequently, this may reinforce our previous actions and self-concept.

In this context AI, with its capacity to process and present information at unprecedented scale and speed, plays a crucial role. AI can now learn our idiosyncrasies and manipulate information to fit our self-image at the speed of light. AI uses algorithms to process information produced by the public; in other words, our encounters with AI are a two-way process. While we expect AI to synthesise and summarise vast amounts of information for us, AI also learns from us. This is called machine learning. In a recent article in the Guardian, an encounter with an AI chatbot ended with the AI swearing and calling its “employers” the worst company ever. Although entertaining, this raises serious questions about AI and human interaction.

Mimicry is a well-known concept, defined as the reflection of one’s self in a relational situation. Humans have a tendency to mimic others, and machine learning is prone to the same phenomenon. As a result, AI’s mimicry of human behaviour, particularly in the form of deepfakes, can distort our sensemaking, presenting a fabricated reality that is difficult to distinguish from the truth. The risk here is that AI systems, including deepfakes, could confirm our prejudices by mirroring and amplifying divisive or extreme political behaviours, leading to greater polarisation and even conflict.

Let’s now introduce another concept into this equation. Confirmation bias, the tendency to favour information that aligns with our existing beliefs, is a universal phenomenon that AI can exacerbate through machine learning. In the context of elections, machine learning algorithms can create echo chambers, reinforcing voters’ preconceived notions and insulating them from diverse viewpoints they would otherwise benefit from. This bias not only narrows the scope of political discourse but also makes the electorate vulnerable to manipulation through tailored misinformation, as AI-generated content can be designed to prey on exactly these biases.

But all is not that grim.

Despite these risks, AI also offers significant benefits to democracy. It can improve access to information, facilitate civic engagement, and streamline electoral processes; it can also make complex information more accessible and easier to comprehend, helping political discourse reach more people. In addition, AI-driven data analysis can help policymakers better understand public needs and direct their attention to what matters most. The key is to leverage AI in a way that supports democratic sensemaking without falling prey to confirmation bias and unethical mimicry.

To deliver on this, AI must align with human meaning-making processes. However, the problem we face with AI, not least during elections, is that people are much less interested in dialogue, in comparing arguments, in synthesising information, and in seeing the bigger picture, which is often complex. If AI could help people step out of virtual reality for a while and simply talk to one another, slowing down on the quick fix of easily accessible digital information, beginning to compare and discover what we think in relation to other people, and expanding our small world view into one of a more complicated nature, then much of the problem with AI could be solved.

However, this does not seem to be around the corner, so in the short run the world needs a governance model for AI, together with globally established legal frameworks. At the moment, the arguments are mostly local and reactive, underestimating the potential misuse of AI platforms or, at the very least, slow to prompt any action. Such a governance model should include transparency around AI-generated content, education to improve digital literacy among voters, and ethical guidelines to ensure AI supports democratic values. Service providers, as well as policymakers, must address the ethical implications of AI in democracy, ensuring that the technology is used to reinforce, rather than undermine, the electorate’s sense of self and agency. Until then, the ethical use of AI in mass public involvement will keep raising questions.



The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of the University of Birmingham.
