The Curse and the Cure – Facebook’s use of Artificial Intelligence

By Immaculate Motsi-Omoijiade
Research Fellow, Lloyds Banking Group Centre for Responsible Business

Recently, Facebook announced its intention to expand its team of software developers and data scientists in order to develop algorithms that can detect and remove harmful content on its platform more quickly. In the wake of a record-breaking $5 billion fine issued by the Federal Trade Commission (FTC) over the mishandling of users’ personal information, the company’s Community Integrity Team, responsible for designing the tools that police posts on Facebook’s platforms, has no shortage of serious issues to address. These include removing posts promoting self-harm and political extremism, combating the rise of ‘deepfakes’ and ensuring data security.

With 2.45 billion monthly active users, 300 million daily photo uploads and 4.75 billion pieces of content shared daily, it would be impossible for Facebook to monitor and assess platform activity without using AI. Purpose-built analytics tools powered by Natural Language Processing (NLP) and Machine Learning (ML), combined with automated reporting, will be key to identifying inappropriate content and flagging data breaches.
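As a rough illustration of the kind of ML-based flagging described above, the sketch below trains a toy text classifier on a handful of hand-labelled posts and scores a new one. The data, labels and threshold are entirely hypothetical, and Facebook’s production systems are of course far larger and more sophisticated; this is only a minimal sketch of the general technique.

```python
# Minimal sketch of ML-based content flagging: learn from labelled
# examples, then score incoming posts. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training data: 1 = harmful, 0 = acceptable.
posts = [
    "You should hurt yourself",           # harmful
    "Join our extremist movement now",    # harmful
    "Lovely weather for a picnic today",  # acceptable
    "Congratulations on the new job!",    # acceptable
]
labels = [1, 1, 0, 0]

# TF-IDF turns raw text into numeric features; logistic regression
# learns a probability that a post is harmful.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score an incoming post and flag it for human review above a threshold.
incoming = "Everyone should hurt themselves"
prob_harmful = model.predict_proba([incoming])[0][1]
print(f"p(harmful) = {prob_harmful:.2f}")
if prob_harmful > 0.5:  # the threshold is a policy choice, not a given
    print(f"Flagged for human review: {incoming!r}")
```

In practice a system like this would feed uncertain cases to human moderators rather than act automatically, which is precisely why the size of the moderation and engineering teams matters.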

Facebook already uses AI for multiple purposes, including targeted advertisements across its platforms and apps such as Messenger, Instagram and WhatsApp. Deep text analysis, sentiment analysis and the algorithms behind Facebook’s news feed are core components of the company’s business model. However, Facebook’s use of AI has been mired in controversy. For example, an algorithm change in 2018 aimed at driving “more meaningful social interactions” instead increased the prominence of articles on divisive topics such as abortion and gun laws in the US. Alongside faux pas in the handling of user data, including the inappropriate use of phone numbers to recommend friends, Facebook has recently blamed ‘technical issues’ for an offensive translation and has had to remove misleading HIV drug adverts from its platform.
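To make the sentiment analysis mentioned above concrete, here is a toy lexicon-based scorer in plain Python. The word scores are invented for illustration; real systems use learned models over far richer features rather than a hand-written lexicon.

```python
# Toy lexicon-based sentiment analysis: average per-word scores.
# Hypothetical word scores (positive > 0, negative < 0).
LEXICON = {"love": 2, "great": 2, "good": 1, "bad": -1, "hate": -2, "awful": -2}

def sentiment_score(text: str) -> float:
    """Average the lexicon scores of the words in a post; 0 means neutral."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = [LEXICON.get(w, 0) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment_score("I love this great product"))  # positive score
print(sentiment_score("This is awful, I hate it"))   # negative score
```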

Whilst it is true that mistakes invariably lead to improvements in AI, in Facebook’s case AI mistakes have a larger societal impact than they would at other companies, given the platform’s size and reach as a social media network. In particular, concerns have been raised about Facebook’s surveillance capability. Facebook continues to track users beyond its own platform through third-party advertising plug-ins, collecting data about individual users and selling advertisers access to the targeting value of that information.

In a Financial Times article explaining how Facebook grew too big to handle, critics echoed calls from anti-trust lawyers to break up the company, arguing that it has become a sort of ‘digital Frankenstein’. This means that Facebook, which some controversially suggest has become a public utility, needs to take extra care to ensure the ethical and responsible use of AI by addressing concerns about data privacy, bias, human agency and oversight. Taking on board recommendations such as the EU’s Ethics Guidelines for Trustworthy AI would help ensure that its AI is auditable and explainable.

For Facebook, AI is a double-edged sword. It is both the curse and the cure: whilst AI is the engine that drives social media hyper-connectivity, it also provides the solutions to the challenges it generates. Either way, precautions must be taken to ensure that its use is both ethical and responsible.

