Three-quarters of public fear misinformation will affect UK elections – report

A study from tech giant Adobe examined fears around misinformation and deepfakes.

A study found a majority of people fear misinformation and deepfakes will affect forthcoming elections and interfere with the democratic process (Peter Byrne/PA)

More than three-quarters of people fear misinformation and deepfakes will affect forthcoming elections and interfere with the democratic process, according to a new report.

The Future Of Trust study, carried out by tech firm Adobe, found 94% of people it surveyed in the UK want the Government to work with tech companies to regulate artificial intelligence (AI) because of fears around misinformation.

The survey of 2,000 UK adults found 81% agree that misinformation is one of the biggest threats facing society, with 76% saying they currently find it hard to verify whether online content they see is trustworthy.

A number of high-profile UK politicians, including Prime Minister Rishi Sunak, Labour leader Sir Keir Starmer and London Mayor Sadiq Khan, have been the subject of deepfakes in attempts to spread misinformation about them online.

Adobe, the maker of popular photo-editing software Photoshop and AI-powered image generator Firefly, said it is vital that tech firms educate the public about deepfakes and misinformation, and use tools to clearly label AI-generated content so people know when what they are seeing was made by a computer.

The firm uses a system called Content Credentials – based on an open standard from the cross-industry Coalition for Content Provenance and Authenticity (C2PA) – which attaches a record of information to AI-generated content, enabling people to see clearly who made it, how it was made and when.

In its trust study, Adobe found 83% of people believe that, without widespread tools to help them verify whether the content they are seeing is genuine, political candidates in an election should be banned from using generative AI in their campaign materials.

Dana Rao, executive vice-president, general counsel and chief trust officer at Adobe, said: “We are all excited about the power of generative AI to transform creativity and productivity.

“As a leader in commercially deploying AI technology, we have long considered its implications for society. As the results of this study clearly show, it is critical that we educate consumers about the dangers of deepfakes and provide them with tools to understand what is true.

“With elections coming, now is the time to adopt protective technologies like Content Credentials to help restore trust in the digital content we are consuming.”

In December, Technology Secretary Michelle Donelan told MPs the Government is working with social media companies on schemes to combat misinformation and deepfakes around elections, and that “robust mechanisms” will be in place before the UK goes to the polls in the general election – which is due before January 2025.

Adobe’s study also found that 29% of respondents had cut back their social media activity because of the amount of misinformation they saw being spread on platforms.

Henry Ajder, a misinformation expert and adviser to the Content Authenticity Initiative – a group of tech companies, academics and others campaigning for industry-standard content authentication tools – said the findings of the report are a “real wake-up call”.

He added: “Deepfakes and AI-generated content continue to evolve at breakneck speed and are increasingly commonplace in our digital lives, whether we realise it or not.

“In this huge year for elections around the world, the question is no longer whether AI deepfakes will be used maliciously but how effective they will be in disrupting democratic processes.”