Could AI misinformation sway the Scottish election?
Persuasive AI assistants, deepfakes and bots are now commonplace. But could they be used by bad actors to influence Scotland’s elections?
As the Scottish Parliament poll on 7 May draws closer, election officials are preparing for an influx of machine-generated content designed to fool and influence voters, and experts are raising concerns about the impact of AI on democracy around the world.
In our latest De-noiser, The Ferret looked at the risks outlined by researchers, how authorities and regulators are attempting to tackle malicious content, and what voters should be aware of ahead of the next Holyrood vote, and beyond.
Last month, a global consortium of experts warned of the emerging threat of “swarms of collaborative, malicious AI agents”, which could be harnessed by political leaders or foreign adversaries to flood social media feeds and messaging channels.
Authors, including the Nobel Peace Prize-winning free-speech activist Maria Ressa and experts from Cambridge, Oxford and top US universities, said: “These systems are capable of coordinating autonomously, infiltrating communities and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy.”
Bot farmers create fake social media accounts that post content serving a particular agenda. They’re designed to act like real people and, while once easy to spot as inauthentic, bots’ capabilities have accelerated.
Platforms try to prevent the use of bots by requiring account holders to enter a one-time password, usually sent by SMS. But bot farms offer SMS verification services to bypass this hurdle, reportedly at low cost.
Recent analysis by the University of Cambridge found that verifying fake accounts for use in the US and UK costs almost as little as in Russia, while prices in Japan and Australia are high due to the cost of SIM cards and photo ID requirements.
Prices for fake accounts on the messaging apps Telegram and WhatsApp appear to spike in countries about to hold national elections, the analysis adds.

In Scotland, bots have been exposed trying to influence political discourse online.
One study last year by the disinformation-detection firm Cyabra found that more than 1,300 accounts on Elon Musk’s X were Iranian-backed bots posting pro-independence, anti-Brexit and pro-Tehran messaging, using AI-generated images to create credible fake personas.
The bots’ content, which used Scottish slang and expressions, was potentially viewed 224 million times, Cyabra said. These profiles routinely shared, liked and commented on one another’s content, which grew their reach, but fell silent during the Iranian uprising when the regime shut down the internet.
Fake grassroots activity on social media is known as astroturfing, and can also take the form of adverts, pages and groups.
The Ferret has exposed human-made campaigns opposing Scottish independence and the SNP, the regulation of the vaping industry in Scotland, and a ban on electric shock collars for dogs.
Among those behind the campaigns were a pro-Brexit group, a London PR firm, a UK Government advisor, and a former Scottish Tory and Brexit Party politician.
Dr Paul Reilly, senior lecturer in communications, media and democracy at the University of Glasgow, told The Ferret that “there's a lot of evidence of efforts to subvert or manipulate public discourse”.
This includes the apparent “orchestration of bots” on X designed to “sow confusion and discord”, undermine trust and push people towards a certain position.
“I think there’s a bigger question about amplifying polarisation, and that is often what those behind these campaigns want to happen,” he added.
AI deepfakes have progressed rapidly, as shown by the informal “Will Smith eating spaghetti” test, in which generated clips of the former Fresh Prince actor enjoying an Italian meal have evolved from surreal to convincing.
But deepfake images, videos and audio clips have the capacity to fool viewers into believing a myth that may affect how they cast their vote.
Such deepfakes might depict, for example, a politician saying or doing something they never said or did. Shared widely and quickly, a deepfake could be viewed and believed to be genuine long before it is removed or debunked.
Reilly said that just 18 months ago, he would have considered the threat of AI-generated media to the democratic process to be exaggerated. But the number of realistic deepfakes targeting political figures in recent months means it is now “a big concern”.
He said that while there’s not yet evidence that deepfakes can alter voting behaviour on a large scale, “it's certainly a threat in elections coming”.
Deepfakes of UK politicians have emerged in recent years, although most examples involve the use of manufactured audio. They include a viral clip of first minister John Swinney being made to say that after an election, the SNP would “do whatever we want”.
A fake clip of Green MSP Maggie Chapman watermarked with the Scottish Parliament TV logo made the politician claim that she wanted to abolish Scotland’s roads. It was reportedly viewed tens of thousands of times.
Ferret Fact Service has previously debunked fake audio clips including one of prime minister Keir Starmer abusing staff.

Image and video deepfakes have included Nigel Farage playing the video game Minecraft, and visiting a seven-year-old with terminal cancer in hospital to fulfil their “final wish” – a genuine image manipulated with AI to include the Reform UK leader.
These examples demonstrate how technology could be used to improve the reputations of some leaders, while denigrating others.
Evidence is emerging of people consulting AI chatbots – such as ChatGPT – for voting advice, and receiving persuasive arguments in return. But some virtual assistants have been found to promote mis- and disinformation.
Reilly referenced a large-scale survey by the AI Security Institute ahead of the last general election, which found that 32 per cent of chatbot users – equivalent to 13 per cent of all eligible UK voters – reported using chatbots to inform their electoral choice.
This is “a huge figure, given how AI is still relatively new”, Reilly said, adding that AI assistants are “shaping how elections are won and lost… which is quite alarming”.
While the study found that people were using chatbots to better inform them about issues, the technology can also “be used to mislead”, with “fake, inaccurate, or apparently intentionally misleading statements”, he stressed.
In a 2024 study, AI assistants bested humans at persuading opponents in debates, even when people were aware they were engaging with a machine. Last year, a major study reportedly found chatbots to be more persuasive than traditional political campaign materials, and even experienced campaigners.
The Massachusetts Institute of Technology (MIT) survey of thousands of voters in US, Canadian and Polish national elections found that chatbots were effective at convincing people to vote for a particular candidate, or change their support for an issue.
In a separate MIT study of nearly 77,000 people in the UK, chatbots were found to be most persuasive when they used factual claims.
But chatbots have also been found to spread climate conspiracy theories, climate denial and fake news. In 2024, NewsGuard found that the 10 leading AI assistants mimicked Russian disinformation claims a third of the time and referenced fake news.
Last year, a major study by 22 public service media organisations found that four of the most commonly used chatbots misrepresented news content nearly half the time.
While AI assistants may refuse to help create misinformation, tests by the University of Technology Sydney found that they can be manipulated into doing so.
Ahead of the last general election, the International Bar Association warned that the UK’s “light-handed approach” to regulating AI left the electorate “vulnerable to disinformation and risks further undermining public trust in democracy”.
Research from The Alan Turing Institute found no evidence that AI-enabled misinformation meaningfully impacted the last UK or European election results.
But at a January press briefing in Edinburgh, the Electoral Commission told journalists that its biggest concerns ahead of the Holyrood election are the use of AI – particularly deepfakes – and candidate safety.

As well as helping voters to spot mis- and disinformation, the commission is working with the Home Office on a pilot that will use software capable of detecting deepfakes. It says it will tackle fake media targeting candidates when it falls within its remit, or refer it to others such as media regulator Ofcom, broadcasters and the police, as well as political parties.
“We closely monitor how deepfakes are used in campaigning in the UK and overseas and are aware of the risks they pose to election security and voter confidence,” a spokesperson told The Ferret.
The pilot “will develop our evidence base on the scale of threat deepfakes pose to the UK’s electoral system and inform our overall response to electoral misinformation from deepfakes in the future,” they added.
The commission meets major social media companies ahead of elections to discuss their processes, particularly around mis- and disinformation.
It has also welcomed recommendations made by a Westminster committee on election and candidate security, including compelling platforms to remove abusive content.
The UK Online Safety Act includes a foreign interference offence, which could be used to tackle bot swarms, but only in some circumstances.
It’s illegal to make a false statement about the personal character of a candidate in order to sway the result of an election. False statements that are not about a candidate’s personal character or conduct are not illegal, but could be considered defamatory.
Candidates, parties and non-party campaigners must use ‘imprints’ on certain digital and printed campaign content, stating who is responsible for publication and on whose behalf it is promoted. But individuals are allowed to express personal opinions in material published on their own behalf and on a non-commercial basis.
The commission lacks powers to regulate campaign material or what candidates say about each other, but flags issues to Police Scotland. The police may investigate allegations of false statements – a specific electoral offence – while defamation issues are a matter for civil courts.
Police Scotland said it has “an extensive track record” of supporting elections. “A proportionate policing plan is in place to support the run up to the election and polling day,” a spokesperson told The Ferret.
Reilly said there’s a lack of focus on AI compared to mis- and disinformation, “but it's part of the same thing”.
He argues that all institutions have a responsibility to tackle misleading content, including media outlets, which can fact-check and debunk claims, and social media companies, which “are still not taking their roles seriously enough”.
Meta, which owns Facebook and Instagram, has disbanded its “responsible AI” team and ditched independent fact-checkers, for example. While there are efforts to use automated software to detect AI-generated mis- and disinformation, there isn’t evidence to prove its effectiveness, Reilly said.
Because social platforms aren’t considered to be publishers, they're not subject to the same regulation as other media. “They frame themselves as platforms purposely to say they're neutral and benign,” he claimed.
But over the last decade, platforms have hosted “AI slop, mis- and disinformation, hate speech – things that have gone largely unchecked, despite politicians in lots of countries trying to put pressure on those platforms, threatening to take away their licences… it hasn't worked. It seems like the wild west at times”.
In the run-up to the last Westminster election, the Electoral Commission asked voters to think critically about campaign material, and urged political parties and campaigners not to mislead voters, particularly when using generative AI.
Reilly believes the public must have the awareness to try to identify AI-generated content, and the scepticism to question whether a post – or user – is genuine, particularly before sharing content and helping to legitimise it. “We're all guilty of this,” he stressed.
In the current landscape, people are more prone to believe claims and trust sources they might have previously dismissed, said Reilly.
“It's an information crisis and the lack of trust in institutions is something which has been brewing for well over a decade,” he added.
Header image credit: psdphotography