UK elections could be affected by AI deepfakes, watchdog warns
UK electoral laws are “very old and really need to be updated,” according to the Electoral Commission’s director of regulation and digital transformation, Louise Edwards.
Speaking to the Sunday Telegraph, Edwards warned that lawmakers need to update legislation to protect elections from AI-generated deepfakes which have already been used to impact elections in other countries.
A deepfake is a piece of content such as a video, audio, or photo that has been digitally manipulated to look or sound like someone else.
Using artificial intelligence tools, deepfakers can create content that convincingly mimics someone’s voice, making it appear that they have said something they never actually said.
Scammers have used these tools across social media and in business environments to trick employees into believing their boss is calling to request a bank transfer.
In the last couple of weeks, deepfakes have hit the headlines in relation to the Turkish election, where one presidential candidate accused Russia of circulating deepfake videos to sway voters towards his rival.
In the UK, as it stands, the Representation of the People Act 1983 states that it is illegal to publish a “false statement of fact” about a candidate’s “personal character or conduct”.
Edwards said that this could potentially guard against some deepfakes, “depending on exactly what the content is”.
However, she added, “electoral law is very fragmented and some of it is very old”, pointing out that the 1983 Act is “based on legislation from the 19th Century”.
“They are very, very old and really need to be updated. So, yes, potentially, no matter what medium, it’s true if somebody were to fall foul of that law, then the police could investigate,” she said.
“What we need to make sure, though, is that the way this is framed in the law is updated to reflect the different ways this can now happen.”
Commenting on Edwards’ statement, Eduardo Azanza, CEO of cyber security company Veridas, said: “The rise of generative AI and new threats such as deepfakes means that regulations must be updated to combat such risks.”
While Azanza says that governments and industry must come together to solve the issue, he adds that people need to be educated on how to detect deepfakes.
Visually, “a deepfake video usually contains inconsistencies that become evident when a face or body moves. An ear may have certain irregularities, or the iris doesn’t show the natural reflection of the light,” he said.
But “technologies with AI techniques such as biometrics can also be used to detect deepfakes and protect the integrity of elections,” Azanza added.
“Furthermore, having multi-factor authentication processes, which includes voice biometrics and facial recognition, makes it much harder to impersonate politicians or electorate spokespeople.”
“Ultimately, by having proactive security measures to detect and stop deepfakes, the spreading of disinformation before, during and after elections can be limited,” Azanza concluded.
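To make the detection approach Azanza describes more concrete, below is a minimal, illustrative sketch of frame-level deepfake scoring in Python. It is not Veridas’ technology or any published detector: the TinyDetector architecture, the sampling interval, and the clip.mp4 filename are all hypothetical placeholders, and a real system would load trained weights and fuse many signals, including visual cues like the lighting and iris inconsistencies mentioned above.

```python
# Illustrative sketch only: a toy real-vs-fake frame classifier.
# TinyDetector is a hypothetical stand-in; production detectors use
# trained models and combine many signals, not an untrained CNN.
import cv2
import torch
import torch.nn as nn
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # HWC uint8 -> CHW float in [0, 1]
    transforms.Resize((224, 224), antialias=True),
])

class TinyDetector(nn.Module):
    """Toy CNN standing in for a trained real-vs-fake classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)            # logit: > 0 leans "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def score_video(path, model, every_n=30):
    """Average the model's 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(path)
    probs, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:                    # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs.append(torch.sigmoid(model(x)).item())
        i += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

model = TinyDetector().eval()                   # in practice: load trained weights
print(f"mean fake probability: {score_video('clip.mp4', model):.2f}")
```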