Meta to require advertisers to disclose deepfake ads
Meta has announced that it will require advertisers to disclose when their ad has been digitally altered by AI.
The Facebook and Instagram owner’s new rules aim to help curb the spread of misinformation in political campaigns, where highly realistic deepfakes and digitally manipulated media designed to mislead have been flagged as a threat.
The new requirements come ahead of the biggest year in history for global politics, with two billion people expected to vote in 50 countries in 2024, including the US, the European Union, and India.
Meta’s social media platforms Facebook and Instagram will require advertisers to disclose AI alterations during the submission process if an ad “contains a photorealistic image or video, or realistic sounding audio”.
Nick Clegg, Meta’s president of global affairs and former deputy prime minister of the UK, also explained: “Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad.”
Eduardo Azanza, CEO of biometric digital identity firm Veridas, commented: “With Meta joining Google in requiring political ads to disclose the use of AI, we are on track to establish a more trustworthy and transparent media landscape.”
How synthetic media can upend reality
“This move could not come at a more important time, with the 2024 US Presidential elections approaching and political campaigns ramping up.”
Louise Edwards, director of regulation and digital transformation at the UK’s Electoral Commission, called for better laws earlier this year, describing the current ones as “very old” and saying they “really need to be updated”.
Last month, deepfake audio purporting to show the leader of the UK’s opposition, Keir Starmer, swearing at and abusing party staffers went viral on X, gathering 1.5 million views before the politician debunked it.
Earlier this year, the leader of Turkey’s opposition party also accused Russia of using deepfakes to sway the Turkish elections.
“We’ve already seen politicians take advantage of AI and deepfakes, leaving voters confused and questioning what is true,” said Azanza.
“Voters have the right to make political decisions on the truth and leaving AI-generated content unlabelled creates a powerful tool of deception, which ultimately threatens democracy.”
South African human rights activist Siyabulela Mandela also warns that AI-generated deepfakes could spark civil war or genocide in areas of Africa with considerable tensions.
“The dangerous thing about the spread of deepfakes is that they are not easy to track because once it hits WhatsApp it can be forwarded to as many people as it possibly can and it is not easy to trace who the original author was,” he said.