Despite fears that artificial intelligence (AI) could influence the outcome of elections around the world, US tech giant Meta said it had detected little impact on its platforms this year.
That was partly due to defensive measures designed to prevent coordinated networks of accounts, or bots, from gaining traction on Facebook, Instagram and Threads, Nick Clegg, Meta’s president of global affairs, told reporters Tuesday.
“I don’t think the use of generative AI has been a particularly effective tool to evade our tripwires,” Clegg said of the actors behind the coordinated disinformation campaigns.
In 2024, Meta said it ran multiple election operations centers around the world to monitor content issues, including for elections in the United States, Bangladesh, Brazil, France, India, Indonesia, Mexico, Pakistan, South Africa, the United Kingdom and the European Union.
Most of the covert influence operations it has disrupted in recent years have been carried out by Russian, Iranian and Chinese actors, Clegg said, adding that Meta has shut down about 20 “covert influence operations” on its platform this year.
Russia was the leading source of these operations, with 39 disrupted networks in total since 2017, followed by Iran with 31 and China with 11.
Overall, the volume of AI-generated misinformation was low and Meta was able to quickly label or remove content, Clegg said.
This is despite 2024 being the biggest election year ever, with an estimated 2 billion people turning out to vote worldwide, he noted.
“People were understandably concerned about the potential impact that generative AI would have on elections over the course of this year,” Clegg told reporters.
But, he said, “any such impact was modest and limited in scope.”
AI content, such as videos and audio recordings of political candidates, was quickly exposed and failed to mislead public opinion, he added.
In the month leading up to Election Day in the United States, Meta said it rejected 590,000 requests to generate images of President Joe Biden, then-Republican candidate Donald Trump and his running mate JD Vance, Vice President Kamala Harris and Governor Tim Walz.
In an article in The Conversation titled “The apocalypse that wasn’t,” Harvard academics Bruce Schneier and Nathan Sanders wrote: “There was some misinformation and propaganda created by AI, although it was not as catastrophic as feared.”
However, Clegg and others have warned that misinformation has moved to social media and messaging sites not owned by Meta, particularly TikTok, where some studies have found evidence of AI-generated fake videos containing political misinformation.
Public concerns
In a Pew survey of Americans earlier this year, nearly eight times as many respondents expected AI to be used mostly for nefarious purposes in the 2024 election as those who thought it would be used mostly for positive purposes.
In October, Biden rolled out new plans to harness AI for national security as the global race for technological innovation accelerates.
Biden laid out his strategy Thursday in the first-ever AI-focused National Security Memorandum (NSM), calling on the government to stay at the forefront of developing “safe, secure and trustworthy” AI.
Meta has itself been the source of public complaints on various fronts, caught between accusations of censorship and failure to prevent online abuse.
Earlier this year, Human Rights Watch accused Meta of silencing pro-Palestinian voices amid increased social media censorship since October 7.
Meta says its platforms were primarily used for positive purposes in 2024, to direct people to legitimate websites with information about candidates and how to vote.
Although Meta said it allows users of its platforms to ask questions or raise concerns about electoral processes, “we do not allow allegations or speculation about corruption, irregularities or election bias when combined with a signal that the content threatens violence.”
Clegg said the company was still feeling the backlash from its efforts to police its platforms during the COVID-19 pandemic, which resulted in some content being mistakenly removed.
“We think we probably overdid it a little bit,” he said. “While we’ve really focused on reducing the prevalence of bad content, I think we also want to redouble our efforts to improve the precision and accuracy with which we enforce our rules.”
Republican concerns
Some Republican lawmakers in the United States have questioned what they see as censorship of certain viewpoints on social media. President-elect Donald Trump has been particularly critical, accusing Meta’s platforms of censoring conservative viewpoints.
In an August letter to the U.S. House Judiciary Committee, Meta CEO Mark Zuckerberg said he regretted some content takedowns the company made in response to pressure from the Biden administration.
During Clegg’s press conference, he said Zuckerberg hopes to help shape President-elect Donald Trump’s administration on technology policy, including AI.
Clegg said he did not know whether Zuckerberg and Trump discussed the platform’s content moderation policies when Zuckerberg was invited to Trump’s Florida resort last week.
“Mark is very keen to play an active role in the discussions that any administration must have about maintaining American leadership in technology… and in particular about the central role that AI will play in this scenario,” he said.