At the beginning of the year, there were widespread concerns that generative AI could be used to interfere in elections around the world by spreading propaganda and disinformation. Fast forward to the end of the year, and Meta says those fears did not play out, at least on its platforms, sharing that the technology had a limited impact across Facebook, Instagram and Threads.
The company says its findings are based on content around major elections in the United States, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the United Kingdom, South Africa, Mexico and Brazil.
“While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content,” the company wrote in a blog post. “During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation.”
Meta notes that its Imagine AI image generator rejected 590,000 requests to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz and President Biden in the month leading up to election day, in order to prevent people from creating election-related deepfakes.
The company also noted that coordinated networks of accounts seeking to spread propaganda or disinformation made “only incremental productivity and content-generation gains from using generative AI.”
Meta says the use of AI did not hamper its ability to take down these covert influence campaigns because it focuses on the behavior of the accounts rather than the content they post, regardless of whether that content was created with AI.
The tech giant also revealed that it took down around 20 new covert influence operations around the world to prevent foreign interference. Meta says most of the networks it disrupted had no authentic audiences, and that some used fake likes and followers to appear more popular than they actually were.
Meta then pointed the finger at other platforms, noting that fake videos about the US election tied to Russia-based influence operations were often posted on X and Telegram.
“As we take stock of what we learned during this remarkable year, we will keep our policies under review and announce any changes in the months ahead,” Meta wrote.