Meta will require political advertisers to disclose use of AI in ads

Advertisers hoping to put political ads on Facebook or Instagram will soon be required to disclose whether the ads were created using artificial intelligence, Meta announced on Wednesday.

In a blog post, Facebook and Instagram’s parent company said the new policy is aimed at helping “people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI.”

It will go into effect worldwide beginning Jan. 1.

As part of the process to run an ad on the social media platforms, Meta will ask whether content has been digitally created or altered to show real people doing or saying things they did not do or say, or to depict a realistic-looking person who does not exist.

Advertisers will also be required to disclose whether their ad contains altered footage of a real event or shows “a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Meta will add a disclosure to the ad if it deems one necessary.

Advertisers will not be required to say their ads were digitally altered when the changes are “inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” such as when an image is cropped or resized, “unless such changes are consequential or material” to the ad, according to Meta.

The company said advertisers who fail to comply with the new policy will see their ads rejected and could face penalties.

The announcement came days after Meta confirmed that it was prohibiting some advertisers, including political campaigns, from using its new generative AI advertising tools, according to Reuters.

“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries,” officials said in a note added to the Meta Business Help Center page for one of the affected features.

Online platforms and lawmakers have been grappling with how to address disinformation and misinformation online ahead of the 2024 presidential election. Recent years have seen the rise of AI-generated “deepfakes”: manipulated video, audio or imagery showing real people saying or doing things that never happened, according to The Associated Press.

Last month, President Joe Biden signed an executive order aimed at managing the risks of AI. The order requires developers to share information, including safety test results, with the U.S. government and to develop standards, tools and tests to ensure their systems are safe and secure.
