Meta, the parent company of Facebook and Instagram, will require political advertisers to disclose their use of artificial intelligence (AI) or other digital editing methods, the company announced Wednesday morning.
The update, which will go into effect in the new year ahead of the 2024 election, comes amid a rise in generative AI technology that can create realistic depictions of public figures’ appearance, voice or likeness.
Advertisers will have to disclose whenever a social issue, electoral or political ad contains a “photorealistic image or video, or realistic sounding audio” that was digitally created or altered in potentially deceptive ways, Meta said in a blog post.
The rule will apply in cases where AI is used in an ad to “depict a real person as saying or doing something they did not say or do,” “depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened,” or “depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”
Advertisers do not have to disclose digital alterations that are “inconsequential or immaterial to the claim” raised in the ad, such as adjusting image size or applying color correction.
The updated policy comes amid growing concerns about how AI could fuel more misinformation online, especially around elections.
The Senate is holding an AI Insight Forum about the impact of AI on democracy on Wednesday morning. It is the fifth in a series of forums the upper chamber is convening about the impact of AI on a wide range of sectors.
The Federal Election Commission is also considering clarifying a rule that would address the use of AI in campaigns as the technology becomes more prevalent.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.