Facebook and Instagram giant Meta on Friday said it will begin labeling AI-generated media in May, as it tries to reassure users and governments over the risks of deepfakes.
The social media juggernaut added that it will no longer remove manipulated images and audio that don't otherwise break its rules, relying instead on labeling and contextualization, so as not to infringe on freedom of speech.
The changes come as a response to criticism from the tech giant's oversight board, which independently reviews Meta's content moderation decisions.
The board in February requested that Meta urgently overhaul its approach to manipulated media given the huge advances in AI and the ease of manipulating media into highly convincing deepfakes.
The board's warning came amid fears of rampant misuse of artificial intelligence-powered applications for disinformation on platforms in a pivotal election year not only in the United States but worldwide.
Meta's new "Made with AI" labels will identify content created or altered with AI, including video, audio, and images. Additionally, a more prominent label will be used for content deemed at high risk of misleading the public.
"We agree that providing transparency and additional context is now the better way to address this content," Monika Bickert, Meta's Vice President of Content Policy, said in a blog post.
"The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling," she added.
These new labeling techniques stem from an agreement reached in February among major tech companies and AI players to cooperate on cracking down on manipulated content intended to deceive voters.
Meta, Google and OpenAI had already agreed to use a common watermarking standard that would invisibly tag images generated by their AI applications.
Identifying AI content "is better than nothing, but there are bound to be holes," Nicolas Gaudemet, AI Director at Onepoint, told the media.
He cited the example of open-source software, which doesn't always use the type of watermarking adopted by AI's major players.
Agence France-Presse