Meta’s new AI deepfake playbook: More labels, fewer takedowns
Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board.
Starting next month, the company said, it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes.
The move could lead to the social networking giant labelling more pieces of content that have the potential to be misleading.
Meta said it will stop removing content solely on the basis of its current manipulated video policy in July. “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media,” the company wrote.
Meta’s advisory Board, which the tech giant funds but permits to run at arm’s length, reviews a tiny percentage of its content moderation decisions but can also make policy recommendations.
While the Board agreed with Meta’s decision to leave the specific content up, it attacked the company’s policy on manipulated media as “incoherent”.
“The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards,” Meta wrote.
The expanded policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling”.
Meta’s network of independent fact-checking partners will continue to review false and misleading AI-generated content, per the company.