- Meta’s Oversight Board ordered the removal of a deepfake video ad featuring Ronaldo Nazário from Facebook.
- The ad misused an AI version of the footballer to promote a deceptive online game, misleading viewers.
- The video violated Meta's rules against fraud and spam but remained online even after being flagged.
- The situation highlights inconsistent enforcement of anti-fraud and deepfake policies on Meta’s platforms.
- Rising use of AI-generated images and videos for scams has prompted lawmakers to introduce new regulations.
On Thursday, the Meta Oversight Board directed Meta to take down a Facebook post featuring an AI-created video of Brazilian footballer Ronaldo Nazário. The video promoted an online game and led viewers to believe it was endorsed by the athlete.
According to the board, the video breached Meta's Community Standards concerning fraud and spam. The board said: "Taking the post down is consistent with Meta's Community Standards on fraud and spam. Meta should also have rejected the content for advertisement, as its rules prohibit using the image of a famous person to bait people into engaging with an ad." The manipulated video was posted as an ad and reached over 600,000 views before being reported.
Despite several reports, the post was not prioritized for removal. When the user appealed, no human review took place until the issue reached the Oversight Board, an independent body that reviews content moderation decisions for Meta. The board was created in 2020 to ensure accountability and transparency in the company’s content policies.
The controversial video featured a poorly synced voiceover of Ronaldo urging viewers to play a game called Plinko, falsely promising earnings greater than those of typical jobs in Brazil.
The board urged Meta to enforce its fraud rules more consistently, noting that enforcement of such cases is currently limited to specialized teams. The incident reflects ongoing concerns about deepfakes, AI-altered images or videos that falsely portray individuals doing or saying things they never did. These technologies are increasingly used for scams and misinformation.
Earlier incidents, such as ads featuring actress Jamie Lee Curtis without her consent, have also drawn criticism for Meta's slow response. The company deactivated her ad but allowed the original post to remain on the platform.
The case comes as lawmakers address AI-related abuse. In May, President Donald Trump signed the Take It Down Act, requiring fast removal of non-consensual AI-generated images. The law targets the rise in deepfake content, including pornography and other forms of abuse impacting both celebrities and minors. Recently, Trump himself was the subject of a viral deepfake video.
The Meta Oversight Board's decision highlights a growing push for better enforcement and clearer guidelines as AI-generated media becomes more widespread.