- X will suspend creators from its revenue-sharing program for 90 days if they post undisclosed AI-generated war footage.
- The policy targets AI videos that can mislead users during conflicts, a tactic seen in the Middle East recently.
- Enforcement will rely on cues like Community Notes and metadata to identify AI-generated content.
- The United Nations has warned that such deepfakes threaten information integrity in conflict zones.
X's product head announced on Tuesday that the platform will penalize creators who post AI-generated videos of armed conflicts without clear disclosure. This policy revision is a direct response to the spread of misleading content during wartime.
In a post, X's Nikita Bier said the move is meant to preserve the authenticity of users' timelines. He wrote, "During times of war, it is critical that people have access to authentic information on the ground."
Under the revised policy, violators will lose access to monetization for 90 days, and repeat offenders face permanent removal from the revenue program.
This crackdown follows a surge of fake videos depicting Middle East violence after recent missile strikes. One such clip, an AI-generated video of a Dubai airstrike, garnered over 8 million views on X and another 42,000 on Instagram.
The United Nations warns that deepfakes threaten information integrity in conflict areas. This concern was realized during Russia's invasion of Ukraine, when a fabricated video of President Volodymyr Zelensky circulated online.
According to Bier, enforcement will use signals like Community Notes and metadata. This policy directly targets the financial incentive creators have to post viral, fake content.
