- Viral full-body AI swaps replaced a creator’s face and body with those of actors, amassing more than 14 million views on X.
- Creators used Kling AI’s 2.6 Motion Control to produce seamless whole-body character swaps of actors like Millie Bobby Brown, David Harbour, and Finn Wolfhard.
- Researchers say full-body deepfakes remove many visual cues used to detect earlier face-only manipulations.
- Experts warn the tools could fuel impersonation scams, non-consensual imagery, political disinformation, and corporate fraud.
- Technologists urge investment in intrinsic detection methods, platform review, and clear liability and disclosure rules.
A viral clip this week showed Brazilian creator Eder Xavier swapping his face and body with those of Stranger Things actors using Kling AI’s 2.6 Motion Control; versions posted to social platforms have exceeded 14 million views on X. The post drew attention from industry figures, including a16z partner Justine Moore, who shared the video online. The clips demonstrate a shift from face-only deepfakes to full-body synthetic media.
The viral examples coincide with broader model development, including newer systems such as Veo 3.1, Nano Banana, FaceFusion, and OpenAI’s Sora 2, which expand access to high-quality generated video. Researchers warn these tools can quickly move beyond demos into widespread misuse. Emmanuelle Saliba of GetReal Security said, “The floodgates are open. It’s never been easier to steal an individual’s digital likeness—their voice, their face—and now, bring it to life with a single image.”
Academic researchers emphasize the technical escalation. Yu Chen, a professor at Binghamton University, said, “Full-body character swapping represents a significant escalation in synthetic media capabilities.” He noted that these systems must handle pose estimation, skeletal tracking, clothing transfer, and natural motion across the whole body. Chen added that older detection methods relied on boundary inconsistencies and temporal artifacts that full-body swaps can hide, a cue the sketch below illustrates.
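To make Chen’s point concrete, here is a minimal, hypothetical sketch of the kind of temporal-artifact cue older face-swap detectors leaned on: scoring frame-to-frame pixel change, where erratic flicker around a pasted-in face can betray a manipulation. It assumes OpenCV and NumPy are installed; the file name `clip.mp4` is a placeholder, and the scoring is illustrative rather than any real detector’s method.

```python
# Illustrative sketch, not a production detector: older detectors
# exploited frame-to-frame flicker in manipulated regions. This
# scores a clip by the mean absolute difference between consecutive
# grayscale frames; erratic scores can hint at temporal artifacts.
# Assumes OpenCV (pip install opencv-python); "clip.mp4" is hypothetical.
import cv2
import numpy as np

def temporal_flicker_scores(path: str) -> list[float]:
    cap = cv2.VideoCapture(path)
    scores: list[float] = []
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return scores
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Mean absolute per-pixel change between consecutive frames.
        scores.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return scores

if __name__ == "__main__":
    diffs = temporal_flicker_scores("clip.mp4")  # placeholder input
    if diffs:
        print(f"mean frame delta: {np.mean(diffs):.2f}, std: {np.std(diffs):.2f}")
```

A full-body swap generated as a whole frame leaves no paste-in seam or localized flicker for a check like this to catch, which is why researchers are calling for investment in intrinsic detection methods instead.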
Security experts caution about harms beyond fraud, including non-consensual explicit content, political disinformation, and corporate deception. Observers on social media expressed alarm; one commentator tweeted, “We’re not ready.” Researchers recommend that platforms build automated detection, human review capacity, and escalation paths while policymakers consider liability and disclosure rules.
