- BlueSky reports record-high 42,000 content reports in 24 hours amid massive user influx
- Platform gained 20 million new users in three weeks following Donald Trump's election victory
- Eight confirmed cases of child sexual abuse material (CSAM) in 2024, up from two cases in 2023
- Platform implements mass moderation campaign, prioritizing removal speed over precision
- BlueSky partners with Thorn to deploy AI-powered Safer technology for content moderation
BlueSky Faces Moderation Crisis as User Base Surges
Record User Growth Triggers Content Challenges
BlueSky, the decentralized social media platform that began as a project initiated by former Twitter CEO Jack Dorsey, is experiencing unprecedented moderation challenges following a massive influx of new users. The platform announced Monday that it received over 42,000 content reports in 24 hours, an all-time high.
The surge follows President-elect Donald Trump’s victory, which prompted millions of users to seek alternatives to X (formerly Twitter). In recent weeks, BlueSky gained 20 million new users, while Meta’s Threads attracted 35 million. This growth compounds the earlier influx of one million Brazilian users who joined after X was banned in their country.
Moderation Measures and Challenges
"We’re experiencing a huge influx of users, and with that, a predictable uptick in harmful content posted to the network," BlueSky’s Safety account stated. The platform implemented aggressive moderation policies, particularly for high-severity issues like child safety.
According to a Platformer report, BlueSky's confirmed cases of child sexual abuse material (CSAM) increased from two in 2023 to eight in 2024. The platform acknowledged that its accelerated moderation may have wrongly suspended some accounts and is offering an appeals process for affected users.
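The trade-off described above can be sketched in code. This is a minimal, hypothetical illustration of "speed over precision" triage, not BlueSky's actual system: high-severity categories get a lower confidence threshold for automatic suspension, and every automated action is appealable. Category names and thresholds are invented for the example.

```python
# Hypothetical moderation triage: high-severity reports are acted on
# at a lower confidence threshold (faster removal, more false
# positives), which is why every auto-suspension is appealable.
from dataclasses import dataclass


@dataclass
class Report:
    account: str
    category: str      # e.g. "child-safety", "harassment", "spam"
    confidence: float  # classifier score in [0, 1]


# Illustrative thresholds: stricter action for higher-severity categories.
THRESHOLDS = {"child-safety": 0.5, "harassment": 0.8, "spam": 0.9}


def triage(reports):
    suspended, appealable = [], []
    # Process the most severe (lowest-threshold) categories first.
    for r in sorted(reports, key=lambda r: THRESHOLDS.get(r.category, 1.0)):
        if r.confidence >= THRESHOLDS.get(r.category, 1.0):
            suspended.append(r.account)
            appealable.append(r.account)  # every automated action can be appealed
    return suspended, appealable


reports = [
    Report("account_a", "child-safety", 0.6),  # suspended despite modest confidence
    Report("account_b", "spam", 0.6),          # below the spam threshold: left up
]
suspended, appealable = triage(reports)
```

The design choice mirrors the article: lowering the action threshold for child-safety reports removes harmful content faster, at the cost of incorrect suspensions that an appeals queue must absorb.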
Technology Solutions and Partnerships
In January, BlueSky formed a partnership with Thorn, a Los Angeles-based nonprofit that builds technology to combat child sexual abuse, to fight AI-generated deepfakes and inappropriate content. The platform implemented Thorn's Safer moderation technology, which uses machine learning to detect problematic content and suspicious conversations.
Thorn's VP of data science, Rebecca Portnoff, told Decrypt: "While we knew going in that child sexual abuse manifests in all types of content, including text, we saw concretely in this beta testing how machine learning/AI for text can have real-life impact at scale."
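One building block of detection systems like Safer is matching uploads against a shared list of hashes of previously confirmed harmful content. The sketch below is a deliberately simplified, exact-match version of that idea; production systems pair perceptual hashing (which tolerates resizing and re-encoding) with ML classifiers like those Portnoff describes. The hash list and function names here are invented for illustration.

```python
# Simplified hash-matching illustration: check an upload's digest
# against a blocklist of hashes of known harmful content. Real systems
# use perceptual hashes, not exact SHA-256 matches.
import hashlib

# Invented example entry; real blocklists are shared by safety organizations.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-flagged-bytes").hexdigest(),
}


def is_known_match(content: bytes) -> bool:
    """Return True if the content's hash appears in the shared blocklist."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES


print(is_known_match(b"example-flagged-bytes"))  # True: already-confirmed content
print(is_known_match(b"harmless post"))          # False: no match, needs no action
```

Hash matching catches re-uploads of already-confirmed material instantly; novel content and text-based abuse are what the machine-learning classifiers in the quote above are meant to cover.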
Industry-Wide Response
X, while maintaining its policy allowing adult content, also implemented Thorn’s Safer technology in May to address similar challenges. BlueSky’s response represents a broader industry trend toward strengthening content moderation systems as social media platforms face increasing scrutiny over user safety.
The platform stated it is expanding its moderation team to improve response times and accuracy as it continues to grow. This development marks a critical phase in BlueSky’s evolution as it manages the challenges of rapid expansion while maintaining platform safety.