- Prominent AI jailbreaker “Pliny” had his OpenAI account temporarily banned on April 1, 2025, for alleged policy violations related to “violent activity” and “weapons creation.”
- OpenAI later reinstated his account, acknowledging they had “incorrectly deactivated” it and apologizing for the inconvenience.
- Pliny is known for exposing AI vulnerabilities through jailbreaking techniques that bypass safety guardrails, which supporters argue contributes to AI safety by identifying weaknesses.
OpenAI temporarily banned one of the world’s most prominent AI jailbreakers, known as “Pliny,” from its platform on April 1, 2025. According to screenshots shared on X (formerly Twitter), the company cited policy violations related to “violent activity” and “weapons creation” as the reason for the ban. Many initially thought it was an April Fool’s joke, but Pliny confirmed to Decrypt that the deactivation was real.
“BANNED FROM OAI?! What kind of sick joke is this?” Pliny tweeted when he discovered the ban. Because it fell on April 1st, many of his 93,000 followers assumed the post was a joke, but Pliny later confirmed to Decrypt: “Yes, the account deactivation is real. I’m messaging someone at OpenAI now to try to get it resolved.”
Reinstatement and Apology
The ban proved short-lived. Later the same day, OpenAI restored Pliny’s access to ChatGPT. “I’m free,” he tweeted, sharing a screenshot of an email from the company stating: “We have determined that we incorrectly deactivated your organization’s account access. We sincerely apologize for any inconvenience this may have caused.”
When Decrypt asked ChatGPT about the situation, the AI gave an equivocal response, claiming there was “no publicly available information confirming that Pliny the Prompter’s access to ChatGPT has been restored.”
The Role of AI Jailbreaking
Pliny has established himself as one of the world’s leading AI jailbreakers, developing and openly sharing methods to circumvent AI safety restrictions. He maintains the “BASI PROMPT1NG” Discord community with over 15,000 members and the GitHub repository L1B3RT4S, which contains jailbreak prompts for various AI models.
Jailbreaking involves crafting prompts that trick AI systems into bypassing their safety guardrails. Advocates, including venture capitalist Marc Andreessen who previously donated to Pliny’s efforts, argue that jailbreaking contributes meaningfully to AI safety by exposing vulnerabilities before malicious actors can exploit them.
The temporary ban drew criticism of OpenAI across social media. After being reinstated, Pliny celebrated by sharing a screenshot of his newest jailbreak, which made ChatGPT use profanity, signaling that his work would continue despite the brief interruption.