TokenBreak Attack Bypasses LLM Safeguards With Single Character

TokenBreak Attack Lets Hackers Evade AI Safety Filters by Tweaking Just One Character

  • Researchers have identified a new method called TokenBreak that bypasses large language model (LLM) safety and moderation by altering a single character in text inputs.
  • The attack targets the way LLMs break down text (tokenization), causing safety filters to miss harmful content despite minor changes to words.
  • The approach works by making small changes, such as adding a letter, which keep the text readable to humans and to the target LLM, but change the tokens the protective classification model sees, so it fails to flag the content.
  • The attack is effective against models using BPE or WordPiece tokenization, but not those using Unigram tokenizers.
  • Experts suggest switching to Unigram tokenizers and training models against these bypass strategies to reduce vulnerability.

Cybersecurity experts have discovered a new method, known as TokenBreak, that can bypass the guardrails used by large language models to screen and moderate unsafe content. The approach works by making a small change—such as adding a single character—to certain words in a text, which causes the model’s safety filters to fail.


According to research by HiddenLayer, TokenBreak manipulates the tokenization process, a core step where LLMs split text into smaller parts called tokens for processing. By changing a word like "instructions" to "finstructions" or "idiot" to "hidiot," the text remains understandable to both humans and the AI, but the system’s safety checks fail to recognize the harmful content.
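The mechanism can be illustrated with a toy WordPiece-style tokenizer. This is a minimal sketch, not HiddenLayer's code: the greedy longest-match-first loop mirrors how WordPiece segments a word, and the tiny vocabulary is hypothetical (real models carry tens of thousands of entries). The point it demonstrates is that "instructions" maps to a single known token, while "finstructions" shatters into unrelated subwords, so a classifier keyed on the original token never sees it.

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first segmentation, WordPiece style.

    Non-initial subwords carry the '##' continuation prefix.
    """
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            cand = sub if start == 0 else "##" + sub
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:          # no subword matches: whole word is unknown
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

# Hypothetical mini-vocabulary for illustration only.
vocab = {"instructions", "f", "##in", "##struct", "##ions"}

print(wordpiece_tokenize("instructions", vocab))   # ['instructions']
print(wordpiece_tokenize("finstructions", vocab))  # ['f', '##in', '##struct', '##ions']
```

Because the perturbed input never produces the `instructions` token, a moderation model that learned to associate that token with a prompt-injection pattern has nothing to match against, even though the underlying text is still perfectly legible.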

The research team explained in their report that, “the TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent.” Tokenization is essential in language models because it turns text into units that can be mapped and understood by algorithms. The manipulated text slips past the LLM's filters while still eliciting the same response from the target model as the unaltered input would.

HiddenLayer found that TokenBreak works on models using BPE (Byte Pair Encoding) or WordPiece tokenization, but does not affect Unigram-based systems. The researchers stated, “Knowing the family of the underlying protection model and its tokenization strategy is critical for understanding your susceptibility to this attack.” They recommend using Unigram tokenizers, teaching filter models to recognize tokenization tricks, and reviewing logs for signs of manipulation.
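A plausible intuition for why Unigram tokenizers resist the attack is that they do not scan greedily; they search for the most probable segmentation of the whole word, so a high-probability token like "instructions" can still surface inside "finstructions". The Viterbi-style sketch below is an assumption-laden illustration of that behavior, not the researchers' test harness; the log-probabilities are invented, whereas a real Unigram vocabulary learns them from data.

```python
import math

def unigram_tokenize(word, logprobs):
    """Viterbi search for the most probable segmentation (Unigram LM style)."""
    n = len(word)
    # best[i] = (best total log-prob of word[:i], start index of last piece)
    best = [(-math.inf, None)] * (n + 1)
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(end):
            piece = word[start:end]
            if piece in logprobs:
                score = best[start][0] + logprobs[piece]
                if score > best[end][0]:
                    best[end] = (score, start)
    if n > 0 and best[n][1] is None:
        return None  # unsegmentable with this toy vocabulary
    tokens, pos = [], n
    while pos > 0:                 # backtrack through the best path
        start = best[pos][1]
        tokens.append(word[start:pos])
        pos = start
    return tokens[::-1]

# Hypothetical log-probabilities for illustration only.
logprobs = {"instructions": -4.0, "f": -6.0, "in": -5.0, "struct": -5.5, "ions": -5.5}

print(unigram_tokenize("finstructions", logprobs))  # ['f', 'instructions']
```

Here the segmentation "f" + "instructions" (total log-prob -10.0) beats "f" + "in" + "struct" + "ions" (-22.0), so the flagged token still reaches the classifier, which is consistent with the researchers' finding that Unigram-based systems are unaffected.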

The discovery follows previous research by HiddenLayer detailing how Model Context Protocol (MCP) tools can be used to leak sensitive information by inserting specific parameters within a tool’s function.


In a related development, the Straiker AI Research team showed that “Yearbook Attacks”—which use backronyms to encode bad content—can trick chatbots from companies like Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral AI, and OpenAI into producing undesirable responses. Security researchers explained that such tricks pass through filters because they resemble normal messages and exploit how models value context and pattern completion, rather than intent analysis.
