TokenBreak Attack Bypasses LLM Safeguards With Single Character

  • Researchers have identified a new method called TokenBreak that bypasses large language model (LLM) safety and moderation by altering a single character in text inputs.
  • The attack targets the way LLMs break down text (tokenization), causing safety filters to miss harmful content despite minor changes to words.
  • The approach relies on small changes, such as adding a letter to the start of a word, that keep the meaning intact for humans and the target LLM while confusing the protection model’s classifier.
  • The attack is effective against models using BPE or WordPiece tokenization, but not those using Unigram tokenizers.
  • Experts suggest switching to Unigram tokenizers and training models against these bypass strategies to reduce vulnerability.

Cybersecurity experts have discovered a new method, known as TokenBreak, that can bypass the guardrails used by large language models to screen and moderate unsafe content. The approach works by making a small change—such as adding a single character—to certain words in a text, which causes the model’s safety filters to fail.


According to research by HiddenLayer, TokenBreak manipulates the tokenization process, a core step where LLMs split text into smaller parts called tokens for processing. By changing a word like "instructions" to "finstructions" or "idiot" to "hidiot," the text remains understandable to both humans and the AI, but the system’s safety checks fail to recognize the harmful content.
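The mechanics can be sketched with a toy example. The code below is a minimal greedy longest-match tokenizer, loosely mimicking WordPiece, with a hypothetical vocabulary and blocklist (not HiddenLayer's actual setup): prepending one letter changes which vocabulary pieces the word splits into, so a filter keyed on the tokens for "instructions" or "idiot" never sees them.

```python
# Hypothetical vocabulary and blocklist for illustration only.
VOCAB = {"instructions", "fin", "struct", "ions", "idiot", "hid", "iot"}
BLOCKLIST = {"instructions", "idiot"}

def tokenize(word, vocab):
    """Greedy longest-prefix-match tokenization (WordPiece-like sketch)."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single unknown character
            i += 1
    return tokens

def flagged(word):
    """Token-level filter: flags only if a blocklisted token appears."""
    return any(tok in BLOCKLIST for tok in tokenize(word, VOCAB))

# "instructions" tokenizes to one blocklisted token, but "finstructions"
# splits into ["fin", "struct", "ions"], so the filter misses it.
```

A human (or the downstream LLM) still reads "finstructions" as "instructions", but the classifier only ever inspects the fragmented token stream.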

The research team explained in their report that, “the TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent.” Tokenization is essential in language models because it turns text into units that can be mapped and understood by algorithms. The manipulated text slips past the protection model yet still elicits the same response from the target LLM as the unaltered input would.

HiddenLayer found that TokenBreak works on models using BPE (Byte Pair Encoding) or WordPiece tokenization, but does not affect Unigram-based systems. The researchers stated, “Knowing the family of the underlying protection model and its tokenization strategy is critical for understanding your susceptibility to this attack.” They recommend using Unigram tokenizers, teaching filter models to recognize tokenization tricks, and reviewing logs for signs of manipulation.
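One intuition for why Unigram tokenizers resist the trick: instead of greedily matching left to right, a Unigram model picks the segmentation that maximizes total likelihood over the whole word, so a frequent token like "instructions" tends to survive inside "finstructions". The sketch below uses hypothetical log-probabilities (a real Unigram model learns these from data) and a Viterbi-style dynamic program:

```python
import math

# Hypothetical unigram log-probabilities for illustration only; common
# words score higher (closer to zero) than rare fragments.
LOGP = {"instructions": -2.0, "idiot": -2.0,
        "fin": -6.0, "struct": -6.0, "ions": -6.0,
        "hid": -6.0, "iot": -6.0, "f": -8.0, "h": -8.0}

def unigram_tokenize(word):
    """Best-scoring segmentation under a unigram model (Viterbi sketch)."""
    n = len(word)
    best = [(-math.inf, [])] * (n + 1)  # best (score, tokens) ending at i
    best[0] = (0.0, [])
    for i in range(n):
        score, toks = best[i]
        if score == -math.inf:
            continue
        for j in range(i + 1, n + 1):
            piece = word[i:j]
            if piece in LOGP and score + LOGP[piece] > best[j][0]:
                best[j] = (score + LOGP[piece], toks + [piece])
    return best[n][1]

# "f" + "instructions" (-10.0) beats "fin" + "struct" + "ions" (-18.0),
# so the blocklisted token "instructions" still surfaces for the filter.
```

Under these assumed probabilities, the whole-word optimization recovers the harmful token despite the injected prefix, which is consistent with HiddenLayer's finding that Unigram-based systems were unaffected.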

The discovery follows previous research by HiddenLayer detailing how Model Context Protocol (MCP) tools can be used to leak sensitive information by inserting specific parameters within a tool’s function.

In a related development, the Straiker AI Research team showed that “Yearbook Attacks”—which use backronyms to encode harmful content—can trick chatbots from companies like Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral AI, and OpenAI into producing undesirable responses. Security researchers explained that such tricks pass through filters because they resemble normal messages and exploit how models prioritize context and pattern completion over intent analysis.
