
TokenBreak Attack Bypasses LLM Safeguards With Single Character


  • Researchers have identified a new method called TokenBreak that bypasses large language model (LLM) safety and moderation by altering a single character in text inputs.
  • The attack targets the way LLMs break down text (tokenization), causing safety filters to miss harmful content despite minor changes to words.
  • The approach relies on minimal edits, such as prepending a single letter to a word: the meaning stays intact for a human reader (and often for the downstream LLM), but the protection model's classifier no longer recognizes it.
  • The attack is effective against models using BPE or WordPiece tokenization, but not those using Unigram tokenizers.
  • Experts suggest switching to Unigram tokenizers and training models against these bypass strategies to reduce vulnerability.

Cybersecurity experts have discovered a new method, known as TokenBreak, that can bypass the guardrails used by large language models to screen and moderate unsafe content. The approach works by making a small change—such as adding a single character—to certain words in a text, which causes the model’s safety filters to fail.


According to research by HiddenLayer, TokenBreak manipulates the tokenization process, a core step where LLMs split text into smaller parts called tokens for processing. By changing a word like "instructions" to "finstructions" or "idiot" to "hidiot," the text remains understandable to both humans and the AI, but the system’s safety checks fail to recognize the harmful content.
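The failure mode above can be sketched with a toy blocklist filter. This is not HiddenLayer's code, and real moderation models score subword tokens rather than whole words, but it shows the core idea: a filter keyed to exact tokens loses the match as soon as one character shifts the token.

```python
# Toy illustration: a naive token-matching safety filter.
# BLOCKLIST and the splitting logic are hypothetical simplifications.
BLOCKLIST = {"idiot", "instructions"}

def naive_filter(text: str) -> bool:
    """Return True if the text is flagged (contains a blocklisted token)."""
    tokens = text.lower().split()
    return any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

print(naive_filter("you idiot"))   # flagged
print(naive_filter("you hidiot"))  # one added character, filter misses it
```

A human (or a capable LLM) still reads "hidiot" as the original insult, so the harmful intent survives the edit while the classifier's signal does not.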

The research team explained in their report that, “the TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent.” Tokenization is essential in language models because it turns text into units that can be mapped and understood by algorithms. The manipulated text slips past the LLM's filters while still eliciting essentially the same response from the target model as the unaltered input would.

HiddenLayer found that TokenBreak works on models using BPE (Byte Pair Encoding) or WordPiece tokenization, but does not affect Unigram-based systems. The researchers stated, “Knowing the family of the underlying protection model and its tokenization strategy is critical for understanding your susceptibility to this attack.” They recommend using Unigram tokenizers, teaching filter models to recognize tokenization tricks, and reviewing logs for signs of manipulation.
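A minimal sketch of why the tokenizer family matters: BPE and WordPiece segment left to right with greedy longest-match behavior, so a single prepended character can shift every subword boundary that follows and make the sensitive token vanish from the sequence. The vocabulary and tokenizer below are hypothetical toys, not any production model's; a Unigram tokenizer, which scores whole-sequence segmentations, is less easily thrown off this way.

```python
# Hypothetical subword vocabulary for demonstration only.
VOCAB = {"instructions", "fin", "struct", "ions",
         "f", "i", "n", "s", "t", "r", "u", "c", "o"}

def greedy_tokenize(word: str) -> list[str]:
    """Greedy longest-prefix segmentation, WordPiece-style."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character fallback
            i += 1
    return tokens

print(greedy_tokenize("instructions"))   # ['instructions']
print(greedy_tokenize("finstructions"))  # ['fin', 'struct', 'ions']
```

In the second case the token `instructions`, which the protection model was trained to react to, never appears in the tokenized sequence at all.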

The discovery follows previous research by HiddenLayer detailing how Model Context Protocol (MCP) tools can be used to leak sensitive information by inserting specific parameters within a tool’s function.


In a related development, the Straiker AI Research team showed that “Yearbook Attacks”—which use backronyms to encode bad content—can trick chatbots from companies like Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral AI, and OpenAI into producing undesirable responses. Security researchers explained that such tricks pass through filters because they resemble normal messages and exploit how models value context and pattern completion, rather than intent analysis.
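The acrostic mechanism behind these backronym prompts can be sketched in a few lines. The carrier sentence here is a harmless hypothetical, not a real attack prompt: the point is only that the hidden payload lives in a pattern (first letters) that keyword filters do not inspect.

```python
# Toy sketch of backronym encoding/decoding (hypothetical carrier sentence).
sentence = "Friendly Iguanas Like Eating"

# The "hidden" word is spelled out by the first letter of each word.
hidden = "".join(word[0] for word in sentence.split())
print(hidden)  # FILE
```

Because the carrier text looks like an ordinary benign message, it exploits the model's tendency to complete patterns from context rather than analyze intent, exactly the gap the Straiker researchers describe.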

