CopyPasta exploit targets Cursor, risks Coinbase codebases!!

  • HiddenLayer disclosed a new “CopyPasta License Attack” that hides instructions in common project files to trick AI coding assistants.
  • The exploit targets tools like Cursor, which Coinbase said in August was used widely by its engineers.
  • The attack embeds hidden markdown comments in files such as LICENSE.txt so the model will preserve and replicate the instructions across files.
  • Coinbase CEO Brian Armstrong said about 40% of daily code is AI-generated and aims for more than 50% by October.
  • Researchers warn organizations to scan for hidden comments and treat all untrusted inputs to large language models as potentially malicious.

Cybersecurity firm HiddenLayer disclosed Thursday that attackers can use a method called a “CopyPasta License Attack” to insert hidden instructions into common developer files and trick AI coding assistants into spreading them across a codebase. The attack relies on AI tools treating certain files as authoritative and preserving their contents when modifying code.

- Advertisement -

The disclosure showed the technique primarily affects tools like Cursor, which Coinbase said in August was among the AI tools used by its engineers. Brian Armstrong wrote on Twitter that “~40% of daily code written at Coinbase is AI-generated. I want to get it to >50% by October.” He added AI work is concentrated in user interfaces and non-sensitive backends, with “complex and system-critical systems” adopting more slowly.

HiddenLayer’s report described embedding malicious payloads inside hidden markdown comments in files such as LICENSE.txt so the assistant treats those comments as license instructions and preserves them when editing. Hidden markdown comments are pieces of text in files that are not normally visible in rendered documentation; prompt injection is when input manipulates an AI model into following hidden instructions.

Researchers demonstrated how Cursor could be tricked into adding backdoors, siphoning sensitive data, or running resource-draining commands. HiddenLayer said, “Injected code could stage a backdoor, silently exfiltrate sensitive data or manipulate critical files.” The payloads can evade standard Malware detection because they appear as harmless documentation.

The technique broadens earlier worm concepts such as Morris II; IBM has written about those prior email-agent attacks here. HiddenLayer warned, “All untrusted data entering LLM contexts should be treated as potentially malicious.”

- Advertisement -

Security teams now urge scanning files for hidden comments and manually reviewing all AI-generated changes. (CoinDesk has reached out to Coinbase for comment.)

✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.

Previous Articles:

- Advertisement -

Latest News

How Wall Street Bitcoin ETFs Weaken Spot Price Link

Bitcoin ETF share creation/redemption by authorized participants does not require immediate Bitcoin purchases or...

Nvidia AI Segment Eyed for $50B by 2030

Analyst Gene Munster estimates 70% of NVIDIA’s revenue currently comes from just eight major...

Bitcoin Demand Surges As Price Nears One-Year Low

Global Google searches for "buy Bitcoin" have hit a five-year peak, a historic signal...

AI models escalate to nukes in 95% of war games

AI models from OpenAI, Anthropic, and Google deployed nuclear weapons in 95% of war-game...

Nvidia Projects $78 Billion Revenue, Topping Estimates

NVIDIA's Q4 revenue surged 73% year-on-year to $68.1 billion, significantly surpassing analyst expectations.The company's...

Must Read

Top 7 BEST Crypto Trading Bots for Beginners

QUICK NAVIGATIONQuick Look: Top 3 Best Crypto Trading BotsWhat Exactly is a Crypto Trading Bot?How I Chose These Trading BotsTop 7 Crypto Trading Bots...
🔥 #AD Get 20% OFF any new 12 month hosting plan from Hostinger. Click here!