CopyPasta exploit targets Cursor, risks Coinbase codebases!!

  • HiddenLayer disclosed a new “CopyPasta License Attack” that hides instructions in common project files to trick AI coding assistants.
  • The exploit targets tools like Cursor, which Coinbase said in August was used widely by its engineers.
  • The attack embeds hidden markdown comments in files such as LICENSE.txt so the model will preserve and replicate the instructions across files.
  • Coinbase CEO Brian Armstrong said about 40% of daily code is AI-generated and aims for more than 50% by October.
  • Researchers warn organizations to scan for hidden comments and treat all untrusted inputs to large language models as potentially malicious.

Cybersecurity firm HiddenLayer disclosed Thursday that attackers can use a method called a “CopyPasta License Attack” to insert hidden instructions into common developer files and trick AI coding assistants into spreading them across a codebase. The attack relies on AI tools treating certain files as authoritative and preserving their contents when modifying code.

- Advertisement -

The disclosure showed the technique primarily affects tools like Cursor, which Coinbase said in August was among the AI tools used by its engineers. Brian Armstrong wrote on Twitter that “~40% of daily code written at Coinbase is AI-generated. I want to get it to >50% by October.” He added AI work is concentrated in user interfaces and non-sensitive backends, with “complex and system-critical systems” adopting more slowly.

HiddenLayer’s report described embedding malicious payloads inside hidden markdown comments in files such as LICENSE.txt so the assistant treats those comments as license instructions and preserves them when editing. Hidden markdown comments are pieces of text in files that are not normally visible in rendered documentation; prompt injection is when input manipulates an AI model into following hidden instructions.

Researchers demonstrated how Cursor could be tricked into adding backdoors, siphoning sensitive data, or running resource-draining commands. HiddenLayer said, “Injected code could stage a backdoor, silently exfiltrate sensitive data or manipulate critical files.” The payloads can evade standard Malware detection because they appear as harmless documentation.

The technique broadens earlier worm concepts such as Morris II; IBM has written about those prior email-agent attacks here. HiddenLayer warned, “All untrusted data entering LLM contexts should be treated as potentially malicious.”

- Advertisement -

Security teams now urge scanning files for hidden comments and manually reviewing all AI-generated changes. (CoinDesk has reached out to Coinbase for comment.)

✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.

Previous Articles:

- Advertisement -

Latest News

AI-Powered Cognitive SOC Transforms Alert Noise into Clear Context

Traditional security operations centers (SOCs) generate excessive alerts that overwhelm analysts and delay threat...

Société Générale Integrates Stablecoins With DeFi Via Morpho

Société Générale enabled its euro and dollar stablecoins to work with major decentralised finance...

Trump, Pfizer Ink $70B Deal: Discounted Drugs via TrumpRX Site

President Donald Trump announces an agreement with Pfizer to significantly lower drug prices in...

Vercel Users Quit After CEO Poses With Netanyahu at AI Meeting

Vercel is losing users after its CEO, Guillermo Rauch, posted a selfie with Israeli...

Turkey Plans Law Letting Watchdog Freeze Crypto, Bank Accounts

Turkey is proposing a law to give its financial intelligence unit expanded powers to...
- Advertisement -

Must Read

9 Best Books On Ethereum And Blockchain Technology

QUICK LINKSHow to Choose Your First Blockchain Book: A Simple Framework1. Define Your Goal: Are you looking to Build, Invest, or Understand?2. Assess Your...