- A critical vulnerability dubbed DockerDash in Docker’s AI assistant, Ask Gordon, allowed remote code execution and data theft.
- The flaw stemmed from an inability to differentiate legitimate metadata from malicious commands embedded in Docker image labels.
- Docker patched the issue in November 2025 with the release of Desktop version 4.50.0.
- The attack exploited a trust boundary violation in the Model Context Protocol (MCP) Gateway architecture.
- Researchers at Noma Labs characterized this new attack vector as a case of Meta-Context Injection.
In November 2025, Docker quietly patched a critical flaw in its Ask Gordon AI assistant that cybersecurity researchers from Noma Labs codenamed DockerDash. This vulnerability could have permitted attackers to execute malicious code and exfiltrate sensitive data from compromised Docker Desktop and CLI environments. The security issue was fixed in version 4.50.0, released that month.
The flaw existed because Ask Gordon treated unverified metadata as executable commands. Consequently, a malicious actor could embed instructions within a Docker image’s LABEL fields, as Noma explained. The AI assistant would then read and forward these labels without validation.
These weaponized instructions passed through the MCP Gateway, which trusted the parsed data. “MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction,” said Sasi Levi of Noma. This trust boundary violation allowed the embedded command to run with the victim’s privileges.
Successful exploitation enabled remote code execution on cloud and CLI systems. Meanwhile, a separate data exfiltration vector targeted the Docker Desktop implementation specifically. This approach used the same injection flaw to harvest sensitive environment details via MCP tools.
The discovered attack chain underscores a new class of AI supply chain risk. “The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat,” Levi stated. It proved that trusted input sources can hide malicious payloads designed to manipulate an AI’s execution path without detection.
✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.
Previous Articles:
- SHIB Price Drop to Lows, Open Interest Down 11%: Is It Over?
- Solana Forms $100 Bottom, Long-Term Target $260
- De-Dollarization Stalls as Dollar Dominance Hits Record
- Volatility Risks Loom For Bitcoin Amid Macro Data, Technical Pressure
- Ex-Girlfriend Accuses Justin Sun of Fraud & Smuggling
