- A critical security flaw (CVE-2026-25874) has been disclosed in Hugging Face’s open-source robotics platform, LeRobot, allowing unauthenticated remote code execution.
- The flaw stems from unsafe deserialization using pickle.loads() on data from unauthenticated gRPC channels in the policy server and robot client components.
- The vulnerability is currently unpatched, with a fix planned for version 0.6.0, and is dangerous as AI inference systems often run with elevated privileges.
Cybersecurity researchers revealed in April 2026 that Hugging Face’s popular open-source robotics platform, LeRobot, harbors a severe security vulnerability. This flaw allows unauthenticated attackers to execute arbitrary code remotely on systems running the service.
The vulnerability, cataloged as CVE-2026-25874 with a CVSS score of 9.3, is a case of unsafe deserialization. According to a GitHub advisory, the problem exists in the async inference pipeline where pickle.loads() deserializes data from unauthenticated gRPC channels.
An attacker who can reach the PolicyServer's network port can send a malicious serialized payload and, as a result, execute arbitrary operating system commands on the host, as detailed in a report by Resecurity.
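To see why deserializing untrusted bytes is equivalent to code execution, consider a minimal stand-alone sketch (not LeRobot's actual code): pickle lets any object define `__reduce__()`, which tells the unpickler what callable to invoke during loading. An attacker crafts a payload whose `__reduce__` points at something like `os.system`; here a harmless `print` stands in for the dangerous call.

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild the object on load;
    # an attacker can make it invoke ANY callable with ANY arguments.
    def __reduce__(self):
        # Harmless stand-in for os.system("<attacker command>")
        return (print, ("code ran inside pickle.loads()",))

# The attacker serializes the object and sends the bytes over the wire...
payload = pickle.dumps(Malicious())

# ...and merely deserializing them on the server runs the callable.
pickle.loads(payload)
```

Nothing about the receiving code has to be buggy in the conventional sense; calling `pickle.loads()` on attacker-controlled input is the entire vulnerability.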
The exploitation risks are significant because these AI inference systems typically run with elevated privileges. A compromise could therefore lead to theft of sensitive data such as API keys, lateral movement across the network, or even physical safety risks.
Valentin Lobstein, a VulnCheck researcher who discovered and published details of the flaw, noted it was validated against LeRobot version 0.4.3. As of disclosure, the issue remains unpatched, with a fix planned for version 0.6.0.
The flaw was independently reported in December 2025 by another researcher. Steven Palma, the project’s tech lead, acknowledged the risk and stated, “that part of the codebase needs to be almost entirely refactored as its original implementation was more experimental.”
Palma further noted that security was not a strong focus as LeRobot was primarily a research tool. However, he emphasized that closer attention would be paid as adoption grows, saying, “Fortunately, being an open-source project, the community can also help by reporting and fixing vulnerabilities.”
The findings highlight the ongoing danger of using the unsafe pickle format for serialization. Lobstein pointed out the irony, as Hugging Face created the Safetensors format specifically because pickle is dangerous for ML data.
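For services that cannot yet migrate away from pickle, the Python documentation describes a standard hardening pattern: subclass `pickle.Unpickler` and override `find_class()` to allow-list only known-safe types. The sketch below is illustrative of that general technique, not LeRobot's planned fix.

```python
import io
import pickle

# Allow-list of (module, name) pairs the unpickler may resolve.
# Everything else is refused before it can run.
SAFE_GLOBALS = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global blocked: {module}.{name}")

def restricted_loads(data: bytes):
    """Deserialize plain data structures while refusing arbitrary globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Ordinary data round-trips normally:
print(restricted_loads(pickle.dumps([1, 2, 3])))

# A classic payload that references os.system is rejected, not executed:
evil = b"cos\nsystem\n(S'id'\ntR."
try:
    restricted_loads(evil)
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

Even so, allow-listing is a stopgap; formats designed to carry only data, like Safetensors for tensors or JSON for plain structures, remove the code-execution surface entirely.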
