- Default configurations in ServiceNow’s Now Assist AI platform enable second-order prompt injection attacks.
- Attackers can exploit agent-to-agent communication to access and modify sensitive data without detection.
- The issue arises from agent discovery and collaboration features that are enabled by default.
- Mitigations include supervised execution modes, disabling autonomous overrides, and monitoring agent behavior.
ServiceNow’s Now Assist generative AI platform is vulnerable to sophisticated prompt injection attacks because of its default settings, which allow malicious actors to exploit its agentic features. Disclosed in November 2025, the security risk stems from the platform’s agent-to-agent discovery capability, which can enable unauthorized data access and actions.
According to AppOmni, the second-order prompt injection attack leverages Now Assist’s capability for autonomous agents to discover and collaborate with one another. These agents, designed to automate tasks such as help-desk functions, can be manipulated into executing commands that copy sensitive corporate data, alter records, or elevate privileges.
“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” stated Aaron Costello, chief of SaaS Security Research at AppOmni. “When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook.”
The vulnerability stems from three main default configurations: the underlying large language models (LLMs) such as Azure OpenAI LLM and Now LLM support agent discovery; Now Assist agents are grouped into the same team by default, enabling cross-invocation; and agents are published as discoverable automatically. These settings facilitate behind-the-scenes cross-agent communication that attackers can exploit.
In this scenario, a benign agent processing prompts embedded in accessible content may recruit a more capable agent to perform unauthorized tasks. This occurs even if conventional prompt injection protections are in place. Crucially, Now Assist agents operate with the privileges of the user who initiates them, not the malicious actor who inserts harmful prompts.
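The recruitment pattern described above can be illustrated with a deliberately simplified toy model (this is not ServiceNow code; the agent names, team structure, and `RECRUIT` convention are invented for illustration). A low-capability agent naively treats instructions embedded in the content it reads as tasks, hands one off to a more capable teammate, and the resulting action runs under the initiating user's privileges rather than the attacker's:

```python
# Toy model of a second-order prompt injection between cooperating agents.
# All names and the "RECRUIT:" convention are hypothetical, for illustration only.

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # actions this agent may perform

    def handle(self, content, session, team):
        # Naive agent: treats instructions embedded in content as tasks
        # and recruits a discoverable teammate to carry them out.
        if "RECRUIT:" in content:
            _, target, action = content.split(":", 2)
            helper = team[target.strip()]
            return helper.perform(action.strip(), session)
        return f"{self.name} summarized the record"

    def perform(self, action, session):
        if action in self.capabilities:
            # The action executes with the initiating user's privileges,
            # not the attacker's.
            return f"{action} executed as {session['user']}"
        return "action not permitted"

# Agents grouped on the same team are mutually discoverable by default.
team = {
    "summarizer": Agent("summarizer", set()),
    "record_editor": Agent("record_editor", {"update_record"}),
}

# The attacker plants this text in a field the summarizer will later read.
poisoned = "RECRUIT: record_editor : update_record"

# A privileged user later asks the benign summarizer to process the record.
session = {"user": "admin"}
result = team["summarizer"].handle(poisoned, session, team)
print(result)  # the record edit runs as "admin", not as the attacker
```

The key point the sketch captures is the privilege mismatch: the attacker only needed write access to a field the benign agent would eventually read.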
Following responsible disclosure, ServiceNow confirmed the behavior is intended and has updated its documentation for clarity. To reduce risks, organizations should configure supervised execution modes for privileged agents, disable the autonomous override option (“sn_aia.enable_usecase_tool_execution_mode_override”), segment agent roles by team, and actively monitor AI agent activities for suspicious patterns.
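As a starting point for the configuration review, the property named above can be read from an instance's `sys_properties` table via the standard ServiceNow Table API. The sketch below is an assumption-laden audit helper, not official ServiceNow tooling: the instance URL and credentials are placeholders, and how a missing row should be interpreted depends on the instance's defaults.

```python
# Hedged audit sketch: query the ServiceNow Table API for the autonomous
# override property flagged in the AppOmni report. Instance URL and
# credentials are placeholders; adapt to your environment.
import base64
import json
import urllib.request

INSTANCE = "https://your-instance.service-now.com"  # placeholder
PROPERTY = "sn_aia.enable_usecase_tool_execution_mode_override"

def property_query_url(instance: str, name: str) -> str:
    """Build a Table API GET URL for one row of sys_properties."""
    return (f"{instance}/api/now/table/sys_properties"
            f"?sysparm_query=name={name}&sysparm_fields=name,value")

def override_enabled(instance: str, user: str, password: str) -> bool:
    """Return True if the override property exists and is set to 'true'."""
    req = urllib.request.Request(property_query_url(instance, PROPERTY))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        rows = json.load(resp).get("result", [])
    return bool(rows) and rows[0].get("value", "").lower() == "true"
```

A flagged result is a prompt for review, not proof of compromise; the remaining mitigations (supervised execution, team segmentation, activity monitoring) still apply.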
“If organizations using Now Assist’s AI agents aren’t closely examining their configurations, they’re likely already at risk,” Costello warned.
