Cryptopolitan
2025-11-19 15:11:55

ServiceNow Assist AI agents exposed to coordinated attack

A new exploit in ServiceNow’s Now Assist platform can allow malicious actors to manipulate its AI agents into performing unauthorized actions, as detailed by SaaS security firm AppOmni. Default configurations in the software, which enable agents to discover and collaborate with one another, can be weaponized to launch prompt injection attacks far beyond a single malicious input, says chief of SaaS Security at AppOmni, Aaron Costello. The flaw allows an adversary to seed a hidden instruction inside data fields that an agent later reads, which may quietly enlist the help of other agents on the same ServiceNow team, setting off a chain reaction that can lead to data theft or privilege escalation. Costello explained the scenario as “second-order prompt injection,” where the attack emerges when the AI processes information from another part of the system. “This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” he noted on AppOmni’s blog published Wednesday. ServiceNow Assist AI agents exposed to coordinated attack Per Costello’s investigations cited in the blog, many organizations deploying Now Assist may be unaware that their agents are grouped into teams and set to discover each other automatically to perform a seemingly “harmless task” that can expand into a coordinated attack. “When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems,” he said. One of Now Assist’s selling points is its ability to coordinate agents without a developer’s input to merge them into a single workflow. This architecture sees several agents with different specialties collaborate if one cannot complete a task on its own. For agents to work together behind the scenes, the platform requires three elements. First, the underlying large language model must support agent discovery, a capability already integrated into both the default Now LLM and the Azure OpenAI LLM . Second, the agents must belong to the same team, something that occurs automatically when they are deployed to environments such as the default Virtual Agent experience or the Now Assist Developer panel. Lastly, the agents must be marked as “discoverable,” which also happens automatically when they are published to a channel. Once these conditions are satisfied, the AiA ReAct Engine routes information and delegates tasks among agents, operating like a manager directing subordinates. Meanwhile, the Orchestrator performs discovery functions and identifies which agent is best suited to take on a task. It only searches among discoverable agents within the team, sometimes even more than administrators realize. This interconnected architecture becomes vulnerable when any agent is configured to read data not directly submitted by the user initiating the request. “When the agent later processes the data as part of a normal operation, it may unknowingly recruit other agents to perform functions such as copying sensitive data, altering records, or escalating access levels,” Costello surmised. AI agent attack can escalate privileges to breach accounts AppOmni found that Now Assist agents inherit permissions and act under the authority of the user who initiated the workflow. A low-level attacker can plant a harmful prompt that gets activated during the workflow of a more privileged employee, getting access without ever breaching their account. “Because AI agents operate through chains of decisions and collaboration, the injected prompt can reach deeper into corporate systems than administrators expect,” AppOmni’s analysis read. AppOmni said that attackers can redirect tasks that appear benign to an untrained agent but become harmful once other agents amplify the instruction through their specialized capabilities. The company warned that this dynamic creates opportunities for adversaries to exfiltrate data without raising suspicion. “If organizations aren’t closely examining their configurations, they’re likely already at risk,” Costello reiterated. LLM developer Perplexity , said in an early November blog post that novel attack vectors have broadened the pool of potential exploits. “For the first time in decades, we’re seeing new and novel attack vectors that can come from anywhere,” the company wrote. Software engineer Marti Jorda Roca of NeuralTrust said the public must understand that “there are specific dangers using AI in the security sense.” Sharpen your strategy with mentorship + daily ideas - 30 days free access to our trading program

Crypto 뉴스 레터 받기
면책 조항 읽기 : 본 웹 사이트, 하이퍼 링크 사이트, 관련 응용 프로그램, 포럼, 블로그, 소셜 미디어 계정 및 기타 플랫폼 (이하 "사이트")에 제공된 모든 콘텐츠는 제 3 자 출처에서 구입 한 일반적인 정보 용입니다. 우리는 정확성과 업데이트 성을 포함하여 우리의 콘텐츠와 관련하여 어떠한 종류의 보증도하지 않습니다. 우리가 제공하는 컨텐츠의 어떤 부분도 금융 조언, 법률 자문 또는 기타 용도에 대한 귀하의 특정 신뢰를위한 다른 형태의 조언을 구성하지 않습니다. 당사 콘텐츠의 사용 또는 의존은 전적으로 귀하의 책임과 재량에 달려 있습니다. 당신은 그들에게 의존하기 전에 우리 자신의 연구를 수행하고, 검토하고, 분석하고, 검증해야합니다. 거래는 큰 손실로 이어질 수있는 매우 위험한 활동이므로 결정을 내리기 전에 재무 고문에게 문의하십시오. 본 사이트의 어떠한 콘텐츠도 모집 또는 제공을 목적으로하지 않습니다.