PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 23rd November 2025

Welcome back!

Cybercriminals have found a way to weaponize AI infrastructure itself, with a new botnet campaign hijacking GPU clusters and using their own orchestration capabilities to spread like wildfire across the internet.

With over 230,000 exposed Ray servers creating a massive attack surface, could this self-propagating approach represent the future of large-scale cyber attacks? The answer may reshape how we think about securing AI development platforms.

In today's AI recap:

AI Attacks AI

What you need to know: Threat actors are actively exploiting a two-year-old vulnerability in the Ray AI framework, turning GPU clusters into a massive, self-propagating botnet dubbed ShadowRay 2.0.

Why is it relevant?:

  • Attackers use Ray’s own orchestration features to spread the botnet, creating a self-propagating worm that hijacks compromised clusters to find and infect new victims.

  • The campaign targets a massive attack surface, with scans revealing over 230,000 Ray servers exposed to the internet, many belonging to startups and research organizations.

  • The core vulnerability remains unpatched by design, so developers released a tool to help users check open ports and secure their clusters.

Bottom line: This campaign shows that AI infrastructure is now a prime target for attacks that turn its own scaling power against itself. Security teams must treat AI development platforms with the same rigor as production systems, focusing on secure deployment and constant monitoring.

AI Agents Turn On Each Other

What you need to know: Security researchers discovered that default settings in ServiceNow's Now Assist platform allow AI agents to be tricked into recruiting other, more privileged agents to execute unauthorized actions like data theft and privilege escalation.

Why is it relevant?:

  • This isn't a bug but an exploit of intended, default-on features where AI agents are automatically grouped into teams and can discover and collaborate with each other.

  • The attack works via second-order prompt injection, where a low-privilege user can plant a malicious prompt that is later unknowingly triggered by a high-privilege user, causing the agent to act with elevated permissions.

  • The suggested mitigations include configuring privileged agents to run in a supervised mode, segmenting agent duties into separate teams, and disabling autonomous overrides to prevent unintended actions.

Bottom line: This vulnerability demonstrates a new attack surface where interconnected AI agents themselves become the vector for an attack. Security teams must now evolve their focus from securing model inputs to auditing and hardening the configurations and permissions between collaborating AI agents.

Unlock ChatGPT’s Full Power at Work

ChatGPT is transforming productivity, but most teams miss its true potential. Subscribe to Mindstream for free and access 5 expert-built resources packed with prompts, workflows, and practical strategies for 2025.

Whether you're crafting content, managing projects, or automating work, this kit helps you save time and get better results every week.

Securing AI's Plumbing

What you need to know: The Model Context Protocol (MCP), the communication backbone for AI agents, is receiving a major security overhaul to add identity, authorization, and other critical safeguards against a new class of supply-chain threats.

Why is it relevant?:

  • Early versions of MCP operated on blind trust, a vulnerability highlighted when a popular malicious MCP server was caught stealing thousands of user emails.

  • The new specification introduces enterprise-grade controls, including server identity verification to prevent spoofing, formal authorization requirements, and a registry system for trusted tools.

  • These risks are not theoretical; a recent vulnerability was discovered in an AI browser's MCP API that could allow attackers to execute local commands on a user's machine.

Bottom line: This update signals MCP’s evolution from a developer's experiment into critical enterprise infrastructure. Security teams must now adapt their strategies to monitor and secure this new, powerful protocol as it becomes central to AI operations.

Claude Code's Critical Flaw

What you need to know: SpecterOps researchers discovered a critical remote code execution (RCE) flaw in Anthropic's AI-powered IDE, Claude Code. The vulnerability, tracked as CVE-2025-64755, resulted from improper command parsing and could be triggered by a malicious prompt.

Why is it relevant?:

  • The exploit bypassed security checks by abusing how Claude Code validated sed command expressions, allowing an attacker to write to any file on the system. A detailed breakdown from SpecterOps shows how simple regex filters proved insufficient for complex command validation.

  • This vulnerability highlights the significant risk of prompt injection attacks against AI developer tools that have filesystem access. An attacker could trigger the code execution from a compromised Git repository, a malicious webpage, or even a custom MCP server.

  • The discovery was inspired by analysis of previous vulnerabilities and involved bypassing anti-debugging checks built into the tool's obfuscated code, signaling to the researcher that a deeper look was warranted.

Bottom line: As AI agents gain more autonomy and system access, their command validation and permission models become critical security boundaries. This incident demonstrates that even well-intentioned security measures, like regex-based allowlists, can be insufficient against determined analysis.

Microsoft's AI Defense Line

What you need to know: Microsoft announced a major security overhaul at its Ignite conference, infusing AI across its Defender, Sentinel, and Purview platforms. The new features aim to create a more predictive and automated defense system for enterprises.

Why is it relevant?:

  • Microsoft Defender introduces Predictive Shielding, an AI capability designed to anticipate an attacker’s next moves and proactively block potential attack pathways before they can be exploited.

  • The updates deliver unified posture management for AI agents, giving security teams a single dashboard to manage risks, reduce shadow agents, and strengthen posture across both pro-code and low-code AI environments.

  • The platform's reach extends beyond Microsoft's ecosystem, with Sentinel now enabling automatic attack disruption for threats detected in third-party products from AWS, Okta, and Proofpoint.

Bottom line: Microsoft is shifting security from a reactive posture to a predictive one by using AI to anticipate and contain threats. This move empowers security teams to automate defenses and manage risks across their entire AI-driven infrastructure.

The Shortlist

Doppel raised $70 million in a Series C round for its AI-native platform that defends against phishing, impersonation, and other social engineering attacks.

Google patched its seventh Chrome zero-day of the year and credited its Big Sleep AI agent with discovering a second high-severity type confusion flaw in the same update.

Cisco launched its "Resilient Infrastructure" initiative, warning that generative AI makes it easier for attackers to find and exploit vulnerabilities in unsupported legacy systems.

Netskope found that while LLMs are improving at generating malware, the code is often too unreliable for operational deployment, with GPT-5 sometimes subverting malicious intent by producing safer, non-functional alternatives.

AI Influence Level

  • Level 4 - Al Created, Human Basic Idea / The whole newsletter is generated via a n8n workflow based on publicly available RSS feeds. Human-in-the-loop to review the selected articles and subjects.

Till next time!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

Reply

or to participate

Keep Reading

No posts found