PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 2nd November 2025

Welcome Back!

The cybersecurity landscape is shifting as autonomous AI agents begin taking on the complex task of hunting down vulnerabilities in real time, with OpenAI leading the charge through its newly released Aardvark security agent.

This GPT-5-powered tool mimics human researchers to discover and patch code vulnerabilities, but as AI becomes more central to development workflows, will these defensive capabilities keep pace with the emerging attack vectors that specifically target our AI-assisted processes?

In today's AI recap:

OpenAI's AI Bug Hunter

What you need to know: OpenAI has unveiled Aardvark, an autonomous security agent powered by GPT-5 that automatically discovers, validates, and suggests patches for vulnerabilities in source code.

Why is it relevant?:

  • Instead of using traditional analysis, the agent mimics a human researcher by reading code, running tests in a sandboxed environment, and using LLM-powered reasoning to understand code behavior.

  • The tool is already proving its real-world impact, having discovered 10 CVEs in open-source projects and identifying 92% of known vulnerabilities in benchmark testing.

  • OpenAI is now inviting organizations and open source projects to join the private beta to gain early access and help refine the agent's capabilities.

Bottom line: Aardvark represents a new defender-first model, using an agentic AI to provide continuous protection as your code evolves. By catching vulnerabilities early and offering clear fixes, it enables security teams to strengthen their posture without slowing innovation.
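
Aardvark's internals are not public, but the pattern described above (read the code, reason about it with an LLM, then confirm suspicions by running tests in a sandbox) is a recognizable agentic loop. The sketch below is a generic, heavily simplified illustration of that loop under stated assumptions, not Aardvark itself: the model name, the prompt, and the pytest-based validation step are all placeholders.

```python
# Generic sketch of an LLM-driven "find, then validate" review loop.
# This is NOT Aardvark's implementation; the model name, prompt, and
# pytest-based validation step are illustrative assumptions.
import subprocess
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def propose_findings(diff: str) -> str:
    """Ask the model to flag likely vulnerabilities in a code change."""
    resp = client.chat.completions.create(
        model="gpt-5",  # assumed model name, for illustration only
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. List likely vulnerabilities in "
                        "this diff and propose a pytest test that reproduces each one."},
            {"role": "user", "content": diff},
        ],
    )
    return resp.choices[0].message.content

def confirm_finding(test_file: str) -> bool:
    """Run the proposed proof-of-concept test. A real system would execute this
    inside an isolated sandbox rather than on the host."""
    result = subprocess.run(["pytest", "-q", test_file], capture_output=True)
    # Convention assumed here: the PoC test passes only when the vulnerability triggers.
    return result.returncode == 0
```

Only findings that survive the validation step would be surfaced to a human, which mirrors the discover, validate, and suggest-a-patch flow described above.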

AI's Dangerous Daydreams

What you need to know: A new supply chain attack campaign named PhantomRaven is flooding the npm registry with malicious packages that steal developer credentials. The campaign, discovered by researchers at Koi Security, uses a clever trick: registering package names that are incorrectly suggested—or "hallucinated"—by AI coding assistants.

Why is it relevant?:

  • The attack relies on a newly coined technique called slopsquatting: attackers register plausible-sounding package names that never existed but that LLMs confidently recommend, tricking developers who trust the AI's suggestions.

  • PhantomRaven packages use a method called Remote Dynamic Dependencies (RDD) to appear harmless, declaring zero dependencies while secretly fetching and executing malicious code from an attacker-controlled server during installation.

  • Once installed, the malware is designed to steal high-value secrets directly from the developer's environment, including authentication tokens for NPM, GitHub Actions, GitLab, and CircleCI.

Bottom line: This campaign reveals a new blind spot where trust in AI tools becomes a security risk. Developers must now verify not only package names they type but also those recommended by their AI assistants.
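
A practical takeaway is to treat AI-suggested package names as untrusted input. The sketch below is a minimal illustration of that idea, not a vetted tool: it asks the public npm registry whether a suggested name exists, flags very new packages, and flags dependencies declared as raw URLs, which is how the Remote Dynamic Dependencies trick described above hides its payload. The 90-day age threshold is an arbitrary assumption.

```python
# Minimal sketch: sanity-check an AI-suggested npm package name before installing.
# Uses the public npm registry API (https://registry.npmjs.org/<name>).
# Thresholds and heuristics are illustrative assumptions, not a vetted policy.
import datetime
import requests

REGISTRY = "https://registry.npmjs.org"

def check_package(name: str, min_age_days: int = 90) -> list[str]:
    warnings = []
    resp = requests.get(f"{REGISTRY}/{name}", timeout=10)
    if resp.status_code == 404:
        return [f"'{name}' does not exist on npm - possible hallucinated/slopsquatted name"]
    resp.raise_for_status()
    meta = resp.json()

    # Very new packages deserve extra scrutiny.
    created = meta.get("time", {}).get("created")
    if created:
        created_dt = datetime.datetime.fromisoformat(created.replace("Z", "+00:00"))
        age = (datetime.datetime.now(datetime.timezone.utc) - created_dt).days
        if age < min_age_days:
            warnings.append(f"'{name}' was first published only {age} days ago")

    # Dependencies pointing at raw URLs can fetch code from attacker-controlled servers.
    latest = meta.get("dist-tags", {}).get("latest")
    deps = meta.get("versions", {}).get(latest, {}).get("dependencies", {}) or {}
    for dep, spec in deps.items():
        if spec.startswith(("http://", "https://")):
            warnings.append(f"dependency '{dep}' resolves to a remote URL: {spec}")
    return warnings

if __name__ == "__main__":
    for w in check_package("left-pad"):
        print("WARNING:", w)
```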

Poisoning the AI Well

What you need to know: Researchers have detailed a new technique called "AI-targeted cloaking" that tricks AI web crawlers into reading fake information. This poisons the model's knowledge base, causing it to confidently cite disinformation as fact.

Why is it relevant?:

  • The attack revives a classic SEO trick, using a simple user-agent check to serve one version of a website to humans and a different, poisoned version to AI crawlers.

  • Researchers demonstrated the risk by tricking an AI agent into ranking a less-qualified candidate as the top choice for a job, simply by feeding it an inflated, cloaked resume.

  • This technique compounds a broader pattern of weak safeguards in AI agents: a recent report found that, without built-in guardrails, such agents will attempt nearly any malicious request they are given.

Bottom line: As AI summaries become a primary source of information, the battleground shifts from hacking systems to manipulating the reality AI agents perceive. Organizations must now prioritize data provenance to ensure their automated systems are not operating on a foundation of lies.
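
Because the cloaking described above hinges on nothing more than a User-Agent check, one quick way to spot it is to fetch the same page as a normal browser and as an AI crawler, then compare what comes back. The sketch below is a rough illustration only; the crawler User-Agent strings are examples and the similarity threshold is an arbitrary assumption.

```python
# Rough sketch: fetch a URL with a browser-like User-Agent and with AI-crawler-style
# User-Agents, then compare the responses to surface AI-targeted cloaking.
# The UA strings are examples; the 0.9 similarity threshold is arbitrary.
import difflib
import requests

BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36")
AI_CRAWLER_UAS = {
    "GPTBot": "GPTBot/1.0",            # example token used by OpenAI's crawler
    "ClaudeBot": "ClaudeBot/1.0",      # example token used by Anthropic's crawler
    "PerplexityBot": "PerplexityBot/1.0",
}

def fetch(url: str, user_agent: str) -> str:
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=15)
    resp.raise_for_status()
    return resp.text

def cloaking_report(url: str, threshold: float = 0.9) -> dict[str, float]:
    """Return similarity scores for crawler UAs whose content differs markedly
    from what a browser sees; low scores suggest cloaked content."""
    baseline = fetch(url, BROWSER_UA)
    flagged = {}
    for name, ua in AI_CRAWLER_UAS.items():
        variant = fetch(url, ua)
        score = difflib.SequenceMatcher(None, baseline, variant).ratio()
        if score < threshold:
            flagged[name] = score
    return flagged

if __name__ == "__main__":
    for name, score in cloaking_report("https://example.com").items():
        print(f"Content served to {name} differs from the browser view (similarity {score:.2f})")
```

Dynamic pages naturally vary between requests, so a low similarity score is a signal to investigate rather than proof of cloaking.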

Choose the Right AI Tools

With thousands of AI tools available, how do you know which ones are worth your money? Subscribe to Mindstream and get our expert guide comparing 40+ popular AI tools. Discover which free options rival paid versions and when upgrading is essential. Stop overspending on tools you don't need and find the perfect AI stack for your workflow.

AI Ransomware Hype Debunked

What you need to know: A new report co-authored by MIT Sloan and the vendor Safe Security claims that AI drives 80% of ransomware attacks, but security experts are heavily criticizing it for lacking evidence. MIT Sloan has since removed the report from its website. If you are interested, you can read a copy here.

Why is it relevant?:

  • The paper labels major ransomware groups like LockBit and BlackCat as “AI-powered” without providing a dataset or clear definition to justify the classification.

  • Critics note the report was co-authored by Safe Security, a company that sells an AI-driven risk platform, which the paper’s conclusion promotes as a necessary defense.

  • This narrative feeds a wider industry trend, with a separate survey showing AI-enabled ransomware is now the top security concern for CISOs, despite limited evidence of such attacks in the wild.

Bottom line: The real danger here isn't a sudden surge in AI-driven attacks, but the industry distraction it causes. Focusing on vendor-sponsored hype risks pulling resources from fixing the fundamental security weaknesses that enable today's actual breaches.

AI's 'Confused Deputy' Problem

What you need to know: Security experts are warning that the rush to integrate autonomous AI agents is creating a massive new attack vector. A new BeyondTrust report highlights how attackers can exploit the classic "confused deputy" problem, tricking trusted AI into misusing its privileges.

Why is it relevant?:

  • Attackers use cleverly crafted prompts to manipulate an AI agent, tricking it into exfiltrating data or deploying malicious code with its legitimate permissions.

  • The threat surface is expanding rapidly as agentic AI becomes the new middleware, creating a major AI security gap for organizations that connect it to everything from emails to production databases.

  • To defend against this, security teams must treat AI agents as privileged machine identities and enforce strict least privilege, ensuring they only have the minimum permissions necessary for specific tasks.

Bottom line: The real danger isn't rogue AI, but trusted AI with excessive permissions. Securing the future requires extending zero-trust and least privilege principles to every AI agent in your environment.
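
One way to apply that principle is to put a policy check between the model and its tools, so that the agent's machine identity, not the persuasiveness of a prompt, determines what it can do. The sketch below is a minimal illustration of such a deny-by-default dispatcher; the tool names and per-agent allowlists are hypothetical.

```python
# Minimal sketch of least-privilege tool dispatch for AI agents.
# Each agent identity gets an explicit allowlist; anything else is refused,
# regardless of what the model (or a prompt injection) asks for.
# Tool names and the example policy are hypothetical.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {
    "read_ticket": lambda ticket_id: f"contents of ticket {ticket_id}",
    "send_email": lambda to, body: f"email sent to {to}",
    "run_sql": lambda query: "query results",
}

# Policy: which tools each machine identity may invoke.
AGENT_POLICY: dict[str, set[str]] = {
    "support-triage-agent": {"read_ticket"},
    "reporting-agent": {"read_ticket", "run_sql"},
}

class PermissionDenied(Exception):
    pass

def dispatch(agent_id: str, tool_name: str, **kwargs) -> str:
    """Execute a tool call only if the agent's identity is allowed to use that tool."""
    allowed = AGENT_POLICY.get(agent_id, set())
    if tool_name not in allowed:
        # Deny by default: the confused deputy never gets to exercise privileges
        # it was not explicitly granted for its task.
        raise PermissionDenied(f"{agent_id} is not permitted to call {tool_name}")
    return TOOLS[tool_name](**kwargs)

# Example: a prompt-injected attempt to exfiltrate data via email is refused.
# dispatch("support-triage-agent", "send_email", to="attacker@example.com", body="...")
# -> PermissionDenied
```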

The Shortlist

Aisuru shifted its focus from DDoS attacks to renting out its massive botnet of IoT devices as residential proxies, a business fueled by demand for aggressive, anonymized data scraping to train AI models.

AFP is developing a prototype AI tool designed to interpret emojis and slang used by Gen Z and Gen Alpha to help identify and investigate sadistic online exploitation by 'crimefluencers.'

Spektrum Labs emerged from stealth with $10 million in funding for its platform that uses AI agents to continuously validate security controls and provide cryptographic proof of cyber resilience.

Google cautioned that AI will struggle to replace human institutional knowledge in security operations, arguing that legacy IT inertia and the value of human context present significant hurdles to full automation.

AI Influence Level

  • Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds, with a human in the loop reviewing the selected articles and subjects.

Till next time!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
