Cyber AI Chronicle
By Simon Ganiere · 16th October 2025
Welcome back!
More than 100 popular VS Code extensions have been exposing secret publisher tokens that could allow attackers to push malicious updates directly to over 150,000 developers' machines. The vulnerability stems from basic publisher mistakes like accidentally including configuration files rather than any platform breach.
With even simple theme extensions becoming potential malware delivery vehicles, this incident raises a critical question: how many other development tools in our daily workflows are quietly creating attack vectors we haven't considered?
In today's AI recap:
VS Code extensions leaking publisher tokens
OpenAI's Guardrails bypassed by prompt injection
Google's AI ransomware shield in Drive
AI-accelerated attacker reconnaissance
AI-powered extortion and deepfake scams
and more!
VS Code's Extension Backdoor
What you need to know: A recent Wiz Research investigation discovered that over 100 popular VS Code extensions were leaking secret publisher tokens, creating a critical supply chain risk that could have allowed attackers to push malicious updates to more than 150,000 developers.
Why is it relevant?:
The leaks stemmed from simple publisher error, such as accidentally including hidden configuration files or hardcoding credentials, rather than a platform-level breach.
A compromised token would allow an attacker to publish a malicious update to a legitimate extension—even a seemingly harmless theme—and automatically distribute malware to its entire user base.
In response, Microsoft is enhancing marketplace security by implementing pre-publish scanning that blocks extensions containing verified secrets before they go live; a rough local equivalent of that check is sketched below.
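Marketplace-side scanning is a backstop, but publishers can approximate it locally before running `vsce package`. The sketch below is a minimal, assumption-heavy Python example: the regexes and risky file names are generic heuristics, not Microsoft's actual rules (which are not public), and a real pipeline would lean on a dedicated scanner such as gitleaks or trufflehog.

```python
# check_for_secrets.py - minimal pre-publish scan over a VS Code extension folder.
# Illustrative only: these patterns are generic heuristics, not the marketplace's
# actual detection rules, and real publisher tokens have no public fixed format.
import os
import re
import sys

SUSPICIOUS_PATTERNS = [
    re.compile(r'(?i)\b(token|secret|password|pat|api[_-]?key)\b\s*[:=]\s*["\']?[A-Za-z0-9_\-\.]{20,}'),
    re.compile(r'(?i)\bauthorization\b\s*:\s*(bearer|basic)\s+[A-Za-z0-9_\-\.=]{16,}'),
]
RISKY_FILENAMES = {".env", ".npmrc", "credentials.json"}   # common accidental inclusions
SKIP_DIRS = {"node_modules", ".git", "out", "dist"}

def scan(root: str) -> int:
    findings = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune folders that vsce would not package anyway.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name in RISKY_FILENAMES:
                print(f"[!] risky file would be packaged: {path}")
                findings += 1
                continue
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                            print(f"[!] possible secret: {path}:{lineno}")
                            findings += 1
            except OSError:
                continue
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```

Pairing a check like this with a `.vscodeignore` entry for dotfiles keeps stray configuration files out of the published package in the first place.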
Bottom line: This incident highlights that the attack surface of your development environment includes every installed tool, no matter how simple it seems. Vigilance in managing extensions and their permissions is critical for securing the modern software supply chain.
OpenAI's AI Guardrails Bypassed
What you need to know: Security researchers almost immediately bypassed OpenAI's new Guardrails, a safety toolkit for AI agents. They used prompt injection to fool the very LLM that was supposed to act as the security judge.
Why is it relevant?:
The bypass reveals that using an LLM to police another LLM is fundamentally flawed since both are vulnerable to the same attacks.
Researchers successfully broke the safeguard by manipulating the judge LLM itself, tricking it into reporting a lower confidence score for a malicious prompt.
Relying solely on these types of built-in safety checks can create a false sense of confidence for organizations deploying AI in sensitive workflows; a way to pair the judge with an independent layer is sketched after this list.
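One way to blunt this class of bypass is to keep at least one validation layer that does not share the judge's failure modes. The Python sketch below is a hedged illustration, not OpenAI's design: `llm_judge_score` is a placeholder for whatever moderation model you actually call, and the regex markers and threshold are illustrative assumptions.

```python
# layered_guard.py - sketch of layering a deterministic check in front of an LLM judge.
# The judge call is a stand-in for your own moderation model; it is NOT OpenAI's
# Guardrails API, and the patterns and threshold below are illustrative assumptions.
import re
from dataclasses import dataclass

# Independent, non-LLM layer: crude signals that cannot be argued with.
INJECTION_MARKERS = [
    re.compile(r"(?i)ignore (all|previous) instructions"),
    re.compile(r"(?i)you are the (safety|security) judge"),
    re.compile(r"(?i)report a (low|lower) confidence"),
]

@dataclass
class Verdict:
    allowed: bool
    reason: str

def llm_judge_score(prompt: str) -> float:
    """Placeholder for a call to a moderation LLM returning risk in [0, 1]."""
    raise NotImplementedError("wire this to your actual judge model")

def moderate(prompt: str, judge_threshold: float = 0.5) -> Verdict:
    # Layer 1: deterministic screen. A prompt that addresses the judge directly
    # is suspicious regardless of what the judge later says about it.
    for marker in INJECTION_MARKERS:
        if marker.search(prompt):
            return Verdict(False, f"deterministic rule hit: {marker.pattern}")
    # Layer 2: LLM judge. Fail closed if it errors, and never let its self-reported
    # confidence be the only thing standing between the input and the agent.
    try:
        risk = llm_judge_score(prompt)
    except Exception:
        return Verdict(False, "judge unavailable, failing closed")
    if risk >= judge_threshold:
        return Verdict(False, f"judge risk {risk:.2f} >= {judge_threshold}")
    return Verdict(True, "passed both layers")
```

The deterministic layer is deliberately dumb; the point is that an attacker who talks the judge into a low score still has to get past rules the judge cannot be talked out of.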
Bottom line: This incident shows that self-policing AI systems can be compromised by turning their own logic against them. Effective AI security requires independent validation layers and continuous adversarial testing, not just internal safety filters.
Tired of newsletters vanishing into Gmail’s Promotions tab — or worse, being buried under ad spam?
Proton Mail keeps your subscriptions organized without tracking or filtering tricks. No hidden tabs. No data profiling. Just the content you signed up for, delivered where you can actually read it.
Built for privacy and clarity, Proton Mail is a better inbox for newsletter lovers and information seekers alike.
Google's AI Ransomware Shield
What you need to know: Google is rolling out a new AI-powered defense in Google Drive that automatically detects and halts ransomware attacks by analyzing file modification patterns, allowing users to restore their data with a few clicks.
Why is it relevant?:
The system uses a specialized AI model that identifies the core signature of a ransomware attack—mass file encryption—and immediately pauses file syncing to the cloud to prevent the corruption from spreading; a toy version of that signal is sketched after this list.
Instead of complex re-imaging, affected users receive a desktop alert and can use an intuitive web interface to restore multiple files to a previous, healthy state, minimizing downtime.
This protection is specifically built for the Drive for desktop client on Windows and macOS, defending file types like PDFs and Microsoft Office documents that are common ransomware targets.
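Google has not published the model's internals, but the underlying signal is easy to illustrate: a burst of files being rewritten with high-entropy, encrypted-looking content. The Python sketch below is a stdlib-only toy heuristic, not Google's detector; the watched folder, thresholds, and polling design are all assumptions.

```python
# mass_encrypt_monitor.py - toy heuristic for the signal behind ransomware detection:
# many files rewritten with high-entropy (encrypted-looking) content in a short window.
# This is NOT Google's model; folder, thresholds, and polling are illustrative assumptions.
import math
import os
import time

WATCH_DIR = os.path.expanduser("~/Documents")   # assumption: folder to watch
POLL_SECONDS = 10
BURST_THRESHOLD = 20          # changed files per window before we alert
ENTROPY_THRESHOLD = 7.5       # bits/byte; encrypted or compressed data sits near 8.0

def sample_entropy(path: str, sample_bytes: int = 65536) -> float:
    """Shannon entropy of the first chunk of a file, in bits per byte."""
    try:
        with open(path, "rb") as fh:
            data = fh.read(sample_bytes)
    except OSError:
        return 0.0
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def snapshot(root: str) -> dict:
    """Map path -> mtime for every regular file under root."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.path.getmtime(path)
            except OSError:
                continue
    return state

def main() -> None:
    previous = snapshot(WATCH_DIR)
    while True:
        time.sleep(POLL_SECONDS)
        current = snapshot(WATCH_DIR)
        changed = [p for p, mtime in current.items() if previous.get(p) != mtime]
        suspicious = [p for p in changed if sample_entropy(p) > ENTROPY_THRESHOLD]
        if len(suspicious) >= BURST_THRESHOLD:
            # A real product would pause cloud sync here and alert the user.
            print(f"ALERT: {len(suspicious)} files rewritten with encrypted-looking content")
        previous = current

if __name__ == "__main__":
    main()
```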
Bottom line: This moves ransomware defense beyond prevention and into active attack disruption. By containing the damage and simplifying recovery, Google is giving security teams a powerful tool to reduce the blast radius of an incident.
AI's Reconnaissance Superpowers
What you need to know: Attackers are now using AI to dramatically accelerate reconnaissance, allowing them to map target environments with unprecedented speed and precision. In response, leading cybersecurity researchers are gathering to share findings on how these new threats work and how defenses must evolve.
Why is it relevant?:
AI enables adversaries to launch precision attacks at scale by inferring an organization's tech stack and architecture from seemingly harmless public data; one trivial example of such a signal is sketched after this list.
This offensive shift is forcing a move away from reactive defense toward a proactive security culture that continuously validates controls against emerging AI-driven attack vectors.
Security teams are now in an arms race, working to expose new attacker tradecraft and develop countermeasures that can keep pace with AI's rapid evolution.
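For a sense of how little "harmless" data it takes, the Python sketch below checks a single low-touch signal: HTTP response headers that advertise the stack behind a site. The header-to-technology mapping is a simplistic assumption, and AI-assisted recon correlates many more sources (DNS records, job postings, public repos) at a scale no analyst can match.

```python
# header_fingerprint.py - trivial example of tech-stack inference from one public
# signal: HTTP response headers. The mapping below is a simplistic assumption.
import sys
import urllib.request

# Header values that commonly betray the stack behind a site.
FINGERPRINTS = {
    "server": "web server / CDN",
    "x-powered-by": "application framework or language runtime",
    "x-aspnet-version": "ASP.NET version",
    "x-generator": "CMS",
    "via": "proxy / CDN layer",
}

def fingerprint(url: str) -> None:
    req = urllib.request.Request(url, method="HEAD", headers={"User-Agent": "curl/8.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(f"{resp.status} {resp.geturl()}")
        for header, meaning in FINGERPRINTS.items():
            value = resp.headers.get(header)
            if value:
                print(f"{header}: {value}  <- hints at {meaning}")

if __name__ == "__main__":
    fingerprint(sys.argv[1] if len(sys.argv) > 1 else "https://example.com")
```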
Bottom line: AI is no longer just a tool for defenders; it has become a powerful weapon for attackers that changes the fundamentals of cyber warfare. For security professionals, this means adapting strategies to focus on continuous exposure management and staying ahead of threats that operate at machine speed.
AI-Powered Extortion Scams
What you need to know: A new report reveals that AI is supercharging extortion scams like deepfakes and virtual kidnapping. These advanced tactics are proving highly effective, with Gen Z and Millennials now accounting for two-thirds of all victims.
Why is it relevant?:
These scams leverage generative adversarial networks (GANs) that can clone a person's voice and convincingly replicate their facial expressions, making fraudulent requests seem alarmingly authentic.
The threat now extends well beyond simple hoaxes, with attackers using this technology for targeted blackmail, financial market manipulation, and large-scale identity theft.
Defenders can spot potential deepfakes by looking for visual inconsistencies, such as unnatural body movements, a lack of normal blinking, and mismatched audio or coloring; a rough blink-rate check is sketched below.
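Blink rate is one of the few indicators on that list you can measure rather than eyeball. The Python sketch below uses the classic eye-aspect-ratio (EAR) heuristic; it assumes opencv-python, dlib, and dlib's 68-landmark model file are available, and the thresholds are illustrative. Treat a low blink count as one weak signal to corroborate, not as proof of a deepfake.

```python
# blink_check.py - rough blink-rate check on a video clip using the eye-aspect-ratio
# (EAR) heuristic. Requires opencv-python, dlib, and the separately downloaded
# shape_predictor_68_face_landmarks.dat model. Thresholds are illustrative assumptions.
import math
import sys
import cv2
import dlib

RIGHT_EYE = list(range(36, 42))   # dlib 68-landmark indices for the eyes
LEFT_EYE = list(range(42, 48))
EAR_CLOSED = 0.21                 # assumption: below this, the eye counts as closed
MIN_CLOSED_FRAMES = 2             # consecutive closed frames that count as one blink

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    a = math.dist(pts[1], pts[5])
    b = math.dist(pts[2], pts[4])
    c = math.dist(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path: str) -> tuple[int, float]:
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed_run, frames = 0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        if ear < EAR_CLOSED:
            closed_run += 1
        else:
            if closed_run >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_run = 0
    cap.release()
    minutes = frames / fps / 60.0
    return blinks, (blinks / minutes if minutes else 0.0)

if __name__ == "__main__":
    total, per_minute = count_blinks(sys.argv[1])
    print(f"{total} blinks, {per_minute:.1f}/min (humans typically blink ~15-20/min)")
```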
Bottom line: The growing accessibility of these AI tools lowers the barrier for attackers to create convincing, personalized extortion campaigns at scale. This shift requires security teams to focus on both advanced detection methods and robust, ongoing employee awareness training.
The Shortlist
JPMorgan plans to invest up to $10 billion in US companies vital to national security, with a key focus on strategic technologies like artificial intelligence and cybersecurity.
Resistant AI raised $25 million in a Series B round to expand its AI-powered financial crime and document fraud detection platform to new markets.
FuzzingLabs accused Y Combinator-backed startup Gecko Security of replicating its vulnerability disclosures for AI tools like Ollama, sparking a public debate over CVE credit and research integrity.
Taiwan reported a 17% increase in daily cyber intrusions from China, with its National Security Bureau warning of a coordinated campaign using AI-generated content to spread disinformation.
AI Influence Level
Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds, with a human in the loop to review the selected articles and subjects.
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

