PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 22nd March 2026

Welcome back!

Last week the story was AI agents operating autonomously as attackers. This week the database confirms something less dramatic but arguably more important: the developer endpoint is now the most reliable attack path into enterprise infrastructure, and AI tools installed on those endpoints are making it worse by design.

208 articles processed this week. 17 tagged with confirmed ai_enabled_attack vectors. Six articles scored 9 or higher by the AI threats agent. What stands out is not a single catastrophic event but a pattern consolidating across multiple independent incidents - the GlassWorm supply chain cascade, the trivy-action CI/CD compromise, the SaaS agentic AI abuse in UNC6395's Salesloft chain, and the shadow AI analysis across 23,000 environments - all pointing at the same structural problem. Developers have AI assistants with broad, privileged access to their environments. Those AI tools sit in the middle of the path from initial access to production secrets. Nobody audited that attack surface when the tools were rolled out.

The Interpol number is worth naming directly: AI-enhanced fraud schemes are 4.5 times more profitable than non-AI variants. That is not a prediction or a model output. It is an empirical measurement from active criminal operations. The economics have settled. AI adoption by threat actors is not accelerating because attackers are curious about technology. It is accelerating because it works.

AI Threat Tempo

🤖 AI-Enabled Social Engineering & Deepfake Operations: ↑↑ High activity

Interpol's published analysis quantified AI-enhanced fraud at 4.5x the profitability of non-AI variants, with voice cloning now requiring only 10 seconds of reference audio and deepfake-as-a-service kits commoditised on dark web markets. Separately, a Wired investigation documented industrialised pig-butchering operations running purpose-built "AI rooms" in Southeast Asian scam compounds, with individual operators conducting 100–150 deepfake video calls per day. The campaigns specifically target victim verification requests - when a victim asks for a video call to confirm identity, the AI room is the response.

Significance: Deepfake video now defeats the most common folk remedy for social engineering. Telling users to "request a video call" to verify identity is no longer a useful defensive instruction.

🏴‍☠️ Nation-State AI Operations: ↑↑ North Korea confirmed, EU sanctions China and Iran

OFAC sanctioned six individuals and two entities tied to North Korea's IT worker fraud scheme, with Jasper Sleet and Coral Sleet confirmed using Faceswap for identity document manipulation and jailbroken LLMs for malware development at operational scale. Separately, the Lazarus Group's BlueNoroff sub-unit breached Bitrefill via a compromised employee laptop, draining wallets and extracting 18,500 purchase records in a textbook pattern now representing their sixth major cryptocurrency theft this cycle. DPRK's cumulative cryptocurrency theft since 2022 now exceeds $6.8 billion. The EU sanctioned Integrity Technology Group (Flax Typhoon enabler, 260,000-device IoT botnet) and i-Soon (confirmed hack-for-hire contractor) in the same week.

Significance: North Korea has operationalised AI for identity fraud at scale. The Faceswap confirmation is not aspirational tradecraft - it is sanctioned reality.

💀 AI-Augmented Ransomware & Cybercrime: ↑↑↑ Critical - zero-day exploitation confirmed

Interlock ransomware exploited CVE-2026-20131, a CVSS 10.0 zero-day in Cisco Secure Firewall Management Center, for 36 days before public disclosure on March 4. Amazon's MadPot threat intelligence infrastructure discovered the exploitation through a misconfigured Interlock staging server, exposing the group's full toolkit: custom dual-language RATs (JavaScript and Java), memory-resident web shells with anti-forensic log purging every five minutes, and HAProxy-based infrastructure laundering. Confirmed victims span healthcare (DaVita, Kettering Health), government (City of Saint Paul, Minnesota National Guard activated for recovery), and education. Interlock previously deployed the Slopoly AI-generated backdoor, placing them in the small set of ransomware groups confirmed to use LLM-assisted malware development.

Significance: A ransomware group with a 36-day zero-day window against firewall management software, an AI-generated malware lineage, and the sophistication to activate anti-forensic countermeasures is not the same threat category as opportunistic ransomware. Treat this group as an APT with a monetisation motive.

🔗 AI Supply Chain & Developer Ecosystem Attacks: ↑↑↑ Multiple simultaneous campaigns

Three separate campaigns hit the developer ecosystem simultaneously. GlassWorm's ForceMemo variant force-pushed obfuscated malware into 433+ packages and extensions across GitHub, npm, and VSCode/OpenVSX using a Solana blockchain memo field as C2 - a censorship-resistant command channel that bypasses traditional IOC-based blocking. The trivy-action GitHub Action (Aqua Security's vulnerability scanner) was compromised for the second time in a month, with 76 of 77 release tags repointed to a credential stealer harvesting GitHub secrets, SSH keys, and cloud provider credentials. AppsFlyer's Web SDK suffered a domain registrar hijack that silently injected cryptocurrency-stealing JavaScript across 100,000+ applications for approximately 36 hours. The common thread across all three: the attack surface is developer tooling, and the payload consistently targets cloud credentials and access tokens.

Significance: If you have not audited what your CI/CD pipelines are permitted to pass as secrets to third-party GitHub Actions, you have an unreviewed attack surface. Pinning Actions by commit hash rather than tag is no longer optional hygiene.
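The tag-vs-hash check can be scripted as a first pass. A minimal sketch - the regex is approximate and the example workflow and SHA are illustrative, not tied to any real repository - that flags `uses:` references not pinned to a full 40-character commit SHA:

```python
import re
from pathlib import Path

# "uses: owner/repo@ref" lines in workflow files; a 40-char hex ref is a
# commit SHA, anything else (tag, branch) is mutable and can be repointed.
USES_RE = re.compile(r"^\s*(?:-\s*)?uses:\s*([\w.-]+/[\w./-]+)@([\w./-]+)", re.MULTILINE)
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[tuple[str, str]]:
    """Return (action, ref) pairs not pinned to a full commit SHA."""
    return [(action, ref) for action, ref in USES_RE.findall(workflow_text)
            if not SHA_RE.match(ref)]

def audit_repo(repo_root: str) -> dict[str, list[tuple[str, str]]]:
    """Scan every workflow file under .github/workflows for unpinned Actions."""
    return {str(wf): hits
            for wf in Path(repo_root).glob(".github/workflows/*.y*ml")
            if (hits := unpinned_actions(wf.read_text()))}

example = """
steps:
  - uses: actions/checkout@v4
  - uses: some-org/some-action@8f3d2a1b4c5e6f708192a3b4c5d6e7f808192a3b
"""
print(unpinned_actions(example))  # [('actions/checkout', 'v4')]
```

The reason hash pinning defeats the trivy-action technique: a tag can be silently repointed to a different commit, but a commit SHA cannot be reassigned without the reference itself changing.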

🏢 Enterprise AI Risk: ↑↑ Shadow AI and agentic SaaS abuse

A Grip Security analysis of 23,000 SaaS environments found that 100% of organisations run AI-enabled SaaS applications, with an average of 140 per organisation, most deployed without IT or security visibility. The UNC6395 campaign exploited this precisely: stolen OAuth tokens from the Salesloft/Drift breach enabled attackers to send natural language prompts to Drift's AI chatbot infrastructure, cascading a single compromised SaaS application into 700+ downstream organisational breaches, including Cloudflare, Palo Alto Networks, and Zscaler. Separately, researchers disclosed critical vulnerabilities in Amazon Bedrock AgentCore (DNS-based data exfiltration bypassing sandbox isolation), LangSmith (CVE-2026-25750, CVSS 8.5, account takeover exposing LLM trace history, SQL queries, and proprietary source code), and SGLang (unauthenticated RCE via unsafe pickle deserialization in LLM serving infrastructure).

Significance: The "lethal trifecta" - private data access, untrusted content exposure, external communication capability - describes most enterprise AI deployments today. The Bedrock finding is structurally important: Amazon classified the DNS exfiltration path as intended functionality, not a defect. You own the mitigation.
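The trifecta framing lends itself to a simple inventory check. A minimal sketch, assuming a hypothetical inventory format with three boolean fields per deployment - the record type, field names, and example entries are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class AIDeployment:
    """Hypothetical inventory record for an AI-enabled application."""
    name: str
    reads_private_data: bool          # can access internal or customer data
    ingests_untrusted_content: bool   # processes outsider emails, web pages, chat
    can_communicate_externally: bool  # can send email, call webhooks, make HTTP requests

def lethal_trifecta(d: AIDeployment) -> bool:
    # All three together let a prompt injection become data exfiltration.
    return (d.reads_private_data
            and d.ingests_untrusted_content
            and d.can_communicate_externally)

inventory = [
    AIDeployment("support-chatbot", True, True, True),
    AIDeployment("internal-code-assistant", True, False, False),
]
flagged = [d.name for d in inventory if lethal_trifecta(d)]
print(flagged)  # ['support-chatbot']
```

Removing any one leg of the trifecta - usually external communication, via egress restrictions - is what takes a deployment off the flagged list.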

☁️ AI-Driven Cloud Attacks: ↑↑ Identity and CI/CD as primary vectors

Iran-linked Handala wiped approximately 80,000 Stryker employee devices by compromising an administrator account, creating a new Global Administrator in the Azure tenant, and using Microsoft Intune's legitimate remote wipe functionality - no malware required, no novel exploitation, just privileged cloud identity abuse.
The ShinyHunters group maintained operational tempo with a claimed 1 petabyte Telus Digital breach via GCP credentials stolen from the prior Salesloft compromise. TeamPCP (Cloud Stealer) abused the second trivy-action compromise to harvest cloud provider tokens from CI/CD pipelines, with the malware establishing systemd persistence for continued credential harvesting beyond the initial run.

Significance: Eighty thousand devices wiped in three hours using only legitimate cloud management tooling is the clearest possible argument for privileged identity governance in cloud environments. The attack required no malware, no exploit, and no persistence mechanism other than Global Administrator access.
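One concrete monitoring control is watching for new Global Administrator assignments in the directory audit log. A hedged sketch over an exported Microsoft Entra (Azure AD) audit log - the field names follow the Microsoft Graph directoryAudits shape, but verify them against your tenant's actual export before relying on this:

```python
import json

def new_global_admin_events(audit_log_json: str) -> list[dict]:
    """Flag role-assignment events that grant Global Administrator."""
    hits = []
    for event in json.loads(audit_log_json):
        # "Add member to role" is the directory-audit activity for role grants.
        if event.get("activityDisplayName") != "Add member to role":
            continue
        granted_ga = any(
            prop.get("displayName") == "Role.DisplayName"
            and "Global Administrator" in str(prop.get("newValue", ""))
            for target in event.get("targetResources", [])
            for prop in target.get("modifiedProperties", [])
        )
        if granted_ga:
            hits.append(event)
    return hits
```

In practice this belongs in a SIEM rule with near-real-time alerting, not a batch script - the Stryker timeline shows the window between grant and wipe can be hours.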

Interesting Stats

4.5x - The profitability multiplier of AI-enhanced fraud schemes over non-AI variants, per Interpol's operational analysis. Threat actors are not adopting AI because it is interesting. They are adopting it because this is the return.

36 days - The zero-day exploitation window Interlock had on CVE-2026-20131 before Cisco's public disclosure. Amazon found it through a misconfigured attacker server, not through vulnerability research. Detection by accident is not a security programme.

140 - The average number of AI-enabled SaaS applications running per organisation across 23,000 surveyed environments, per Grip Security. The average security team knows about a fraction of them.

Three Things That Actually Matter

1. Shadow AI Is Not a Future Risk

The Grip Security research published this week does something useful: it puts a number on the problem. 100% of the 23,000 SaaS environments they analysed run AI-enabled applications. The average organisation has 140 of them. The average security team has visibility into a small subset. The rest are shadow AI - AI systems processing organisational data, holding OAuth tokens, capable of taking autonomous actions, and entirely outside the security control model.

The UNC6395 campaign demonstrates what this means in practice. Attackers stole OAuth tokens from the Salesloft/Drift breach. They then sent natural language instructions to Drift's AI chatbot infrastructure, which had legitimate, authenticated access to customer data across hundreds of organisations. The AI complied. 700+ organisations were breached through a single compromised third-party SaaS application. The breach radius was not determined by the attackers' lateral movement capability - it was determined by the blast radius of that one AI application's authenticated access.

The IdentityMesh vulnerability pattern identified by Grip explains why this will keep happening: AI-enabled SaaS applications share unified authentication contexts, meaning a single compromised OAuth token can cascade across all AI systems in an organisation's interconnected environment. Your organisation probably approved one of these applications. You almost certainly didn't audit what else it could reach.

The practical question for Monday: open your SaaS discovery tool, filter for applications with AI agent capabilities, and ask for each one - what OAuth scopes does it hold, what can it reach with those scopes, and what happens if an attacker sends it well-crafted instructions? Most organisations cannot answer that question. The ones that can are ahead.
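That Monday question can be sketched as code against an exported inventory. The record format and `has_ai_agent` flag below are illustrative assumptions - substitute your discovery tool's actual export schema - though the scope names in the risk list are real Google and Microsoft Graph OAuth scopes, used here as examples:

```python
# Scopes granting broad read/write reach; any match is a review-first finding.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/gmail.readonly",
    "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "Directory.ReadWrite.All",
}

def risky_ai_apps(apps: list[dict]) -> list[tuple[str, set[str]]]:
    """Return (app_name, broad_scopes_held) for AI-capable apps with broad scopes."""
    findings = []
    for app in apps:
        if not app.get("has_ai_agent"):
            continue
        broad = set(app.get("oauth_scopes", [])) & BROAD_SCOPES
        if broad:
            findings.append((app["name"], broad))
    return findings

inventory = [
    {"name": "drift-chatbot", "has_ai_agent": True,
     "oauth_scopes": ["Mail.ReadWrite", "User.Read"]},
    {"name": "hr-portal", "has_ai_agent": False,
     "oauth_scopes": ["Directory.ReadWrite.All"]},
]
print(risky_ai_apps(inventory))  # [('drift-chatbot', {'Mail.ReadWrite'})]
```

The hard part is not the filter - it is building and maintaining the inventory the filter runs over.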

2. The Developer Ecosystem Is Being Systematically Harvested

The three simultaneous campaigns against developer tooling this week - GlassWorm/ForceMemo, trivy-action, AppsFlyer SDK - are not coincidental. Developer endpoints have become the most economically efficient attack surface in the enterprise because they concentrate high-value credentials (GitHub PATs, cloud provider tokens, SSH keys) in environments with limited security visibility, and because developer tooling is trusted in ways that general-purpose software is not.

The GlassWorm campaign is the most technically instructive of the three. It weaponises the VS Code extension ecosystem as initial access, steals GitHub credentials, then uses those credentials to force-push malicious code into the victim's repositories - poisoning downstream consumers without the repository owner's awareness. The C2 channel is a Solana blockchain wallet's memo field. That is not a signature that appears in a SIEM rule. The trivy-action compromise used git tag repointing - the repository history looks correct, the version tag exists, only the commit it references has changed. Both techniques are specifically designed to defeat the tooling security teams rely on for supply chain detection.

The credentials being harvested tell you what the end objective is. GitHub PATs, cloud provider tokens, SSH keys, Kubernetes tokens - these are not credentials for user accounts. These are credentials for infrastructure. The GlassWorm operators are not interested in exfiltrating your source code. They are interested in using your source code pipelines to reach your cloud environment.

The response to this is not a new tool. It is a set of configuration changes most organisations have not made: GitHub Actions pinned to commit hashes rather than tags, least-privilege scoping on CI/CD secrets, and periodic audits of which GitHub Actions your pipelines call and what permissions they request. None of this is technically complex. All of it requires someone deciding it matters.
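The audit step can be scripted alongside the pinning change. A minimal sketch that reports, per workflow file, which Actions are called and which repository secrets are referenced - the regexes are approximate, and a real YAML parser would be more robust:

```python
import re
from pathlib import Path

USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w./-]+@[\w./-]+)")
SECRET_RE = re.compile(r"\$\{\{\s*secrets\.([A-Za-z0-9_]+)\s*\}\}")

def exposure_from_text(text: str) -> dict[str, list[str]]:
    """Third-party Actions called and secrets referenced in one workflow body."""
    return {"actions": sorted(set(USES_RE.findall(text))),
            "secrets": sorted(set(SECRET_RE.findall(text)))}

def workflow_exposure(repo_root: str) -> dict[str, dict[str, list[str]]]:
    """Map each workflow file to the Actions it calls and secrets it references."""
    return {wf.name: exposure_from_text(wf.read_text())
            for wf in Path(repo_root).glob(".github/workflows/*.y*ml")}
```

Reading the output side by side - this Action, these secrets - is exactly the question the trivy-action compromise forces: what would an attacker holding that Action's tag have been handed?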

3. The Interpol Number Changes the Argument

There is a persistent tendency in AI threat discussions to frame the question as capability: can AI-generated malware match human-written code? Can deepfakes fool trained investigators? Can AI agents execute novel attacks autonomously? These are reasonable research questions that mostly produce reassuring answers - AI-generated malware is usually less sophisticated, deepfakes fail against adversarial testing, autonomous agents still make mistakes that human operators would not.

Interpol's 4.5x profitability figure breaks this frame. The right question was never capability. It was economics. A fraud scheme that is 4.5 times more profitable than the alternative will attract capital and operational investment. It does not need to be technically impressive. Voice cloning requiring 10 seconds of audio is not impressive as a technical achievement - it is decisive as a cost structure. The criminal operation that previously needed a fluent native speaker to run a vishing call can now run it with anyone. The "AI room" running 150 deepfake video calls per day at a Southeast Asian scam compound is not a research demonstration. It is a production operation that industrialised deepfake video at a cost that makes it economically rational.

The implication for defenders is uncomfortable but important. Controls designed to detect AI-generated content - watermarks, detection classifiers, behavioural patterns - are in an arms race against a threat actor with enormous economic incentive to defeat them. Controls designed to make the fraud economically irrational regardless of the AI tooling - verification architecture that survives deepfakes, transaction limits that cap the upside even on successful fraud, callback verification protocols that require out-of-band confirmation - are not in that arms race. The security industry has been focused on the wrong variable.

Interpol's primary recommended countermeasure is an out-of-band callback protocol. That is not a technology recommendation. It is an argument for process design over detection technology. In a field that defaults to buying tools, it is worth taking seriously.

In Brief - AI Threat Scan

🤖 AI-Enabled Attacks Interpol's analysis of AI-enhanced financial fraud quantifies AI-assisted schemes at 4.5x the profitability of non-AI alternatives, with deepfake-as-a-service kits commercially available and agentic AI flagged as an emerging autonomous fraud vector. A Wired investigation documented industrialised pig-butchering compounds operating dedicated AI rooms for real-time deepfake video calls, with individual operators handling 100–150 calls per day against victims who requested video verification.

🏴‍☠️ Nation-State AI Activity OFAC sanctioned six individuals and two entities from North Korea's IT worker fraud network, formally confirming Jasper Sleet and Coral Sleet use Faceswap for identity document fabrication and jailbroken LLMs for malware development. The EU sanctioned Integrity Technology Group and i-Soon for Flax Typhoon botnet operations and hack-for-hire activities respectively, with i-Soon's contractor ecosystem now confirmed across both US and EU enforcement actions.

💀 AI in Ransomware / Cybercrime Interlock ransomware exploited CVE-2026-20131 as a zero-day for 36 days before Cisco's public disclosure, with Amazon's threat intelligence discovering the campaign via a misconfigured attacker staging server. Interlock's toolkit - exposed by that operational security failure - includes dual-language custom RATs, memory-resident web shells, and anti-forensic log purging, alongside the previously documented AI-generated Slopoly backdoor.

🔗 AI System Vulnerabilities The GlassWorm ForceMemo campaign compromised 433+ packages across GitHub, npm, and VSCode using Solana blockchain memos as C2 infrastructure - a channel that bypasses conventional IOC blocking. OpenClaw's prompt injection vulnerabilities enabled zero-interaction data exfiltration via messaging app link preview triggers, with China's CNCERT issuing a national advisory and banning the platform from government and banking systems.

🏢 Enterprise AI Risk Grip Security's analysis of 23,000 SaaS environments found the average organisation runs 140 AI-enabled SaaS apps, most outside security visibility, with UNC6395 demonstrating the blast radius: one compromised OAuth token cascading through agentic AI to breach 700+ organisations. Critical vulnerabilities in Amazon Bedrock, LangSmith, and SGLang expose AI infrastructure to DNS exfiltration, account takeover, and unauthenticated RCE respectively.

☁️ Cloud Attacks Iran-linked Handala wiped 80,000 Stryker devices using Microsoft Intune's legitimate remote wipe functionality after compromising an Azure Global Administrator account - no malware, no novel exploitation, just identity abuse. Trivy-action was compromised for the second time in a month, with 76 of 77 release tags repointed to harvest cloud provider tokens from CI/CD pipelines.

🔗 Supply Chain The AppsFlyer Web SDK domain registrar hijack delivered cryptocurrency-stealing JavaScript to applications across 100,000+ downstream deployments for approximately 36 hours, via subversion of the SDK distribution infrastructure rather than AppsFlyer's internal systems. Speagle malware abused Cobra DocGuard - a Chinese document security platform previously weaponised exclusively by Chinese APT groups - with one variant hardcoded to search for files referencing the Dongfeng-27 ballistic missile.

SPONSORED BY

The decision is yours

Confusing, jargon-packed, and time-consuming. Or quick, direct, and actually enjoyable.

Easy choice.

There’s a reason over 4 million professionals read Morning Brew instead of traditional business media. The facts hit harder, it’s built to be skimmed, and for once, business news is something you actually look forward to reading.

Try Morning Brew’s newsletter for free and realize just how good business news can be.

The Bottom Line

The thread running through this week is not AI capability. It is AI surface area. Shadow AI in SaaS environments, AI tools on developer endpoints, AI-enabled SaaS applications with OAuth tokens reaching across organisational boundaries, AI agent platforms with unauthenticated API endpoints - all of it represents attack surface that grew faster than the security model that governs it.

The Stryker incident is the clearest illustration: 80,000 devices wiped in three hours using only Microsoft Intune. No malware. No zero-day. No novel attack technique. Just Global Administrator access to a cloud tenant and a legitimate management function used for illegitimate purposes. The entire attack was within the designed operating parameters of the technology. The failure was not technical. It was governance - insufficient controls on who can reach that level of privilege, and insufficient monitoring to detect when that privilege is granted or abused.

Apply that lens to AI. The AI systems your organisation deployed in the last 18 months have legitimate access to data, legitimate communication channels, and legitimate tool access. Many of them are operating without the access controls you would apply to a contractor doing the same work. Some of them - the shadow AI in SaaS applications you haven't audited - you don't know about at all. The Grip Security number (140 AI-enabled SaaS apps on average, most outside security visibility) is not a distant risk. It is the current state of most organisations.

The Interpol profitability figure settles the economic question: AI adoption by threat actors is rational and accelerating. The question that actually matters for Monday morning is a simpler one. Of the AI systems you have approved, how many of them have simultaneous access to sensitive data, untrusted content, and external communication channels? That is the audit that needs to run. Everything else is commentary.

If you have been enjoying the newsletter, it would mean the world to me if you could share it with at least one person 🙏🏼 and if you really really like it then feel free to offer me a coffee ☺️

Simon

Wisdom of the Week

AI Influence Level

  • Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds. Human-in-the-loop review of the selected articles and subjects.

Till next time!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
