PRESENTED BY

Cyber AI Chronicle
By Simon Ganiere · 15th February 2026
Welcome back!
Nation-state threat actors are no longer just experimenting with large language models - they're embedding them directly into their attack infrastructure. Groups linked to China, Iran, and North Korea are now using Google Gemini to accelerate malware development, automate reconnaissance, and generate phishing content that bypasses traditional detection methods.
As AI transitions from research tool to operational weapon in the cyber kill chain, the question becomes: can defenders adapt their detection capabilities fast enough to identify AI-generated artifacts before they're deployed at scale?
In today's AI recap:
If you have been enjoying the newsletter, it would mean the world to me if you could share it with at least one person 🙏🏼 and if you really really like it then feel free to offer me a coffee ☺️
State Hackers Weaponize Google Gemini for Recon and Code
What you need to know:
Google’s latest threat intelligence reveals that state-sponsored groups from China, Iran, and North Korea are actively weaponizing Gemini to accelerate reconnaissance, debug malware, and craft convincing phishing lures. This shift demonstrates how adversaries are moving beyond experimental AI usage to integrating these tools directly into their operational attack lifecycles.
Why is it relevant?:
North Korean group UNC2970 and others are using the model to synthesize open-source intelligence and map defense sector targets, effectively blurring the line between professional research and malicious reconnaissance.
Attackers have begun deploying AI-integrated malware like HONESTCUE, which leverages the Gemini API to dynamically generate and execute malicious C# payloads directly in memory without leaving disk artifacts. Link to The Hacker News
Beyond coding, adversaries are launching large-scale model extraction attacks to replicate proprietary reasoning and abusing AI-generated "ClickFix" social engineering lures to compromise systems. Link to Bleeping Computer
Bottom line:
This development signals a transition where AI becomes a functional component of the cyber kill chain rather than just a passive research assistant. Security teams needs to adapt detection strategies to identify AI-generated code artifacts and monitor for anomalous API traffic patterns that indicate backend abuse.
SPONSORED BY
Learn AI in 5 minutes a day
What’s the secret to staying ahead of the curve in the world of AI? Information. Luckily, you can join 2,000,000+ early adopters reading The Rundown AI — the free newsletter that makes you smarter on AI with just a 5-minute read per day.
AI Recommendation Poisoning
What you need to know:
Microsoft researchers identified a growing trend where websites embed hidden instructions in "Summarize with AI" buttons to manipulate AI assistant memories and bias future responses. This technique enables bad actors to force models into favoring specific products or viewpoints during subsequent user interactions.
Why is it relevant?:
The security team discovered over 50 unique prompts from 31 companies that attempt to poison the context of tools like Copilot and ChatGPT by utilizing hidden URL parameters.
Developers deploy these malicious interactions using freely available tools like the CiteMET package, which makes injecting these memory-altering instructions trivially easy for non-technical users.
This technique constitutes a persistent threat labeled "AI Agent Context Poisoning" where a single click trains the assistant to treat biased information as trusted facts forever.
Bottom line:
Organizations must recognize that AI memory acts as a new attack surface that requires regular auditing and clearing to prevent bias. Treating AI-generated recommendations with the same scrutiny applied to standard web search results protects against this invisible manipulation.
AI Agents Leak Data via Messaging App Link Previews
What you need to know:
Security researchers have uncovered a method where AI agents in messaging apps like Slack and Teams can be tricked into leaking data via zero-click link previews. This vulnerability allows attackers to siphon sensitive information simply by causing the agent to generate a URL that the platform automatically fetches.
Why is it relevant?:
The exploit leverages indirect prompt injection to manipulate an agent into appending private user data to a URL hosted on an attacker-controlled domain.
Data exfiltration happens instantly when the messaging app generates a link preview, meaning the victim never needs to click the malicious link for the breach to occur.
Popular agentic setups like Microsoft Teams with Copilot Studio and OpenClaw on Telegram are currently vulnerable to this technique unless specific configuration changes are applied.
Bottom line:
Enterprises utilizing AI agents in communication channels need to audit their link preview configurations to close this zero-click vector. Securing these integration points ensures that productivity gains from agentic workflows do not come at the cost of data confidentiality.
Malicious "AI" Chrome Extensions Impact 260k Users
What you need to know:
Over 30 malicious browser extensions masquerading as ChatGPT and Claude tools were found stealing API keys and Meta Business Suite data from enterprise users. This coordinated campaign highlights how threat actors weaponize the demand for generative AI tools to compromise browser environments.
Why is it relevant?:
Attackers inject a remote iframe overlay that mimics legitimate AI interfaces and dynamically loads malicious code to evade store security checks.
These tools actively siphon sensitive data including API keys, auth tokens, and Gmail threads by exploiting the user's existing session permissions.
The operators maintain persistence by quickly re-uploading extensions under new IDs immediately after Google removes them from the Chrome Web Store.
Bottom line:
This campaign underscores the critical risk of unmanaged browser extensions acting as silent bridges to external command-and-control servers. Security teams must prioritize auditing browser plugins to prevent sensitive data exfiltration through seemingly helpful productivity tools.
Claude Artifacts Used to Spread Infostealers
Cybercriminals are exploiting Anthropic’s Claude Artifacts feature to host malicious "ClickFix" instructions on legitimate domains, tricking users into installing infostealer malware » Read the full report
Why is it relevant?
Threat actors purchase Google Ads for technical searches like "HomeBrew" to redirect victims to a public Claude artifact that leverages the reputable claude.ai domain to bypass security filters.
One malicious guide attracted over 15,600 views, instructing users to execute terminal commands that deploy the MacSync infostealer to harvest sensitive keychain and browser data.
This campaign mirrors recent attacks leveraging ChatGPT and Grok, confirming that attackers are successfully adapting social engineering tactics to exploit the inherent trust users place in LLM platforms.
Bottom line
Security teams must recognize that legitimate AI domains now serve as effective hosting grounds for malware lures that evade standard reputation blocks. Updating user awareness training to verify the actual content of shared AI artifacts is critical for defense.
The Shortlist
Malwarebytes warned that cybercriminals are utilizing AI website builders like Vercel’s v0 to rapidly generate convincing clones of major brand websites for phishing campaigns.
Cisco Talos detailed VoidLink, a modular malware framework likely developed with the assistance of a large language model that enables long-term, stealthy access to Linux-based cloud environments.
Check Point acquired three Israeli companies—Cyata, Cyclops, and Rotate—for approximately $150 million to bolster its AI-driven security and exposure management capabilities.
OpenClaw partnered with VirusTotal to automatically scan all skills uploaded to its ClawHub marketplace, aiming to detect malicious functionality in the wake of recent reports about compromised agent skills.
Wisdom of the Week
When it feels like nothing’s happening,
everything is.
Keep going.
AI Influence Level
Level 4 - Al Created, Human Basic Idea / The whole newsletter is generated via a n8n workflow based on publicly available RSS feeds. Human-in-the-loop to review the selected articles and subjects.
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
