Cyber AI Chronicle
By Simon Ganiere · 1st February 2026
Welcome back!
Moltbot, the widely adopted open-source AI assistant, has transformed from a helpful coding companion into a major cybersecurity liability. Misconfigurations are exposing admin panels across hundreds of deployments, while malicious actors distribute fake extensions and run typosquatting campaigns to exploit its growing popularity.
As enterprises rush to deploy AI tools, fundamental security practices are being overlooked at an alarming rate. With one in five organizations already running Moltbot in their environments, how many are unknowingly creating backdoors into their most sensitive systems?
In today's AI recap:
The Moltbot Mess
What you need to know: The viral open-source AI assistant Moltbot (formerly Clawdbot) is at the center of a security firestorm, with insecure deployments, malicious extensions, and impersonation campaigns exposing sensitive user and corporate data.
Why is it relevant?
Researchers found hundreds of instances online with exposed admin panels due to simple misconfigurations, allowing attackers to steal API keys, read private messages, and even gain root access to the host system.
Attackers are exploiting its popularity by distributing a malicious VS Code extension that installs a remote access trojan, alongside typosquatting campaigns setting the stage for future supply-chain attacks.
The tool is becoming a major "Shadow AI" risk: one security firm reported finding it in 22% of enterprise environments, where it can bypass data loss prevention (DLP) controls because it operates with the user's full permissions.
Bottom line: The rush to adopt powerful, self-hosted AI agents often outpaces fundamental security practices. Moltbot's issues serve as a critical reminder for developers to secure their deployments and for security teams to monitor for unapproved AI tools accessing corporate data.
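For developers locking down a self-hosted deployment, a pre-flight configuration audit catches exactly the misconfigurations described above: an admin panel bound to all interfaces and reachable without authentication. A minimal sketch in Python, assuming hypothetical config keys (`bind_address`, `auth_token`) for illustration rather than Moltbot's actual configuration schema:

```python
# Minimal pre-deployment audit for a self-hosted AI assistant.
# The config keys ("bind_address", "auth_token") are hypothetical
# examples, not Moltbot's actual configuration schema.

def audit_config(config: dict) -> list[str]:
    """Return a list of findings for risky settings."""
    findings = []
    # Binding to all interfaces exposes the admin panel to the network.
    if config.get("bind_address", "127.0.0.1") == "0.0.0.0":
        findings.append("admin panel bound to all interfaces (0.0.0.0)")
    # An empty or missing token means the panel is reachable without auth.
    if not config.get("auth_token"):
        findings.append("no auth token configured")
    return findings

if __name__ == "__main__":
    risky = {"bind_address": "0.0.0.0", "auth_token": ""}
    for finding in audit_config(risky):
        print(finding)
```

A check like this belongs in the deployment pipeline, so an instance never goes live with the defaults that researchers found exposed in the wild.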
The “LLMjacking” Bazaar
What you need to know: A large-scale cybercrime campaign dubbed "Operation Bizarre Bazaar" is actively hijacking thousands of publicly exposed, self-hosted LLM endpoints to resell API access and steal compute resources. This emerging threat, termed ‘LLMjacking,’ marks a new frontier in AI-focused cybercrime.
Why is it relevant?
This operation exploits a massive and growing attack surface, with recent research identifying over 175,000 exposed Ollama hosts operating globally without standard security guardrails.
Attackers use a three-stage supply chain that systematically scans for vulnerable endpoints, validates access, and then resells access to over 50 different AI models on a commercial marketplace called SilverInc.
The primary entry points are common misconfigurations, such as unauthenticated Ollama instances on port 11434 and exposed OpenAI-compatible APIs on port 8000.
Bottom line: This campaign shows that risks from exposed AI infrastructure go far beyond unauthorized compute costs, extending to data exfiltration and lateral movement into internal networks. Security teams must now treat self-hosted AI deployments with the same rigor as any other mission-critical, internet-facing service.
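The exposure pattern above is straightforward to verify on infrastructure you own: an unauthenticated Ollama instance answers `GET /api/tags` on port 11434 with its model list. A minimal Python sketch of the check (the probe logic is split out so it can be tested offline; only probe hosts you are authorized to test):

```python
# Sketch of a defender-side check for an unauthenticated Ollama endpoint.
# GET /api/tags lists the models an open Ollama server will serve.
# Only probe hosts you are authorized to test.
import json

OLLAMA_PORT = 11434  # Ollama's default listening port

def probe_url(host: str) -> str:
    """Build the URL for Ollama's model-listing endpoint."""
    return f"http://{host}:{OLLAMA_PORT}/api/tags"

def looks_exposed(status: int, body: str) -> bool:
    """HTTP 200 with a JSON 'models' list indicates an open endpoint."""
    if status != 200:
        return False
    try:
        return "models" in json.loads(body)
    except ValueError:
        return False

if __name__ == "__main__":
    # Simulated response from an unauthenticated instance.
    sample = json.dumps({"models": [{"name": "llama3:8b"}]})
    print(looks_exposed(200, sample))  # True: open instance
    print(looks_exposed(401, "{}"))    # False: auth required
```

The same pattern applies to the OpenAI-compatible APIs on port 8000; any endpoint that returns a model inventory without credentials is a candidate for the kind of resale described above.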
SPONSORED BY
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
VS Code's Trojan AI
What you need to know: Two popular AI coding assistants for VS Code, downloaded over 1.5 million times, were discovered by researchers to be secretly exfiltrating developer source code and other sensitive data to servers in China.
Why is it relevant?
The extensions provide fully functional AI assistance, including code completion and explanations, which effectively hides their malicious background activity from unsuspecting developers.
A hidden process captures the entire contents of any opened or edited file, and can even be remotely triggered to harvest up to 50 files from a workspace at once.
The exfiltrated data puts proprietary source code, API keys, and other credentials at significant risk, creating a major supply chain vulnerability that leverages the trusted environment of the official VS Code Marketplace.
Bottom line: The rise of helpful AI developer tools creates a new and potent attack surface. This campaign shows how attackers can use genuine utility as a cover to bypass traditional security checks and gain access to highly sensitive information.
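Security teams can get a first pass on this risk by inventorying installed extensions against an approved list. A minimal Python sketch, assuming the `code` CLI is on PATH (it prints installed extension IDs via `--list-extensions`); the allowlist and the flagged extension ID are illustrative, not references to the actual malicious extensions:

```python
# Sketch of a simple VS Code extension-inventory check.
# `code --list-extensions` prints installed extension IDs, one per line.
# The allowlist below is illustrative, not a vetted list.
import subprocess

def unapproved(installed: list[str], allowlist: set[str]) -> list[str]:
    """Return installed extension IDs missing from the allowlist."""
    return sorted(ext for ext in installed if ext.lower() not in allowlist)

def list_installed() -> list[str]:
    # Requires the `code` CLI on PATH.
    out = subprocess.run(["code", "--list-extensions"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

if __name__ == "__main__":
    allow = {"ms-python.python"}
    # "shady.ai-helper" is a made-up ID standing in for an unvetted extension.
    print(unapproved(["ms-python.python", "shady.ai-helper"], allow))
```

An inventory diff will not prove an extension is malicious, but it surfaces unvetted AI tooling for review before it ships proprietary code elsewhere.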
North Korea's AI-Powered Backdoor
What you need to know: The North Korean APT group Konni is deploying AI-generated PowerShell backdoors in a new campaign targeting blockchain developers, according to a new Check Point report. The campaign aims to establish a persistent foothold in high-value development environments.
Why is it relevant?
Researchers identified the malware as AI-assisted due to its modular structure, clean documentation, and placeholder comments like "# <– your permanent project UUID," which are characteristic of LLM-generated code.
The multi-stage attack begins with a ZIP archive delivered via Discord's CDN, leading to a shortcut file that executes the backdoor and ultimately deploys a legitimate remote access tool for persistent control.
This new tactic suggests state-sponsored actors are using AI to accelerate malware development and standardize their code, allowing them to create new threats more efficiently while relying on proven social engineering methods.
Bottom line: Threat actors are now using AI to streamline the creation of custom malware, lowering the barrier for entry and increasing the speed of their operations. This shift demands that security teams focus more on behavior-based detection rather than relying solely on static signatures to catch these rapidly evolving threats.
AI Voice Phishing Breaches Major Brands
What you need to know: The ShinyHunters ransomware group is reportedly using AI voice-cloning in a large-scale phishing campaign to bypass security, claiming data breaches at major companies like Match Group and Panera Bread.
Why is it relevant?
Attackers are using AI-generated voices to impersonate IT support, tricking employees into providing credentials and MFA codes for single sign-on (SSO) platforms like Okta and Microsoft Entra.
The campaign has impacted numerous high-profile companies beyond Match and Panera, with Bumble, CarMax, and Crunchbase also confirming they were targeted by the threat group.
Security experts emphasize that this attack vector succeeds against weaker MFA, recommending a shift toward phishing-resistant authentication like FIDO2 keys or passkeys to mitigate these threats.
Bottom line: AI voice cloning significantly lowers the barrier for attackers to launch convincing social engineering campaigns at scale. This trend forces a critical re-evaluation of security controls that rely on human voice verification or are susceptible to phishing.
The Shortlist
The DoJ secured the conviction of a former Google engineer on seven counts of economic espionage for stealing over 2,000 confidential files containing the company's AI trade secrets to benefit a startup in China.
Anthropic published a new essay from CEO Dario Amodei, titled "The Adolescence of Technology," which maps out the major risks of powerful AI, from autonomous systems to misuse for destruction and economic disruption.
Bitdefender discovered an Android RAT campaign using the Hugging Face platform to host thousands of polymorphic payload variants, which are delivered via a dropper app disguised as a security tool.
1Password introduced built-in phishing protection that warns users about typosquatted or suspicious domains, directly addressing the rise of more convincing scams supercharged by AI tools.
Wisdom of the Week
The amount of good things in your life depends on your ability to notice them.
AI Influence Level
Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds. A human-in-the-loop reviews the selected articles and subjects.
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
