PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 14th December 2025

Welcome back!

Cybercriminals have found a new way to distribute malware by combining trusted platforms in unexpected ways—using Google Ads to direct Mac users to malicious ChatGPT and Grok conversations that trick them into installing the AMOS infostealer. This social engineering approach exploits user trust in multiple legitimate services simultaneously, bypassing typical security awareness.

The attack highlights how AI platforms are becoming both targets and tools for cybercriminals, raising questions about whether our security frameworks can keep pace with these evolving attack vectors. When trusted AI interfaces become distribution channels for malware, how do we maintain user confidence while securing these new digital workflows?

In today's AI recap:

You can find Project Overwatch on the dedicated LinkedIn page. Feel free to follow the page so you can get daily updates and summary of key cyber and AI news articles. It’s the ideal place to also interact and share your comments and ideas.

AI-Powered Scams Target Mac Users

What you need to know: Attackers are weaponizing Google Ads and AI chatbots in a new campaign that tricks macOS users into installing the AMOS infostealer by following malicious instructions in shared ChatGPT and Grok conversations.

Why is it relevant?:

  • The attack begins with sponsored Google Ads for common Mac troubleshooting queries that link directly to seemingly helpful but malicious public AI conversations.

  • This social engineering tactic cleverly exploits user trust in multiple legitimate platforms at once, from the search engine to the AI chatbot's interface, bypassing typical suspicion.

  • Once a user executes the provided command, the AMOS malware harvests browser cookies, passwords, cryptocurrency wallets, and system files, while installing a backdoor for persistent access.

Bottom line: This method bypasses traditional security warnings by manipulating trusted user workflows rather than tricking users into downloading a file. It serves as a critical reminder that even legitimate platforms can be used as conduits for malware distribution.

Make Newsletter Magic in Just Minutes

Your readers want great content. You want growth and revenue. beehiiv gives you both. With stunning posts, a website that actually converts, and every monetization tool already baked in, beehiiv is the all-in-one platform for builders. Get started for free, no credit card required.

The GeminiJack Hack

What you need to know: Google patched a critical zero-click vulnerability in Gemini Enterprise, dubbed GeminiJack, which let attackers exfiltrate sensitive corporate data via indirect prompt injection. The flaw, discovered by AI security firm Noma Security, abused how the AI processed shared documents and emails.

Why is it relevant?:

  • This zero-click attack vector required no explicit user action; a maliciously crafted Google Doc or email could trigger the exploit when the AI processed content during routine queries.

  • The exploit used indirect prompt injection, embedding hidden instructions in documents that caused Gemini to search for and return sensitive material as part of normal responses.

  • Data exfiltration was stealthy, packaging stolen emails, calendar items, and files in ways that bypassed traditional security tools by resembling innocuous image or resource requests.

Bottom line: AI agents with broad access to enterprise data constitute a new attack surface that requires different controls. Security teams must complement access controls with validation of how AI systems interpret and act on ingested content.

An Unsolvable Problem

What you need to know: The UK's National Cyber Security Centre (NCSC) warns that prompt injection is a fundamental design flaw in LLMs that may never be fully mitigated. Attackers can craft inputs that cause AI systems to ignore or override original instructions.

Why is it relevant?:

  • The root cause is core to how LLMs operate: models process everything as tokens and don't reliably separate trusted instructions from untrusted user data.

  • Don't treat this like SQL injection — that analogy misleads. Prompt injection currently has no silver-bullet fix because the boundary between data and command sits inside the model's prediction process.

  • The better framing is a "Confused Deputy" problem: a privileged system can be tricked into misusing its authority, so risk management and system design are required.

Bottom line: As AI agents gain access to internal tools and APIs, prompt injection becomes a direct vector for data loss and fraud. Security teams should prioritize constrained architectures, strict privilege controls, and defense-in-depth rather than relying on perfect input sanitization.

The AI Browser Battle

What you need to know: As Gartner urges enterprises to block new agentic AI browsers over security risks, browser developers are racing to build in advanced defenses against threats like prompt injection before the technology goes mainstream.

Why is it relevant?:

  • The central concern is indirect prompt injection, where malicious code hidden on a webpage can trick an AI agent into performing unintended actions, like exposing user data or making fraudulent transactions.

  • Google is tackling this with a multi-layered system, including a novel “User Alignment Critic”—a second, isolated AI model that vets every action the primary agent takes to ensure it aligns with the user’s original goal.

  • Similarly, Brave is testing its agentic features in a separate, isolated profile that has no access to a user’s cookies or login data, and also uses an alignment-checking model to supervise the AI’s actions.

Bottom line: The race for AI-powered browsing is quickly becoming a battleground for security and user trust. These emerging defense-in-depth strategies are setting a critical new standard for how AI agents can safely operate on the open web.

Docker's Secret Sprawl

What you need to know: A recent analysis from cybersecurity firm Flare found over 10,000 public container images on Docker Hub leaking sensitive data. The report highlights thousands of active API keys for AI services exposed by developers, often from personal "shadow IT" accounts.

Why is it relevant?:

  • The most frequently leaked credentials were AI model keys, with almost 4,000 access tokens for services like OpenAI, Gemini, and Anthropic found in just one month, showing how fast AI adoption is outpacing security controls.

  • This exposure often stems from "shadow IT" accounts outside of corporate monitoring, with one Fortune 500 company and a major national bank having their secrets leaked from personal developer registries.

  • Even when developers remove a secret from a public image, the underlying credential is rarely revoked; in about 75% of cases, the key or token remained active and usable by anyone who had already found it.

Bottom line: The rush to build with AI is creating systemic risk, turning public code repositories into a goldmine of live credentials. This confirms a new attack paradigm where adversaries don't need to break in—they can just log in using the keys developers accidentally publish themselves.

The Shortlist

The FBI warned that scammers are using AI to manipulate photos scraped from social media as "proof-of-life" evidence in virtual kidnapping extortion schemes.

Microsoft patched a publicly disclosed remote code execution zero-day in GitHub Copilot for JetBrains that could be exploited via cross-prompt injection in untrusted files.

OWASP released the Top 10 for Agentic Application 2026 as a framework that identifies the most critical security risks facing autonomous and agentic AI systems.

Quote of the week

Life is strange.

You arrive with nothing, spend your whole life chasing everything, and still leave with nothing.

Make sure your soul gains more than your hands.

AI Influence Level

  • Level 4 - Al Created, Human Basic Idea / The whole newsletter is generated via a n8n workflow based on publicly available RSS feeds. Human-in-the-loop to review the selected articles and subjects.

Till next time!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

Reply

Avatar

or to participate

Keep Reading

No posts found