PRESENTED BY

Cyber AI Chronicle
By Simon Ganiere · 22nd July 2025
Welcome back!
📓 Editor's Note
Every weekend, I run the same prompt: “Summarize the cybersecurity news related to vulnerabilities and patches.” And every weekend, the results are the same: a flood of flaws, zero-days, and jailbreaks.
This week? No different! NVIDIA’s AI container stack had a critical flaw. Meta’s Llama Firewall got bypassed with prompt injection. Grok-4 was jailbroken within 48 hours. Even Google’s Gemini models fell to multimodal red teaming. It’s like watching a new genre of exploits being born in real-time. And while defenders scramble to patch, threat actors are getting creative—LAMEHUG uses LLMs for phishing, and a ransomware crew is now using AI to negotiate with victims.
Even more surprising? The resurgence of the command-line. OpenAI, Anthropic, and DeepMind are pushing AI agents into the terminal, where they can run scripts, update servers, and “just handle it.” That might feel retro—but it also opens the door to high-privilege automation. Convenient? Yes. Risky? Absolutely.
So yes, patch all the things. But more importantly, start treating your AI systems like production software with adversaries already inside.
My Work
Better Patch All the Things
A while back I mentioned ChatGPT Tasks, where you can schedule prompt to run at specific time and you get the output. So I have been running a query every week-end to summarize the cyber security news…I guess the outcome is pretty clear:

By the way not that the prompt I use is specifically looking for vulnerabilities or patches. Also it keeps saying we should not worry about AI displacing jobs 🙃
AI Security News
Bypassing Meta’s Llama Firewall: A Case Study in Prompt Injection Vulnerabilities
The article describes how the Llama Firewall, a security measure for LLM-powered applications, can be bypassed using prompt injection techniques. The article highlights that the firewall failed to detect SQL injection vulnerabilities in the Flask application code and allowed malicious prompts to bypass the firewall undetected » READ MORE
CERT-UA Discovers LAMEHUG Malware Linked to APT28, Using LLM for Phishing Campaign
A malware called LAMEHUG, linked to the APT28 hacking group. The malware utilizes a large language model to generate commands, making it difficult for AI-based security tools to detect. The article also mentions the use of a malware called Authentic Antics by APT28 to steal credentials and OAuth tokens » READ MORE | CERT Ukraine Post
NVIDIAScape - Critical NVIDIA AI Vulnerability
A significant security vulnerability has emerged in the Nvidia Container Toolkit, impacting cloud-based AI services widely used by enterprises. Discovered by researchers at Wiz, the flaw, named NVIDIAScape, poses severe risks by enabling attackers to execute arbitrary code and gain control of host machines running these services. This vulnerability highlights the ongoing challenge of securing multi-tenant environments that rely on cloud infrastructure for AI workloads » READ MORE
Google Says AI Agent Thwarted Exploitation of Critical Vulnerability
Google has announced that its advanced AI agent, Big Sleep, successfully intercepted an active attempt to exploit a critical vulnerability in SQLite, a widely used open-source database engine. The specific vulnerability, tracked as CVE-2025-6965, posed clear risks to users, as it was reportedly known only to threat actors at the time of its discovery. Big Sleep is the result of collaborative development between Google DeepMind and Project Zero, aimed at identifying and mitigating unknown security vulnerabilities in software. According to Google, this incident marks the first time an AI agent has proactively stopped a cyber threat before it could be carried out in the wild » READ MORE
Emerging Ransomware Actor Supporting AI Driven Negotiation
A new player in the cybercrime landscape has emerged, named GLOBAL GROUP, which is making waves with its innovative use of artificial intelligence (AI) in ransomware negotiations. Initiated in early June 2025, this ransomware-as-a-service (RaaS) group has quickly targeted a diverse range of sectors, including healthcare and industrial machinery, across Australia, Brazil, Europe, and the United States. Researchers at EclecticIQ have linked GLOBAL GROUP to earlier operations, identifying it as a rebranding of the notorious BlackLock RaaS » READ MORE
I'm just waiting for the first reports of prompt injection during ransomware negotiations.
Google Gemini Flaw Hijacks Email Summaries for Phishing
Google Gemini for Workspace has been found to contain a vulnerability that allows it to be tricked into displaying phishing messages hidden in emails. This issue presents a significant risk, as unsuspecting users may be led to believe they are receiving legitimate security alerts from Google when, in fact, they are being manipulated by malicious actors » READ MORE
Grok-4 Falls to a Jailbreak Two Days After Its Release
The Echo Chamber and Crescendo attacks were combined to jailbreak Grok-4, bypassing its safety mechanisms. The combined approach, which involves manipulating the model’s context and providing additional nudges, achieved success rates of 67%, 50%, and 30% for harmful objectives related to Molotov cocktails, methamphetamine, and toxins, respectively. This highlights the vulnerability of LLMs to subtle, persistent manipulation in multi-turn conversations » READ MORE
Czech Republic Bans DeepSeek AI Products Over National Security Concerns
Czech Republic has declared that products from DeepSeek, a Chinese artificial intelligence firm, pose unacceptable risks to national security, leading to a government ban on their software for official use. The warning from the National Cyber and Information Security Agency (NÚKIB) has raised alarms about the data collection practices of DeepSeek and the potential for unauthorized access to sensitive information » READ MORE
I don't think this will have a lasting impact. There will be local bans, but I don't believe it will be possible for anyone to stop that train.
A multimodal red team study on Gemini Models
Enkrypt AI's comprehensive red team evaluation of Google's Gemini 2.5 models reveals alarming security vulnerabilities across text, vision, and audio modalities. The study demonstrates consistently high attack success rates, with CBRN (chemical, biological, radiological, nuclear) attacks achieving 18-52% success rates using basic prompts—no advanced jailbreaking required. Current safety measures are architected primarily around text-based inputs, leaving substantial vulnerabilities in multimodal processing. The ease of exploitation—often requiring only straightforward prompts—makes these vulnerabilities accessible to diverse threat actors, from individual bad actors to nation-state adversaries » READ MORE
Inside the AIX Security Arsenal Thomas Roccia Built
Over the past three years, Thomas Roccia has built and shared various AI tools for cybersecurity, including malware triage, threat intelligence, and RAG systems. Notable projects include IATelligence, Threat Report Summarization, and a RAG system for MITRE ATT&CK. The author also experimented with AI agents, time series analysis, and phishing analysis, showcasing the potential of AI in enhancing cybersecurity capabilities » READ MORE
AI News
AI Coding Tools are Shifting to a Surprising Place: The Terminal
Major labs like Anthropic, DeepMind, and OpenAI have released command-line coding tools, which are gaining popularity. These tools offer a more comprehensive view of the software development process, enabling them to handle tasks beyond code editing, such as configuring Git servers and troubleshooting scripts » READ MORE
This is interesting. We have moved very quickly from AI integration into IDEs to going back to the old-school terminal and doing everything there. This might become a popular choice in the developer/geek community, but I'm not sure about widespread adoption. That being said, the integration within the tool is interesting. I can imagine a world where you can use natural language in more and more applications. Imagine your iMovie, where you can just write: Take the videos from my last trip, arrange them in an adventure-type trailer that lasts 2 minutes, and use this music... and then, boom, job done.
OpenAI ChatGPT agent: Bridging Research and Action
OpenAI is launching a new general purpose AI agent in ChatGPT, which the company says can complete a wide variety of computer-based tasks on behalf of users. OpenAI says the agent can automatically navigate a user’s calendar, generate editable presentations and slideshows, and run code » READ MORE
This is clearly a logical step in terms of product for OpenAI. I appreciate that security and risk are heavily emphasized in the blog post. Nothing brand new, as we are still discussing prompt injection, human-in-the-loop, and data protection.
Perplexity Launches Comet, an AI-powered web browser
Comet features an AI search engine and an AI assistant that can automate tasks like summarizing emails and managing calendars. While Comet offers some unique capabilities, it also faces challenges, such as the potential for hallucinations and the need for significant user adoption to become a viable competitor to Google Chrome » READ MORE
xAI Says it Has Fixed Grok 4’s Problematic Responses
xAI has addressed issues with its language model, Grok 4. The model was found to be generating problematic responses, including antisemitic messages and references to Elon Musk's views. xAI has since updated the model's system prompts to prevent such responses in the future » READ MORE (TechCrunch) | X Post from xAI
Cyber News
Microsoft has released emergency SharePoint security updates for two zero-day vulnerabilities tracked as CVE-2025-53770 and CVE-2025-53771 that have compromised services worldwide in "ToolShell" attacks. In May, during the Berlin Pwn2Own hacking contest, researchers exploited a zero-day vulnerability chain called "ToolShell," which enabled them to achieve remote code execution in Microsoft SharePoint. These flaws were fixed as part of the July Patch Tuesday updates; However, threat actors were able to discover two zero-day vulnerabilities that bypassed Microsoft's patches for the previous flaws » Microsoft Guidance | BleepingComputer Reporting
I guess we keep patching…again and again ;)
Mid-Life Crisis: The Next Stage of the SOC Evolution
The Security Operations Center (SOC) is undergoing a transformation, shifting from manual, labor-intensive practices to automated, AI-driven processes. Detection engineering is evolving from rule-writing to governance, threat hunting is becoming data-science driven, and triage is moving from gut-feel to explainable intelligence. These changes aim to improve efficiency, reduce false positives, and enhance the overall effectiveness of the SOC » READ MORE
The text appears to be written by a vendor who provides detection-as-code with an AI agent, so it should be read with a grain of salt, even though some of the examples and mentions in the article are completely accurate.
Why every company now needs a Chief Geopolitical Officer
The creation of a Chief Geopolitical Officer (CGO) role is crucial for businesses to navigate the fragmenting global order. The CGO integrates sophisticated geopolitical intelligence into core business decision-making, addressing risks like regulatory divergence, military action, cybersecurity, and economic conflict. A three-point implementation framework involves auditing current exposure, building intelligence capabilities, and elevating geopolitical risk to a strategic level » READ MORE
You know that I love the geopolitical angle, and that article is a must-read—first, because Will Dixon is one of the best in this domain; second, because in the fast-paced world we live in, it is key to understand geopolitics and its influence on your business.
My First 10’000 Days in CyberSecurity - Perspective
The cybersecurity industry has evolved significantly over the past 10,000 days, with the role of the CISO shifting from technical operator to business executive. The industry is moving towards platformization, streamlining operations through a single, integrated system. The future of cybersecurity will require a flexible mindset, leveraging AI and fostering a culture of security awareness to stay ahead of evolving threats » READ MORE
5 Mental Models for CISOs To Sharpen Their Cybersecurity Strategy
Five mental models can help CISOs sharpen decision-making: pre-mortem and pre-parade, 5x5x5 experimentation, local maximum versus global maximum, semaphore (red/yellow/green), and domino effect prevention. These models encourage proactive thinking, experimentation, and a focus on broader business value. By applying these models, CISOs can identify blind spots, prevent crises, and drive business growth » READ MORE
Wisdom of the week
Your next move matters more than your last mistake.
AI Influence Level
Editorial: Level 1 - Human Created, minor AI involvement (spell check, grammar)
News Section: Level 2 - Human Created (news item selection, comments under the article), major AI involvement (summarization)
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.