
Cyber AI Chronicle
By Simon Ganiere · 22nd July 2025
Welcome back!
📓 Editor's Note
Every weekend, I run the same prompt: “Summarize the cybersecurity news related to vulnerabilities and patches.” And every weekend, the results are the same: a flood of flaws, zero-days, and jailbreaks.
This week? No different! NVIDIA’s AI container stack had a critical flaw. Meta’s Llama Firewall got bypassed with prompt injection. Grok-4 was jailbroken within 48 hours. Even Google’s Gemini models fell to multimodal red teaming. It’s like watching a new genre of exploits being born in real time. And while defenders scramble to patch, threat actors are getting creative: the LAMEHUG malware, spread through phishing, calls an LLM to generate its commands, and a ransomware crew is now using AI to negotiate with victims.
Even more surprising? The resurgence of the command line. OpenAI, Anthropic, and DeepMind are pushing AI agents into the terminal, where they can run scripts, update servers, and “just handle it.” That might feel retro, but it also opens the door to high-privilege automation. Convenient? Yes. Risky? Absolutely.
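If you do hand an agent the keys to a terminal, at least put a chokepoint between its suggestions and your shell. Here is a minimal sketch in Python; the `run_agent_command` wrapper and its allowlist are hypothetical, not any vendor’s API, but the principle (parse, check, execute without a shell) applies to whichever agent framework you run.

```python
import shlex
import subprocess

# Hypothetical guardrail for a terminal AI agent: only run commands whose
# binary is on an explicit allowlist, and never hand the agent a raw shell.
ALLOWED_BINARIES = {"ls", "cat", "grep", "git"}

def run_agent_command(command: str) -> str:
    """Execute an agent-proposed command only if its binary is allowlisted."""
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"Blocked agent command: {command!r}")
    # shell=False means no ';', '&&', or pipes to chain extra commands onto
    # the one you approved.
    result = subprocess.run(tokens, capture_output=True, text=True, timeout=30)
    return result.stdout

# The convenient path gets through; the "just handle it" path does not.
print(run_agent_command("ls -la"))                   # allowed
# run_agent_command("sudo systemctl restart nginx")  # raises PermissionError
```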
So yes, patch all the things. But more importantly, start treating your AI systems like production software with adversaries already inside.