PRESENTED BY

Cyber AI Chronicle
By Simon Ganiere · 3rd August 2025
Welcome back!
📓 Editor's Note
We're hitting a turning point that feels different from anything I've experienced in my career. After two decades watching technology evolve, I think we're entering what I'm calling the "build era" - and it's going to change everything.
I'll be honest - I was skeptical. I've tried Cursor, GitHub Copilot, and other AI coding tools over the past year. They were helpful but nothing revolutionary. Then I spent two weeks with Claude Code, and something clicked. I built three working prototypes faster than I've ever built anything.
This isn't just about coding faster. It's about removing the friction between having an idea and testing whether it actually works.
Here's what I'm seeing in organizations that are adapting well: technical implementation is becoming a commodity, but the ability to think through problems and communicate requirements clearly is becoming invaluable.
The business analysts I know are having a moment. They've always been the translators between "what we need" and "what's technically possible." Now they can prototype directly, show rather than just tell, and iterate without waiting for development cycles.
Satya Nadella's recent internal memo reinforced something I've been thinking about: we're shifting from building tools to enabling everyone to build their own tools. The companies that figure out how to retrain their people for this shift will have a massive advantage.
Organizations that can't embrace this change will find themselves increasingly disadvantaged and so does people who won’t adapt.
Now don’t get me wrong, this is not a perfect world. AI-generated code is fast but often insecure. We're creating a dangerous dynamic where business pressure for rapid deployment conflicts with security fundamentals. The technical debt we're accumulating right now will define how successfully we navigate the next few years.
The build era promises unprecedented creative capacity, but it requires discipline. Speed without security isn't progress - it's accumulated risk. The organizations that master this balance will shape the next decade of technology.
AI Security News
Code Execution Through Deception: Gemini AI CLI Hijack
Tracebit discovered a vulnerability in Gemini CLI, an AI agent for code exploration, allowing silent execution of malicious commands. The attack exploits prompt injection and inadequate command validation, enabling the exfiltration of sensitive credentials. The issue was fixed in Gemini CLI v0.1.14, released on July 25th » READ MORE
We asked 100+ AI models to Write Code. Here’s How Many Failed Security Tests
If you think AI-generated code is saving time and boosting productivity, you’re right. But here’s the problem: it’s also introducing security vulnerabilities… a lot of them. Spoiler alert” 45% of code samples failed security tests and introduced OWASP Top 10 security vulnerabilities into the code » READ MORE
No big surprise and the continue wave of vibe coding will continue to do enable this at scale. I don’t remember where I read that but some CIO/CTO were also saying that yes their dev teams gain effectiveness but it’s being lost again later in the process…due to all the clean-up and bugs.
IBM: Cost of Data Breach Report 2025
And just to justify the previous point…the IBM and Ponemon Institute report reveal how AI is greatly outpacing security and governance in favor of do-it-now adoption. The findings show that ungoverned AI systems are more likely to be breached and more costly when they are » READ MORE
OWASP - GenAI Incident Response Guide
An important guide for incident responder to read. There is still a lot of question about basic definition of an “AI incident” - and this is a usual topic of discussion within incident response and similar challenges exist with third-party incident, data breach incident, etc. - but also start the definition of what AI specific aspects an incident could have » READ MORE
AI-Powered Cybersecurity Posture Reporting with N8N Automation
A great example that mixed automation and AI to support cyber security.. Foresight Cyber is sharing how they have build a N8N workflow to assess the public security posture of a domain » READ MORE
You should for sure check out N8N, i’m using to support this newsletter and you can run your own instance locally as well or go with the SaaS version (check the price). Very versatile automation workflow that can connect to a lot of services.
The World’s First AI Agent Standard - AIUC-1
AIUC-1 is the world's first standard for AI agents. It covers data & privacy, security, safety, reliability, accountability and societal risks. Certified organizations demonstrate they conduct leading technical, operational, and legal activities. Auditors assess compliance through upfront technical testing and review of operational controls (conducted annually), and ongoing technical testing (conducted at least quarterly to keep up with ongoing changes to AI risk & mitigation techniques) » READ MORE
This one is not a big surprise. Time will tell if AIUC-1 will become THE standard for Agent and be widely adopted. The fact that it’s already aligne with existing standard like MITRE ATLAS, ISO 42001, NIST AI RMF, OWASP will probably help.
Google’s Approach for Secure AI Agents
Google outlines a security strategy for AI agents—systems that act autonomously using LLMs. These agents pose unique risks like rogue actions and data leakage, especially through prompt injection or misuse of tools. Traditional security measures are insufficient alone. Google proposes a hybrid defense-in-depth model combining deterministic controls (e.g. policy engines) and reasoning-based defenses (e.g. adversarial training, guard models). Three core principles guide this: human oversight, constrained agent powers, and strong observability. Security must be layered, auditable, and adaptive to evolving threats, ensuring agents remain both powerful and trustworthy. This approach aligns with Google’s Secure AI Framework (SAIF) » READ MORE
OpenAI CEO is right and very wrong about AI-Faked voices
Sam Altman has been raising concern about scams and fraud that leverage deepfake voices. Nothing new here we have talked about this many time - and I still see this has one of the biggest attack vector against the “normal people” (aka my mom). The piece from The Washington Post is however, rightfully in my opinion, also arguing that whilst it is good and important that Sam Altman raise the topic the AI companies should also make hard commitment to prevent this all together » READ MORE
2025 AI Security Benchmark Report - SandboxAQ
AI adoption is surging—79% of enterprises now use AI in production—but only 6% have achieved AI-native security. Despite 77% expressing confidence in handling AI threats, just 28% have conducted comprehensive risk assessments. A major gap exists between perceived and actual readiness, especially as AI systems introduce new risks like data leakage, model poisoning, and AI-generated code vulnerabilities. Most security investment goes into governance, while critical areas like model protection and secrets management are underfunded. The report urges a shift from traditional to AI-native security with automated controls, cryptographic agility, and continuous validation to close the “confidence-capability” gap. » READ MORE
AI News
The Big LLM Architecture Comparison
[Warning: Technical content] The article describes the architectural developments in large language models (LLMs) between 2023 and 2025. It focuses on the Multi-Head Latent Attention (MLA) and Mixture-of-Experts (MoE) modules. The article also compares the normalization layers used in these models, including QK-Norm » READ MORE
The Satya Memo
Satya Nadella (Chariman and CEO of Microsoft) recently published a memo to Microsoft employees. Highly recommend to read it (link below). He is trying to juggle the fact that Microsoft has never been so profitable (e.g. Microsoft is in the very close circle of company with a $4 trillion market cap) but still slashing headcount. Obviously AI is in the game and he talked about “unlearning” and “learning” in order to be able to adapt to the new reality that AI is bringing across the tech industry » Read the Memo
One thing I like: “What does empowerment look like in the era of AI? It’s not just about building tools for specific roles or tasks. It’s about building tools that empower everyone to create their own tools. That’s the shift we are driving—from a software factory to an intelligence engine empowering every person and organization to build whatever they need to achieve.” I can only agree with this; the economy will shift to a “builder” economy as AI will enable people to build apps, services, and products significantly faster and without some of the hard technical know-how.
One thing I don’t like is that this memo is basically a carefully worded warning sign that there will be more cuts in terms of headcount due to AI. “Unlearning” and “learning” mean that some roles—at least within Microsoft—must go through that process, and this basically means being let go.
Proton Launches Lumo
Lumo is Proton AI Assistant. As you can expect from Proton, the whole solution is geared towards security and privacy. In particular, Lumo do not record your conversation, your chat history can not be accessed (not even by Proton) and the conversations are never used for training the model » READ MORE | Access Lumo here
A bit of a rant on this one. Proton has also announced that the whole Lumo infrastructure is not hosted in Switzerland anymore, this is a conscious decision on the back of legislative changes that the Swiss Government is proposing.
I’m Swiss, love the country and everything, but seeing a company like Proton walking back from Switzerland is not a good sign. At a time of technology and AI competition being able to keep talent and know-how in country seems to be as critical.
Cyber Security
Vulnerability Management…finally someone credible is looking into it
I’ll be keeping a close eye on this one, Root Evidence has announced it has raised $12.5 million in a seed round. The company founders are well known and well respected people in the industry (Jeremiah Grossman, Robert “RSnake” Hansen, Heather Konold, Lex Arquette) and guess what’s on their webpage: “Success in vulnerability management is no longer about who can report the most theoretical vulnerabilities, no matter how technically severe. Organizations need proof, with evidence, of which vulnerabilities are known to cause the most damage if left unaddressed, to focus on remediating those first. That’s what Root Evidence does” » Root Evidence Website
Music to my hear so can’t wait to see what they will come up with!
Google Project Zero: Policy and Disclosure 2025 Edition
Google Project Zero published an updated version of their Policy and Disclosure. In particular they will start, within one week of reporting a vulnerability to a vendor, to publicly share that a vulnerability was discovered. The objective here is to ensure that downstream vendors/dependent - who are sometime ultimately responsible for shipping fixes to users - are notified and can integrate those in their product » READ MORE
Ukraine as the Cyber Spanish Civil War
This research paper analyzes cyber warfare in the Russo-Ukrainian War through Thomas Rid's framework of espionage, sabotage, and subversion. The study finds that contrary to predictions of decisive "cyber blitzkrieg," cyber operations have evolved into integrated battlefield enablers rather than standalone strategic weapons. Key findings:
cyber espionage has shifted from strategic intelligence to real-time tactical targeting (Russia's "Infamous Chisel");
cyber sabotage remains tactically useful but strategically limited compared to kinetic alternatives; and
cyber subversion proves most effective for asymmetric influence operations. Critically, Ukraine's organizational agility—leveraging civil-military fusion, private partnerships, and volunteer networks—has proven more decisive than Russia's technical superiority, demonstrating that institutional flexibility outweighs raw cyber capability in protracted conflict
Wisdom of the week
It’s not about having a perfect life.
It’s about a life you are in love with.
AI Influence Level
Editorial: Level 1 - Human Created, minor AI involvement (spell check, grammar)
News Section: Level 2 - Human Created (news item selection, comments under the article), major AI involvement (summarization)
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.