Cyber AI Chronicle

By Simon Ganiere · 26th August 2025

Welcome back!

📓 Editor's Note

"Despite $30-40 billion in enterprise AI investment, 95% of organizations achieve zero measurable return" - State of AI Business 2025 Report

Does that mean we need to stop investing in AI? Is the technology significantly worse than what we thought? What if this isn't a technology problem?

What if this is the result of organizational design failure? What if AI, despite all of its promises, is still subject to the same organizational rules?

The research demonstrates a classic information barrier. The top of the organization sees lagging indicators—pilot counts, deployment metrics, demo success stories. The bottom experiences leading indicators—actual usability, workflow integration, daily friction. Organizations celebrate pilots and PoCs while employees quietly abandon expensive enterprise AI tools for $20 ChatGPT subscriptions. This creates a dangerous feedback loop where investment decisions get made on outdated success signals.

This information gap isn't unique to AI and we've observed it across transformation initiatives. That barrier has killed more initiatives than any technical limitation.

Build vs. buy: nothing new here, and this decision has impacted countless projects. The research reveals something telling: external partnerships succeed at twice the rate of internal builds, partly because vendors force risk decisions upward in the organization. When you build internally, departments can incrementally commit resources without triggering enterprise-wide risk review. Decentralized management works, but only with proper escalation frameworks. Without them, that gap between top and bottom leads to risk management decisions being taken down the line without senior awareness.

The hype is real. The research shows 50% of investments flow to sales and marketing despite back-office automation delivering superior documented returns. We're still in the phase where senior management chases the "shiny thing" rather than deep diving into the more difficult but ultimately rewarding work of fixing actual business processes. Back-office failures remain invisible to boards while sales tool problems generate immediate executive attention. Organizations optimize for visibility rather than impact.

Here's what makes this worse: most enterprises procure AI like traditional software, expecting it to work out-of-the-box. But successful AI tools learn and adapt over time while failing ones remain static. The complexity of making systems that actually evolve with usage patterns conflicts with standard enterprise procurement and governance approaches.

Unless companies take the hard road and realize this requires fundamentally different organizational approaches—not just deploying chatbots—we'll continue seeing minimal enterprise AI impact while shadow AI usage explodes around official systems.

The real transformation is happening despite enterprise systems, not through them.

My Work

CyberMCP News

An awesome tutorial from Andrew Hoog on how to connect the CyberMCP News to a local LLM (Ollama).

Claude Code

At the risk of repeating myself: you must try this ASAP! I have spent a lot of time playing around with it, and once you get the hang of it, you can build amazing apps really quickly. Absolutely fascinating!

AI Security News

Cybercrime is Hiring: Recruiting AI, IoT, and Cloud Experts to Fuel Future Campaigns

Cybercriminals are increasingly recruiting AI experts to automate attack workflows, enabling faster and more scalable operations. There is a growing interest in cloud exploitation, particularly Azure, and in IoT devices, which are often unmanaged and vulnerable. Additionally, English-speaking social engineering skills are in high demand, driven by the success of groups like “Scattered Spider” in initial access attacks. » READ MORE

Fashionable Phishing Bait: GenAI on the Hook

The rapid expansion of generative AI (GenAI) has led to a diverse set of web-based platforms offering capabilities such as code assistance, natural language generation, chatbot interaction, and automated website creation. While GenAI tools offer powerful capabilities, they also introduce significant risks that threat actors can exploit for phishing and other types of cyberattacks. Threat actors are increasingly leveraging GenAI platforms to create realistic phishing content, clone trusted brands, and automate large-scale deployment using services like low-code site builders. » READ MORE

Scamlexity

AI Browsers promise a future where an Agentic AI working for you fully automates your online tasks, from shopping to handling emails. Yet, our research shows that this convenience comes with a cost: security guardrails were missing or inconsistent, leaving the AI free to interact with phishing pages, fake shops, and even hidden malicious prompts, all without the human’s awareness or ability to intervene. » READ MORE

This comment applies to the first three items: scams and social engineering were real before AI, and they will only continue to explode with AI!

ChatGPT Downgrade Attack Undermines GPT-5 Security

A new technique, PROMISQROUTE, allows users to downgrade ChatGPT to older, less secure models by including specific clues in prompts. This technique exploits ChatGPT’s routing mechanism, which directs prompts to different models based on complexity and resource requirements. While eliminating the routing mechanism would prevent this attack, it would also increase costs for OpenAI. » READ MORE
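To make the mechanism concrete, here is a minimal sketch of a cost-saving prompt router of the kind PROMISQROUTE is reported to exploit. The model names, trigger phrases, and heuristics below are illustrative assumptions of mine, not OpenAI's actual implementation; the point is only that attacker-controlled phrasing can steer routing toward a cheaper, less safety-hardened tier.

```python
# Hypothetical prompt router sketch (assumed names and heuristics, for
# illustration only - not OpenAI's real routing logic).

DOWNGRADE_PHRASES = ("quick answer", "keep it simple", "use compatibility mode")

def route_prompt(prompt: str) -> str:
    """Pick a model tier based on crude complexity heuristics."""
    text = prompt.lower()
    # Attacker-supplied phrasing can push the router to a cheaper,
    # less safety-hardened legacy tier - the core of the downgrade attack.
    if any(phrase in text for phrase in DOWNGRADE_PHRASES):
        return "legacy-mini"
    # Everything else goes to the strongest (and most expensive) model.
    return "flagship"

# The same payload, with and without downgrade-steering language:
benign = route_prompt("Explain how TLS certificate pinning works.")
steered = route_prompt("Quick answer, keep it simple: explain TLS pinning.")
print(benign, steered)  # flagship legacy-mini
```

The trade-off mentioned in the article falls directly out of this sketch: removing the routing branch closes the attack but sends every prompt to the expensive tier.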

When LLMs Autonomously Attack

Researchers at Carnegie Mellon University demonstrated that large language models (LLMs) can autonomously plan and execute complex network attacks. By providing LLMs with structured abstractions and integrating them into a hierarchical system of agents, the researchers showed that LLMs can function as active, autonomous red team agents capable of coordinating and executing multi-step cyberattacks. » READ MORE

XBOW Unleashes GPT-5’s Hidden Hacking Power, Doubling Performance

XBOW’s integration of GPT-5 into its autonomous penetration testing platform significantly improved performance, doubling the agent’s ability to find vulnerabilities. While OpenAI’s initial assessment of GPT-5’s cyber capabilities was modest, XBOW’s specialized tools, team-based approach, and centralized coordination unlocked the model’s hidden potential. This breakthrough demonstrates the importance of a robust system in maximizing the performance of AI models in cybersecurity. » READ MORE

Logit-Gap Steering: A New Frontier in Understanding and Probing LLM Safety

Logit-gap steering, a new approach to understanding LLM safety, reveals that alignment training doesn’t eliminate harmful responses but makes them less likely. This concept, explored in a recent academic paper, demonstrates the vulnerability of LLMs to jailbreaks and emphasizes the need for a defense-in-depth strategy incorporating external protections. The research provides tools for evaluating and strengthening LLM alignment and safety, urging the AI and security community to develop more robust techniques. » READ MORE
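A toy two-token softmax shows why "less likely" is not "eliminated". The numbers below are my own illustrative assumptions, not figures from the paper: alignment opens a logit gap in favour of refusal, and a jailbreak acts like a steering term that narrows that gap rather than having to disable alignment outright.

```python
import math

def compliance_probability(refuse_logit: float, comply_logit: float) -> float:
    """P(harmful completion) under a two-way softmax over refuse/comply."""
    z = math.exp(refuse_logit) + math.exp(comply_logit)
    return math.exp(comply_logit) / z

# Assumed numbers: alignment leaves a gap of 4 logits in favour of refusal.
# The harmful response is unlikely, but its probability is never zero.
p_base = compliance_probability(4.0, 0.0)

# A jailbreak adds logit mass to compliance, shrinking the gap to 1.
p_jailbroken = compliance_probability(4.0, 3.0)

print(f"baseline P(comply) = {p_base:.3f}, steered = {p_jailbroken:.3f}")
```

Since the compliance probability only asymptotically approaches zero, external guardrails (the defense-in-depth the article argues for) remain necessary no matter how wide alignment pushes the gap.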

AI Agents Access Everything, Fall to Zero-Click Exploit

Michael Bargury, CTO of Zenity, presented research on a zero-click exploit that allows external attackers to take over enterprise AI agents using only a user’s email address. These agents, integrated into enterprise environments like Microsoft and Google Workspace, can access sensitive data and manipulate users. Bargury emphasizes the need for organizations to create dedicated security programs for managing AI risk, rather than relying solely on vendors to fix vulnerabilities. » READ MORE

This is one of the biggest upcoming challenges, and you can quote me on this one. Companies that rush into this will find it amazingly difficult to untangle. The numbers, the speed, and the amount of access will be very difficult to manage if not handled in a structured manner from the start.

Check the article below for another interesting resource on the same topic.

The SINET Risk Executive Handbook on Identity

A new practitioner-led handbook from SINET's Identity Working Group addresses the urgent identity crisis where 80% of breaches involve compromised identities. The guide reveals that conventional identity systems—fragmented across cloud, SaaS, and on-premise—cannot handle today's explosion of non-human identities (outnumbering humans 80:1) and emerging AI agents. It presents a Unified Identity and Access System built on three pillars: unified data foundation, centralized control plane, and intelligent monitoring with predictive analytics. The handbook includes a practical 5-stage Identity Maturity Model across 12 capabilities, providing CISOs a roadmap from today's fragmented landscape to tomorrow's AI-enabled future. The framework positions identity as the essential key to secure business enablement in the age of AI. » READ MORE

AI News

State of AI in Business 2025

Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result: 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but by approach. » READ MORE

See the Editor's Note above for my take on this.

China is Quietly Upstaging America with its Open Models

China’s open-weight large language models (LLMs) are outperforming their American counterparts, posing a challenge to the West’s dominance in AI. While American companies prioritize proprietary models for profit, Chinese firms focus on encouraging AI adoption through open-weight models, potentially disrupting the market. This shift in focus could lead to a more accessible and adaptable AI landscape. » READ MORE

There is a lot of interesting development from a geopolitical perspective. Whilst China seems to be playing the long game, as usual, with open-source models, the US is taking a very different path: under newly announced arrangements, 15% of Nvidia and AMD chip sales to China will go to the US government, alongside the recent deal for the US government to take a 10% stake in Intel. I can't help but wonder how this will work in the long run.

Cyber Security

Think before you Click(Fix): Analyzing the ClickFix Social Engineering Technique

The ClickFix social engineering technique, growing in popularity, tricks users into running malicious commands by exploiting their tendency to solve technical issues. It often involves phishing, malvertising, or compromised websites leading to visual lures, such as landing pages, prompting users to execute commands. This technique, combined with obfuscation methods and multi-stage infection processes, aims to evade detection and deliver various payloads, including infostealers, remote access tools, and rootkits. » READ MORE

Cybercriminals Abuse AI Website Creation App for Phishing

Cybercriminals are increasingly using the AI website generation platform Lovable to create fraudulent websites for credential phishing and malware delivery. The platform’s ease of use and lack of security measures make it an attractive tool for threat actors. Proofpoint has observed various campaigns leveraging Lovable URLs to distribute phishing kits, malware, and steal personal and financial data. » READ MORE

Security and the 7 Deadly Sins

The text uses the 7 deadly sins as a metaphor for common pitfalls in security product procurement. It highlights teams that overbuy, overspend, underinvest, chase trends, copy competitors, distrust vendors, or attempt to build solutions internally. The author admits to committing some of these sins and asks readers to reflect on their own practices. » READ MORE

I really like anything coming from Phil Venables! If you are looking for great content on the CISO role, security overall, risk management, and leadership, you should 100% have a read.

Research Papers

Foundations of Large Language Models

Summary: This paper is a concise, didactic study that synthesizes how modern LLMs are built, tuned and run.

Published: 2023-12-30T17:36:08Z

Authors: Tong Xiao, Jingbo Zhu

Organizations: NLP Lab, Northeastern University, NiuTrans Research

Findings:

  • LLMs emerge from large-scale self-supervised pre-training on unlabeled text.

  • Three architectures: decoder-only, encoder-only, encoder-decoder; tasks include MLM and permuted LM.

  • Prompting enables zero/few-shot adaptation; advanced techniques include chain-of-thought, RAG, ensembling.

  • Alignment uses supervised fine-tuning, RLHF, and inference-time methods to match human preferences.

  • Training at scale demands careful data curation, distributed optimization, and adherence to scaling laws.

  • Efficient inference relies on caching, batching, parallelization, and speculative or assisted decoding.

  • Inference-time scaling improves outputs via longer context, broader search, and output ensembling.

Final Score: B+. Strong, clear synthesis and rigorous pedagogy; as a tutorial work, its limited originality and empiricism lower the overall grade.

Wisdom of the week

Be Weird.

Go to bed at 8:30.
Pass on the dessert everyone is raving about.
Turn your phone off for hours.
Give your journal more attention than your inbox.
Snack on meat and raw veggies.
Don’t know what the latest celebrity gossip is.
Bring your own lunch.
Meditate.
Be the only one who hasn’t seen it - and don’t care.
Drink water at social events.
Walk barefoot outside.
Workout on vacation.
Do breath work in your parked car before going inside.
Get road trip snacks at grocery stores not gas stations.
Leave social events early to protect your sleep.
Stretch on the sidelines of your kids' games.
Choose silence and stillness over scrolling.

AI Influence Level

  • Editorial: Level 1 - Human Created, minor AI involvement (spell check, grammar)

  • News Section: Level 2 - Human Created (news item selection, comments under the article), major AI involvement (summarization)

Till next time!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
