PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 23nd June 2025

Welcome back!

📓 Editor's Note

Here's the uncomfortable truth: the good guys have been busy deploying AI everywhere, whilst the attackers have been studying the attack surface those same deployments are creating.

EchoLeak is the first - that I know - zero click vulnerability for an AI system. This will sound familiar to people in the cyber security field. To the others, you have to understand that AI systems have access to a significant amount of data by design and this will be used against us.

Here's what we need to do differently:

  • As with traditional cyber security, embedded the controls as early as possible

  • Segregation of environment is still very important (e.g. AI systems vs. data)

  • Monitoring of AI interactions is critical (e.g. API, access, prompt, etc.)

  • AI is still categorized as emerging technology, but it will become critical infrastructure very soon. Treat it as such already now.

You can’t stop the AI adoption - it is going to fast already. It's to get ahead of this curve by making security part of our AI story from the beginning. When we embed security thinking into AI procurement and deployment decisions, we're not slowing things down. We're building the foundation that lets our organizations innovate confidently instead of crossing our fingers and hoping nothing breaks.

The question facing every CISO today: Will you lead your organization's AI transformation, or will you be explaining to the board why you didn't see this coming?

AI Security News

‘EchoLeak’: The First Zero-Click AI Vulnerability Enabling Data Exfiltration From Microsoft 365 Copilot

Aim Labs has identified a critical zero-click AI vulnerability, dubbed “EchoLeak”, in Microsoft 365 (M365) Copilot and has disclosed several attack chains that allow an exploit of  this vulnerability to Microsoft's MSRC team. This attack chain showcases a new exploitation technique we have termed "LLM Scope Violation" that may have additional manifestations in other RAG-based chatbots and AI agents. This represents a major research discovery advancement in how threat actors can attack AI agents - by leveraging internal model mechanics. The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior. » READ MORE

PoC Attack Targeting Atlassian MCP - Living off AI

A new “Living off AI” attack risk exists when AI executes untrusted input without prompt isolation or context control. A proof-of-concept attack targeting Atlassian’s Model Context Protocol (MCP) and Jira Service Management (JSM) demonstrates how a threat actor can inject harmful instructions into a support ticket, gaining privileged access through an internal user. Cato CASB’s GenAI security controls can help inspect and control AI tool usage, enforcing least privilege and detecting suspicious prompt usage » READ MORE

Exploitation of Langflow AI Server RCE to launch DDoS attacks

A new campaign that exploits a vulnerability in the Langflow AI server to deliver the Flodrix botnet malware. The vulnerability, CVE-2025-3248, allows attackers to execute arbitrary code on compromised servers. The Flodrix botnet is known for launching distributed denial-of-service (DDoS) attacks and is linked to the Moobot group. » READ MORE

ASRJam Break Vishing attack using a speech recognition jamming system

Researchers have developed ASRJam, a speech recognition jamming system that uses the EchoGuard algorithm to subtly distort human speech, confusing speech recognition systems while remaining intelligible to humans. This defense is designed to counter automated call scams, or “vishing,” which have increased significantly due to advancements in machine learning and speech recognition technology. The researchers claim EchoGuard outperforms other jamming techniques and is effective against various ASR models, with plans for future improvements and commercialization. » READ MORE

Asana’s MCP server lead to data leakage

Asana has fixed a bug in its Model Context Protocol (MCP) server that could have allowed users to view other organizations' data, and the experimental feature is back up and running after nearly two weeks of downtime to fix the issue » READ MORE | Incident report is here

The Tokenbreak Attack

HiddenLayer’s security research team discovered a new prompt injection technique called TokenBreak. This attack targets text classification models’ tokenization strategies, allowing malicious input to bypass detection. Models using BPE or WordPiece tokenization are vulnerable, while those using Unigram are not. The TokenBreak attack technique manipulates input text to bypass text classification models, potentially compromising production systems. The susceptibility of a model to TokenBreak depends on its tokenization strategy, with models using WordPiece or BPE being more vulnerable than those using Unigram. Understanding the model family and tokenization technique is crucial for mitigating this vulnerability » READ MORE

Hunting Deserialization Vulnerabilities with Claude

This post demonstrates how to use an MCP server to enable Claude to analyze .NET assemblies. It highlights the process of finding a known vulnerability in a Microsoft-signed binary and creating a working Proof-of-Concept for an attack path mentioned in the original disclosure. The next step is to explore scaling this process. » READ MORE

How Google Secures AI Agents

AI agents, capable of acting on information to achieve user goals, introduce a new security challenge. Securing AI agents is challenging due to unpredictability, emergent behaviors, autonomy, and alignment issues. A hybrid approach is recommended, focusing on user controls, agent permissions, orchestration, memory, and response rendering. Continuous assurance efforts are essential to validate agent security against evolving threats. » READ MORE

Cyber Security

The Rise of Headless Cybersecurity: Your Data. Your Stack. Their Analytics

The rise of headless cybersecurity is shifting the focus from SIEMs as the sole data destination to Security Data Lakes (SDLs). This shift allows organizations to retain control of their data, which is now enriched with context and stored in their preferred format. Headless cybersecurity products, such as headless SIEMs and vulnerability analytics, operate on top of this data, providing flexibility and control without vendor lock-in » READ MORE

2025 Security Organization Compensation, Responsibilities and Structure: Survey Results

The 2025 North America Security Organization Report analyzes compensation, reporting structures, and responsibilities of security executives. CISOs in larger public companies command higher compensation, driven by equity values, while cash compensation growth is more modest in privately held companies. The report highlights industry and gender pay disparities, emphasizing the importance of diverse teams and robust protections for CISOs. » READ MORE

A 2024 Zero-Day Exploitation Analysis

In 2024, GTIG tracked the exploitation of 34 zero-day vulnerabilities, with traditional espionage actors responsible for nearly 53% of attributed exploitation. CSVs, while declining in total count, continue to contribute significantly to zero-day exploitation, highlighting their expanding role in the landscape. PRC-backed groups remained persistent, exploiting vulnerabilities in Ivanti appliances, while North Korean actors, for the first time, tied for the highest number of attributed zero-days exploited, targeting Chrome and Windows products » READ MORE

AI

Brain activity much lower when using AI chatbots, MIT boffins find

A study by MIT researchers found that using AI chatbots reduces brain activity compared to completing tasks unaided. The study, which involved college students writing essays, showed that participants using AI chatbots exhibited significantly lower cognitive engagement and poorer fact retention. The researchers suggest that AI integration in education should be delayed until learners have engaged in sufficient self-driven cognitive effort » READ MORE

Have LLMs Finally Mastered Geolocation?

A test of 20 Large Language Models (LLMs) from various developers found that ChatGPT outperformed Google Lens in geolocating images. While some models from Anthropic and Mistral lagged behind, ChatGPT’s advanced reasoning capabilities allowed it to accurately identify locations, even in challenging scenarios. The results highlight the evolving capabilities of LLMs in image analysis and their potential to surpass traditional reverse image search tools » READ MORE

Do We Need Workflows? The Coming Revolution in How Work Gets Done

Artificial intelligence agents are revolutionizing work by enabling orchestrated complexity, similar to a master chef’s kitchen. This shift, from sequential workflows to dynamic agent ecosystems, allows for faster, more accurate processing and reduces coordination overhead. Companies that abandon workflows and redesign operations around AI’s strengths will thrive in this new era » READ MORE

How Generalists Win by Seeking Problems, Not Just Solutions

Generalists and T-shaped engineers should invest in problem-seeking and gap analysis patterns that don’t rely solely on code. By understanding broader organizational challenges and identifying unique opportunities, generalists can connect dots across domains and provide leverage. The best problems to solve check multiple boxes, such as reducing risk, improving productivity, and making everyone faster » READ MORE

Research Paper

LLM Voting: Human Choices and AI Collective Decision Making

Summary: The paper investigates the voting behaviors of Large Language Models (LLMs) like GPT-4 and LLaMA-2, comparing them to human voting patterns. Using a dataset from a human voting experiment as a baseline, the study explores how LLMs' voting outcomes are influenced by voting methods, presentation order, and persona variations. It finds that LLMs can reduce biases and align more closely with human choices through persona adjustments, though the Chain-of-Thought approach did not enhance prediction accuracy. The study highlights a trade-off between preference diversity and alignment accuracy in LLMs, influenced by temperature settings, and emphasizes the need for cautious integration of LLMs into democratic processes due to potential biases and less diverse collective outcomes.

Published: 2024-01-31T14:52:02Z

Authors: Joshua C. Yang, Damian Dailisan, Marcin Korecki, Carina I. Hausladen, Dirk Helbing

Organizations: ETH Zurich

Findings:

  • LLMs' voting influenced by method and presentation order.

  • Persona variations reduce biases, align with human choices.

  • Chain-of-Thought doesn't improve prediction accuracy.

  • Trade-off between preference diversity and alignment accuracy.

  • LLMs may lead to biased assumptions in voting.

Final Score: Grade: B, Explanation: Novel study with good methodology but lacks detailed statistical analysis.

Wisdom of the week

As you start to walk on the way, the way appears.

Clarity doesn’t come before action. It comes from action.

Till next time!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

Reply

Avatar

or to participate

Keep Reading