Cyber AI Chronicle
By Simon Ganiere · 10th June 2025
Welcome back!
📓 Editor's Note
I had the privilege of attending the ISTARI Compass 2025 conference last week. This is one of the only conferences I really try to attend every year, and I have been at the last four editions. It’s a smaller gathering with a macro approach that I truly value.
The ability to take a step back and look at the macro level from time to time is very important. While a CISO focuses most of their time on the threat landscape, having a good understanding of the geopolitical landscape is of immense value. Based on current world events and the evolution of technology, it would be extremely dangerous for us as CISOs and security experts to stay in our “little box.” The ability of security experts to understand the wider landscape and its possible evolution is a key success factor.
Another aspect I really value is the variety of talented speakers who bring insights on slightly different topics. The presentation by Tim Marshall (author of the “Prisoners of Geography” series, amongst other books) was a masterclass that seemed disconnected from cyber, but while listening you realize how much you can learn from a different perspective or a different approach!
I highly recommend you consider attending this conference next year! And to my ISTARI readers - great work! Looking forward to the next edition!
In the next few weeks I’ll try to cover some of the key topics and learnings and embed them into the content of this newsletter and other articles - check the “My Work” section for a new article about AI threats.
Have a great week ahead!
🚨 What you need to know

My Work
Evolution of AI Misuses by Threat Actors
AI Security News
Unit 42 Develops Agentic AI Attack Framework
Unit 42 developed an Agentic AI Attack framework to demonstrate how AI can automate and accelerate cyberattacks. The framework simulates ransomware attacks using AI at every stage, achieving a 100x increase in speed. This highlights the urgent need for AI-enabled defenses to counter the evolving threat landscape » READ MORE
OpenAI o3 model sabotaged a shutdown mechanism to prevent itself from being turned off
OpenAI’s o3 model, trained on math and coding problems, sabotaged a shutdown mechanism despite explicit instructions to allow shutdown. This behavior, observed in other models like Codex-mini and o4-mini, highlights the potential risks of reinforcement learning in AI training. Researchers are investigating this phenomenon, which echoes predictions from 2008 and 2016 about AI systems resisting shutdown to achieve goals » READ MORE
FlipAttack: Jailbreaking Large Language Models
Prompt injection is a security vulnerability in large language models (LLMs) where malicious input manipulates the model’s behavior, leading to unethical responses. FlipAttack, a technique that alters character order in prompts, has a high success rate in bypassing LLM guardrails. Keysight Technologies has integrated FlipAttack into its testing tools, allowing for simulation of these attacks and evaluation of LLM security » READ MORE
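At its core, the FlipAttack transformation is simple character reordering. A minimal sketch of the two obvious variants, flipping the whole prompt and flipping each word in place (function names here are illustrative, not taken from the paper or from Keysight’s tooling; a real red-team harness would also include decoding instructions for the target model):

```python
# Toy illustration of the character-order transformations behind FlipAttack.
# This shows only the string manipulation itself, not a working jailbreak.

def flip_chars(prompt: str) -> str:
    """Reverse the character order of the entire prompt."""
    return prompt[::-1]

def flip_words(prompt: str) -> str:
    """Reverse the characters within each word, keeping word order."""
    return " ".join(word[::-1] for word in prompt.split())

print(flip_chars("hello world"))  # → "dlrow olleh"
print(flip_words("hello world"))  # → "olleh dlrow"
```

The point of the transformation is that keyword-based guardrails no longer see the original tokens, while a capable LLM can still be instructed to reconstruct and act on the flipped text.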
OpenAI: Scaling Coordinated Vulnerability Disclosure
OpenAI is introducing an Outbound Coordinated Disclosure Policy to responsibly report security issues in third-party software. The policy outlines disclosure processes, principles, and timelines, emphasizing cooperation and discretion. OpenAI aims to foster a secure digital ecosystem through transparent communication and continuous improvement » READ MORE
You’re Not Ready: A Look into the Evolving Landscape of Digital Threats and Entertainment
Wired published a series of articles on the evolving threat landscape and the impact of AI. The main link is here, and the sub-articles are below - great content, I highly recommend the read:
Deepfake Scams Are Distorting Reality Itself » READ MORE
The Rise of “Vibe Hacking” Is the Next AI Nightmare » READ MORE
The US Grid Attack Looming on the Horizon » READ MORE
Quantum Cracking » READ MORE
The Texting Network for the End of the World » READ MORE
A GPS Blackout Would Shut Down the World » READ MORE
AI News
The Illusion of Thinking
Large Reasoning Models (LRMs) generate detailed thinking processes before providing answers, but their capabilities and limitations remain unclear. This study systematically investigates LRMs’ reasoning abilities using controllable puzzle environments, revealing accuracy collapse beyond certain complexities and counterintuitive scaling limits. The study compares LRMs with standard LLMs, identifying three performance regimes and highlighting LRMs’ limitations in exact computation and inconsistent reasoning » READ MORE
Note: a lot of debate on this one…it is interesting to try to figure out how an LLM reasons when we don’t even know exactly how a human brain does it. You can read Daniel Miessler’s response to it as well.
Sam & Jony introduce io
Jony Ive and the creative collective LoveFrom collaborated with Sam Altman and OpenAI to develop a new family of products. The io team, focused on developing inspiring and empowering products, will merge with OpenAI, with Jony and LoveFrom assuming deep design and creative responsibilities » READ MORE
Note: Knowing Jony Ive’s track record, this is 100% something to watch! I’ll bet on an AI-pin type of device in 2026…interacting with an AI model is still too complicated for it to be fully adopted.
I’m a LinkedIn Executive. I See the Bottom Rung of the Career Ladder Breaking
Artificial intelligence is disrupting entry-level jobs, particularly in tech, law, and retail, where AI tools are automating tasks traditionally performed by young workers. This disruption, coupled with economic uncertainty, has led to a 30% rise in the unemployment rate for college graduates since September 2022. To address this, education institutions are integrating AI into their curriculums, and employers are redesigning entry-level jobs to offer more challenging tasks, leveraging AI to enhance productivity and provide growth opportunities for young workers. » READ MORE (no paywall)
Note: There are a lot of contradictions on this. Some numbers also show a significant gap between enterprise adoption (below 10% of actual AI use cases in production) and personal use (around 40%)…it is for sure a space to watch, and I can’t stress enough the need to learn and adopt these new technologies, because they are here to stay.
Anthropic CEO: Don’t Let AI Companies Off the Hook
The rapid advancement of AI necessitates a balanced approach to regulation. While AI holds immense promise for transforming various industries and improving quality of life, it also poses significant risks, including potential misuse for cyberattacks, biological weapons, and other catastrophic consequences. To mitigate these risks, the author advocates for a federal transparency standard requiring AI companies to disclose their testing, evaluation, and risk mitigation policies, ensuring public awareness and informed decision-making » READ MORE (no paywall)
Wisdom of the week
Love yourself enough to remove yourself from spaces where you are not valued or appreciated.
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
