Cyber AI Chronicle
By Simon Ganiere · 22nd December 2024
Welcome back!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
Table of Contents
What I learned this week
TL;DR
Agentic AI isn’t just the next big thing—it’s here, and it’s transforming cybersecurity as we know it. By combining automation, reasoning, and adaptability, AI agents empower organizations to stay ahead in an ever-evolving threat landscape. The key is to start small, learn fast, and scale wisely. The organizations that embrace this shift today will be the leaders of a safer, more resilient digital future. » READ MORE
Covering the AI news feeds is like mission impossible. Every day there is a big announcement from one of the key players. This week, OpenAI announced o3, its next reasoning model. Some people are claiming o3 has achieved AGI; more on that in the great summary from TechCrunch. Google introduced Gemini 2.0, which it describes as “the new AI model for the agent era”. Microsoft announced a free GitHub Copilot tier for VS Code, going head-to-head with existing solutions on the market (Cursor AI, etc.).
What is really impressive is the speed of delivery. If you compare where we were at the start of 2024 with where we are now, it's like night and day.
Another week in cyber, another set of critical vulnerabilities (here and here and here) and another set of data breaches (here and here). I won't comment further and will just repeat my weekly reminder: the basics in security matter more than the shiny thing from the vendor, and yes, that includes the ability to patch edge technology in less than 24 hours.
Wishing everyone a Merry Christmas and hopefully you can spend time with your family and loved ones. For those working in the trenches of cyber operations teams a big thank you and hopefully you can enjoy some time off a bit later in January!
The Evolution of AI Agents and Their Role in Cybersecurity
If you’ve been keeping an eye on AI advancements, you’ve probably heard the term “agentic AI” thrown around. Maybe you’re intrigued by it, or perhaps you’re still skeptical about whether these agents are truly as transformative as they claim to be. Either way, let’s cut through the buzz and focus on what agentic AI really means for cybersecurity—and, more importantly, how it’s already reshaping the landscape.
Much like the shift toward defining risk appetites in risk management, the evolution of AI agents demands we move beyond abstract possibilities and look at real, actionable applications. The goal isn’t just to marvel at the technology but to understand how it adds tangible value to protecting systems, people, and data.
First, let’s establish some context.
A Rapidly Growing Market
The agentic AI market has exploded. In 2024, it’s valued at $5.1 billion, and by 2030, it’s expected to reach $47.1 billion, growing at a staggering 44.8% annually. This isn’t just because the tech is flashy; it’s because AI agents solve real problems. They’re being embedded into workflows across industries to tackle challenges that once required significant human effort: streamlining processes, making better decisions, and even predicting risks before they materialize.
The cybersecurity industry has been quick to adopt these innovations. The need for automation in a space where attackers continuously refine their methods is critical. AI agents have transitioned from theoretical possibilities to practical tools, performing tasks like automating SOC workflows, managing vulnerabilities, and integrating real-time threat intelligence.
Agentic AI Architecture: Breaking Down the System
To understand how agentic AI works in practice, it’s helpful to visualize the flow of tasks and decision-making within these systems. The diagram below provides a clear framework for how agentic AI integrates reasoning, execution, and reflection into cybersecurity workflows:
User Interface: Multi-modal inputs such as text or voice feed into the system, where users or LLMs propose a plan of action. Feedback loops ensure continuous improvement through evaluation.
Agentic Flow Orchestration: This layer acts as the central coordinator, orchestrating tasks and delegating them to task-specific agents. It ensures smooth transitions between reasoning, execution, and evaluation.
Deterministic Runtime: This is where the heavy lifting happens. Task-specific agents interact with application code, external APIs, and data sources, all while adhering to predefined guardrails for safety and compliance.
Reflection and Evaluation: Once tasks are executed, the system evaluates the outcomes. Reflection mechanisms enable continuous improvement by adjusting future actions based on what worked and what didn’t.
External Interactions: Agents draw from structured and unstructured data sources, external tools, and development environments to execute tasks effectively.

Source: Insight Partners
This layered architecture exemplifies how AI agents not only act autonomously but also ensure accountability and adaptability in critical operations. By leveraging such frameworks, organizations can automate complex workflows while maintaining control and oversight.
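To make the layers above concrete, here is a minimal, purely illustrative Python sketch of the flow: an orchestrator runs a proposed plan through task-specific agents inside a deterministic runtime, enforces a guardrail on which actions are allowed, and finishes with a simple reflection step. Everything in it is hypothetical — the agents are deterministic stubs standing in for LLM-backed components, and `ALLOWED_ACTIONS` plays the role of a predefined guardrail.

```python
# Hypothetical task-specific agents; in a real system these would wrap
# LLM calls or external APIs rather than simple lambdas.
AGENTS = {
    "enrich": lambda alert: {**alert, "intel": "known-bad IP"},
    "score":  lambda alert: {**alert, "severity": "high" if "intel" in alert else "low"},
}

# Guardrail: only pre-approved steps may ever execute.
ALLOWED_ACTIONS = set(AGENTS)

def orchestrate(alert, plan):
    """Run a proposed plan step by step, then reflect on the outcome."""
    history = []
    for step in plan:
        if step not in ALLOWED_ACTIONS:      # guardrail check
            history.append((step, "blocked"))
            continue
        alert = AGENTS[step](alert)          # deterministic runtime
        history.append((step, "ok"))
    # Reflection/evaluation: did the plan produce a usable outcome?
    success = "severity" in alert
    return alert, history, success

# A plan proposed by a user or LLM, including one disallowed step.
result, history, success = orchestrate(
    {"src_ip": "203.0.113.7"}, ["enrich", "score", "delete_logs"]
)
```

Note how the disallowed `delete_logs` step is recorded but never executed; that separation between what the planner proposes and what the runtime permits is the point of the guardrail layer.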
From Tools to Outcomes
Here’s where things get interesting. Traditional cybersecurity tools required human input to deliver results. If you’ve ever spent hours combing through logs or manually prioritizing alerts, you know the grind. But with agentic AI, we’re shifting toward “Outcome as a Service” (OaaS). Instead of simply providing tools, vendors are guaranteeing results—faster threat detection, improved patch management, and better overall resilience.
This doesn’t mean AI is taking over SOCs entirely. What it does mean is that organizations can offload repetitive tasks to these agents while human analysts focus on higher-value activities. It’s not just about doing things faster; it’s about doing them smarter.
Practical Applications in Cybersecurity
Let’s talk specifics. How are agents actually making an impact?
Incident Triage and Response: AI agents like Dropzone.AI are revolutionizing SOC operations by triaging alerts faster and more accurately than traditional methods. They take over the mundane but necessary tasks, freeing up human analysts to focus on the complex incidents that demand expertise. CommandZero takes an interesting approach, codifying what an analyst does into an expert system and using AI to aid the automation effort.
Proactive Vulnerability Management: Vulnerabilities aren’t just discovered—they’re exploited at unprecedented speeds. AI agents excel in this area, continuously scanning for weaknesses in systems and even suggesting or implementing fixes. Startups like Pixee AI are leading the way, integrating seamlessly into CI/CD pipelines to ensure secure coding practices.
Real-Time Threat Intelligence: The days of static threat feeds are over. Agents now integrate with APIs and leverage retrieval-augmented generation (RAG) to provide real-time insights. This keeps organizations one step ahead of attackers, adapting to evolving threats in real time.
Enhanced Code Security: AI copilots like Microsoft Security Copilot are helping developers detect vulnerabilities and maintain secure coding standards without disrupting workflows. These agents act as advisors, streamlining the development process while keeping security top of mind.
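The retrieval-augmented generation pattern mentioned above can be illustrated with a toy sketch: retrieve only the indicators relevant to an alert from an intelligence store, then fold that context into the prompt handed to a language model. This is a hypothetical illustration, not any vendor's API — the indicator store is a hard-coded dict and the LLM call itself is omitted.

```python
# Toy indicator store standing in for a live threat-intelligence feed.
INDICATOR_DB = {
    "203.0.113.7": "C2 server, first seen 2024-12-01",
    "198.51.100.9": "phishing infrastructure",
}

def retrieve(indicators):
    """Retrieval step: pull only the intel matching this alert's indicators."""
    return {i: INDICATOR_DB[i] for i in indicators if i in INDICATOR_DB}

def build_prompt(alert_text, indicators):
    """Augmentation step: ground the model's context in retrieved intel."""
    context = retrieve(indicators)
    lines = [f"- {ioc}: {desc}" for ioc, desc in sorted(context.items())]
    return f"Alert: {alert_text}\nRelevant intel:\n" + "\n".join(lines)

prompt = build_prompt(
    "Outbound traffic to 203.0.113.7",
    ["203.0.113.7", "10.0.0.5"],  # one known-bad IOC, one benign internal IP
)
```

Because the model only ever sees intel that was actually retrieved, its answers stay grounded in current data rather than whatever was in its training set — which is what makes RAG attractive for fast-moving threat feeds.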
Preparing for the Agentic Revolution
Like any transformative technology, the rise of agentic AI comes with a learning curve. To fully harness its potential, organizations need to focus on upskilling their teams. Analysts should learn to interpret agent outputs, customize workflows, and ensure these systems operate within ethical and regulatory boundaries.
It’s also critical to invest in guardrails. While agents are powerful, they require robust oversight to prevent errors or misuse. Think of them as talented apprentices—they’re great at what they do but still need supervision.
What’s Next?
So, what does the future hold for agentic AI in cybersecurity? Here’s what I see happening:
More Autonomy: By 2025, agents will handle multi-step, complex tasks with minimal intervention. This means they’ll move beyond mere assistants to become indispensable partners in cybersecurity.
Mainstream Adoption in SOCs: Enterprises will integrate agents as core components of their SOCs, automating routine tasks and improving overall efficiency. This shift will allow organizations to scale their security operations without scaling costs.
Outcome-Based Pricing Models: The way we buy cybersecurity solutions will change. Vendors will offer pricing tied directly to results—such as the number of threats neutralized or vulnerabilities patched—making ROI clearer than ever.
Stronger Regulatory Oversight: As agentic AI becomes ubiquitous, regulators will demand greater transparency and accountability. Organizations will need to prove that their agents operate securely and ethically, particularly in sensitive industries like finance and healthcare.
Getting Started
If you’re considering adopting agentic AI, start small. Focus on specific, high-impact areas like vulnerability management or alert triage. Build confidence in the technology before scaling it across your organization. And don’t forget the human element—upskilling your team is just as important as investing in the technology itself.
Learn AI in 5 minutes a day
This is the easiest way for a busy person to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI
Worth a full read
Threat modeling your generative AI workload to evaluate security risk
Key Takeaway
Threat modeling is crucial for addressing generative AI's unique security challenges.
Security risks in AI workloads arise from non-deterministic outputs and data sensitivity.
Threat modeling involves understanding business context and architecture comprehensively.
Structured threat statements maintain consistency in documenting potential security threats.
Mitigation strategies should be specific, including input sanitization and access controls.
Continuous testing and validation of mitigations ensure security measures work effectively.
Human interaction across teams is essential for effective threat modeling processes.
A culture of security-mindedness is fostered through regular threat model validation.
Threat models must be living documents, revisited and updated periodically.
Adopting new technologies requires awareness of a rapidly evolving threat landscape.
Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet | Lex Fridman Podcast
Key Takeaway
AI's strength lies in enhancing human curiosity and expanding knowledge boundaries.
Perplexity's focus is on being a knowledge-centric engine, not just a search tool.
True innovation in search lies in rethinking user interfaces and experience.
Combining search with language models reduces hallucinations and improves reliability.
Human curiosity is a fundamental trait that AI can enhance, not replace.
Knowledge discovery is about guiding users on a continuous journey of inquiry.
The challenge is in integrating ads without compromising user trust or truth.
Iterative compute could lead to significant AI reasoning breakthroughs.
Human bias is a challenge AI can help mitigate through better knowledge dissemination.
AI should focus on making humans smarter and reducing bias in understanding.
Wisdom of the week
The joy of leadership comes from seeing others achieve more than they thought they were capable of.
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! See you next week! Simon