
Cyber AI Chronicle

By Simon Ganiere · 9th March 2025

Welcome back!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

What I learned this week

TL;DR

  • On the back of last week’s post about the evolution of AI, this week I’m helping you understand how to write effective prompts for Deep Research models. Often overlooked, prompt engineering is still the key to getting the output you want with the right level of detail » READ MORE

  • Still on the topic of prompt engineering, Atom of Thoughts seems to be the new kid on the block. Worth a read to understand how it works.

  • At this stage, trying to summarize AI news is mission impossible. The AI page on TechCrunch is a never-ending feed of new articles. The news that OpenAI plans to charge up to $20,000 a month for specialized AI ‘agents’ is at the bottom of the second page of news… Meta is not far behind and is also planning to capitalize on agentic AI. Anthropic raised another $3.5 billion with a valuation of $61 billion following the release of Claude 3.7 Sonnet.
    I could continue, but at this stage I’ll let you read the news. The only thing I would say is that in my 20 years of experience, I have never seen a technology topic move this fast!

  • Enjoy the read, and if you have any topics you want me to cover, feel free to reach out!

Mastering Deep Research for Cyber

Deep research represents a fundamental evolution in how LLMs work. Over recent years, it has transformed from a theoretical concept to an essential practice—a sophisticated methodology that leverages AI to synthesize disparate data sources into cohesive, actionable insights. What makes deep research truly revolutionary isn't the technology itself, but how it enables us to extract meaning from an otherwise overwhelming data landscape.

The Art of AI-Driven Research in Cybersecurity

Whilst deep research might look like another buzzword (and it certainly is if you look at the number of YouTube videos on it), it is a sophisticated approach to knowledge synthesis that addresses the fundamental challenge in modern security—information overload. Most organizations across all industries are struggling not because they lack data, but because they are unable to use that data and extract meaningful insights from it.

As with any LLM, one of the keys is to master prompt engineering—the art of crafting precise instructions that transform AI from a generic information retriever into a focused research partner. This skill bridges the gap between information abundance and actual intelligence. When it comes to deep research, effective prompt engineering becomes even more critical for extracting meaningful insights.

Practical Techniques

1. Define Risk Parameters with Crystal Clarity

Within a CISO organisation there is a constant need for reports about specific threats or incidents. One of the keys to success is a report that shares actionable information for your company. Compare the following:

Ineffective: "Tell me about ransomware."

Effective: "Research recent ransomware attacks in 2025 targeting healthcare institutions like [Company Name]. Identify attacker profiles, their specific TTPs, and actionable mitigation strategies that align with our resource constraints."

The difference is night and day. The second approach frames the question within your risk context.

2. Provide Risk Context and Boundaries

Context isn't just helpful—it's essential for meaningful risk intelligence:

"In the context of our healthcare organization's cybersecurity posture, identify emerging ransomware threats from the past 12 months. Specifically, analyze attack techniques against EHR systems, affected clinical workflows, and prioritize mitigation strategies based on our limited security team bandwidth."

Notice how this frames the request within your specific risk universe? That's the key.

3. Structure Your Risk Intelligence Queries

The ability to break down the ask is also key. Providing a step-by-step structure in the prompt can significantly increase the quality of the output:

Analyze zero-day vulnerabilities threatening our cloud infrastructure:

Step 1: Identify critical vulnerabilities discovered in the past quarter that affect our technology stack.
Step 2: Assess exploitation likelihood based on our industry and attack surface.
Step 3: Recommend pragmatic mitigation steps, prioritized by implementation difficulty and risk reduction potential.
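
If you produce this type of request regularly, the structure can be templated so that only the threat topic and your organisational context change between runs. Below is a minimal sketch in Python; the topic, technology stack, and industry values in the example call are placeholders to swap for your own context, not recommendations.

# Minimal sketch: assemble a step-by-step deep research prompt from your own risk context.
# The parameter values in the example call are placeholders, not recommendations.

def build_research_prompt(topic: str, tech_stack: str, industry: str) -> str:
    return (
        f"Analyze {topic} threatening our {tech_stack}:\n\n"
        f"Step 1: Identify critical vulnerabilities discovered in the past quarter "
        f"that affect our technology stack ({tech_stack}).\n"
        f"Step 2: Assess exploitation likelihood based on our industry ({industry}) "
        f"and attack surface.\n"
        f"Step 3: Recommend pragmatic mitigation steps, prioritized by implementation "
        f"difficulty and risk reduction potential."
    )

print(build_research_prompt(
    topic="zero-day vulnerabilities",
    tech_stack="cloud infrastructure",
    industry="healthcare",
))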

4. Leverage AI to Refine Your Prompt

One of my most successful implementations involved using a multi-tier approach (a code sketch follows the list):

  1. Use a basic AI model to draft initial queries

  2. Explicitly define your goal: "Help me craft a detailed prompt for analyzing emerging threats to our financial services API ecosystem, with emphasis on authentication bypass techniques."

  3. Iteratively improve through targeted questions: "Can you modify the prompt to focus on exploits that could bypass an MFA implementation?"

  4. Reserve deep research models for the final, refined query
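
As a rough illustration of that workflow, here is a minimal sketch using the OpenAI Python SDK. The model name, the wording of the goal, and the follow-up question are my own assumptions rather than a prescription; the refined prompt it prints is what you would then hand to the deep research model of your choice (Perplexity, OpenAI, etc.).

# Minimal sketch of the multi-tier approach: a cheap general-purpose model drafts and
# refines the research prompt; the deep research model is reserved for the final query.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
# The model name and prompt wording below are illustrative assumptions.

from openai import OpenAI

client = OpenAI()

goal = (
    "Help me craft a detailed prompt for analyzing emerging threats to our financial "
    "services API ecosystem, with emphasis on authentication bypass techniques."
)

# Tier 1: draft the research prompt with a basic model.
draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": goal}],
).choices[0].message.content

# Tier 2: iteratively improve the draft through targeted follow-up questions.
refined = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": goal},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Can you modify the prompt to focus on exploits "
                                    "that could bypass an MFA implementation?"},
    ],
).choices[0].message.content

# Tier 3: reserve the deep research model for this final, refined query.
print(refined)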

5. Demand Critical Evaluation of Sources

Risk intelligence without source quality assessment is dangerous. You have to ensure the right sources are included, and you can ask for this directly in the prompt:

"Evaluate the reliability of sources used, noting potential biases or conflicts of interest. Highlight areas where expert opinions diverge, and explain implications for our risk assessment."

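If you are assembling prompts in code as sketched earlier, this requirement can simply be appended to whatever prompt you build. A tiny illustrative helper (the names are my own, not a standard API):

# Illustrative helper: append a source-evaluation requirement to any research prompt.
SOURCE_EVALUATION_CLAUSE = (
    "\n\nEvaluate the reliability of sources used, noting potential biases or "
    "conflicts of interest. Highlight areas where expert opinions diverge, and "
    "explain implications for our risk assessment."
)

def with_source_evaluation(prompt: str) -> str:
    return prompt + SOURCE_EVALUATION_CLAUSE
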
Real-World Pitfalls

In implementing these techniques across organizations, I've seen recurring challenges:

  • The Vague Request Trap: Asking broad questions ("What are our cyber risks?") and receiving equally broad, unhelpful responses

  • The Cognitive Overload Problem: Cramming too many parameters into one request, diluting focus

  • The Context Vacuum: Failing to specify whether you need a quick risk overview for a board meeting or an in-depth analysis for control implementation

Let’s test it!

I used Perplexity.ai Deep Research with two different prompts. One was fairly basic; the other was refined using technique 4 above.

Do note that in both cases there is confusion about the initial vector: both outputs mention Telegram and a PDF file, neither of which is mentioned in the official Safe update on the incident. The reason is that one of the articles used this link as a source… and that article mentions a previous attack that used Telegram.

Switching to OpenAI, here is the output using the refined prompt approach. Telegram is mentioned, but the mix-up that happened with Perplexity did not happen here.

Reminder #1: ALWAYS check the output for accuracy; this is not a magic wand!
Reminder #2: I don’t know about you, but despite those glitches, having a multi-page report put together in less than 5 minutes for you to review and tweak is really impressive.

Moving Forward

Start small: take one pressing risk concern and apply these techniques. Evaluate the results, adjust your approach, and gradually expand. The goal isn't perfect intelligence (which doesn't exist), but rather increasingly useful insights that drive better risk decisions.

Effective risk management isn't about eliminating uncertainty—it's about making more informed decisions despite it. In the cybersecurity landscape, complete certainty is an illusion. Even with the most sophisticated deep research capabilities, there will always be unknown variables, emerging threats, and unforeseen vulnerabilities. The true power of deep research lies not in eliminating these uncertainties, but in providing a structured framework to understand them better.


Worth a full read

HiddenLayer’s 2025 AI Threat Landscape Report

Key Takeaways

  • AI's greatest threat comes from exploitation by people, not the technology itself.

  • Open-source models accelerate innovation but also widen AI systems' attack surfaces.

  • AI breaches and vulnerabilities are rising, demanding comprehensive security measures.

  • AI's role in business is critical, yet traditional security struggles to keep pace.

  • Adversarial AI threats blend traditional cybersecurity with AI-specific methods.

  • Deepfakes and AI-generated content erode trust and complicate efforts to counter disinformation.

  • Regulatory developments reflect growing concerns over AI security and ethical standards.

  • AI red teaming evolves to address adversarial attacks and vulnerabilities.

  • AI supply chains remain complex, creating opportunities for adversaries.

  • AI's rapid integration necessitates comprehensive security programs and practices.

Black Basta Chat Leak - Organization and Infrastructure

Key Takeaways

  • Black Basta's internal organization and operations reveal a highly structured ransomware group.

  • The leak highlights the significance of specialized roles in cybercriminal organizations.

  • Cybercriminals often rely on legitimate providers and resellers for hosting infrastructure.

  • Independent affiliates play a crucial role but can create tensions with core management.

  • Obfuscation and frequent server changes are key strategies for operational security.

  • Internal strife within cybercriminal groups can lead to leaks, exposing their operations.

  • Cybercriminal activity largely occurs outside public forums, often hidden from view.

  • The use of proxies and offshore hosting is essential for concealing sensitive infrastructure.

  • Identifying threat actor handles and services can provide valuable insights into cybercriminal networks.

  • Social engineering remains a powerful tactic for gaining initial access to target networks.

Wisdom of the week

A wise man was asked “What is anger?” He gave a beautiful answer, “It is a punishment you give to yourself, for somebody else’s mistakes.”

Unknown

Contact

Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!

Thanks! See you next week! Simon
