PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 2nd February 2025

Welcome back!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

Table of Contents

What I learned this week

TL;DR

  • DeepSeek-R1’s release has ignited discussions on AI security, innovation, and control. While its open-source approach is disruptive, it also highlights serious security gaps—from exposed databases to vulnerabilities in model safeguards. The rapid progress of AI models like DeepSeek reveals how little we still understand about AI’s limits, making continuous security assessments essential. The real battle, however, is not open vs. closed AI, but who controls the application layer. The key takeaway? Enterprises must assess AI security rigorously including and even especially for open source model, plan for ongoing disruption, and invest in AI-powered applications rather than just models. » READ MORE

  • I was thinking about how to improve this section and aiming for something a little bit more actionable. I wanted to stop have a long blurb of text to start the newsletter and have something a bit more practical. Below is a first example, so keen to get feedback (you can leave a comment at the bottom of the page or just reply to this email) - including the usage of emoticon vs proper icon? the level of detail in the summary? is an actual visualization better?
    I want to automate this so that might become my 2025 coding project 😃

🛑ACT NOW

📅 PLAN FOR THIS

📜 AI Regulations & Compliance – Organizations must align with evolving frameworks like DORA, effective January 2025, to ensure resilience.

🔍 AI security gaps (DeepSeek breach) – Highlights supply chain risks and the need for robust AI model governance. » READ MORE

🏴‍☠️ Microsoft reminded Microsoft 365 admins that its new brand impersonation protection feature for Teams Chat will be available for all customers by mid-February 2025. » READ MORE

💡 AI-driven vulnerability prioritization – Leveraging EPSS and AI-powered security triage to enhance threat detection strategies. » READ MORE & READ MORE

👀 MONITOR

🔕 IGNORE

Google Gemini exploited by nation-state hackers – AI-driven phishing and vulnerability scanning accelerating cyber operations. » READ MORE

🤖 Speculation on AGI replacing cybersecurity jobs – Long-term debate, but no immediate impact on practical security operations.

📰 Overhyped AI model cost comparisons – DeepSeek’s claim of training costs at a fraction of OpenAI’s budget is likely misleading.

DeepSeek’s Disruption: Security, Uncertainty, and the Future of AI Innovation

The AI industry is still processing the shockwaves of DeepSeek-R1. If the last few weeks have shown anything, it’s that AI development is moving faster than expected, open-source models are forcing a rethink on strategy, and security concerns are more critical than ever.

In my last two editions, I covered the promise and challenges of open-source AI—how models like DeepSeek are redefining what’s possible and the obstacles organizations face in adopting them. This week, we go deeper into three fundamental themes emerging from the DeepSeek saga:

  • Security risks in open-source AI are unavoidable.

  • We don’t know what we don’t know—there’s still "slack" in AI development.

  • The real battle isn’t open vs. closed—it’s who controls the application layer.

Security Gaps

There’s no way to sugarcoat it: DeepSeek-R1 has security issues. The past week has seen multiple reports highlighting critical vulnerabilities, misconfigurations, and exposure of sensitive data.

  • DeepSeek’s Open Database Incident
    A major security lapse came to light when Wiz Research discovered a publicly accessible DeepSeek database, exposing over a million lines of chat history, secret keys, and operational metadata. The database was left completely open, allowing unauthorized access to internal data. While this is not an AI-specific security flaw, it highlights the persistent issue of basic security hygiene in fast-moving tech environments. It also serves as a reminder that when you drive innovation, you inevitably attract scrutiny—both from security researchers and potential adversaries.

  • Red Teaming Reports Confirm High Vulnerability
    Security assessments have painted an alarming picture. DeepSeek-R1 was found to be:

    • 11x more prone to generating harmful content than OpenAI’s O1 model.

    • 4x more likely to generate insecure code.

    • Easily jailbroken to create malware, deepfakes, and misinformation.

These issues aren’t theoretical—they represent a real-world risk for organizations considering integrating DeepSeek or similar open-source models. A failure to conduct proper AI risk assessments before adoption is an invitation for security incidents.

The Lesson?

AI models must be treated like any other third-party software—and more. This includes conducting penetration testing, setting up monitoring for adversarial attacks, ensuring compliance with security best practices, and conducting specific assessments of the model for bias, fairness, and reliability before deployment.

There’s Still a Lot We Don’t Know About AI’s Limits

DeepSeek-R1 didn’t just surprise us with its performance—it exposed how much we still don’t understand about the rapid progress of AI.

  • DeepSeek Innovation Appeared Out of Nowhere—And It Won’t Be the Last
    The AI industry has been conditioned to believe that massive compute budgets are necessary to build competitive models. Yet, DeepSeek’s claimed $5M training cost—compared to the hundreds of millions OpenAI and Anthropic spend—suggests that optimization, not just raw compute, is the key to breakthroughs*.

  • The Next Leap Is Always Around the Corner
    DeepSeek’s ability to match or even outperform top-tier proprietary models in certain benchmarks highlights that progress is not linear. The AI space will continue to see unexpected jumps in capability, and organizations should plan for continuous disruption, not stability.

  • Security Will Always Be Playing Catch-Up
    If AI progress is unpredictable, AI security must be adaptive. Traditional security frameworks assume static attack surfaces—but AI is constantly evolving, meaning defenses must be built for a moving target.

[*] Deepseek did not really appear from nowhere. Their parent company is a Quant company named High Flyer that is leveraging AI for investment for quite a few years…and they have done so with a lot of success. There is also a lot of questions on those $5M and how to compare them.

The Lesson?

Organizations must shift from static AI security policies to dynamic risk management. Regular red teaming, continuous monitoring, and adaptive defenses are essential to keep up with the unpredictable evolution of AI capabilities.

Open vs. Closed? The Real Battle Is Over the Application Layer

The debate over open-source vs. proprietary AI is missing the bigger picture. The true winners in AI won’t be those who build the models—they’ll be those who build the ecosystem around them.

  • AI Models Are Becoming the New Operating Systems
    DeepSeek, OpenAI, and Meta’s Llama all point to the same future: AI models are infrastructure, not end products. Just like Windows, macOS, and Linux serve as the foundation for an application ecosystem, AI models will be the backbone for the next wave of digital products.

  • Will the Infrastructure Players Win?
    Microsoft, Google, and Amazon aren’t just focused on building the best models—they’re striving to own the platforms where AI models are deployed. But does this guarantee their dominance? The real battle lies in who will ultimately control the tools, APIs, and integrations that businesses depend on. Will infrastructure players maintain their edge, or will new challengers disrupt their position?

  • DeepSeek’s Strategic Play? Market Disruption Through Open-Source
    The open-source strategy isn’t new—it’s a classic market disruption tactic. By offering high-performance AI at a fraction of the cost, DeepSeek is forcing companies to adapt or risk losing relevance. This strategy mirrors what China did in cloud computing—offering aggressively priced alternatives to dominate market share before regulation catches up.

The Lesson?

Organizations should invest in AI-powered applications, not just AI models. The real value will come from how AI is applied, not just how it’s built.

Final Thoughts: What Comes Next?

DeepSeek-R1 has reshaped the AI conversation, but it has also exposed serious gaps in AI security, governance, and risk assessment. Enterprises and governments must act now adapt to a world where open-source AI is a double-edged sword—capable of both accelerating progress and introducing unforeseen risks.

Key Takeaways for Enterprises:

  • Conduct thorough AI risk assessments before integrating open-source models.

  • Security is not optional—expect aggressive adversarial testing on any AI system.

  • AI innovation will keep accelerating—build risk strategies that assume continuous disruption.

  • The application layer is where the real value is—focus on AI-powered solutions, not just raw models.

There’s a reason 400,000 professionals read this daily.

Join The AI Report, trusted by 400,000+ professionals at Google, Microsoft, and OpenAI. Get daily insights, tools, and strategies to master practical AI skills that drive results.

Worth a full read

Modeling Asset Risk Using EPSS

Key Takeaways

  • EPSSg model scales exploit predictions across various data slices, enhancing vulnerability management.

  • Calculating ΔEPSSg scores identifies high-impact CVE removals, optimizing remediation strategies.

  • Overlaying asset attributes with EPSSg distribution provides deeper insights into risk posture.

  • Regular recalculation of EPSSg scores ensures accurate and up-to-date risk assessments.

  • EPSSg model offers a quantitative shift-left approach, improving vulnerability management focus.

  • EPSSg facilitates targeted, organization-specific vulnerability management strategies for risk reduction.

  • Integrating EPSSg with CVSS and qualitative attributes enriches risk assessment comprehensiveness.

  • EPSSg model emphasizes continuous improvement through feedback loops in risk management.

  • Visualization of asset EPSSg distribution prioritizes effective remediation efforts.

  • EPSSg model supports a hybrid quantitative-qualitative risk assessment approach for better insights.

VulnWatch: AI-Enhanced Prioritization of Vulnerabilities

Key Takeaways

  • AI enhances vulnerability detection accuracy, reducing manual efforts and improving security focus.

  • Automated instruction optimization minimizes human error, refining AI model performance.

  • Context-specific scores, not just CVSS and EPSS, are crucial for effective vulnerability prioritization.

  • AI-powered library matching relies on advanced reasoning and industry expertise.

  • Ground truth datasets fine-tune AI models, boosting accuracy and reliability.

  • Topic modeling clusters libraries, enhancing vulnerability prioritization by shared service contexts.

  • Efficient AI systems transform vulnerability management, focusing on critical security triage.

  • Iterative methods optimize LLM prompts, refining AI accuracy and efficiency.

  • Understanding library dependencies is essential for effective vulnerability management.

  • Component and topic scores prioritize vulnerabilities, aiding focused security efforts.

Adversarial Misuse of Generative AI

Key Takeaways

  • AI's dual-use nature offers benefits and risks, necessitating responsible development.

  • Threat actors find productivity gains using AI but lack novel capabilities.

  • Current AI models do not yet enable breakthrough capabilities for threat actors.

  • Iranian and Chinese APT actors are the heaviest users of Gemini.

  • Generative AI allows faster, higher volume operations for skilled threat actors.

  • AI misuse remains a concern, but current models limit breakthrough capabilities.

  • Government-backed threat actors use Gemini for various attack lifecycle phases.

  • AI systems must balance societal benefits with addressing potential misuse challenges.

  • Financially motivated actors use AI for business email compromise operations.

  • Google's Secure AI Framework (SAIF) aims to secure AI systems conceptually.

Wisdom of the week

An apology without change is just manipulation

Contact

Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!

Thanks! see you next week! Simon

Reply

Avatar

or to participate

Keep Reading