PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 23rd March 2025

Welcome back!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

Table of Contents

What I learned this week

TL;DR

  • AI-powered “vibe coding” is changing how developers write software—fast, instinctive, and dangerously insecure. Under pressure to ship, many skip security entirely, trusting AI-generated code to "just work." But new evidence shows these tools leak secrets, hallucinate logic, and generate malware with ease. The pace of development has outstripped the pace of security. And unless that changes, we’re not coding—we’re packaging future incidents. As Anthropic CEO Dario Amodei warns, AI will write 90% of the code for software engineers within the next three to six months and every line of code within the next year. That means the time to get secure-by-default is now—not after the breach. » READ MORE

  • Linked to the main topic of this week's edition - and to prove that AI cannot solve it all - we have a supply chain compromise against GitHub Actions such as tj-actions/changed-files and reviewdog/action-setup. You can read more about it from the initial report from StepSecurity and from Wiz here and here. Surprise (or not), one of the identified targets was Coinbase…looks like the Crypto world can’t get a break from the bad guys.

  • Speaking of Wiz, impossible not to mention the huge acquisition by Google…$32 billion!! A previous deal, in 2024, for $20 billion was supposedly rejected by the Wiz founders. This is a huge milestone for the startup community and cyber security. Don’t get me wrong, not every startup can pull off something like this, but Wiz stood out for its ability to deliver a great product and (probably more importantly) its strong marketing approach. I remember meeting those guys back in 2022 at RSA and was impressed from the start.

  • Anthropic is finally adding web search to Claude. This is currently available only in the US and for paid Claude users. Looks like they are using Brave Search to enable this feature. OpenAI updated its audio models to power voice agents, and this is all available via the API…I guess at some point I need to try them out and launch a podcast version of this newsletter 😁

Security for Vibe Coding

AI coding assistants like ChatGPT, Copilot, Claude, and DeepSeek are now default tools in modern software development. They speed things up, reduce boilerplate, and even offer reasoning chains to support decision-making. But there’s a catch: in the push for speed, structure gets sacrificed.

“Vibe coding” has emerged as the poster child of this shift. Developers prompt, tweak, and deploy—often skipping code reviews, documentation, and even testing. DeepSeek R1 gained traction for being fast, cheap, and “good enough”—but its use also illustrated a broader truth: AI models have inherent risks that must be accounted for. Red teams demonstrated how easily it could be jailbroken and coerced into producing malware and fabricated data, underscoring the critical need for oversight and safeguards in their application.

Anthropic CEO Dario Amodei recently predicted that AI will write every line of software code within the next 12 months. That's not a fringe estimate—it’s a likely trajectory based on current adoption curves. The ecosystem is already reinforcing fast iteration cycles with minimal friction. But friction, in the right amount, is what prevents fires. We’ve built a frictionless coding culture. And it’s flammable.

Invisible Flaws: The Security Risks of AI-Coded Software

Security vulnerabilities are not just possible—they're prevalent. AI-generated code often lacks input validation, skips authentication checks, and imports outdated or vulnerable dependencies. Copilot has been caught suggesting insecure defaults, including hardcoded secrets and weak cryptographic functions.

These issues aren't isolated to one platform or provider. They reflect a broader pattern: current-generation AI tools are optimized for usability and speed, not security. Code may appear correct at a glance but lack essential safeguards or include hidden flaws. Without robust scrutiny and testing, these weaknesses go unnoticed until exploited—sometimes long after deployment. As AI adoption accelerates, these embedded vulnerabilities scale with it, compounding the risk landscape for engineering teams and security leaders alike.

Another growing concern is AI's exposure to supply chain attacks. Recent research highlights how a single malformed configuration file or dependency—such as the .rules backdoor technique—can inject malicious logic into codebases without detection. As developers increasingly rely on AI to pull and suggest third-party libraries, the potential for introducing compromised components grows exponentially. These indirect attack vectors are difficult to trace, yet pose a serious risk to both proprietary and open-source environments.

The risk extends beyond the models themselves. Developers are becoming overconfident in AI outputs. Many treat code suggestions as authoritative, rarely questioning logic or verifying outcomes. That kind of automation bias leads to insecure applications being shipped directly to production, often with zero manual review.

Building with Guardrails: Practical Security Steps for AI-Coded Projects

To build safely at speed, organizations need guardrails—not handrails. The first is cultural: treat AI like a junior engineer. That means reviewing all its work, asking for justifications, and testing outputs rigorously.

Next, shift security left with lightweight tooling. Pre-commit hooks should block secrets, enforce linters, and reject insecure patterns. Tools like TruffleHog, ESLint with security plugins, Bandit, and Semgrep can catch issues early, and they integrate easily into CI/CD workflows.
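A pre-commit setup wiring in some of these tools might look like the sketch below. The repository URLs are real projects, but the `rev` tags and hook arguments are assumptions for illustration; check each tool's documentation and pin to current releases before use.

```yaml
# Illustrative .pre-commit-config.yaml (rev tags are examples, verify and pin)
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: detect-private-key        # block committed private keys
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.9
    hooks:
      - id: bandit                    # Python security linter
        args: ["-ll"]                 # report medium severity and above
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.78.0
    hooks:
      - id: semgrep                   # pattern-based security checks
        args: ["--config", "auto", "--error"]
```

Running the same hooks in CI means AI-generated commits get the same scrutiny as human ones, with no extra effort from the developer.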

Prompting also matters. Vague prompts yield vague (and dangerous) code. Be explicit about security: ask the model to validate input, avoid SQL injection, enforce authentication. Use Recursive Critique and Improvement (RCI) prompts to force self-review. And teach your teams to do this by default.
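One lightweight way to make this a team default is to template the prompts rather than retype them. The sketch below is a hypothetical helper (the requirement list and function names are my own, not from any framework) showing how explicit security constraints and an RCI self-review pass can be baked in.

```python
# Hypothetical prompt helpers; requirement wording is illustrative.
SECURITY_REQUIREMENTS = [
    "validate and sanitize all external input",
    "use parameterized queries, never string-built SQL",
    "require authentication on every endpoint",
    "read secrets from the environment, never from source code",
]

def build_secure_prompt(task: str) -> str:
    """Wrap a coding task with explicit security constraints."""
    reqs = "\n".join(f"- {r}" for r in SECURITY_REQUIREMENTS)
    return f"{task}\n\nSecurity requirements:\n{reqs}"

def build_rci_prompt(generated_code: str) -> str:
    """Recursive Critique and Improvement: ask the model to review its own output."""
    return (
        "Review the following code you just produced. "
        "List any security vulnerabilities, then return a corrected version.\n\n"
        f"{generated_code}"
    )
```

A team that routes every generation through `build_secure_prompt` and every result through `build_rci_prompt` gets two security passes for free, regardless of which model is behind the API.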

Finally, build with secure defaults. Start every project with a locked-down .gitignore, sanitized input pipelines, TLS-by-default, least-privilege access controls, and logging configured out of the gate. Treat API key security as non-negotiable: never hardcode secrets, and ensure proper rotation and scoping of credentials. Use tools like dotenv and secret scanning hooks to prevent leaks. Think of these as your baseline—not add-ons.

If you’re using AI to generate code, use it to review code too. Ask for a second pass on every feature: “Act as a security engineer. What vulnerabilities might exist in this code?” You’ll catch more than you expect. And if the model struggles to answer, that’s your signal to dig deeper.

From Prototype to Production: The Real Test of AI-Coded Software

Vibe coding is good for prototyping and testing ideas, but it can never go into production "as is." This applies not only to large enterprises but also to solo entrepreneurs building their own projects. AI is powerful, but it is not careful. As tools begin writing all your code, what matters isn’t just how fast you ship—it’s how safely. Ask yourself: when you move from prototype to production, will the code be resilient enough to deserve trust?

SPONSORED BY

Start learning AI in 2025

Keeping up with AI is hard – we get it!

That’s why over 1M professionals read Superhuman AI to stay ahead.

  • Get daily AI news, tools, and tutorials

  • Learn new AI skills you can use at work in 3 mins a day

  • Become 10X more productive

Worth a full read

The Unbelievable Scale of AI’s Pirated-Books Problem

Key Takeaways

  • Ethical dilemmas arise when balancing AI training needs with copyright laws and fair use.

  • Pirated libraries offer vast resources, yet their legality and ethicality remain contentious.

  • Generative-AI companies prioritize rapid data acquisition, sometimes overlooking legal implications.

  • Accessibility of knowledge clashes with creators' rights and commercial interests.

  • The digital age challenges traditional knowledge management, raising societal benefit questions.

  • AI models transform copyrighted material, but transformation's legality is still unresolved.

  • Piracy stems from limited access to academic resources, highlighting disparities in information availability.

  • Generative-AI chatbots risk decontextualizing knowledge, hindering intellectual collaboration.

  • Legal strategies to protect AI training data from scrutiny raise ethical concerns.

  • The pursuit of AI advancements must balance innovation with ethical and legal responsibilities.

Cyber Risk Benchmarking: Key Insights from the Risk Radar Report

Key Takeaways

  • Focusing on risk, rather than just threats, enhances cybersecurity management effectiveness.

  • Smaller third parties often pose significant risks due to data exposure vulnerabilities.

  • Business interruption constitutes a major portion of cyber losses, emphasizing resilience investment.

  • The SEC's material incident reporting reveals gaps in companies' understanding of cyber risks.

  • Continuous monitoring and testing of third-party risks are crucial in a dynamic threat environment.

  • Shifting focus from questionnaires to effective controls can improve third-party risk management.

  • Reliable financial impact estimation is lacking in current SEC cyber incident reporting.

  • Cyber resilience investment also functions as risk mitigation through reducing business interruptions.

  • Prioritizing third-party risk by data exposure potential is more effective than by company size.

  • Reports indicate that most cyber attacks target data exfiltration for financial gain.

Wisdom of the week

Time…

Is the only currency you spend without ever knowing your balance.

Use it wisely…

Unknown

Contact

Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!

Thanks! See you next week! Simon
