PRESENTED BY

Cyber AI Chronicle

By Simon Ganiere · 8th December 2024

Welcome back!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

What I learned this week

TL;DR

  • This week, we explore a pivotal shift in the workforce as AI reshapes how we create, solve problems, and work. The article dives into why AI isn’t here to replace you but to augment your abilities—if you embrace it. From creativity emerging as the key skill to the growing importance of problem-framing, we unpack the tools and mindset you need to thrive in the AI era. Whether you're in cybersecurity or any other field, the message is clear: the future belongs to the curious and adaptable » READ MORE

  • This week is “irony” week in the cyber world. Eight US telecom firms have been hacked by the China-linked threat group Salt Typhoon. The hack targeted the surveillance systems used by the US government to investigate crimes and threats to national security. This led the FBI to advise people to stop using basic SMS and switch to encrypted messaging applications…which is a bit ironic, given that law enforcement has pushed back against such technology in the past.

  • There is “no honor among thieves”: Secret Blizzard (aka Turla) has successfully infiltrated 33 C2 nodes belonging to the Pakistan-based actor Storm-0156. Beyond the small irony of the story, this highlights a couple of interesting points:

    • Attribution is hard, especially when nation-states start hacking each other.

    • Whilst this move seems intentional by Secret Blizzard and gives them access to additional targets at minimal cost, it’s also a risk: depending on the op-sec of the targeted threat actor, it could lead to the discovery of Secret Blizzard’s TTPs.
      Full article from Microsoft here (part 1) and from Lumen here.

  • OpenAI is going all in on Christmas with the “12 Days of OpenAI”. Day 1 gave us two new things: the full o1 model and ChatGPT Pro. The full o1 model now handles image analysis and produces faster, more accurate responses (about 34% fewer errors). ChatGPT Pro is a $200/month plan that includes unlimited access to o1, GPT-4o, Advanced Voice, and future compute-intensive features. Day 2 focused on the reinforcement fine-tuning research program.

  • Meta has announced its latest Llama model, Llama 3.3 70B. The model leverages the latest advancements in post-training techniques to deliver improved performance in math, general knowledge, instruction following, and tool use.

Thriving in the Age of AI

If you’ve been paying attention to developments in artificial intelligence then you’ve likely felt a mix of excitement and concern. For some, the excitement comes from seeing how AI can enhance creativity and productivity. For others, there’s concern—fear even—that AI will take jobs and disrupt livelihoods.

This article is my attempt to explain why AI is a game-changer not because it replaces humans, but because it changes the rules of the game entirely. Those who thrive in this new era will be the ones who adapt, learn, and use AI as a lever for their creativity and problem-solving skills.

The Shift: From Knowledge to Creativity

In the pre-LLM era, turning an idea into reality often involved expertise, resources, and significant time. Let’s take application development as an example:

  • Before AI: If you had an idea for an application, you’d need to write detailed requirements, hire developers, and go through rounds of back-and-forth to create the final product. This process took weeks, months, or longer.

  • After AI: Now, with LLMs, you can describe your idea in plain language, feed it to the AI, and receive a working prototype in minutes.

This is a profound change. It eliminates many of the traditional barriers to execution, placing the focus squarely on the individual’s ability to define, describe, and iterate on their ideas. The skill required isn’t knowing how to code—it’s knowing how to think creatively and communicate your vision clearly to the AI.
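
To make that concrete, here is a minimal sketch of the “describe it and get a prototype” loop. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompts, and the Flask framing are illustrative placeholders, not a recommendation from this article.

```python
# Minimal sketch: turning a plain-language idea into a first prototype with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set in the
# environment; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

idea = (
    "A small web app that lets a security team paste a suspicious URL "
    "and get back a plain-English risk summary."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        {"role": "system", "content": "You are a senior full-stack developer."},
        {"role": "user", "content": f"Generate a minimal Flask prototype for this idea:\n{idea}"},
    ],
)

# The model returns prototype code you can run, inspect, and iterate on.
print(response.choices[0].message.content)
```

The scarce ingredient is not the API call; it is the clarity of the idea you feed in and the iteration that follows.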

This isn’t limited to software development. The same principle applies to marketing, writing, education, business strategy, and even cybersecurity.

Across industries, the ability to frame problems and guide AI systems is becoming the key differentiator.

The New Core Skillset

If knowledge and expertise are no longer the primary currency, what skills do you need to succeed in an AI-driven world? Here are the essentials:

  1. Creativity: AI is excellent at execution, but it doesn’t generate ideas on its own. Your ability to innovate and think outside the box is what sets you apart.

  2. Problem-Framing: The ability to translate abstract ideas into clear, actionable inputs for AI is critical. Think of this as prompt engineering—how you ask matters as much as what you ask.

  3. Iterative Thinking: AI thrives on refinement. Success often comes from testing, interpreting outputs, and providing better instructions in a cycle of continuous improvement (see the sketch after this list).

  4. Critical Thinking: AI isn’t infallible. You need to validate outputs, identify biases or errors, and make judgment calls about the ethical and practical implications of AI-driven results.

  5. Adaptability and Curiosity: The pace of AI development is relentless. Staying ahead requires a commitment to learning and a willingness to explore new tools and approaches.
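
To illustrate skills 2 and 3 together, here is a minimal sketch of a frame-then-refine loop. It again assumes the OpenAI Python SDK; the prompts and the review criterion are hypothetical and only stand in for your own judgment.

```python
# Minimal sketch of problem-framing plus iterative refinement with an LLM:
# draft, inspect, then feed specific corrections back into the conversation.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the running conversation to the model and return the reply text."""
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

# 1. Frame the problem as a clear, bounded request.
messages = [{"role": "user", "content": "Draft a one-paragraph phishing-awareness note for staff."}]
draft = ask(messages)

# 2. Review the output yourself, then give precise instructions for the next pass.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Shorten it to three sentences and add one concrete reporting step."},
]
print(ask(messages))
```

The value sits in the second prompt: you judged the draft and told the model exactly what to change.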

AI Won’t Take Your Job—But Someone Using AI Might

The fear of AI eliminating jobs is real, but the narrative is often misunderstood. AI isn’t an autonomous job-killer; it’s a tool. The real threat isn’t the AI itself but the people who learn how to use it effectively.

Here’s the hard truth: AI won’t take your job, but someone who masters AI will.

Why? Because AI enables individuals to work faster, smarter, and more creatively. Consider these examples:

  • A marketer who uses AI to analyze trends and optimize campaigns isn’t replaced; they’re enhanced.

  • A writer who collaborates with AI on drafting and editing content delivers higher-quality work, faster.

  • A cybersecurity professional who leverages AI to detect threats and automate incident response is far more effective than one relying on manual processes.

The dividing line isn’t between those who know AI and those who don’t—it’s between those who embrace it and those who resist.

The Cybersecurity Connection

For those of us in cybersecurity, the same principles apply. AI isn’t here to replace us—it’s here to augment our abilities. But that only works if we step up and learn how to use these tools.

In cybersecurity, AI is automating repetitive tasks like log analysis, threat detection, and vulnerability scanning. It can also fundamentally change risk management, risk assessment, metrics and reporting. However, it still requires human oversight to interpret results, ensure ethical use, and adapt recommendations to specific contexts. For example:

  • AI can analyze massive amounts of threat data to identify patterns, but it takes a skilled analyst to turn those patterns into actionable intelligence (a minimal sketch of that hand-off follows this list).

  • A CISO who understands how to guide AI tools can focus on strategic decisions rather than getting bogged down in minutiae.
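
As a rough illustration of that hand-off, here is a minimal sketch in which an LLM pre-screens noisy log lines and the analyst keeps the final decision. It assumes the OpenAI Python SDK; the model name, prompt, and sample log lines are made up for illustration, and this is not a production detection pipeline.

```python
# Minimal sketch of AI-assisted log triage: an LLM pre-screens log lines so a human
# analyst reviews only the suspicious ones. Model name, prompt, and sample logs are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

log_lines = [
    "sshd[4821]: Failed password for root from 203.0.113.7 port 52214",
    "cron[1002]: (root) CMD (run-parts /etc/cron.hourly)",
    "sshd[4823]: Accepted publickey for deploy from 198.51.100.4",
]

prompt = (
    "For each log line, answer SUSPICIOUS or BENIGN with a one-line reason:\n"
    + "\n".join(log_lines)
)

triage = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model
    messages=[{"role": "user", "content": prompt}],
)

# The analyst, not the model, decides what becomes an incident.
print(triage.choices[0].message.content)
```

The model narrows the haystack; the analyst still makes the call.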

The future of cybersecurity lies in combining human expertise with AI’s capabilities. Those who learn to leverage AI will lead the field.

Embracing the Change

The AI era is here, and it’s reshaping industries faster than most of us expected. But this isn’t something to fear—it’s something to embrace. Here’s how to position yourself for success:

  1. Stay Curious: Explore AI tools, experiment with their capabilities, and stay informed about new developments.

  2. Focus on Creativity: Think about how you can apply AI to amplify your unique strengths and ideas.

  3. Learn the Tools: Treat AI mastery as a critical skill, just like using a computer or the internet once was.

  4. Collaborate with AI: Approach AI as a partner, not a competitor. Your role is to direct, refine, and validate its outputs.

The Future Belongs to the Curious

The future of work isn’t about AI replacing humans—it’s about humans who collaborate with AI. Creativity, problem-framing, and adaptability will set you apart, not just in cybersecurity but in any field.

Remember: AI isn’t your competitor; it’s your amplifier. But you need to take the first step. Start learning. Start experimenting. And most importantly, stay curious. Because in the end, it’s not AI that will take your job. It’s the person who learns to use it.

SPONSORED BY

Learn AI in 5 minutes a day

This is the easiest way for a busy person wanting to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

Worth a full read

OpenAI’s GPT-4o Makes AI Clones of Real People With Surprising Ease

Key Takeaway

  • AI clones of people offer powerful tools for simulating policy responses and social science research.

  • Stanford's study shows AI can replicate personalities with 85% accuracy from minimal data.

  • AI agents achieve high accuracy by incorporating entire interview transcripts during responses.

  • AI's ability to mimic personalities may enhance both legitimate and nefarious applications.

  • Human-like AI clones could revolutionize cost-effective policymaking and social experiments.

  • AI models need surprisingly little data to create faithful replicas of real people.

  • Realistic AI clones raise potential ethical concerns and opportunities in various fields.

  • AI clones' potential for misuse necessitates careful consideration of ethical implications.

  • Human character complexity can be captured through AI models with high fidelity.

  • AI's advancement in mimicking human behavior signals imminent widespread applications.

Benedict Evans: AI Eats the World

Key Takeaway

  • Generative AI represents a significant platform shift, comparable to past technological evolutions.

  • Rapid AI adoption highlights both excitement and uncertainty in technology's future role.

  • Generative AI challenges current business models, encouraging experimentation and innovation.

  • Scaling AI involves unknowns, with debates on potential limitations and continued growth.

  • Capital investment serves as a crucial competitive advantage in AI development.

  • AI commoditization aims to make technology widely accessible, impacting industry dynamics.

  • Generative AI forces reevaluation of automation, balancing potential with error management.

  • Technology's evolution often shifts from intelligence to routine software application.

  • AI's transformative potential lies in redefining tasks and creating new industry opportunities.

  • Generative AI's integration into industries depends on solving specific productivity challenges.

    » Full presentation slide deck

Research Paper

ThreatModeling-LLM: Automating Threat Modeling using Large Language Models for Banking System

Summary: The paper introduces ThreatModeling-LLM, a framework using Large Language Models (LLMs) to automate threat modeling in banking systems, addressing challenges like lack of datasets, need for tailored models, and real-time mitigation strategies. It operates in three stages: dataset creation, prompt engineering, and model fine-tuning, showing significant improvements in threat identification and mitigation accuracy. The study highlights the effectiveness of combining prompt engineering with fine-tuning, achieving superior performance over existing methods, and suggests potential applications beyond banking.

Published: 2024-11-26T02:57:28Z

Authors: Shuiqiao Yang, Tingmin Wu, Shigang Liu, David Nguyen, Seung Jang, Alsharif Abuadbba

Organizations: CSIRO’s Data61

Findings:

  • ThreatModeling-LLM automates threat modeling for banking systems using LLMs.

  • Combining prompt engineering and fine-tuning improves threat identification accuracy.

  • Accuracy of identifying mitigation codes improved from 0.36 to 0.69 on Llama-3.1-8B.

Final Score: Grade: B, Explanation: Strong novelty and empiricism, but lacks detailed statistical analysis for rigor.
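
For readers who want a feel for what the prompt-engineering stage of such a pipeline can look like, here is an illustrative sketch. It does not reproduce the paper’s prompts, dataset, or fine-tuned Llama-3.1-8B model; the model name, component description, and STRIDE framing below are assumptions for illustration only.

```python
# Illustrative sketch of an LLM prompt-engineering step for threat modeling:
# describe a system component and ask for STRIDE threats plus mitigations.
# Not the paper's method; model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()

component = "Internet-facing online-banking API gateway handling customer logins"

prompt = (
    "Using the STRIDE categories, list the most relevant threats for this banking "
    f"system component and suggest one mitigation for each:\n{component}"
)

result = client.chat.completions.create(
    model="gpt-4o",  # assumption; the paper fine-tunes Llama-3.1-8B instead
    messages=[{"role": "user", "content": prompt}],
)

print(result.choices[0].message.content)
```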

Wisdom of the week

The art of being wise is the art of knowing what to overlook.

William James

Contact

Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!

Thanks! See you next week! Simon
