Cyber AI Chronicle

By Simon Ganiere · 12th May 2024

Welcome back!

Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to help you navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.

What I learned this week

TL;DR

  • Another busy week in the cyber world:

    • Dell disclosed a data breach affecting approximately 49 million customers. Did I ever mention API security as one of the hot topics of the moment?

    • Ascension, a major U.S. healthcare network, has been hit by “a cybersecurity event”. The incident disrupted clinical operations, connections to partners were cut off, and ambulances were diverted.

    • Zscaler was hit by a data breach affecting an isolated test environment. There isn’t much information yet on how it happened.

    • Scattered Spider, the group behind last year’s MGM Resorts International hack, is back and targeting the financial and insurance industries.

    • US and UK authorities have publicly identified the alleged mastermind behind LockBit. This comes after the seizure of part of LockBit’s infrastructure under Operation Cronos. The individual in question is obviously saying it’s not him 😝

  • Microsoft Vice Chair and President Brad Smith has been called by the Homeland Security Committee to testify over Microsoft’s security shortcomings. Microsoft also deployed generative AI for US spies.

  • It was RSA Conference this week, and the conference did not disappoint in the sheer number of “AI-powered” products. You can find a summary of the announcements for each day here: day 1, day 2, day 3 and day 4.

  • For the second year in a row, an AI product won the RSAC Sandbox Innovation competition. This year’s winner is Reality Defender, which is active in the deepfake detection space. As discussed here previously, deepfakes are probably the first big threat scenario that AI is transforming. Just this week, the CEO of the world’s biggest ad firm was targeted by a deepfake. I’ll explore this trend in more detail below, in particular the contradiction of AI companies creating new tools while knowing full well they will be misused.

Scamming is the Next Growth Industry

Deepfakes are becoming an increasingly complex issue in today’s digital landscape. Just last year, the technology behind deepfakes wasn’t convincing enough to fool many people, but things are changing rapidly. Advances in AI now enable the creation of hyper-realistic fake videos and audio clips. We are on the cusp of a storm where synthetic media could manipulate people’s emotions and beliefs on a massive scale.

The types of attacks we’re seeing are evolving too. Scammers use deepfakes for a variety of nefarious activities, from fake audio that mimics a loved one in distress to extract sympathy and money, to videos that falsify statements by public figures to sway political opinion or manipulate stock prices. These incidents are becoming more sophisticated, making it harder to distinguish the fake from the real.

A primary reason deepfakes are predominantly used in scams or social engineering is their ability to exploit trust. When a video of a trusted leader or a call from a family member is faked convincingly, it plays directly on human emotions—fear, trust, urgency—making individuals more likely to act without questioning the authenticity of the request.

There’s a troubling contradiction in the tech industry’s race to develop deepfake technology. On one hand, companies are pushing the boundaries of AI capabilities, creating tools that can generate increasingly realistic fake content. On the other, they are fully aware that these technologies can be—and are being—misused by threat actors. This dual role complicates the moral landscape, as companies must balance innovation with the potential harm their inventions could cause.

To tackle the rising tide of deepfake threats, new detection and prevention strategies are being developed. Companies like Mandiant and startups like Reality Defender are at the forefront, creating tools that can spot signs of tampering, such as unnatural hand movements or inconsistencies in speech patterns. Watermarking and metadata are also becoming standard in content creation, helping to verify the authenticity of digital media. The challenge is monumental, as detection tech must constantly stay several steps ahead of, not merely on par with, the latest deepfake methodologies.
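
To make the watermarking idea concrete, here is a minimal sketch of hash-based provenance checking, assuming a hypothetical issuer that tags media with an HMAC over its content hash. Real provenance standards such as C2PA use public-key signatures and signed metadata manifests; this is only an illustration of the verify-before-trust principle, not any vendor’s actual implementation.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the content issuer. Real provenance
# standards (e.g. C2PA) use public-key signatures and signed metadata
# manifests rather than a shared-key HMAC; this is only an illustration.
ISSUER_KEY = b"issuer-demo-key"

def sign_media(media_bytes: bytes) -> str:
    """Issuer side: tag a media file with its hash plus an authentication tag."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(ISSUER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{tag}"

def verify_media(media_bytes: bytes, provenance: str) -> bool:
    """Consumer side: re-hash the media and check the issuer's tag."""
    digest, tag = provenance.split(":")
    if hashlib.sha256(media_bytes).hexdigest() != digest:
        return False  # content was altered after it was signed
    expected = hmac.new(ISSUER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

original = b"...raw video bytes..."
provenance = sign_media(original)
print(verify_media(original, provenance))               # True
print(verify_media(original + b"tampered", provenance))  # False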

This isn’t just about protecting corporate assets but about safeguarding our personal interactions and the fabric of truth in society.

However, these solutions are part of an ongoing "arms race" in cybersecurity. As detection methods improve, so too do the techniques for creating deepfakes. This dynamic creates a constant battle between security professionals and cybercriminals, one where advancements in AI could potentially tip the scales in favor of either side.

Stay Vigilant

While AI has the power to create and innovate, it also bears the potential to deceive. As we move forward, the focus must be on creating a resilient digital ecosystem where safety measures are as innovative and dynamic as the technologies they aim to regulate.

Our approach to deepfakes today will set the groundwork for how we handle emerging digital threats tomorrow, shaping a trajectory where technology remains a tool for empowerment, not manipulation.

Worth a full read

Reimagining secure infrastructure for advanced AI

Key Takeaways

  • Secure, trustworthy AI systems are foundational to leveraging AI's societal benefits.

  • The strategic value of AI necessitates unprecedented levels of cybersecurity vigilance.

  • Isolation and segmentation are critical in minimizing AI infrastructure vulnerabilities.

  • Physical and operational security innovations are crucial for next-gen AI datacenters.

  • AI can significantly enhance cyber defense capabilities through automation and analysis.

  • Continuous adaptation and research are vital for staying ahead of AI security threats.

  • Collaboration across sectors enhances the development of robust AI security measures.

  • The complexity of securing AI demands investment in both technology and human expertise.

AI is Mostly Prompting

Key Takeaways

  • Effective prompting in AI hinges on the clarity and precision of human input.

  • The evolution of AI may challenge but not diminish the value of clear prompts.

  • Markdown’s simplicity aids in the articulation of complex AI instructions (see the sketch after this list).

  • Text-based interfaces empower users to leverage AI through concise communication.

  • Future AI advancements will likely amplify rather than replace the need for clarity.

  • Clear thinking and communication are increasingly valuable skills in the AI era.

  • The balance between human input and AI autonomy shapes technology's trajectory.

  • Enhancements in AI models amplify the effectiveness of well-crafted prompts.
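
As a concrete illustration of the clarity point, here is a small sketch contrasting a vague prompt with a Markdown-structured one. The `ask_model` function is a placeholder for whatever chat-completion API you use, and the prompt content is invented for the example.

```python
# A vague prompt leaves the model guessing about scope, audience and format.
vague_prompt = "Tell me about the Dell breach."

# A Markdown-structured prompt spells out role, task, constraints and output
# format. The headings cost little to write but remove most of the ambiguity.
clear_prompt = """\
## Role
You are a security analyst writing for a non-technical executive audience.

## Task
Summarise the reported Dell data breach (~49 million customers affected).

## Constraints
- Maximum 5 bullet points.
- Flag anything that is speculation rather than confirmed fact.

## Output format
A Markdown bullet list, one sentence per bullet.
"""

def ask_model(prompt: str) -> str:
    # Placeholder: swap in the chat-completion API of your choice here.
    return f"[model response to a {len(prompt)}-character prompt]"

print(ask_model(clear_prompt))
```

The structured version costs a few extra lines, but it removes the model’s need to guess scope, audience and format, which is exactly where vague prompts go wrong.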

Russia-Linked Threat Group Uses LLMs to Weaponise Influence Content at Scale

Key Takeaways

  • AI-generated disinformation enables influence networks to tailor content to specific audiences effectively.

  • The sophistication of disinformation networks like CopyCop highlights the evolving threat landscape.

  • Automated publication and generative AI facilitate the scale of disinformation campaigns.

  • Engagement metrics are increasingly important for covert influence operations to demonstrate effectiveness.

  • Hybrid disinformation strategies blend AI-generated volume with targeted human-crafted narratives.

  • Open-source analytics tools are preferred by disinformation networks to avoid detection and sanctions.

  • The proliferation of AI in disinformation efforts poses significant challenges for election security.

  • Influence operations leveraging AI indicate a strategic shift towards more nuanced and scalable campaigns.

  • The operational security mistakes of disinformation networks reveal their reliance on generative AI technologies (illustrated in the sketch after this list).

  • The global impact of disinformation campaigns underscores the need for international cooperation in cybersecurity.
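
One practical signal behind the operational-security takeaway above: disinformation pipelines that publish raw model output sometimes leave LLM boilerplate in the finished articles. The sketch below shows a naive scan for such artifacts; the patterns are illustrative guesses for demonstration, not a vetted detection ruleset.

```python
import re

# Illustrative tell-tale strings that LLMs sometimes leave in raw output.
# These patterns are guesses for demonstration, not a vetted ruleset.
LLM_ARTIFACTS = [
    r"as an ai language model",
    r"i cannot fulfill this request",
    r"here is a rewritten version",
    r"\[insert [^\]]+\]",  # unfilled template slots such as "[insert city]"
]

def flag_llm_artifacts(article_text: str) -> list[str]:
    """Return the artifact patterns found in a scraped article."""
    lowered = article_text.lower()
    return [p for p in LLM_ARTIFACTS if re.search(p, lowered)]

sample = (
    "Here is a rewritten version of the article with a conservative tone: "
    "officials in [insert city] confirmed the incident..."
)
print(flag_llm_artifacts(sample))  # two patterns match this sample
```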

Some more reading

A thread on X about a red team exercise against an internal AI chatbot deployed by a company » READ

Warren Buffett says AI scamming will be the next big ‘growth industry’ » READ

OpenAI working on new AI image detection tools » READ

Microsoft’s “air gapped” AI is a bot set up to process top-secret info » READ

The fight for AI talent is on » READ

Wisdom of the week

Stop telling yourself you’re not qualified, good enough or worthy. Growth happens when you start doing the things you’re not qualified to do.

Steven Bartlett - The Diary of a CEO

Contact

Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!

Thanks! See you next week! Simon
