PRESENTED BY

Cyber AI Chronicle
By Simon Ganiere · 24th March 2024
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
Table of Contents
What I learned this week
TL;DR
Not sure I can believe we are already at the end of March! It’s been a busy first quarter of the year and very much align with the prediction in terms of continuous focus on exploitation of key vulnerabilities, AI driven fraud, ransomware, nation state activities, etc.
This week I looked at API security. That topic never really “clicked” for me but I started to look into it as it makes more and more sense if you combined it with the GenAI / LLM applications stacks. I think this topic is going to be critically important in the near future, especially in a world where we will have multiple AI models talking to each other all of the time.
Why API Security is Critical for LLMs?
I’ll be honest, I never got into API security. Obviously got the basics, met quite a few companies providing tools and all but it never really “clicked” for me…until the whole GenAI topic came into play!
It sort of all came to play when you start taking a step back and understanding how some of the architecture of LLM applications.
It might not be obvious but most of the LLM applications are actually leveraging an API in the backend.
Some of the architecture of GenAI/LLMs apps have added plugins or actions in GPT…and you guessed it! it’s all API!
Even within a company, you need your LLM application to speak to other systems via API to access data or perform enrichment.
Below is two illustrations of the possible architecture of an LLM application that leverage APIs (obviously there are other more complete/complex architecture):
You might now be wondering, this is great but why is this so important? You might remember we touch based on Adversarial Machine Learning in a previous newsletter on some high-level attack scenarios. In addition the FS-ISAC documentation provides a more granular definition for several of those attacks. In particular API can be leveraged to generate the following type of attacks:
Prompt injection is an injection vulnerability where malicious prompting can cause unexpected model outputs, causing it to bypass security measures or override the original instructions of the GenAI application. An API instruction can inject information into the prompt, the challenge being that you might not know what is being injected as it will be dependent on the API provider.
Insecure design is the implementation of vulnerabilities or security concerns through the lack of security controls within the application. Linked to the above but the basic of API security needs to be applied such as ensuring your API keys are secured, prevent manipulation of egress or ingress data.
Supply chain vulnerabilities are those in a GenAI application caused by vulnerable components or software in the Generative AI application lifecycle. This is bringing back the topic of third party and obviously you need to check the software supply chain end-to-end.
Conclusion
Whilst hopefully you already had a look at the security of your API, you will now need to extend this to the world of LLM application. Obviously, it will depend of your exact architecture but you must 100% sure include API in your AI risk management approach. Some basic key recommendations - keeping saying this but the basic people, the basics are more important than anything!
Implement protection against prompt inject
Ensure you cover the exfiltration scenarios based on API
Ensure your API requests are made over a secure channel
Ensure your API requests are authenticated and authorized
Secure API authentication
Patch vulnerabilities in your API software supply chain
Perform red team test focusing on API
If you don’t believe me on the basic: back in early December Lasso Security disclosed a research blog where the API token of big players like Meta, Microsoft, Google, etc. have been exposed on Hugging Face. We are talking about 1500 exposed API token, giving access to 723 organization accounts!
I can only recommend the fantastic and free resources at API Sec University. They have multiple courses including one for securing LLM & NLP APIs which will be released soon (you can sign up to get a notification).
Worth a full read
Evolution of repeated token attacks on ChatGPT models
Key Takeaway
Dropbox.Tech identified a new vulnerability in OpenAI's ChatGPT models, including GPT-3.5 and GPT-4, related to repeated token sequences.
The vulnerability was shared with OpenAI in January 2024, confirmed, and subsequently patched.
Repeated character sequences in prompts could induce hallucinatory responses or extract memorized training data.
OpenAI implemented filtering for prompts with repeated single tokens after the publication of related research.
Despite OpenAI's filtering efforts, Dropbox demonstrated that multi-token repeats could still exploit the models.
Experiments showed that repeated tokens could cause models to ignore instructions and produce unrelated or sensitive content.
OpenAI's initial response included blocking prompts with multi-token repeats and implementing server-side timeouts for long requests.
The repeated token attack is transferable to other third parties and open-source models, posing broader security implications.
The research underscores the importance of ongoing security assessments for AI-powered applications and the need for robust defenses.
VISA Biannual Threats Report
Key Takeaway
Visa Payment Fraud Disruption observed a shift towards more organized and sophisticated threat actors from June to December 2023.
Ransomware incidents targeting the payments ecosystem increased by 37% in the latter half of 2023.
Enumeration attacks, especially in the US, continue to be a popular method for threat actors to compromise payment credentials.
Digital skimming attacks are increasingly targeting third-party service providers and supply chains to compromise multiple entities.
Purchase Return Authorization (PRA) fraud attacks saw an 83% increase in investigations during the last seven months of 2023.
The underground marketplace "BidenCash" released 1.9 million compromised cards in December 2023.
AI technologies are being exploited by threat actors for voice and image cloning to facilitate scams and fraud.
The hospitality sector was notably targeted by ransomware and data breach incidents in 2023.
Scams, including romance scams evolving into "pig butchering," continue to proliferate, leveraging AI tools for more effective social engineering.
Some more reading
FlowiseAI - An open source low-code tool for developers to build customised LLM orchestration flow & AI agents » READ
The case for continuous monitoring of Generative AI Models » READ
Are scammers using AI to enhance fake obituary sites? » READ
To get to AGI, we must first solved the AI challenges of today, not tomorrow » READ
United Nations adopts U.S.-led resolution to safely develop AI » READ
NHS AI test spots tiny cancers missed by doctors » READ
Shadow AI - Should I be worried? » READ
AWS CISO: Pay attention to how AI uses your data » READ
Wisdom of the week
The most mature customers are not only doing tabletop exercises with internal stakeholders but are also including their top business partners.
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! see you next week! Simon



