PRESENTED BY

Cyber AI Chronicle
By Simon Ganiere · 28th April 2024
Welcome back!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
Table of Contents
What I learned this week
TL;DR
I spent some time reading a bit more on the state of the AI market. One of the reasons for this is my belief that whilst AI will change a lot of things (in security and other domains), we are currently approaching the peak of inflated expectations. Not everything has to be “AI powered” to be useful in the security space. I’ll explore a couple of key aspects of this below and share my view.
It’s RSA in a couple of weeks and I can only imagine that every single vendor will put AI in their product name…for no particular reason 🙃
I really enjoyed reading Phil Venables’ article on Security and the Ten Laws of Technology. His approach of taking key technology laws and applying them to security is really interesting. Far be it from me to think I can create a new law, but I keep mentioning my “Vacuum Law”: if you don’t fill the space, someone else will, and there is a high chance you will not like how they fill it.
In the “just another week in cyber” section, I hope you have applied the advice to patch edge technology vulnerabilities within 24 hours. If not, here is your weekly reminder, courtesy of a new round of zero-days targeting Palo Alto and Cisco ASA. Yes, I know I’m repeating myself, but if by now you have not adapted your patching strategy to this trend, you might have another problem on your hands soon enough. Don’t believe me? Even MITRE got compromised via the Ivanti vulnerabilities.
Not Everything Has To Be AI Powered
Artificial Intelligence has changed the technology landscape over the last couple of years. Once the favourite kid on the block, the cyber market is now in full consolidation mode, as illustrated by Thoma Bravo’s recent acquisition of Darktrace and earlier acquisitions such as Proofpoint, Splunk, and Juniper Networks, and that trend might not stop any time soon. Meanwhile, AI represented 1 in every 4 dollars invested across US-based startups in 2023.
In most of the discussions I have had with peers and industry colleagues, I have yet to see a non-AI-native company leveraging AI to solve a complex business process and actually add value (and some even argue the same for AI-native companies). The key question is: have we reached the top of the hype cycle yet?
Economic Viability Concerns
For something to take hold in a market, its economic model has to work. There are plenty of examples where technically superior products were designed but didn’t survive in the market. Concorde is a great example: impressive technology, bad economic model. The same logic applies to AI.
Most frontier model (i.e. foundation model or general-purpose AI) providers are struggling with their economics. We are seeing high valuations with minimal revenue. xAI is allegedly in talks to raise $3B at an $18B valuation, when it doesn’t even have a clear business model. Inflection had raised $1.5B at a $4B valuation despite revenue in the low millions. One of the key challenges for these companies is that it is very difficult for them to build a predictable source of income.
You need a significant amount of capex to build a model from scratch. Sam Altman has claimed GPT-4 cost $100 million to train, but that number is most probably very narrowly defined; the true cost is significantly higher. In the AI model business, you don’t have the economies of scale that cloud-based software providers enjoy. Anthropic’s margins, for example, are significantly lower than those of a typical cloud software provider. Overall, most of the big model providers are struggling with revenue growth and margins. The speed at which AI technology currently evolves is not helping either. There are very few “lock-in” effects at play today: any AI application can switch to a different model pretty easily, as you “just” have to swap to a new API (see the sketch below).
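To make that lack of lock-in concrete, here is a minimal sketch in Python. It assumes both providers expose an OpenAI-style chat-completions endpoint; the second provider, its URL, and its model name are illustrative placeholders, not real values.

```python
# Minimal sketch of how thin the "lock-in" is at the model layer.
# Assumes each provider exposes an OpenAI-style chat-completions API;
# the second entry is a hypothetical drop-in replacement.
import os
import requests

PROVIDERS = {
    "openai": {
        "url": "https://api.openai.com/v1/chat/completions",
        "model": "gpt-4",
        "key_env": "OPENAI_API_KEY",
    },
    "other-vendor": {  # placeholder URL and model name, for illustration only
        "url": "https://api.example-llm.com/v1/chat/completions",
        "model": "some-other-model",
        "key_env": "OTHER_API_KEY",
    },
}

def ask(provider: str, prompt: str) -> str:
    """Send the same prompt to whichever provider is configured."""
    cfg = PROVIDERS[provider]
    resp = requests.post(
        cfg["url"],
        headers={"Authorization": f"Bearer {os.environ[cfg['key_env']]}"},
        json={"model": cfg["model"], "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Switching models is a one-line change in the calling application:
# ask("openai", "Summarise this alert")  ->  ask("other-vendor", "Summarise this alert")
```

The application logic never changes; only the configuration does, which is exactly why model providers struggle to defend their pricing.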
The race for new models is not helping those companies either. Whilst they push hard to create new models at high capex cost, the old models lose value at an amazing pace. Don’t believe me? Just compare the API prices of GPT-3.5 Turbo and GPT-4 and you’ll get your answer. This is not a sustainable way to do business.
In the end, what we are seeing is big money being raised and burned more or less instantly, without adding much value. This type of situation can only lead to more consolidation.
The other big part is the business value AI actually brings. The massive hype, including from the mainstream press, is not helping at all: if you listened to it, you could basically close shop, replace your workers, and automate everything. Let’s be honest, we are not there yet. Very few companies have been able to leverage AI to drive their core business. Obviously there are exceptions, but given how the models currently work, that is not a big surprise.
The “Cargo Cult” Mentality
Here is a great post from Shyam Sankar - CTO of Palantir - about the “Cargo Cult”. He is basically saying that companies spend more time and investment on acquiring software than on integrating it into their processes to deliver real business value. I can’t overstate how important this is. Go speak to any big company and you will find hundreds of security products (taking security as an example, but the same applies to other domains), most of them misconfigured or not fully utilised, and worst of all, most of the time the software has changed the process rather than supported it. Pushing this further, you can argue that by trying to “force” AI adoption we are creating problems rather than solving them. You don’t need to deploy an LLM chatbot to get info on the latest CVE, and you don’t need Security Copilot to do basic correlation of events in your SIEM; a plain API query does the job, as the sketch below shows.
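As a concrete example of that point, a few lines of plain Python against the public NVD REST API already return the details of a given CVE, no chatbot in the loop. This is only a sketch assuming the NVD 2.0 API; the exact JSON field names should be verified against the current NVD documentation.

```python
# Minimal sketch: pulling details on a specific CVE straight from NVD,
# no LLM chatbot required. Assumes the public NVD 2.0 REST API; field
# names should be checked against the current NVD documentation.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_summary(cve_id: str) -> str:
    """Return the English description of a single CVE."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return f"{cve_id}: not found"
    descriptions = vulns[0]["cve"]["descriptions"]
    english = next(d["value"] for d in descriptions if d["lang"] == "en")
    return f"{cve_id}: {english}"

if __name__ == "__main__":
    # e.g. the Palo Alto GlobalProtect zero-day mentioned above
    print(cve_summary("CVE-2024-3400"))
```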
My guess for the near-term future is that we will see AI consolidation very soon. Those crazy numbers and valuations can’t hold for too long; reality will kick in. Then we will see smaller models, and the ability to solve complex business problems will be the game changer.
PS #1: One of the key players still missing is Apple. I’m very curious to see how they will approach this. My guess? Smaller AI models focused on specific tasks, running locally on your device. As Apple owns the whole stack from CPU/GPU to software, it can optimise and embed this setup deeply in the user experience. The real jackpot would be the ability to build applications around those models in an “AI agent” fashion, basically allowing users to deploy simple AI-driven workflows to solve their own problems.
Worth a full read
Five Eyes - Deploying AI Systems Securely
Key Takeaway
Secure deployment of AI systems requires careful setup, configuration, and adaptation to specific use cases and threats.
AI security must evolve to address new risks, alongside applying traditional IT security best practices.
Organizations should secure deployment environments with sound security principles and robust governance.
Continuous protection of AI systems involves validating systems before use and protecting against unauthorized access.
Secure operation and maintenance require strict access controls, user training, and regular audits and penetration testing.
Updating and patching AI systems regularly is crucial for maintaining security and performance standards.
The report emphasizes the importance of securing model weights and adopting a Zero Trust mindset for network protection.
Security in AI deployment is an ongoing process that requires adaptation to evolving threats and technologies.
Protecting sensitive components like model weights is essential to prevent theft or misuse of AI systems.
Security and Ten Laws of Technology
Key Takeaway
Moore's Law has fueled both the rise of cyber-attacks due to increased digital surfaces and advancements in security through enhanced processing capabilities.
Murphy's Law emphasizes the importance of resilience and continuous control monitoring in security to anticipate and mitigate failures.
Conway's Law suggests that the structure of an organization influences its security practices and product designs, necessitating cross-departmental coordination for risk management.
Hyrum’s Law highlights the unintended security consequences when users depend on all observable behaviors of a system, complicating updates and fixes.
Metcalfe's Law underlines the risks and opportunities in network effects, suggesting strategic approaches to security upgrades can leverage these networks.
Wirth’s Law observes that software gets slower faster than hardware gets faster; the resulting software bloat increases security vulnerabilities.
Cunningham's Law indicates that posting a wrong answer draws out corrections more readily than asking a question directly, which shapes how the security community shares knowledge.
Hyppönen’s Law reminds us that the smarter a device is, the more vulnerable it may be, advocating for a balance in device intelligence.
Kryder’s Law's observation on data storage growth emphasizes the challenge of protecting an ever-increasing amount of data.
Venables’ Law posits that understanding attackers' constraints offers strategic advantages in cybersecurity defense and deterrence.
Some more reading
Unlock AI Agent real power?! Long term memory & Self improving [YouTube] » READ
Former Uber cyber boss is now advising execs on avoiding his mistakes » READ
An introductory guide to fine-tuning LLMs » READ
Cyber market consolidation continues: Thoma Bravo to take UK cybersecurity company Darktrace private in a $5B deal » READ
Microsoft needs to win back trust: Years of security issues and mounting criticism have left Microsoft needing to overhaul its cybersecurity » READ
MITRE compromised by the Ivanti zero day - full report » READ
Unearthing APT44: Russia’s Notorious Cyber Sabotage Unit Sandworm » READ
Plan your career around problems » READ
Wisdom of the week
The very essence of leadership is that you have to have vision. You can’t blow an uncertain trumpet.
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! See you next week! Simon