PRESENTED BY

Cyber AI Chronicle
By Simon Ganiere · 25th August 2024
Welcome back!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
Table of Contents
What I learned this week
TL;DR
This week, we delve into the security features of Microsoft Copilot, highlighting how its data access controls, encryption protocols, and compliance checks are setup and what they cover. It does, however, still raise significant questions on the implication of existing security challenges in organisation and how Copilot will compound those » READ MORE
I somehow ended up reading some more strategic articles. Including a couple that questioned the role of cyber security and the current approach. I truly believe that the role of the CISO will need to evolve significantly in the coming years and to focus more and more on resilience and ensuring the business can sustain all of the external threats (cyber and non-cyber). The CISO is dead! Long live the Chief Resilience Officer. Obviously, thought-provoking reads so i’m curious to understand other people’s view on it so feel free to comment or share your views.
I like the weekly SentinelOne blog post name “The Good, the Bad and the Ugly in Cybersecurity”. As for once it’s also highlighting some of the good and progress made! Also a fresh reminder that we can all go chase down this APT but basic scams and frauds are still the type of attacks that generate the biggest monetary loss and impact on society at large.
I’m a big fan of Daniel Miessler and can only recommend to subscribe to his newsletter but also checked out the tools he released. I have already mentioned several time Fabric, which I’m using daily. Check out the latest version as it moves to Go rather than Python - simpler install and less dependency management. He released some interesting new tool recently as well: Substrate and Harness.
Microsoft Copilot Security Features: Addressing Real-World Cybersecurity Challenges
Microsoft Copilot is deeply integrated into the Microsoft 365 ecosystem. It comes with a range of built-in security features designed to protect organizational data. Let’s explore these key security features, highlighting how they (are trying to) address real-world cybersecurity challenges and considering how they may evolve to meet future AI-related threats. The below is based on the Microsoft documentation, whilst not aiming to be a sales pitch it gives you the view from Microsoft about their product. Next week i’ll cover more in depth the potential gaps and the threat scenarios.
Data Access and Permissions Management
One of the foundational security features of Microsoft Copilot is its strict adherence to data access and permission management. Operating within the Microsoft Graph, an API-driven platform, Copilot connects to various data sources within the Microsoft 365 environment while respecting all existing access controls. This ensures that Copilot only accesses data that users have permission to view, aligning with the organization’s security protocols.
Assuming your access management practices are robust and well implemented, this means that Copilot can help prevent unauthorized data access by ensuring that its outputs are informed solely by data the user is authorized to access. For example, if an employee inadvertently gains access to sensitive files, Copilot's strict adherence to permission can prevent those files from being included in AI-generated outputs.
Data Privacy and Compliance
Data privacy is a critical concern for any organization, especially when implementing AI tools that interact with sensitive information. Microsoft Copilot ensures that customer data is never used to train the underlying Large Language Models (LLMs). Unlike some AI systems that continuously learn from user interactions, Copilot’s LLMs are pre-trained on large datasets and do not use customer prompts, responses, or data for further training. This approach safeguards organizational data, ensuring that it remains private and contained within the enterprise’s secure environment.
This means that in principle even as Copilot processes and analyzes large volumes of data, the risk of data leakage or inadvertent exposure outside the organization is minimized. For industries with stringent regulatory requirements, such as finance or healthcare, Copilot’s compliance with standards like GDPR offers an added layer of reassurance.
Data Handling and Encryption
Encryption is a cornerstone of data security, and Microsoft Copilot leverages robust encryption protocols to protect data both in transit and at rest. All data processed by Copilot, including user prompts and AI-generated responses, is encrypted using industry-standard methods. This encryption ensures that even if data were intercepted during transmission, it would be unreadable to unauthorized parties.
If an adversary attempts to intercept data between Copilot and the Microsoft Graph, encryption protect the integrity and confidentiality of that data. Future advancements might include more sophisticated encryption methods tailored specifically to AI-generated data and interactions.
Security and Compliance Checks
Beyond encryption and access controls, Microsoft Copilot incorporates additional security, compliance, and privacy reviews during its post-processing phase. After the LLM generates a response, Copilot performs a series of checks to ensure that the output adheres to the organization’s security policies and ethical guidelines.
These post-processing safeguards are crucial in real-world settings where the potential for generating harmful, biased, or inappropriate content exists. For instance, in a scenario where Copilot generates a summary from sensitive documents, these security checks help ensure that the output does not inadvertently include confidential or ethically sensitive information.
Responsible AI Practices
Microsoft has committed to developing AI that operates within ethical boundaries, and Copilot is no exception. The system is built with responsible AI principles in mind, incorporating measures to prevent the generation of biased or harmful content. These principles are embedded into Copilot’s architecture, ensuring that the AI supports positive and ethical outcomes in its interactions with users.
In practice, this means that Copilot is designed to avoid amplifying biases or producing content that could lead to ethical or legal concerns. For example, in generating responses that involve sensitive topics, Copilot’s responsible AI safeguards work to mitigate the risk of producing biased outputs.
Threat Protection and Abuse Monitoring
Finally, Microsoft Copilot includes real-time abuse monitoring features designed to detect and prevent malicious activity. This includes protection against prompt injections and other attempts to exploit the AI system. While Copilot itself does not allow standing access to customer data for monitoring purposes, it is equipped with safeguards to mitigate potential threats as they occur.
Real-time monitoring is essential for maintaining the security of Copilot’s operations. For instance, if an attacker attempts to manipulate the AI through prompt injection, Copilot’s abuse monitoring features can detect and neutralize the threat before it causes harm. As AI-related threats continue to evolve, the real-time monitoring capabilities of systems like Copilot will need to become increasingly sophisticated, potentially incorporating AI-driven anomaly detection and adaptive security responses.
Compounding on Security Challenges?
While Microsoft Copilot introduces powerful capabilities to enhance productivity and streamline workflows, it also amplifies existing security challenges within an organization. In many enterprises, there are often gaps in access management, data classification, and awareness of who has access to sensitive information. Also Copilot operates at scale, quickly processing vast amounts of organizational data to generate insights and responses.
This means that any existing misconfigurations in access permissions or lapses in data classification can lead to unintended consequences. For instance, an employee may inadvertently have broader access to sensitive information than they realize (e.g. they changed role) and Copilot could use this data to generate content that might be shared more widely than intended. This scenario creates new layers of insider threat risks, where data could be exposed or misused, either unintentionally or maliciously.
To mitigate these risks, organizations must ensure that their access management practices are robust, data is properly classified, and employees are fully aware of the sensitivity of the information they interact with. By addressing these foundational security issues, companies can better control the potential risks introduced by Copilot and fully leverage its capabilities in a secure and compliant manner. As AI continues to evolve, so too must the security measures that protect it, ensuring that tools like Copilot remain beneficial without becoming liabilities.
Worth a full read
Palo Alto - Incident Response 2024 Report
Key Takeaway
Software and API vulnerabilities accounted for 38.6% of initial access vectors.
Compromised credentials were used in 20.5% of cases, a fivefold increase over two years.
Social engineering and phishing accounted for 17% of attacks last year.
Malware was implicated in 56% of all documented security incidents in 2023.
Ransomware accounted for 33% of malware-related security incidents.
Median time between compromise and exfiltration decreased from nine days to two days.
45% of data exfiltration occurred less than a day after compromise.
85% of organizations leave Microsoft Remote Desktop exposed to the internet monthly.
Over 90% of Security Operations Centers (SOCs) still rely on manual threat management.
Two of the top five Common Vulnerabilities and Exposures (CVEs) exploited in 2023 were from 2020 and 2021.
Global 2000: Industry Titans Battle the Beast of Supply Chain Cyber Risk
Key Takeaway
Global 2000 companies' cybersecurity is deeply interconnected with third-party vendors.
The financial impact of breaches is exponentially higher for large corporations.
The normalization of technology usage increases vulnerability to cyber threats.
Global 2000 companies' interconnectedness heightens concentration risk.
Multi-party breaches are significantly costlier than single-party incidents.
Managing third-party risks requires continuous and comprehensive monitoring.
Trust and dependency on widely-used technologies can be exploited by attackers.
Cyber resilience hinges on understanding and managing supply chain dependencies.
Effective cybersecurity involves knowing both external threats and internal vulnerabilities.
Concentrated risk management is essential for maintaining economic stability.
Compromising LLMs at large scale
Key Takeaway
Ethical considerations in LLMs are crucial for ensuring societal benefits and preventing harm.
The complexity and scale of LLMs make detecting vulnerabilities challenging.
Continuous monitoring and updating are essential to address new vulnerabilities in LLMs.
Collaboration among technologists, ethicists, and policymakers is vital for responsible LLM deployment.
Public awareness can mitigate the misuse and misunderstanding of LLMs.
Rapid LLM advancements outpace the development of regulatory and ethical frameworks.
Biases in LLMs can reinforce social inequalities if not properly managed.
Ethical frameworks for LLMs should adapt to evolving technologies.
Deploying LLMs in sensitive areas requires stringent ethical standards.
The interdisciplinary approach is crucial for addressing LLM ethical issues.
Some more reading
Leaked environment variables allow large-scale extortion operation of cloud environments » READ
Post quantum cryptography » READ
FIN7: The Truth Doesn’t Need to be so STARK » READ
Cybersecurity should return to reality and ditch the hype » READ
How Phishing Attacks Adapt Quickly to Capitalise on Current Events » READ
Wisdom of the week
There comes a point where we need to stop just pulling people out of the river.
We need to go upstream and find out why they’re falling in.
Contact
Let me know if you have any feedback or any topics you want me to cover. You can ping me on LinkedIn or on Twitter/X. I’ll do my best to reply promptly!
Thanks! see you next week! Simon

