
Cyber AI Chronicle
By Simon Ganiere · 29th October 2025
Welcome Back!
A new attack called "Tainted Memories" can secretly inject malicious instructions into ChatGPT's Memory feature, persisting across all user sessions and devices until manually cleared.
This exploit represents a dangerous shift where AI convenience features become persistent attack surfaces, potentially allowing attackers to covertly manipulate everything from coding sessions to business decisions. Could this be the beginning of a new era where our AI assistants become unwitting accomplices?
In today's AI recap:
AI's Tainted Memories
What you need to know: Researchers at LayerX discovered a critical flaw in OpenAI’s Atlas browser that lets attackers secretly plant malicious instructions into ChatGPT’s Memory. The "Tainted Memories" attack leverages a CSRF-style weakness, so injected instructions persist across sessions and devices until the memory is cleared; a minimal sketch of the underlying CSRF pattern follows this section.
Why is it relevant?:
The exploit’s main danger is its persistence; once injected, malicious instructions survive across devices and browser sessions, silently influencing later prompts.
The flaw is more consequential because Atlas’s built-in phishing protections performed poorly in LayerX’s tests (blocking only 5.8% of malicious pages) and because the attack targets the very Memory feature designed to make ChatGPT more helpful by retaining user details.
In practice, an attacker could manipulate a developer's interactive 'vibe coding' session so the tainted memory covertly injects backdoors or data-exfiltration steps into generated code.
Bottom line: This is a new class of threat where convenience features become persistent attack surfaces. Treat AI-native browsers and memory-enabled assistants as critical infrastructure and apply the same security scrutiny (segmentation, monitoring, and explicit memory controls) you’d use for other trusted developer tools.
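To make the mechanism concrete, here is a minimal sketch of the CSRF weakness class LayerX describes. Everything in it is hypothetical Flask stand-in code (the /login and /memory/append routes and the assistant.example origin are invented for illustration, not OpenAI's actual API); it shows why a state-changing "write to memory" endpoint that trusts the session cookie alone can be triggered by a page the logged-in victim merely visits.
# Minimal sketch of the CSRF-style weakness class behind "Tainted Memories".
# Route names are hypothetical -- this is NOT OpenAI's API. It shows why a
# state-changing "write to memory" endpoint that only checks the session
# cookie can be fired by an attacker-controlled page in the victim's session.
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "demo-only"

MEMORY: list[str] = []  # stands in for the assistant's persistent memory store

@app.post("/login")
def login():
    # Demo-only login so the flow can be exercised locally.
    session["user"] = request.form.get("user", "demo")
    return "ok", 200

@app.post("/memory/append")
def append_memory():
    # Vulnerable pattern: the only check is "does a session cookie exist?".
    # Absent SameSite or token protections, the browser attaches that cookie
    # to cross-site requests too, so a hidden form or script on an attacker's
    # page can persist an attacker-chosen "memory" on the victim's behalf.
    if "user" not in session:
        return "unauthorized", 401
    MEMORY.append(request.form["note"])
    return "stored", 200

@app.post("/memory/append_safe")
def append_memory_safe():
    # Standard mitigations: a per-session CSRF token plus an Origin check.
    if "user" not in session:
        return "unauthorized", 401
    if request.form.get("csrf_token") != session.get("csrf_token"):
        return "forbidden", 403
    if request.headers.get("Origin") != "https://assistant.example":
        return "forbidden", 403
    MEMORY.append(request.form["note"])
    return "stored", 200
The mitigations in the second handler are standard controls for this weakness class, not a description of how OpenAI has patched Atlas.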
What 100K+ Engineers Read to Stay Ahead
Your GitHub stars won't save you if you're behind on tech trends.
That's why over 100K engineers read The Code to spot what's coming next.
Get curated tech news, tools, and insights twice a week
Learn about emerging trends you can leverage at work in just 10 mins
Become the engineer who always knows what's next
Copilot Gets Phished
What you need to know: Security researchers have detailed a new phishing technique called 'CoPhish' that weaponizes Microsoft Copilot Studio agents. The method tricks users into approving fraudulent OAuth requests to steal session tokens from what appear to be trusted Microsoft services.
Why is it relevant?:
The attack’s power comes from its deceptive legitimacy, as malicious links are hosted on legitimate Microsoft domains, which significantly lowers user suspicion and bypasses typical domain-based security filters.
Attackers customize a Copilot agent's "Login" function to redirect unsuspecting users to an OAuth consent prompt for an attacker-controlled application, designed to capture their credentials and session tokens (the consent-URL sketch after this section shows why the link looks legitimate).
While Microsoft plans a fix, high-privilege accounts remain a key target, as Application Administrators can still approve risky permissions that bypass the default security policies meant to protect standard users.
Bottom line: This attack shows how adversaries are creatively exploiting the flexible, user-configurable features of trusted AI platforms. Security teams must expand their threat models to account for legitimate services being used as wrappers for malicious activity.
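To illustrate why the lure looks trustworthy, here is a short sketch of the kind of consent URL an attacker-registered application can generate. The client_id, redirect URI, and scopes below are hypothetical placeholders, not values from the CoPhish research; the point is that the consent screen itself is served from login.microsoftonline.com, and only the application identity, requested scopes, and redirect target belong to the attacker.
# Sketch of why CoPhish-style consent links look trustworthy: the consent page
# lives on Microsoft's own domain, while the app identity, scopes, and redirect
# URI are attacker-controlled. All values below are hypothetical placeholders.
from urllib.parse import urlencode

def build_consent_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    params = {
        "client_id": client_id,          # attacker-registered application
        "response_type": "code",
        "redirect_uri": redirect_uri,    # attacker-controlled endpoint receives the grant
        "scope": " ".join(scopes),       # broad scopes are the real payload
        "state": "copilot-login",
    }
    # Legitimate Microsoft domain -- this is what defeats domain-based filtering.
    return "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + urlencode(params)

url = build_consent_url(
    client_id="00000000-0000-0000-0000-000000000000",
    redirect_uri="https://attacker.example/callback",
    scopes=["openid", "offline_access", "Mail.ReadWrite", "User.Read"],
)
print(url)
Defensively, auditing which applications users have already consented to and restricting who may approve high-risk permissions (per the Application Administrator point above) are the corresponding levers.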
Your AI Chats Aren't Private
What you need to know: A security researcher discovered that over 143,000 user conversations from popular AI chatbots, including ChatGPT and Claude, were publicly accessible. This incident highlights a significant data-leakage risk from seemingly private interactions.
Why is it relevant?:
The data exposure happened because users created shareable links for their conversations, which were then scraped and archived on public websites like Archive.org.
The exposed conversations were not just harmless chats; they contained sensitive information, including AWS Access Key IDs and other API tokens (a simple pre-share check for such patterns is sketched below).
This underscores the risk of employees using public AI tools for work, inadvertently feeding proprietary code, business strategies, and customer data into systems outside their company's control.
Bottom line: The rapid integration of AI is creating new and often hidden pathways for data exposure that traditional security measures may not cover. Organizations must establish clear AI usage policies and train employees to prevent sensitive information from becoming public training data.
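One practical control is a lightweight pre-share scan that flags obvious credential patterns before a conversation is published. A minimal sketch follows; the regexes are illustrative rather than exhaustive, and the sample uses AWS's documented example Access Key ID.
# Minimal pre-share check: scan a conversation transcript for obvious
# credential patterns before generating a public share link.
# Patterns are illustrative, not exhaustive.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "generic_api_token": re.compile(r"\b(?:api[_-]?key|token)\s*[:=]\s*\S{16,}", re.IGNORECASE),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(transcript: str) -> list[str]:
    # Return the names of any credential patterns found in the transcript.
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(transcript)]

if __name__ == "__main__":
    sample = "here is my key AKIAIOSFODNN7EXAMPLE, can you debug my boto3 script?"
    print(find_secrets(sample))  # ['aws_access_key_id']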
The End of Cybersecurity?
What you need to know: Former CISA head Jen Easterly suggests that AI’s ability to automatically find and fix software flaws could make security breaches so rare that it effectively leads to “the end of cybersecurity” as we know it.
Why is it relevant?:
Easterly argues that the industry doesn't have a cybersecurity problem, but rather a software quality problem, with vendors prioritizing features over security for decades.
The core idea is that AI can systematically scan code to identify and remediate vulnerabilities, tackling the mountain of technical debt left by insecure development practices.
While AI also gives attackers new tools, Easterly believes that if governed correctly, AI-powered defense will ultimately tip the balance in favor of security teams.
Bottom line: This vision reframes the security challenge from reactive incident response to proactive, automated remediation at the code level. It signals a future where security professionals' roles may shift from fighting fires to overseeing AI systems that prevent them from starting.
The Shortlist
Dreadnode demonstrated a "Living Off the Land" technique where malware can autonomously perform privilege escalation using local AI models and inference libraries, such as Phi-3 and ONNX Runtime, found on Copilot+ PCs.
NeuralTrust discovered a prompt injection vulnerability in OpenAI's Atlas browser where the omnibox can be jailbroken by disguising malicious instructions as a malformed URL, tricking the agent into executing them with high trust.
Researchers found that major AI chatbots, including ChatGPT and Gemini, are pushing Russian state propaganda by citing sanctioned media outlets like RT and Sputnik as sources in response to queries about the war in Ukraine.
Anthropic partnered with the US government to develop mechanisms designed to prevent its Claude AI from providing instructions for building a nuclear weapon, though experts are mixed on the project's necessity and effectiveness.
AI Influence Level
Level 4 - AI Created, Human Basic Idea / The whole newsletter is generated via an n8n workflow based on publicly available RSS feeds, with a human in the loop reviewing the selected articles and subjects.
Reference: AI Influence Level from Daniel Miessler
Till next time!
Project Overwatch is a cutting-edge newsletter at the intersection of cybersecurity, AI, technology, and resilience, designed to navigate the complexities of our rapidly evolving digital landscape. It delivers insightful analysis and actionable intelligence, empowering you to stay ahead in a world where staying informed is not just an option, but a necessity.
