This is the Q3 update to the previous article. Note that at the time of publication, only Anthropic has publicly released an updated threat landscape report. I'll update this article if the other key providers (OpenAI and Google) release updated reports on their side.
This article reuses the same approach as the previous one to ensure consistency; see the bottom of the article for more details.

The threat landscape continues to evolve at a rapid pace in 2025. On top of social engineering and information warfare, the latest report from Anthropic shows the following evolution:

  • Adversaries have moved beyond content generation to autonomous execution of complex attack chains

  • Low-skill actors now wield sophisticated tools through AI-powered criminal-as-a-service platforms

  • Traditional detection timelines are compressed as AI enables rapid iteration and adaptation

  • Employment fraud and synthetic persona creation at scale challenge fundamental trust assumptions

Threat Actor Evolution

North Korea (DPRK)

  • Q1 2025: Basic AI résumé/cover-letter generation; interview scripts; coding assistance.

  • Q2 2025: Attempts to automate persona/résumé loops; recruiting proxy operators in Africa/North America; tighter remote-work OPSEC.

  • Q3 2025: Complete AI dependence across the job lifecycle (backgrounds → interviews → coding → daily comms); a separate DPRK malware campaign (“Contagious Interview”) was auto-blocked before use.

  • Assessment: This is the most concerning development: an entire nation-state economy depending on AI to simulate technical competence at scale. The “Contagious Interview” campaign demonstrates a parallel offensive capability.

China

  • Q1 2025: Content generation; Latin America media infiltration via sponsored pieces.

  • Q2 2025: Multi-operation sophistication (“Sneer Review,” “VAGue Focus”); AI-generated internal docs; APT scripting/recon.

  • Q3 2025: Nine-month critical-infrastructure campaign against Vietnam, with AI used across nearly all phases of the operation.

  • Assessment: China's approach demonstrates strategic patience combined with comprehensive AI integration. The Vietnam campaign shows how AI enables sustained, complex operations against critical infrastructure.

Iran

  • Q1 2025: Heavy model use for phishing/defense research; high IO volume; evasion by rewriting.

  • Q2 2025: IO continued (e.g., STORM-2035); fewer novel behaviors than in Q1.

  • Q3 2025: Low visibility in Q3 sources; no new AI TTPs highlighted in this set.

  • Assessment: Iran's reduced AI visibility may indicate OPSEC improvements or a strategic pivot; this uncertainty is itself a risk factor. Note: the direct impact of the conflict with Israel and the US should not be underestimated.

Russia

  • Q1 2025: Limited AI engagement; basic malware refactoring/encryption.

  • Q2 2025: German 2025 election IO (“Helgoland Bite”); “ScopeCreep” malware iterated with ChatGPT under strict account hygiene.

  • Q3 2025: Criminal ecosystem on Russian-speaking forums uses AI to mine stealer logs and profile victims; state-aligned AI usage still minimal in these sources.

  • Assessment: Low visibility, possibly reflecting an OPSEC focus.

New Actors

  • Q1 2025: Cambodia romance/task scams; Ghana election engagement simulation.

  • Q2 2025: Influence-as-a-Service maturation; Philippines political ops (“High Five”).

  • Q3 2025: AI-enhanced fraud stack: a Telegram romance-scam bot touting “high EQ” replies; a carding platform with AI-built resilient infrastructure; a synthetic identity service.

  • Assessment: The democratization of sophisticated capabilities through turnkey services fundamentally alters the threat landscape's accessibility.

What changed?

The evolution since the start of the year is clear, and it is moving rapidly. In the non-threat-actor world, AI lets people without a core skill set build new things at speed and scale. Unsurprisingly, threat actors can do the same, and we are now clearly seeing this evolution on the threat side.

We have moved from foundational capabilities in late 2024 and early 2025 to full operations running at scale.

August 2024–Q1 2025: Foundational Capabilities

  • Mainstream media content laundering (China)

  • AI detector evasion techniques (Iran)

  • Clandestine identity creation (DPRK)

Q2 2025: Operational Integration

  • Automated persona generation loops (DPRK)

  • Internal operations optimization (multiple actors)

  • Advanced OPSEC implementation (Russia, others)

Q3 2025: Autonomous Execution

  • Agentic AI performing live intrusions ("Vibe Hacking")

  • AI-driven extortion optimization

  • No-code ransomware with advanced evasion

Risk Management Framework: Adaptive Defense Strategy

Immediate Risk Mitigation (Next 90 Days)

Detection Enhancement: Traditional signature-based detection fails against AI-generated content and behavior. Focus on pattern recognition (a hunting sketch follows this list):

  • Hunt for repetitive, programmatic command sequences that suggest AI-driven execution

  • Monitor for rapid iteration cycles indicative of AI-assisted debugging

  • Flag configuration artifacts that reference attack frameworks by name
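
To make the first two hunts concrete, here is a minimal sketch in Python, assuming process command-line telemetry is already available as (host, timestamp, command) records; the field names, thresholds, and framework keyword list are illustrative assumptions, not a specific product's schema.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Illustrative thresholds; tune against your own baseline telemetry.
MIN_RUN_LENGTH = 20        # commands in a burst before it is worth flagging
MAX_GAP_SECONDS = 2.0      # near machine-speed spacing between commands
SIMILARITY_FLOOR = 0.9     # how alike consecutive commands must be
FRAMEWORK_KEYWORDS = ("mimikatz", "cobalt strike", "sliver", "metasploit")

def similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two command lines (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def hunt_programmatic_sequences(events):
    """Flag hosts emitting long, evenly spaced runs of near-identical commands.

    `events` is an iterable of (host, epoch_seconds, command_line) tuples,
    assumed pre-sorted by timestamp; the schema is an assumption.
    """
    runs = defaultdict(list)   # host -> current run of (timestamp, command)
    findings = []

    def flush(host):
        run = runs[host]
        if len(run) >= MIN_RUN_LENGTH:
            findings.append(("programmatic-run", host, len(run), run[0][0], run[-1][0]))
        run.clear()

    for host, ts, cmd in events:
        run = runs[host]
        # Close the current run if the cadence breaks or the command changes shape.
        if run and (ts - run[-1][0] > MAX_GAP_SECONDS
                    or similarity(cmd, run[-1][1]) < SIMILARITY_FLOOR):
            flush(host)
        runs[host].append((ts, cmd))
        # Separately flag explicit references to known attack frameworks.
        if any(k in cmd.lower() for k in FRAMEWORK_KEYWORDS):
            findings.append(("framework-reference", host, ts, cmd))
    for host in list(runs):
        flush(host)
    return findings
```

The point is the shape of the hunt (machine-speed cadence, near-identical commands, explicit framework references) rather than the exact thresholds, which should be tuned against your own environment.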

Identity Verification Hardening: Employment fraud at DPRK's scale demands fundamental verification improvements (a geovelocity sketch follows this list):

  • Implement multi-factor presence verification beyond video calls

  • Deploy geovelocity monitoring with stricter thresholds

  • Require device attestation for remote work scenarios
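
As an illustration of the geovelocity bullet, the sketch below flags consecutive logins that imply an implausible travel speed. The login schema and the 900 km/h ceiling are assumptions; tighten the ceiling to implement the stricter posture described above.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_KMH = 900.0   # roughly airliner speed; lower it for stricter thresholds

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def geovelocity_alerts(logins, max_kmh=MAX_PLAUSIBLE_KMH):
    """Yield pairs of consecutive logins that imply implausible travel speed.

    `logins` is a list of dicts with keys user, ts (epoch seconds), lat, lon,
    sorted by ts. The schema is illustrative, not a specific product's API.
    """
    last_seen = {}
    for event in logins:
        prev = last_seen.get(event["user"])
        if prev is not None:
            hours = max((event["ts"] - prev["ts"]) / 3600.0, 1e-6)
            km = haversine_km(prev["lat"], prev["lon"], event["lat"], event["lon"])
            if km / hours > max_kmh:
                yield prev, event
        last_seen[event["user"]] = event
```

A flagged pair is better treated as a trigger for step-up verification (presence checks, device attestation) than as an outright block, since VPN egress points will generate false positives.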

Supply Chain Resilience: AI-powered criminal services create new dependency risks (a simple inventory sketch follows this list):

  • Map vendor AI dependencies and assess third-party AI usage policies

  • Establish model-provider integration for takedown coordination where available

  • Develop contingency plans for AI service disruptions
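
One lightweight way to start the dependency mapping is a machine-readable inventory that can be queried for gaps. The schema below is a hypothetical sketch, not a standard, and the vendor names in the usage example are made up.

```python
from dataclasses import dataclass, field

@dataclass
class VendorAIDependency:
    """One third-party service and how it uses AI (illustrative schema)."""
    vendor: str
    service: str
    model_providers: list = field(default_factory=list)  # underlying AI/LLM providers
    ai_usage_policy_reviewed: bool = False
    contingency_plan: str = ""   # fallback if the AI service is disrupted

def gaps(inventory):
    """Return dependencies whose AI usage is unreviewed or has no fallback plan."""
    return [d for d in inventory
            if not d.ai_usage_policy_reviewed or not d.contingency_plan]

# Usage example with hypothetical vendors:
inventory = [
    VendorAIDependency("ExampleHRIS", "resume screening",
                       model_providers=["hosted LLM"],
                       ai_usage_policy_reviewed=True,
                       contingency_plan="manual screening runbook"),
    VendorAIDependency("ExampleHelpdesk", "ticket triage"),
]
for d in gaps(inventory):
    print(f"Review needed: {d.vendor} / {d.service}")
```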

Medium-Term Strategic Adaptation (6-12 Months)

Capability-Based Defense Architecture: Move beyond threat-actor-centric models to capability-focused frameworks:

  • Assume AI amplification across all threat categories

  • Build detection systems that identify AI-enhanced techniques regardless of actor

  • Develop response playbooks that account for accelerated attack timelines

Negotiation Preparedness: AI-driven extortion will become more sophisticated and data-informed:

  • Pre-position negotiation frameworks with AI-aware protocols

  • Monitor for financial data exfiltration as a precursor to AI-calculated ransom demands

  • Establish communication channels that can handle AI-generated negotiation tactics

Media Risk Management: Content laundering through sponsored placements requires proactive monitoring (a near-duplicate check sketch follows this list):

  • Track narrative propagation across regional media ecosystems

  • Establish relationships with media partners in strategic markets

  • Develop rapid response capabilities for synthetic content campaigns
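
For narrative propagation tracking, a rough starting point is flagging near-duplicate article bodies across different outlets, a common fingerprint of sponsored-content reuse. The sketch below uses simple pairwise similarity; the 0.85 threshold and input shape are assumptions, and a production pipeline would use shingling or MinHash to scale beyond a small sample.

```python
from difflib import SequenceMatcher
from itertools import combinations

NEAR_DUPLICATE = 0.85   # illustrative similarity threshold

def near_duplicates(articles):
    """Flag article pairs from different outlets with near-identical body text.

    `articles` is a list of dicts with keys outlet, url, text (assumed schema).
    """
    hits = []
    for a, b in combinations(articles, 2):
        if a["outlet"] == b["outlet"]:
            continue  # same-outlet syndication is expected; cross-outlet reuse is not
        score = SequenceMatcher(None, a["text"], b["text"]).ratio()
        if score >= NEAR_DUPLICATE:
            hits.append((a["url"], b["url"], round(score, 2)))
    return hits
```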

Strategic Recommendations: Risk as Competitive Advantage

Invest in AI Defensive Capabilities: Organizations that successfully integrate AI into their defensive operations will maintain competitive advantage. This isn't optional—it's table stakes.

Rethink Identity Infrastructure: The fundamental assumptions underlying digital identity are under assault. Organizations must architect new trust models that assume sophisticated synthetic identities.

Embrace Velocity Matching: Traditional monthly or quarterly security reviews are insufficient when adversaries iterate daily. Risk management must match threat evolution speed.

Leverage Ecosystem Intelligence: Model providers and cloud platforms have unique visibility into malicious AI usage. Building relationships and integration capabilities with these platforms creates strategic intelligence advantages.

Operational Reality Check

Having managed security operations through multiple threat evolution cycles, I see this AI integration as a significant paradigm shift. The organizations that recognize this as a fundamental change, rather than an incremental improvement, will be the ones that maintain competitive position.

The risk isn't just about protecting against AI-enabled attacks. It's about ensuring your organization's defensive capabilities evolve at the same pace as the threat landscape. Those who treat this as a technology problem rather than a strategic imperative will find themselves fundamentally disadvantaged.

AI Influence Level

  • Level 3 - AI Created, Human Full Structure
