Threat Intelligence

Adversarial Use of AI: What GTIC Found in 2025

Artificial intelligence has officially crossed the threshold from theoretical cybercrime tool to operational reality. According to Google’s Threat Intelligence Group (GTIG), adversaries ranging from nation states to cybercriminals are no longer just experimenting with AI for productivity. They are actively deploying AI enabled tools for reconnaissance, malware generation, obfuscation, and social engineering (Google Threat Intelligence Group, 2025).

This shift marks a decisive transformation in the cyber threat landscape. Let’s break down who is doing this, what they are doing, how they are doing it, and why.

Who Is Using AI for Cyber Operations

Nation-state actors are leading the charge. GTIG reports that more than 57 advanced persistent threat (APT) groups have experimented with AI. China’s APT41 and DRAGONBRIDGE have used AI for reconnaissance and information operations. Iran’s APT42 is described as one of the heaviest users, leveraging AI for phishing and malware development. North Korean groups like APT43 have weaponized AI for malware and even deepfake driven job scams. Russia’s APT28 has tested AI for payload crafting and encryption (GTIG, 2025).

Cybercriminals are also embracing AI. Tools like WormGPT, FraudGPT, and GhostGPT are sold on underground forums, enabling phishing, ransomware, and identity fraud. Prompt engineering “jailbreak packs” are now a commodity, allowing even novice criminals to bypass AI guardrails (GTIG, 2025).

Information operations (IO) actors use AI to scale propaganda. Groups like DRAGONBRIDGE and KRYMSKYBRIDGE generate localized fake news and manipulate social media narratives with generative models.

Finally, cyber mercenaries, private contractors often working as proxies, are experimenting with AI-enabled tooling to gain a competitive edge in espionage and influence campaigns.

What Capabilities Are Emerging

GTIG highlights several areas where AI is reshaping adversary tactics:

  • Phishing and social engineering: AI generates hyper personalized lures, localized in language and culture, and even supports deepfake video calls for scams. North Korean actors have used deepfake avatars to infiltrate Western companies (Google Threat Intelligence Group, 2025).
  • Malware generation: New families like PROMPTFLUX and PROMPTSTEAL integrate with LLM APIs to rewrite themselves in real time, evading antivirus detection. PROMPTLOCK, a proof-of-concept ransomware, uses a locally hosted LLM to generate encryption scripts at runtime.
  • Vulnerability discovery: AI accelerates scanning for zero-day and n-day vulnerabilities, helping adversaries iterate on exploits faster.
  • Information operations: Generative models produce propaganda optimized for SEO and virality, making disinformation harder to counter.
  • Guardrail bypass: Prompt injection and jailbreak techniques are sold in underground markets, enabling adversaries to sidestep safety restrictions.
  • Underground AI marketplaces: Subscription-based “dark AI” services like WormGPT and FraudGPT democratize access to sophisticated crimeware.

How Adversaries Are Doing It

Adversaries are exploiting both mainstream and underground AI ecosystems. Google Gemini is widely abused for content creation and code generation, while other models like DeepSeek and Qwen are popular in underground circles. Malware families such as PROMPTFLUX directly interface with APIs using stolen keys to request new obfuscation logic on demand (GTIG, 2025).

Platforms like Lovable and Vercel are repurposed to generate phishing websites. Underground forums sell “prompt packs” that exploit roleplay scenarios, encoding tricks, and metadata injections to bypass guardrails. Techniques like “policy puppetry” have proven universally effective across frontline LLMs (HiddenLayer, Forbes, 2025).

Why Adversaries Are Motivated

The motivations vary but converge on a few key themes:

  • Scale and speed: AI reduces the skill barrier, allowing low-level actors to launch sophisticated campaigns quickly. This democratization of cybercrime supercharges existing networks.
  • Evasion: Self-mutating malware defeats traditional defenses. AI editors change code structure and obfuscation style with each execution.
  • Strategic advantage: Nation-states use AI for espionage, reconnaissance, and influence operations. North Korea even leverages AI to infiltrate companies through synthetic job applications.
  • Profit: Cybercriminals weaponize AI to cut costs and expand reach, particularly in ransomware and fraud.
  • Disinformation: IO actors exploit generative AI to flood social media with localized propaganda at scale.

Defensive Mitigations

Google and its partners are responding with a mix of disruption and resilience. Malicious accounts and API keys are disabled upon detection. Classifiers and guardrails are strengthened by feeding observed abuses back into model training. Automated red teaming tools like Big Sleep and CodeMender are used to identify and patch vulnerabilities (DeepMind, 2025).

GTIG also emphasizes the importance of public-private partnerships and frameworks like Google’s Secure AI Framework (SAIF). Monitoring underground marketplaces and educating employees about AI risks are part of the broader defense strategy.

Conclusion

The adversarial use of AI is no longer speculative. It is operational, adaptive, and accelerating. Nation-states, cybercriminals, and IO actors are all exploiting AI to achieve their goals, whether espionage, profit, or influence. Their methods range from polymorphic malware and deepfake scams to prompt engineering arms races and subscription-based dark AI services.

The takeaway is clear: defenders must match attacker innovation with continuous AI-enabled monitoring, layered authentication, and collaborative intelligence sharing. As GTIG warns, the AI-cyber arms race is only beginning. Vigilance and innovation will determine who holds the advantage in this new domain (Google Threat Intelligence Group, 2025).

David

Recent Posts

How Cybersecurity Firms Are Using AI to Detect and Respond to Insider Threats

Insider threats have quietly become the most persistent and costly cybersecurity risk facing organizations today.…

14 hours ago

Malta Tax Office Data Breach: Error, Negligence, or Insider Threat?

When the Malta tax office mistakenly sent sensitive company details to around 7000 recipients, the…

1 day ago

How Identity Governance and PAM Solutions Stop Insider Threats in HR and Sensitive Roles

Insider threats are one of the most persistent risks facing organizations today. Whether malicious, negligent,…

2 days ago

The Knownsec Data Breach: A Wake-Up Call for Global Cybersecurity

In November 2025, the cybersecurity community was shaken by one of the most consequential breaches…

2 days ago

HR Insider Threats in 2025: The Hidden Risks Inside Your Organization

When most people think of insider threats, they picture rogue IT administrators or disgruntled engineers.…

2 days ago

When Zero‑Days Meet Insider Threats: The Real Risk Window

Cybersecurity headlines often focus on zero‑day exploits, those mysterious vulnerabilities that attackers discover before vendors…

3 days ago

This website uses cookies.