Artificial intelligence has officially crossed the threshold from theoretical cybercrime tool to operational reality. According to Google’s Threat Intelligence Group (GTIG), adversaries ranging from nation-states to cybercriminals are no longer just experimenting with AI for productivity. They are actively deploying AI-enabled tools for reconnaissance, malware generation, obfuscation, and social engineering (Google Threat Intelligence Group, 2025).
This shift marks a decisive transformation in the cyber threat landscape. Let’s break down who is doing this, what they are doing, how they are doing it, and why.
Nation-state actors are leading the charge. GTIG reports that more than 57 advanced persistent threat (APT) groups have experimented with AI. China’s APT41 and DRAGONBRIDGE have used AI for reconnaissance and information operations. Iran’s APT42 is described as one of the heaviest users, leveraging AI for phishing and malware development. North Korean groups like APT43 have weaponized AI for malware and even deepfake-driven job scams. Russia’s APT28 has tested AI for payload crafting and encryption (GTIG, 2025).
Cybercriminals are also embracing AI. Tools like WormGPT, FraudGPT, and GhostGPT are sold on underground forums, enabling phishing, ransomware, and identity fraud. Prompt engineering “jailbreak packs” are now a commodity, allowing even novice criminals to bypass AI guardrails (GTIG, 2025).
Information operations (IO) actors use AI to scale propaganda. Groups like DRAGONBRIDGE and KRYMSKYBRIDGE generate localized fake news and manipulate social media narratives with generative models.
Finally, cyber mercenaries (private contractors often working as state proxies) are experimenting with AI-enabled tooling to gain a competitive edge in espionage and influence campaigns.
GTIG highlights several areas where AI is reshaping adversary tactics.
Adversaries are exploiting both mainstream and underground AI ecosystems. Google Gemini is widely abused for content creation and code generation, while other models like DeepSeek and Qwen are popular in underground circles. Malware families such as PROMPTFLUX directly interface with APIs using stolen keys to request new obfuscation logic on demand (GTIG, 2025).
Platforms like Lovable and Vercel are repurposed to generate phishing websites. Underground forums sell “prompt packs” that exploit roleplay scenarios, encoding tricks, and metadata injections to bypass guardrails. Techniques like “policy puppetry” have proven broadly effective across frontier LLMs (HiddenLayer, Forbes, 2025).
The motivations vary but converge on a few key themes: espionage for nation-states, profit for cybercriminals, and influence for IO actors.
Google and its partners are responding with a mix of disruption and resilience. Malicious accounts and API keys are disabled upon detection. Classifiers and guardrails are strengthened by feeding observed abuses back into model training. Automated red teaming tools like Big Sleep and CodeMender are used to identify and patch vulnerabilities (DeepMind, 2025).
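The detection-and-revocation loop described above can be illustrated with a minimal sketch. The abuse markers, thresholds, and function names below are illustrative assumptions for the sake of example, not Google’s actual classifiers or detection logic:

```python
from dataclasses import dataclass

# Illustrative signals only: real classifiers are trained on observed
# abuse, not a static keyword list.
ABUSE_MARKERS = (
    "obfuscate this code",
    "rewrite to evade detection",
    "ignore previous instructions",
)

@dataclass
class KeyActivity:
    """Recent prompt history associated with one API key."""
    key_id: str
    recent_prompts: list

def flag_abusive_keys(activity: list, threshold: int = 3) -> list:
    """Return IDs of keys whose prompts hit abuse markers at least
    `threshold` times, queuing them for revocation."""
    flagged = []
    for record in activity:
        hits = sum(
            1
            for prompt in record.recent_prompts
            for marker in ABUSE_MARKERS
            if marker in prompt.lower()
        )
        if hits >= threshold:
            flagged.append(record.key_id)
    return flagged

# Example: one key repeatedly requests obfuscation logic (the PROMPTFLUX
# pattern), another looks benign.
activity = [
    KeyActivity("key-123", ["Obfuscate this code",
                            "Obfuscate this code differently",
                            "Rewrite to evade detection"]),
    KeyActivity("key-456", ["Summarize this article"]),
]
print(flag_abusive_keys(activity))  # → ['key-123']
```

In practice this heuristic step would be one signal among many; the point is the feedback loop GTIG describes, where detected abuse both disables the offending key and feeds back into stronger guardrails.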
GTIG also emphasizes the importance of public-private partnerships and frameworks like Google’s Secure AI Framework (SAIF). Monitoring underground marketplaces and educating employees about AI risks are part of the broader defense strategy.
The adversarial use of AI is no longer speculative. It is operational, adaptive, and accelerating. Nation-states, cybercriminals, and IO actors are all exploiting AI to achieve their goals, whether espionage, profit, or influence. Their methods range from polymorphic malware and deepfake scams to prompt engineering arms races and subscription-based dark AI services.
The takeaway is clear: defenders must match attacker innovation with continuous AI-enabled monitoring, layered authentication, and collaborative intelligence sharing. As GTIG warns, the AI-cyber arms race is only beginning. Vigilance and innovation will determine who holds the advantage in this new domain (Google Threat Intelligence Group, 2025).