Malicious AI Tools: What Cybercriminals Are Promoting in 2025

Underground advertising reveals the tools threat actors use — and how defenders can respond.

Cybercrime forums and underground markets are now saturated with AI-powered tools pitched by threat actors. These are not just flashy demos: many are purpose-built to automate phishing, credential theft and other classic attacks with AI enhancements.

Below, we break down what’s happening, show the major tool types, explore why they matter, and offer a practical defence checklist.

Why AI-Driven Tools Are Flooding the Cybercrime Market

| Driver | What it means for defenders |
| --- | --- |
| Scale & speed | AI lets attackers automate large volumes of phishing, content generation or reconnaissance far faster than manual methods (see the burst-detection sketch below this table). |
| Lower technical bar | Tools packaged with a UI or APIs let less-skilled criminals launch attacks, widening the threat base. |
| Evasion & stealth | AI agents mimic human browsing, adapt payloads dynamically, and evade detection more easily. |
| Classic vectors upgraded | Rather than inventing wholly new attacks, adversaries are using AI to turbo-charge what already works. |
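
As a concrete illustration of the scale-and-speed driver, here is a minimal sketch of campaign-style burst detection on inbound mail, assuming metadata is available as (timestamp, sender-domain) events. The class name, window size, and threshold are illustrative choices, not tuned values.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300    # sliding window length (illustrative)
BURST_THRESHOLD = 50    # messages per window that triggers an alert (illustrative)

class CampaignBurstDetector:
    """Flags sender domains whose inbound-message rate exceeds what a
    human-driven campaign could plausibly sustain."""

    def __init__(self, window=WINDOW_SECONDS, threshold=BURST_THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.events = defaultdict(deque)  # sender_domain -> recent timestamps

    def observe(self, timestamp: float, sender_domain: str) -> bool:
        """Record one inbound message; return True if the domain is bursting."""
        q = self.events[sender_domain]
        q.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

detector = CampaignBurstDetector()
for i in range(60):  # 60 messages in 60 seconds from one domain
    bursting = detector.observe(1_700_000_000.0 + i, "login-alerts.example")
print("bursting:", bursting)  # True: rate is far above the threshold
```

In practice this would be fed from the mail gateway's event stream, with the window and threshold tuned against baseline traffic.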

Top Categories of AI Tools Threat Actors Are Promoting

| Category | Primary use case | Example features/promises |
| --- | --- | --- |
| Phishing & social-engineering generators | Crafting highly personalised lures | Email writing, tailored landing pages, psychological-profile inputs |
| Credential/account harvesters (AI-assisted) | Detecting reused credentials, automating login attempts | Bot-powered stuffing, credential databases, AI-driven pattern matching |
| Malware build kits / payload injectors | Embedding AI logic in malware delivery or evasion | Code generation, polymorphic payloads, dynamic obfuscation |
| Browser-based agents | Browsing, scraping or sandbox evasion via AI agents (see the timing-analysis sketch below this table) | Mimicking human sessions, paywall bypass, automation of UI workflows |
| Data-mining & recon tools | Reconnaissance at scale: entity extraction, leak parsing | NLP-based result filtering, entity linking, social-graph mapping |
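
One way to approach the browser-based agents row is timing analysis: naive automation paces its requests far more regularly than a human does. The heuristic below is a sketch under that assumption; the function name and thresholds are illustrative, and agents that deliberately jitter their timing will evade it, so treat it as one weak signal among several.

```python
import statistics

def looks_automated(request_times: list[float],
                    min_requests: int = 20,
                    cv_threshold: float = 0.15) -> bool:
    """Flag a session whose inter-request intervals are suspiciously regular.

    Human browsing produces irregular gaps; naive scripts pace themselves
    almost perfectly. A low coefficient of variation (stdev / mean) of the
    gaps is therefore one weak signal of automation.
    """
    if len(request_times) < min_requests:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True  # bursts faster than timestamp resolution
    return statistics.stdev(gaps) / mean_gap < cv_threshold

# A perfectly paced session (one request every 0.5 s) trips the check.
scripted = [i * 0.5 for i in range(40)]
print(looks_automated(scripted))  # True
```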

What Makes These Tools Particularly Dangerous

  • Volume + quality: Attack volumes are rising while each individual lure becomes more convincing (better grammar, tailored content, fewer tell-tale mistakes).
  • Automation of “last-mile” tasks: Tasks that once required manual labour, such as writing the phishing email, building the landing page, or rotating payloads, are now automated.
  • Resource reuse: Many tools repurpose legitimate AI frameworks (LLMs, embedding APIs, vision models), which cuts attackers’ costs and makes attribution harder.
  • Integrated workflows: Toolchains are being built end-to-end: reconnaissance → build → deliver → monetise (the signal-chaining sketch after this list shows how defenders can respond in kind).
  • Access & affordability: Some tools are marketed openly on underground forums under subscription models, lowering the barrier to entry.
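
Because attackers chain tools end-to-end, defenders can chain weak signals the same way: no single check is decisive, but a weighted combination can gate messages for review. The signal names, weights, and threshold below are illustrative assumptions, not a vetted scoring model.

```python
# Illustrative weak signals and weights; a real deployment would derive
# these from labelled mail data rather than hand-picking them.
SIGNAL_WEIGHTS = {
    "sender_domain_registered_recently": 0.30,
    "display_name_mismatches_address": 0.20,
    "link_host_differs_from_sender": 0.25,
    "part_of_volume_burst": 0.25,
}
REVIEW_THRESHOLD = 0.5  # combined score that routes a message to review

def phishing_risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired (range 0.0 to 1.0)."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

signals = {
    "sender_domain_registered_recently": True,
    "link_host_differs_from_sender": True,
}
score = phishing_risk_score(signals)
if score >= REVIEW_THRESHOLD:
    print(f"quarantine for analyst review (score={score:.2f})")
```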

Practical Steps for Defence

| Layer | Actions for enterprises |
| --- | --- |
| User awareness | Train staff to recognise ultra-personalised phishing, unusual browsing behaviours, and trusted-looking landing pages. |
| Authentication hardening | Enforce MFA, monitor credential reuse (see the sketch below this table), and block login attempts from anomalous locations/devices. |
| Email & web defences | Use advanced filters that detect AI-generated content patterns, sandbox new attachments/links, and monitor for rapid campaign-style behaviour. |
| Browser & session monitoring | Look for high-volume browsing sessions tied to dormant or low-use accounts, unusual automation patterns, and paywall-bypass behaviours. |
| Threat intelligence & tool-hunting | Monitor underground forums, track references to novel AI tools, and subscribe to threat feeds covering AI-enhanced attacks. |
| Incident response readiness | Build playbooks that assume AI automation upstream: expect faster attack lifecycles, shorter dwell times, and more automated phases. |
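
As one way to implement the “monitor credential reuse” action, the sketch below checks a password against the public Have I Been Pwned range API, which uses k-anonymity: only the first five hex characters of the SHA-1 hash leave the machine. The endpoint is public, but check its current rate limits and terms before production use; the User-Agent string here is an arbitrary placeholder.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how often a password appears in known breach corpora,
    via the Have I Been Pwned k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix is sent; the password never leaves here.
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "credential-reuse-audit-sketch"},  # placeholder
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if breach_count("password123") > 0:
    print("password appears in breach data; force a reset")
```

A scheduled audit of this kind, run against staged password hashes at change time rather than stored plaintext, pairs naturally with the MFA and anomalous-login controls above.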

Final Word

The era of manually crafted phishing and targeted malware is giving way to one of AI-augmented mass campaigns and automated adversary workflows. Organisations cannot treat this as the same old threat model: defenders’ pace must match the attackers’ new velocity. The tools have changed, but the game remains the same. Adapt or fall behind.