Back to all insights

Are AI & LLMs tilted towards bad actors?

Threat landscape

/

October 22, 2025

Intro

Artificial intelligence is reshaping both sides of the battlefield – cybersecurity professionals and adversaries. On one hand, LLMs and Agentic AI enable faster, more adaptive, and scalable cyberattacks. On the other, the same technologies power defensive teams, making them capable of building counter-measures detecting and mitigating threats in real time.

Modern adversaries keep on leveraging AI to automate reconnaissance, craft personalized phishing, generate malicious code, produce realistic deepfakes for social engineering.

Meanwhile, security teams are using AI to automate threat detection, behavioral analysis, and incident response — trying to stay one step ahead in an arms race that evolves daily.

How attackers weaponize AI and LLMs

AI-assisted hacking is basically a normal practice now. Researchers and law enforcement agencies have documented a growing number of cases where LLMs, and especially Agentic AI, were used as a key tooling for cybercrime.

Attackers use AI to:

  • Automate reconnaissance — scanning for vulnerabilities, open ports, or misconfigured systems faster than any human could.

  • Generate malicious code — writing or obfuscating malware snippets, exploits, and phishing payloads.

  • Craft social engineering messages — LLMs create perfectly fluent, localized phishing emails that mirror corporate tone or even mimic a CEO’s style.

  • Produce deepfakes — audio and video impersonations for financial fraud or disinformation campaigns.

  • Exploit AI systems themselves — via prompt injection, adversarial attacks, or data poisoning to manipulate or corrupt models.

An example of AI misuse out in the wild

In early 2025, several cyber threat intelligence (CTI) groups observed a rise in phishing operations enhanced by LLMs. These included realistic fake login portals generated through automated HTML templates and targeted spear-phishing emails that bypassed traditional spam filters.

According to CrowdStrike’s Global Threat Report, state-aligned actors have begun testing generative AI for automating reconnaissance and mass-producing phishing content. Similarly, Check Point Research highlighted that now more malware devs have started using code-writing assistants or taking the Agentic approach to build obfuscated loader scripts and polymorphic payloads.

The Business Risks of AI-Augmented Attacks

For organizations, the threat isn't just that attacks are happening – it's that AI fundamentally changes their economics. What once required time, skill, and manual effort can now be automated at scale. A single adversary can generate thousands of personalized phishing emails, each tailored to its recipient's role, writing style, and recent activity. Deepfakes can impersonate executives in video calls or create false statements that spread before they can be debunked. AI-assisted reconnaissance tools can crawl systems for vulnerabilities faster than human analysts, and polymorphic malware can adapt in real-time to evade detection systems that rely on pattern recognition.

According to the recent Gartner Report on Agentic AI, the result is a compressed timeline between reconnaissance and exploitation, and a blurred line between what's real and what's fabricated. Trust erodes faster, and traditional defenses struggle to keep pace

How AI strengthens the "defending side"

But the same capabilities that empower attackers also create new defensive possibilities. AI-driven security systems can monitor behavioral patterns across entire networks, flagging anomalies that would be invisible to rule-based systems – unusual login sequences, subtle lateral movement, or deviations in normal user activity. Where human analysts might drown in alert fatigue, machine learning models can triage incidents, correlate signals across disparate systems, and surface the events that actually matter.

Organizations are also using AI offensively in their own defense: generating realistic phishing simulations to train employees, predicting attack vectors based on historical data, and automating response workflows through integrated SIEM and SOAR platforms. The advantage here isn't just speed – it's context. AI can synthesize information across logs, threat feeds, and past incidents in ways that manual processes simply can't (IBM X-Force Threat Intelligence Index; Google Cloud Security AI Workbench).

What this means for governance and compliance

The asymmetry isn't just technical – it's organizational. Deploying AI for defense requires oversight, transparency, and continuous validation. Both proprietary and third-party AI tools need regular audits to ensure they're not introducing new risks through data leakage, bias, or brittleness under adversarial conditions.

This means going beyond tool adoption. Organizations need structured AI governance: model version control, accountability frameworks, ongoing employee training for evolving threats like deepfakes, and active participation in threat intelligence sharing networks. Defensive AI isn't a product you buy – it's a capability you build and maintain (NIST AI Risk Management Framework; Microsoft AI Security Guidance).

Don't let AI disrupt your business

Minimize cyber-risks, secure assets today

What’s next?

AI and LLMs are redefining what “speed” and “scale” mean in cyber operations.

For attackers – faster reconnaissance, automated deception, and synthetic social engineering.

For defenders – leveraging automation and pattern recognition for better exposure management, threat detection, and resilience.

Ultimately, the future of cybersecurity will depend on how effectively organizations can align AI innovation with security governance — turning the same technology that empowers adversaries into a shield that protects against them.

Start securing your organization's infrastructure today with Deepengine! Reach out to us, if you need help with getting started.