
CrowdStrike says cyber criminals are rapidly adopting AI for cyber crime

Like businesses all around the world, hackers are using AI to bolster the damaging power of their cyber attacks.

Wed, 25 Feb 2026

As outlined in CrowdStrike’s Global Threat Report 2026, the number of cyber attacks by “AI-enabled adversaries” grew 89 per cent compared to the previous year.

Threat actors use the technology to supercharge their capabilities in a number of ways, from social engineering and writing phishing emails to developing malware, building harmful tools and spreading disinformation.

“AI accelerated phishing and automated reconnaissance, shortening the time from initial access to impact. It elevated less sophisticated threat actors and amplified the most advanced ones. It compressed the time between intent and execution,” CrowdStrike wrote.


Some threat actors use these AI tools to write phishing emails in multiple languages, reducing the time it takes to launch threat campaigns and making them more convincing.

In terms of developing malware using AI, CrowdStrike analysts observed Russian state-sponsored hacking and espionage group Fancy Bear embedding Large Language Model (LLM) prompts into malware to have it perform operational tasks.

The campaign, which it referred to as LameHug, aimed to perform espionage tasks against Ukraine, and made use of an LLM to support document collection and recon before exfiltration.

While CrowdStrike did not note that the LLM made the malware any more effective, it said its use showed that threat actors were exploring how the tool could be a “development aid.”

“This is another area where AI can enable the threat actor, and we expect to see more of this,” said CrowdStrike head of counter adversary operations Adam Meyers during a media briefing, as reported by InfoSecurity Magazine.

As companies increasingly rely on the technology, AI also expands the threat landscape, with new vulnerabilities to exploit.

“As AI is embedded into development pipelines, SaaS platforms, and operational workflows, AI systems themselves become part of the attack surface,” CrowdStrike continued.

“Adversaries exploited legitimate AI tools by injecting malicious prompts that generated unauthorized commands. As innovation accelerates, exploitation follows.”

These threat actors typically aren’t developing their own models either, instead bypassing the safeguards of legitimate tools.

OpenAI’s ChatGPT was mentioned far more than any other model on dark web hacking forums, 550 per cent more in fact, with Gemini the second most mentioned, followed by Grok, DeepSeek and Claude.

Most mentions related to service outages, new version releases and model performance feedback, except for Grok, which saw a massive jump in mentions as a result of its publication of “racist, antisemitic and explicit content.”

CrowdStrike cited a number of groups who dramatically increased their use of AI in threat campaigns.

PUNK SPIDER increased its use of the technology by 134 per cent, executing AI-generated scripts during its cyber attacks, while FAMOUS CHOLLIMA used the technology to generate fake personas, as well as using an AI coding assistant to “evade detection and maintain employment.”

“This adversary has used AI image manipulation services to create fake personas, messaging services with AI capabilities to manage multiple accounts, and AI coding assistants to perform legitimate job functions,” CrowdStrike wrote.

State-sponsored adversaries used AI to empower their disinformation campaigns. CrowdStrike cited a pro-Russia propagandist who used AI “to generate legitimate-looking media websites and videos in multiple IO campaigns targeting US and German elections.

“Other disinformation campaigns have used AI-generated networks of fake social media accounts, deepfake videos and audio of political figures or well-known individuals, and targeted propaganda content.”

To defend against these new AI-powered cyber attacks, CrowdStrike says organisations need to keep AI in mind when developing clear incident response plans. This means clear identity verification procedures, a security focus on AI, and awareness training.

“Security must parallel the slope of innovation. In the agentic era, cybersecurity is the foundational infrastructure required to protect AI itself,” said CrowdStrike.

“To defend against AI-enabled threats, organizations should develop clear incident response responsibilities and business continuity plans.”

“This is an AI arms race,” added Meyers.

“Security teams must operate faster than the adversary to win.”

Daniel Croft


Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.