Hackers leveraging AI to launch phishing scams

Acumen Research and Consulting has found that the global market for AI-based security products reached $14.9 billion in 2021 and is estimated to grow to $133.8 billion by 2030, with the rise in cyber attacks fuelling the market's growth.

Wed, 14 Sep 2022

Another driver of market growth was the COVID-19 pandemic and the associated shift to remote work, according to the report.

This, along with an increasing number of attacks such as distributed denial-of-service (DDoS) attacks and data breaches, many of them extremely costly for the impacted organisations, is generating a need for more sophisticated solutions.

This has forced many companies to put an increased focus on cyber security and on AI-powered tools that can find and stop attacks more effectively.


Looking ahead, the Acumen Research and Consulting team expects trends such as the growing adoption of the internet of things (IoT) and the rising number of connected devices to fuel market growth. The growing use of cloud-based security services could also provide opportunities for new uses of AI in cyber security.

AI’s security boost

Among the types of products that use AI are antivirus/anti-malware, data loss prevention, fraud detection/anti-fraud, identity and access management, intrusion detection/prevention systems, and risk and compliance management.

Up to now, the use of AI for cyber security has been somewhat limited.

According to Brian Finch, co-leader of the cyber security, data protection and privacy practice at law firm Pillsbury Law, companies thus far aren’t going out and turning over their cyber security programs to AI.

"That doesn’t mean AI isn’t being used.

"We are seeing companies utilise AI but in a limited fashion.

"Mostly within the context of products such as email filters and malware identification tools that have AI powering them in some way," Finch said.

Behavioural analysis tools have increasingly been using AI. These tools analyse data to determine the behaviour of hackers and whether there is a pattern to their attacks: timing, method of attack, and how the hackers move once inside systems.

"Gathering such intelligence can be highly valuable to defenders," Finch said.

In a recent study, research firm Gartner interviewed nearly 50 security vendors and found a few patterns for AI use among them, according to research vice president Mark Driver.

"Overwhelmingly, they reported that the first goal of AI was to 'remove false positives' insofar as one major challenge among security analysts is filtering the signal from the noise in very large data sets.

"AI can trim this down to a reasonable size, which is much more accurate.

"Analysts are able to work smarter and faster to resolve cyber attacks as a result," Driver said.

In general, Driver further explained, AI is used to help detect attacks more accurately, prioritise responses based on real-world risk, allow automated or semi-automated responses to attacks, and provide more accurate modelling to predict future attacks.

"All of this doesn't necessarily remove the analysts from the loop, but it does make the analysts' job more agile and more accurate when facing cyber threats,” Driver said.

Adding to cyber threats

On the other hand, bad actors can also take advantage of AI in several ways.

"For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses," Finch further explained.

By combining AI with stolen personal information or open source data such as social media posts, cyber criminals can create large numbers of phishing emails to spread malware or collect valuable information.

"Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — [for example] tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails,” Finch said.

"AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools."

Constantly changing malware signatures can help attackers evade static defences such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behaviour up until it’s ready to launch another phase of an attack or send out information it has collected with relatively low risk of detection. This is partly why companies are moving towards a "zero trust" model, where defences are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.
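The weakness of static defences that Finch describes is easy to see in miniature: a blocklist of file hashes catches only exact matches, so a payload that changes by a single byte slips past. The sketch below uses harmless strings to stand in for malware samples.

```python
# Hypothetical sketch: hash-based signature matching fails the moment a
# payload changes, which is why constantly mutating malware evades it.
import hashlib

known_bad = {hashlib.sha256(b"malicious payload v1").hexdigest()}

def signature_check(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known-bad signature."""
    return hashlib.sha256(sample).hexdigest() in known_bad

print(signature_check(b"malicious payload v1"))  # True: exact match caught
print(signature_check(b"malicious payload v2"))  # False: one byte evades it
```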

Given the economics of cyber attacks, Finch argued, it’s generally easier and cheaper to launch attacks than to build effective defences.

On balance, AI will be more hurtful than helpful, Finch outlined, with the caveat that really good AI is difficult to build and requires a lot of specially trained people to make it work well.

"Run of the mill criminals are not going to have access to the greatest AI minds in the world.

"Cyber security program might have access to "vast resources from Silicon Valley and the like [to] build some very good defences against low-grade AI cyber attacks."

"When we get into AI developed by hacker nation states [such as Russia and China], their AI hack systems are likely to be quite sophisticated, and so the defenders will generally be playing catch-up to AI-powered attacks," Finch said.

[Related: North Korean state-backed hackers linked to Maui ransomware activity]
