
Australia holds the keys to understanding delays to security patching

Organisations don’t always patch their systems quickly enough to avoid intrusions or attacks; Rohan Langdon at ExtraHop writes we’re only now starting to understand the myriad reasons why.

Rohan Langdon
Fri, 24 Jun 2022

In early 2022, an Australian state government health services agency and the outsourcing partner that patches its fleet of 1,500 servers became a case study in an often fraught area of security: patching.

By trawling four years of patch management meeting minutes and even attending some meetings virtually, University of Adelaide researchers were able to pinpoint common reasons why patching is delayed and how frequently those delays occur.

The findings are eye-opening: out of 232 closed tasks, 132 — or 56.9 per cent — were impacted by delays. The researchers observed that patches were often delayed for “a combination of reasons”. One such combination was “delayed input by the vendor, delays in coordination with the vendor, and lack of expertise”. In total, they identified 38 potential causes of delays in software security patch management.


While there are some obvious limitations to the research — one end-user organisation isn’t a large sample size — the case study is still interesting for a couple of reasons.

First, it offers insight and a level of nuance that’s often lost in discussions about patching. Patch management is an area of security that can lend itself to victim-blaming. It’s often presented as a given that if a patch exists, it must be applied. An organisation that doesn’t apply a patch quickly, or at all, and then suffers a security incident through exploitation of the unpatched system may receive little sympathy, having been judged to have sealed its own fate through inaction or poor patching practices.

But there may be good reasons not to rush patching, and without knowing an organisation’s internal processes intimately, there’s a real risk of drawing inaccurate conclusions about why a patch takes a while to apply.

For example, it’s not unheard of for security release documentation or advisories to be issued some time after a fix; or for security patches to be bundled with other non-critical or functional updates that themselves pose breakage risks. Considerable testing of patches and mitigations is required to ensure that they do not break more than they fix. Allowing security teams appropriate time to do their work is essential if unanticipated repercussions to production systems are to be avoided.

Patch management is also traditionally a time-consuming endeavour. A recent study by ExtraHop found that while 64 per cent of teams were able to enact mitigations or apply a patch (where available) within three days, it took a week or more in 28 per cent of instances. Breaking those findings down further, 26 per cent of teams responded in under a day, 39 per cent took one-to-three days, 21 per cent needed a week, and 8 per cent needed a month or more.

That month-long window may be necessary due to overly manual and people-intensive internal workflows. It may even be considered normal, depending on the size of the organisation or the industry it operates in. The health agency in the case study, for example, has a standard allowance of four weeks to retrieve, assess, test and deploy a patch. Its work is mission-critical, so any change to its systems is carefully considered and tested: a patch that goes wrong and has to be rolled back carries a considerable risk of breakage and downtime.
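As a rough illustration of how such an allowance might be tracked, the short Python sketch below checks a patch’s age against a four-week window. The dates and workflow stages are hypothetical placeholders, not drawn from the agency’s actual processes.

```python
# Minimal sketch: checking a patch against a four-week service window,
# similar to the allowance described in the case study. All names and
# dates below are hypothetical placeholders.
from datetime import date, timedelta

PATCH_WINDOW = timedelta(weeks=4)

def days_remaining(vendor_release: date, today: date) -> int:
    """Days left before the patch falls outside the four-week allowance."""
    deadline = vendor_release + PATCH_WINDOW
    return (deadline - today).days

if __name__ == "__main__":
    release_date = date(2022, 6, 1)  # hypothetical vendor release date
    remaining = days_remaining(release_date, today=date(2022, 6, 24))
    if remaining < 0:
        print(f"Patch is {-remaining} days past the four-week allowance")
    else:
        print(f"{remaining} days left to retrieve, assess, test and deploy")
```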

Even if we accept that some delays in patch management are inevitable, organisations may be able to reduce the frequency and duration of delays through the use of best practice rules and tools.

Improving the lot of patch teams

Regular patch management is essential to reducing exposure to known vulnerabilities. Threat actors, particularly ransomware groups, have become adept at using old exploits, either singly or chained together, to gain entry into enterprise systems.

Where risk assessments allow, and patches pass testing and other necessary gates, it is important that those patches can be applied expeditiously and without unnecessary delay. This is particularly so for critical vulnerabilities or zero-days, for which fixes may be released out-of-band depending on severity and evidence of in-the-wild exploitation.

We also know from our own research that unpatched devices and the use of outdated protocols sap the confidence of defenders. Already, only 43 per cent of IT decision-makers in Australia express a high degree of confidence in their organisation’s ability to prevent or mitigate cyber security threats, and an equal percentage have low confidence. Anything that can be done to improve the confidence of Australian businesses and security teams in mitigating the large number of critical vulnerabilities that emerge every week is likely to be welcomed.

Network detection and response (NDR) technologies are increasingly being used by enterprises to improve vulnerability scanning and patch management, to identify assets at high risk and to reduce the potential for delays to patching them. Just over one-third of Australian businesses already have NDR systems in place, and an additional 40 per cent say they intend to invest in such systems this year.

Using NDR technology can help security teams discover potentially vulnerable devices that should be patched, and offers continuous visibility into exploitation attempts that target vulnerabilities with assigned common vulnerabilities and exposures (CVE) identifiers. With these capabilities, teams are able to practise better security hygiene, as well as rapidly detect and respond to attacks against CVEs new and old.
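As a simple illustration of the kind of cross-referencing involved, the sketch below matches a discovered software inventory against published CVE advisories to flag assets that still need patching. It is a minimal, generic example, not the output or API of any particular NDR product; the hostnames are hypothetical, while the CVE shown (CVE-2021-41773, affecting Apache HTTP Server 2.4.49) is a well-known published vulnerability.

```python
# Illustrative sketch: matching a discovered software inventory against
# known CVE advisories to flag assets that should be prioritised for
# patching. Hostnames are hypothetical; this is not tied to any
# particular NDR product or vendor API.
from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    software: str
    version: str

@dataclass
class Advisory:
    cve_id: str
    software: str
    affected_versions: set

def flag_vulnerable_assets(assets, advisories):
    """Return (hostname, cve_id) pairs where an asset runs an affected version."""
    findings = []
    for asset in assets:
        for adv in advisories:
            if asset.software == adv.software and asset.version in adv.affected_versions:
                findings.append((asset.hostname, adv.cve_id))
    return findings

if __name__ == "__main__":
    assets = [
        Asset("web-01", "Apache HTTP Server", "2.4.49"),
        Asset("db-01", "PostgreSQL", "14.4"),
    ]
    advisories = [
        # CVE-2021-41773: path traversal affecting Apache HTTP Server 2.4.49
        Advisory("CVE-2021-41773", "Apache HTTP Server", {"2.4.49"}),
    ]
    for hostname, cve in flag_vulnerable_assets(assets, advisories):
        print(f"{hostname} runs a version affected by {cve} - prioritise patching")
```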

Rohan Langdon, vice president, Australia and New Zealand, ExtraHop.
