
Interview: CrowdStrike’s APAC CTO Fabio Fratucello – how generative AI combats the cyber skills gap

The cyber security industry has been plagued by a major skills shortage, one that only looks set to worsen as workloads grow and workers burn out.

Daniel Croft
Tue, 21 Nov 2023

At the same time, the artificial intelligence (AI) revolution has begun, and while other industries are concerned that the technology could be coming for their jobs, generative AI may just be exactly what the cyber industry needs.

Cyber Daily sat down with Fabio Fratucello, CrowdStrike’s chief technology officer for the Asia-Pacific region, to discuss how the technology could empower cyber professionals.

Cyber Daily: AI is set to be the next technological frontier and a tool to be used in pretty much any and all industries. What are some of the key ways in which generative AI can assist cyber staff and SOCs?


Fratucello: Generative AI, applied to a security context, enables security teams to be more productive and focused by reducing manual, lower-value work. As an example, instead of spending a significant amount of time manually going through documentation searching for knowledge or intelligence, generative AI allows teams to obtain specifically what they are searching for.

From a skills shortage perspective, this technology is going to help businesses because typically, it’s expensive, challenging and increasingly competitive to hire great cyber talent. The cyber threat to organisations is only going to increase in severity and volume, so this technology is essential for helping upscale and uplift the value of the workforce and do more with current resources. Generative AI also facilitates on-the-job learning, empowering team members to upskill themselves as they work, ultimately strengthening an organisation’s overall security posture.

This is why there is a very high return on investment for organisations in using generative AI.

Cyber Daily: Workers in the cyber industry are, at times, inundated with a wide range of tools that they use in their jobs. In some cases, having such a large arsenal can reduce efficiency and efficacy. Do you think that generative AI will provide a solution to this and provide workers with a more unified solution that does not require juggling multiple tools?

Fratucello: There used to be the belief that the more security tools an organisation could acquire, the more secure it would be. However, many Australian businesses are now challenged by the sheer complexity of their security stack. If an organisation is relying on an overly complex and disjointed array of technologies, it can inadvertently expose them to greater risks of attack while increasing operational costs. At CrowdStrike, we advocate for organisations to consolidate existing security solutions into a single, interconnected platform.

Having complete visibility across all the relevant domains within an environment is key. Security leaders, therefore, need to consolidate their security stack to prevent breaches [from] slipping between the visibility gaps, either by implementing extensive and expensive integration programs or by taking advantage of a security platform approach.

When thinking in this context, the opportunity for generative AI is to be an overarching capability layer able to reach and interact with any platform element. For instance, Charlotte AI enables security experts to automate repetitive tasks like data collection, extraction and basic threat search and detection while making it easier to perform more advanced security actions across a single or several platform capabilities. Generative workflows created by Charlotte AI can turn an eight-hour day of activity into minutes. Security professionals can have a conversation with Charlotte AI, get contextualised information they need, create better workflows, target adversaries, and identify exposure to vulnerabilities.

Cyber Daily: On the other hand, there have been major concerns about AI displacing the workforce. Do you think this is a concern for workers in the cyber security industry, in areas such as security analysis or digital forensics?

Fratucello: The cyber threat landscape is only going to get more challenging for security teams to stay across, and this is made harder by an ongoing skills shortage in our industry. Generative AI has the potential to free up security teams so that they can focus on tasks that require a higher degree of business intimacy, ultimately delivering higher-value work. We are unlikely to see cyber teams run out of work.

Our approach to stopping the breach has always been rooted in the belief that the combination of machine and human elements will transform cyber security. We believe our continuous feedback loop on human-validated content is critical. This also holds for generative AI, which is best used to assist and empower teams rather than replace them.

Cyber Daily: Do you think that the Australian government, as well as other governments and international bodies like the US government and the United Nations, are doing enough to put proper regulations in place to prevent the unethical or dangerous development and use of AI?

Fratucello: Recent international initiatives, such as Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, are a step towards regulating AI. Governments and intergovernmental organisations need to balance regulation with allowing innovation, as threat actors won’t be following the rules and could potentially innovate faster than us. This increases the risk of cyber security professionals falling behind in developing and innovating on the technologies that protect our data.

The Australian government’s Cyber Security Strategy for 2023–2030 will hopefully be a good example of a regulatory measure to prevent the unethical or dangerous development and use of AI. The Australian government has set a target to become “the most cyber secure country in the world by 2030”. It’ll be interesting to see how AI is considered as part of the strategy when it is announced.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
