Cyber security companies have been testing the potential of OpenAI's ChatGPT as a tool for security teams and defenders.
When everyone’s favourite artificial intelligence (AI) chatbot, ChatGPT, launched in November 2022, much of the discussion centred on its potential to help bad actors develop malware and phishing emails.
“Using OpenAI’s ChatGPT, [Check Point Research] was able to create a phishing email, with an attached Excel document containing malicious code capable of downloading reverse shells,” said researchers from Check Point Research.
Just as less experienced cyber criminals could use ChatGPT to write malicious code, security workers could leverage AI in the same way, according to Jeff Pollard, principal analyst and vice-president at Forrester.
“I do think there is an aspect of looking at what it’s doing now, and it’s not that hard to see a future where you could take a SOC analyst that maybe has less experience, hasn’t seen as much and they’ve got something like this sitting alongside them that helps them communicate the information, maybe helps them understand or contextualise it, maybe it offers insights about what to do next,” he said.
Additionally, Malwarebytes has highlighted a number of potential security applications for ChatGPT, such as debugging code and uncovering exploits, assisting Network Mapper (Nmap) scans during vulnerability assessment, and offering security professionals insights on preventative measures.
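As a rough sketch of how Nmap output could be paired with ChatGPT for vulnerability assessment (the function names and prompt wording here are illustrative, not from any vendor's tooling), a script might parse an `nmap -oG` grepable scan and compose a question for the model:

```python
import re


def open_ports_from_grepable(nmap_output: str) -> dict[str, list[int]]:
    """Extract open ports per host from `nmap -oG` (grepable) output."""
    hosts: dict[str, list[int]] = {}
    for line in nmap_output.splitlines():
        m = re.match(r"Host: (\S+).*Ports: (.*)", line)
        if not m:
            continue
        host, ports_field = m.groups()
        # Each entry looks like "22/open/tcp//ssh///"; keep only open ports.
        ports = [
            int(entry.split("/")[0])
            for entry in ports_field.split(", ")
            if "/open/" in entry
        ]
        if ports:
            hosts[host] = ports
    return hosts


def assessment_prompt(hosts: dict[str, list[int]]) -> str:
    """Build a ChatGPT prompt asking for likely exposures and hardening steps."""
    lines = [f"{h}: ports {', '.join(map(str, ps))}" for h, ps in hosts.items()]
    return (
        "These hosts have the following open ports:\n"
        + "\n".join(lines)
        + "\nFor each host, list the likely services, common vulnerabilities, "
        "and preventative measures."
    )
```

The resulting prompt would then be sent to the model through OpenAI's API; only the parsing and prompt construction are shown here.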
Now, a number of cyber security companies have begun trialling ChatGPT and integrating it into their security services and products.
Orca, a cloud security company, has said it is using ChatGPT to improve the remediation steps it gives customers.
“By fine-tuning these powerful language models with our own security data sets, we have been able to improve the detail and accuracy of our remediation steps — giving you a much better remediation plan and assisting you to optimally solve the issue as fast as possible,” the company said.
Armo, a company known for creating the first open-source Kubernetes security platform, has said it uses ChatGPT to make it easier for users to generate security policies based on Open Policy Agent (OPA), a general-purpose policy engine.
“Armo Custom Controls pre-trains ChatGPT with security and compliance Regos and additional context, utilising and harnessing the power of AI to produce custom-made controls requested via natural language,” said Armo.
“The user gets the completed OPA rule produced by ChatGPT, as well as a natural language description of the rule and a suggested remediation to fix the failed control — quickly, simply and with no need to learn a new language.”
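Armo has not published the details of this pipeline, but the workflow it describes can be sketched as supplying existing Rego policies as context and asking the model for a new rule plus a description and remediation. The function below is a hypothetical illustration of that prompt construction:

```python
def custom_control_prompt(request: str, context_regos: list[str]) -> str:
    """Compose a prompt asking the model for an OPA Rego rule, a plain-language
    description of it, and a suggested remediation for failing workloads.

    `context_regos` are existing Rego policies supplied as examples, mirroring
    the idea of pre-training the model with security and compliance Regos.
    """
    examples = "\n\n".join(context_regos)
    return (
        "You are given these example OPA Rego policies:\n\n"
        f"{examples}\n\n"
        f"Write a new Rego rule that does the following: {request}\n"
        "Then provide (1) a one-paragraph description of the rule and "
        "(2) a suggested remediation for workloads that fail the control."
    )
```

The user-facing result would be the model's reply, i.e. the generated rule, its description and the remediation advice.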
OpenAI has said that ChatGPT is currently a research preview, and that it intends to keep adding safeguards to prevent the tool from being misused by cyber criminals.