Op-Ed: Artificial intelligence of today and the future – Helpful or security risk in disguise?

Artificial intelligence (AI) tools have garnered significant attention recently due to their impressive capabilities.

David Hollingworth
Mon, 17 Jul 2023

These tools, such as ChatGPT, are powered by large language models (LLMs) that can generate complex pieces of writing – including research papers, poems, press releases, and even software code in multiple languages – all in a matter of seconds.

These AI tools have already begun to revolutionise various industries, including software development, where the emerging technology is being leveraged to accelerate the development process.

However, alongside the excitement around AI, there are also concerns about its potential risks. Some experts fear that the rapid advancement of AI systems could lead to a loss of control over these technologies and pose an existential threat to society. The question stands – are AI-powered tools going to be helpful in the long run, or will they just be a security risk waiting to blow up?


Leveraging AI for security

Despite the concerns and ongoing debate around the need for a pause in AI research and development, AI tools have already made their way into software development. These tools can generate code quickly – and in multiple languages – easily surpassing the speed of human developers. Integrating AI into cyber security tools also has the potential to improve the speed and accuracy of threat detection, for example by analysing vast amounts of data to quickly identify patterns and anomalies that are difficult for humans to detect.
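To make that idea concrete, below is a minimal sketch of this kind of anomaly detection – an illustrative assumption on our part, not a description of any particular vendor’s product – using scikit-learn’s IsolationForest over a few hypothetical log features.

# Minimal sketch: flagging an anomalous event with an unsupervised model.
# The features and numbers are hypothetical; real tools use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed logins per hour, megabytes uploaded, distinct IPs contacted]
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[2, 5, 3], scale=[1, 2, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A burst of failed logins, a huge upload and many destinations stands out.
suspicious_event = np.array([[40, 900, 120]])
print(model.predict(suspicious_event))  # -1 means the event is flagged as an anomaly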

AI-enhanced security tools can also significantly decrease the number of false positives and take over some of the more time-consuming security tasks, allowing development and security teams to focus their resources on critical issues.

Additionally, AI’s ability to respond to prompts without the need for extensive research or interviews offers a unique advantage: it frees humans from repetitive programming tasks that would otherwise demand round-the-clock effort.

A force for good – and evil

While AI-enhanced security tools can be used to better protect organisations (and their end users) from cyber threats, the technology can also be used by malicious actors to create more sophisticated attacks and automate activities that mimic human-like behaviour without being detected by some software security tools. Already, there are reports of hackers leveraging AI to launch machine learning-enabled penetration tests, impersonate humans in platform-specific social media attacks, create deepfake data, and crack CAPTCHAs.

It is important to recognise that while modern AI tools excel in certain areas, they are far from perfect and, for the time being, should be regarded as a scaled-up version of the autocomplete function commonly found in smartphones or email applications. And while AI can provide substantial assistance to people familiar with coding and help them accomplish specific tasks more efficiently, challenges will surface for those expecting AI tools to produce and deliver complete applications on their own.

For example, AI tools may provide incorrect answers due to biases within the datasets on which they are trained or, when it comes to coding, may omit crucial information and therefore require human intervention and thorough security testing.
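As a hypothetical illustration (not taken from any specific tool’s output), AI-suggested code can look complete and still omit a critical safeguard – here, parameterising a database query – which is exactly the kind of gap that human review and security testing need to catch.

# Hypothetical illustration: AI-suggested code that runs but is unsafe.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks reasonable, but string formatting opens the door to SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_reviewed(conn: sqlite3.Connection, username: str):
    # The human-reviewed version uses a parameterised query instead.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()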

The importance of human-AI collaboration and oversight in application security testing

A recent demonstration by Synopsys researchers highlighted the need for human oversight after AI-generated code failed to identify an open-source licensing conflict. Ignoring licensing conflicts can be very costly and may lead to legal entanglements for an organisation, which underscores the current limitations of AI-enhanced security tools.

There have also been cases where AI-generated code included snippets of open-source code containing vulnerabilities. It is therefore imperative for organisations leveraging AI to adopt comprehensive application security testing practices to ensure that the generated code is free of both licensing conflicts and security vulnerabilities.
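A toy sketch of one such check is shown below, comparing the dependencies pulled in alongside generated code against a made-up advisory list. It is only an illustration of the idea; real software composition analysis tools also cover licences and transitive dependencies.

# Toy sketch: flag generated-code dependencies against a hypothetical advisory list.
KNOWN_VULNERABLE = {
    ("left-pad-clone", "1.0.0"): "example advisory (hypothetical)",
}

def check_dependencies(requirements: dict[str, str]) -> list[str]:
    findings = []
    for name, version in requirements.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

print(check_dependencies({"left-pad-clone": "1.0.0", "requests": "2.32.0"}))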

For both attackers and defenders, cyber security is a never-ending race, and AI is now an integral part of the tools used by both sides. As a result, human-AI collaboration is becoming increasingly important: as AI-aided attacks grow more sophisticated, AI-aided cyber security tools will be required to counter them successfully. By delegating these tasks to a security tool that’s integrated with AI, humans can then provide unique and actionable insights on how best to mitigate attacks.

The need for human intervention may decrease with each step of AI’s evolution but, until then, maintaining an effective and holistic application security program is more critical than ever.

Kelvin Lim is the director of security engineering, APAC, at Synopsys Software Integrity Group.

