Op-Ed: Why AI is causing security challenges for software developers

When released to the world in late 2022, the artificial intelligence tool ChatGPT attracted instant attention.

David Hollingworth
Mon, 28 Aug 2023

The tool’s ability to produce high-quality responses to almost any question made it popular with everyone, from researchers and business leaders to teachers and students.

ChatGPT also quickly became a focus of attention for software developers because it can produce workable code from a simple prompt. It has since become clear that the tool has the potential to reshape the entire software development process.

Recognising that the popularity of AI tooling – along with its potential benefits – won’t go away anytime soon, it is imperative that we consider the underlying security implications of using the technology in development workflows. Unfortunately, security processes are frequently overlooked, resulting in poor-quality output and, potentially, vulnerabilities that carry varying degrees of business risk and expand the attack surface.


A number of recent developments have raised significant concerns about the ramifications of such vulnerabilities when they are not caught and addressed early on. These include:

  • OpenAI, the AI lab that created ChatGPT, disclosed in March that it took ChatGPT offline due to a bug in an open-source library that potentially exposed payment-related information of the chatbot’s subscribers. This information included names, email addresses, and partial credit card numbers with expiration dates. OpenAI has since patched the bug.
  • Stanford University found that a mere 3 per cent of research participants who had access to an AI assistant tool (such as GitHub Copilot) wrote secure code. The resulting paper also revealed that those with this access were more likely to believe they had written secure code than those without it.
  • A Université du Québec team asked ChatGPT to generate 21 programs using a variety of languages. The team discovered that only five of the 21 were initially secure. Although ChatGPT was aware of its vulnerability issues, it would not reveal them unless the team specifically asked whether the code was secure.

For all its marvel and mystery, AI technology suffers from the same common pitfalls mere human mortals face when navigating and writing code. This is hardly surprising given its reliance on human-created training data, but it is crucial to remember before assuming that generative AI output is inherently accurate.

Because these threats will persist even as the technology improves, developers must hone their security skills and raise the bar for code quality.

There are three approaches to enabling best practices for a more secure developer-AI partnership. They are:

  1. Incorporate protection from the start
    Recent research has found that just 14 per cent of developers are focused on security. Ensuring code quality, reducing technical debt, improving application performance, and solving real-world problems all ranked higher as priorities.

    This mindset needs to change. Whether the author is human or machine, poor coding patterns too often remain the default. It takes a security-aware team to enable safer coding patterns by asking the right questions and crafting the right prompts (an illustrative sketch follows this list).

  2. Understand the technology
    If an organisation is looking to increase the integration of AI into the developer experience, it must become extremely familiar with the team’s tools. From the perspective of secure coding, AI is “human” to a fault. We should prepare to identify problems based on the results that the tools generate.

    Developers need to be treated as integral to the security strategy of the business and upskilled accordingly. Any learning solutions have to be comprehensive, with the agility to match an ever-evolving threat landscape, if we are to see a meaningful uptick in vulnerabilities found and fixed before code is set free into the wild.

  3. Set standards
    It is important to advocate for establishing industry-determined standards so developers can map onto them when integrating AI into their day-to-day duties. In addition, industry leaders could create a council to advocate for the adoption of these standards.

    There have previously been similar security initiatives for open-source software with the OpenSSF project, not to mention the Secure-by-Design Guidelines from CISA, which will surely put software vendors on notice to raise the security standards of their code. With such a rapid uptake of AI tooling in software development, specific guardrails and standards must be fast-tracked.
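
To make the first two points concrete, here is a minimal, illustrative sketch – not drawn from any particular tool’s output – of the kind of insecure pattern an AI assistant can produce when prompted naively, next to the safer alternative a security-aware developer would explicitly ask for: a SQL query built by string interpolation versus a parameterised query.

```python
import sqlite3

# Naive pattern an assistant might produce from "write a function to look up a user":
# the query is built by string interpolation, so a crafted username can inject SQL.
def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# Safer pattern a security-aware prompt would request explicitly:
# the driver binds the value as a parameter, so it is never interpreted as SQL.
def get_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

A prompt that explicitly asks for parameterised queries steers the tool towards the second pattern, and a reviewer who knows to look for string-built queries will catch the first.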

There may eventually come a day when teams regard the safe use of AI in code development as a given, something that comes as second nature to them.

Until then, however, we would be well-advised to take a proactive, vigilant stance: changing developer mindsets and organisational security cultures, deepening understanding of these tools, and establishing universal standards for their use.


Pieter Danhieux is co-founder and CEO of Secure Code Warrior.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
