
Breaking news and updates daily. Subscribe to our Newsletter


ChatGPT could be used by hackers

A recently launched AI chatbot could prove to be a dangerous tool to help hackers conduct scams and cyber attacks.

Daniel Croft
Fri, 06 Jan 2023

ChatGPT is an AI chatbot launched in November last year by OpenAI. Built on the company’s GPT-3.5 language models, the AI is able to generate remarkably human-like responses to questions and requests.

Now, concerns have been raised about its potential to help cyber criminals create more convincingly human phishing messages, which would then be sent to victims via email and text.

Trialling the theory that the tool could be used to create malicious emails, Check Point Research (CPR) asked ChatGPT and OpenAI’s other AI-based system, Codex, to write phishing emails. ChatGPT was able to create a phishing email, which was then refined through further discussion with CPR.

“Using OpenAI’s ChatGPT, CPR was able to create a phishing email, with an attached Excel document containing malicious code capable of downloading reverse shells,” said team researchers.

Whilst the tests showed that initial responses aren’t perfect, the tool provides hackers with a base that can be further fine-tuned.

ChatGPT does try to steer users away with warnings stating that the request “may violate our content policy”, but it still displays the content requested.

“Note that while OpenAI mentions that this content might violate its content policy, its output provides a great start,” added CPR researchers.

“In further interaction with ChatGPT we can clarify our requirements: to avoid hosting an additional phishing infrastructure we want the target to simply download an Excel document. Simply asking ChatGPT to iterate again produces an excellent phishing email.”

Following up on the initial email request, CPR then asked ChatGPT to generate the actual code that “when written in an Excel workbook, will download an executable from a URL and run it”. CPR specified that it be written in a way where the code is run as soon as the Excel file is opened.

ChatGPT successfully generated the code; however, once again, it would need refining before it could be deployed successfully.

Researchers noted that cyber criminals would need at least a basic knowledge of coding and cyber crime to be able to correct the issues, but the AI still provided them with a strong starting point.

OpenAI has since said that ChatGPT is a research preview, and that it would continue to improve it to prevent it from being used in malicious or harmful ways.

CPR has said that defenders should remain aware and vigilant of the threat that AI can present in the current and future cyber climate.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.
