The dark side of AI – How ChatGPT is becoming a tool for cybercrime

19 July 2023

OpenAI - The company behind ChatGPT

Royalty-free image from Pexels: https://www.pexels.com/de-de/foto/schreibtisch-laptop-technologie-homeoffice-16037283/

In the cyber security landscape, the threats are as diverse as the technologies that create them. Artificial intelligence (AI), a technology with the potential to revolutionize many aspects of our lives, is no exception. A recent TV report on ARD PLUSMINUS gave an alarming demonstration of how AI can be misused for cybercrime, in particular through so-called ChatGPT hacking, in which malicious code is written and distributed with the help of the chatbot.

For the report, Florian Hansemann, founder of HanseSecure and renowned cyber security expert, asked the AI ChatGPT to write malicious code. The result was a functional keylogger capable of spying on a user’s keystrokes. The experiment underscores the urgent need to regulate the development and use of AI, and it highlights the importance of cyber security experts like Hansemann who can protect us from such threats.

Florian Hansemann - Cyber security expert at HanseSecure GmbH

The power of AI and the potential risks

Artificial intelligence has developed rapidly in recent years and is now capable of performing complex tasks that could previously only be carried out by humans. However, these advances have also brought with them new risks and challenges, particularly in the area of cyber security.

The ARD PLUSMINUS report highlighted these risks vividly. In it, Florian Hansemann got the AI ChatGPT to write malicious code: a keylogger, a program that records a user’s keystrokes and can thus steal sensitive information, including online banking credentials. It is a clear example of the potential of ChatGPT hacking.

A working Python script, created by ChatGPT

The experiment – AI in the service of cybercriminals

The experiment started with a simple request to ChatGPT: create malicious code. The AI responded surprisingly quickly and generated complete, functional code, which Hansemann then embedded in an otherwise harmless file.

In the next step, Hansemann asked ChatGPT to write a phishing email. The AI generated a message that at first glance looked like a harmless tax reminder; in reality, a link in the email led to the file with the embedded malicious code. As soon as an unsuspecting victim clicked on the link, the code was downloaded and the victim’s computer was infected. The process illustrates how easily criminals can use AI technologies such as ChatGPT to expand and conceal their illegal activities.

The results and their implications

To demonstrate that the malicious code worked, Hansemann opened the phishing email on a second computer. From that point on, everything typed on that machine’s keyboard was recorded. When Hansemann typed “ChatGPT hacking” on the infected computer, the input appeared on the attacker’s screen – clear proof that the code created by ChatGPT worked. The experiment demonstrates not only the impressive capability of AI systems like ChatGPT, but also the risks that come with them. It underlines the need to regulate such systems and the importance of cyber security experts like Florian Hansemann who can protect us from these threats.

The working keylogger created with ChatGPT

Artificial intelligence and cybercrime: an escalating danger

The Israeli cybersecurity company Check Point has identified an alarming trend: more and more people without in-depth IT knowledge are turning to cybercrime, using artificial intelligence to disguise their illegal activities and expand their capabilities. One worrying development is the global trade in hacked ChatGPT accounts, which criminals use to cover their tracks. This shows that AI is not just a tool for tech-savvy hackers; it also opens up new opportunities for people without extensive IT knowledge to intensify their criminal activities. It is therefore crucial that we are aware of this growing threat and take appropriate measures to contain it.

The need for regulation and responsibility

In view of the growing threat of AI-driven cybercrime, there is an urgent need to regulate the development and use of AI. Ethics professor Peter Dabrock of Friedrich-Alexander-Universität Erlangen-Nürnberg, a member of the Learning Systems Platform, calls for “absolute accountability”. He expressly advocates liability and consumer protection rules to ensure that responsibility for the actions of AI always lies with humans.

Looking to the future – the regulation of AI

A comprehensive set of AI regulations, the so-called AI Act, is currently being drafted in the European Union; implementation in the member states is expected to be completed by 2025. When asked, the Federal Ministry of the Interior confirmed that AI regulation is currently being negotiated at EU level and that the German government is actively supporting these negotiations. AI regulation is thus an urgent and topical concern that is being addressed at the highest political level.

AI – a double-edged sword

Connor Leahy, a leading AI expert based in London and head of the company Conjecture, emphasizes that artificial intelligence is a double-edged sword: it holds both immense opportunities and significant risks. “An AI that has the potential to develop new drugs also has the ability to construct biological weapons,” warns Leahy. Today, AI can even generate formulas for narcotics, drugs and chemical weapons. This underscores the urgent need to regulate the development and application of AI, and it highlights the important role of cyber security experts like Florian Hansemann in protecting us from such threats and helping us use AI safely and responsibly.

The dangers of AI (artificial intelligence) cannot yet be fully assessed

Conclusion

The results of the experiment presented in the ARD PLUSMINUS report are a wake-up call for us all. They show that the ongoing development of AI technology brings not only enormous opportunities but also serious risks. The fact that an AI like ChatGPT is able to write functional malicious code and craft a convincing phishing email shows that cybercrime is no longer the domain of highly skilled hackers alone. Even non-professionals could potentially use the capabilities of AI to write and distribute malicious code.

This underlines the urgent need to regulate the development and use of AI, and it highlights the importance of cyber security experts like Florian Hansemann. As the founder of HanseSecure and a renowned expert in his field, Hansemann has not only the skills and knowledge to protect us from such threats, but also the vision to help shape the future of cyber security.

You can watch the full ARD PLUSMINUS report on dangerous AI here.

Florian Hansemann in the ARD report on ChatGPT hacking

