
The Unseen Dangers of ChatGPT: Scams, Malware, Phishing and More

The recent release of OpenAI’s ChatGPT has sparked significant interest in artificial intelligence and its potential applications. However, it appears that the cybercriminal community is already leveraging this tool for malicious purposes.

Artificial Intelligence (AI) has revolutionized many areas of human life, including communication and technology. One of the most remarkable examples is OpenAI’s ChatGPT, an AI language model that has gained widespread popularity due to its ability to respond to a wide range of queries and generate coherent text. However, this advanced technology has a darker side too. Researchers have warned of the potential misuse of ChatGPT in creating malware, scams, and deepfakes, among other malicious purposes.

The abuse of ChatGPT and Codex as malware development tools

Cybersecurity experts have expressed concern about hackers adopting ChatGPT as a malware development tool. Cybercriminals with little or no coding experience have been using it to write malicious code for spying, ransomware, and other harmful tasks, and underground forums have already shown significant interest, with members sharing ChatGPT-built malware and dark web marketplace scripts.

A recent analysis by cybersecurity firm Check Point Research confirms the trend, finding that underground hacking communities are using ChatGPT to generate code for espionage, ransomware, and other forms of cyberattack.

In December 2022, researchers at Check Point demonstrated the potential for abuse by using ChatGPT to write convincing phishing emails and to generate Visual Basic for Applications (VBA) code for a malicious Microsoft Excel macro. The researchers also used Codex, another OpenAI tool, to create reverse-shell scripts and other malware utilities, which could then be converted into executables that run natively on Windows PCs.

One instance of such abuse is a thread on a popular underground hacking forum titled “ChatGPT – Benefits of Malware”. The author revealed that he was using ChatGPT to recreate malware techniques described in research publications, and shared code for a Python-based stealer that searches for common file types, copies them to a random subfolder of the Temp folder, compresses them into a ZIP archive, and uploads them to an FTP server. He also used ChatGPT to produce a Java snippet that can be modified to download and run any program, including common malware families.
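
Rather than reproduce the stealer itself, it is worth noting how noisy that staging pattern is from a defender's perspective. Below is a minimal, hypothetical detection sketch in Python that polls the user's Temp directory for newly created ZIP archives, the telltale side effect of the behavior described above; the polling interval and the decision to watch only for .zip files are illustrative assumptions, not details from the Check Point write-up.

```python
# Hypothetical detection sketch: flag new ZIP archives appearing in Temp,
# a side effect of the stealer staging pattern described by Check Point.
# The polling interval and .zip-only filter are illustrative assumptions.
import os
import tempfile
import time

SCAN_INTERVAL_SECONDS = 10  # assumed polling interval

def snapshot_zips(temp_dir: str) -> set[str]:
    """Return the set of .zip paths currently under the temp directory."""
    found = set()
    for root, _dirs, files in os.walk(temp_dir):
        for name in files:
            if name.lower().endswith(".zip"):
                found.add(os.path.join(root, name))
    return found

def watch(temp_dir: str = tempfile.gettempdir()) -> None:
    known = snapshot_zips(temp_dir)
    while True:
        time.sleep(SCAN_INTERVAL_SECONDS)
        current = snapshot_zips(temp_dir)
        for path in sorted(current - known):
            # A new archive in Temp is not proof of theft, but it matches
            # the stealer's staging pattern and is worth a closer look.
            print(f"[alert] new archive staged in Temp: {path}")
        known = current

if __name__ == "__main__":
    watch()
```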

Researchers also discovered cybercriminals using ChatGPT to improve the code of a basic piece of malware from 2019. The ease of automation that AI offers, the researchers explain, makes it an attractive tool for hackers to develop malware that continues to learn and improve.

The findings by Check Point Research have also raised concerns over the potential illicit use of ChatGPT in the creation of dark web marketplaces. In a thread titled “Abusing ChatGPT to create Dark Web Marketplaces scripts”, the author demonstrated how easily the language model can scaffold a platform for illegal transactions. The code published in the thread fetched live cryptocurrency prices through a third-party API, a building block for the crypto-based payment systems commonly used on the dark web.
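
The thread's code is not reproduced in the article, but the price-lookup component it describes is ordinary API plumbing. As a benign illustration, the sketch below pulls live prices from CoinGecko's public API; CoinGecko is an assumed stand-in here, since the forum post's third-party API is not named.

```python
# Benign illustration of the price-lookup component described in the thread.
# CoinGecko is an assumed stand-in; the forum post's actual API is unnamed.
import json
import urllib.request

API_URL = (
    "https://api.coingecko.com/api/v3/simple/price"
    "?ids=bitcoin,monero&vs_currencies=usd"
)

def fetch_prices() -> dict:
    """Fetch current USD prices for a few coins from a public API."""
    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for coin, quote in fetch_prices().items():
        print(f"{coin}: ${quote['usd']:,}")
```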

ChatGPT has demonstrated its potential as a tool that lets cybercriminals with little to no coding experience produce malicious code for nefarious purposes. As the ease of automation makes AI attractive to hackers, they can iterate on malware far faster than before, straining traditional, human-written defensive software. The consequences of this development cannot be overstated: malware can cause immense harm, from stealing sensitive data to disrupting critical infrastructure.

Phishing and misinformation

OpenAI’s ChatGPT can be a powerful tool for phishing and social engineering, according to researchers at Recorded Future, a cybersecurity research firm. Their report states that the platform lowers the barrier to entry for threat actors with limited programming or technical skills, who can produce effective results with only a basic understanding of cybersecurity and computer science.

The report identifies threat actors on the dark web and special-access sources sharing proof-of-concept ChatGPT conversations that enable the development of malware, social engineering, disinformation, phishing, malvertising, and money-making schemes.

The report also notes that the absence of grammatical errors in ChatGPT’s responses makes it an effective tool for phishing attacks, since users are less likely to flag well-written messages as suspicious. It found the model to be a valuable asset for script kiddies, hacktivists, scammers and spammers, payment card fraudsters, and threat actors engaged in other low-level cybercrime, and it highlights the lack of accountability and transparency in AI-based systems such as ChatGPT as a growing concern for cybersecurity experts.

That opacity makes the use of such systems difficult to monitor and track, leaving security teams unable to respond effectively.

Furthermore, the use of AI-based systems raises ethical and legal questions about responsibility and accountability. If ChatGPT is used to develop malware or commit cybercrime, who is responsible: the developers, the users, or the system itself? The answer is far from clear.

Lack of Transparency in AI Models: A Threat to User Privacy

As the use of AI models becomes more widespread, concerns about the collection and use of personal data are growing. The lack of transparency in the development and deployment of these models, such as ChatGPT, poses a significant risk to user privacy.

There is often limited visibility into the data used to train these models, making it difficult for users to know what information is being collected and how it is being used. That opacity raises concerns about the potential for abuse and misuse of user data.

To address these concerns, it is important to establish regulations and guidelines for the collection and use of personal data in AI models such as ChatGPT. These regulations should include requirements for transparency in data collection and processing, as well as limits on the use of personal data for commercial purposes.

The Difficult Dilemma of Designing Safe AI

The question arises: is it possible to prevent ChatGPT and other AI tools from creating malware? The answer is not as straightforward as it might seem.

Computers have no moral compass, and therefore cannot differentiate between ethical and unethical behavior. Code only becomes malicious when wielded by malicious actors. Artificial intelligence, like any other tool, can be used for good or evil, depending on the intentions of its users. Unfortunately, it is impossible for AI tools to anticipate how their output will be used.

Phishing emails, for instance, can be used to train AI tools to identify them, or to teach people how to recognize and avoid them. Yet the very same output could anchor a phishing campaign that defrauds unsuspecting victims, and the model has no way of telling which use it is serving.
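
To make the dual-use point concrete, the defensive half of that equation can be prototyped in a few lines. The sketch below trains a toy phishing classifier with scikit-learn on a handful of hand-written example messages; a real system would need thousands of labeled emails and richer features, so the data and any resulting scores here are purely illustrative.

```python
# Toy illustration of the defensive use: training a classifier to spot
# phishing text. The tiny hand-written dataset is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password here immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "You have won a prize, click this link to claim your reward now",
    "Attached is the agenda for Thursday's project sync",
    "Thanks for the review, I pushed the fixes to the main branch",
    "Lunch is booked for noon on Friday, see you there",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# The same generated text that could bait a victim can also train the filter.
test = "Please verify your password to restore account access"
print("phishing probability:", model.predict_proba([test])[0][1])
```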

OpenAI is working on one possible mitigation: a statistical watermark indicating that a piece of text was generated by an AI tool rather than a human being. However, watermarking alone may not be enough to prevent the creation of malware.
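
The article does not say how OpenAI's watermark would work, and OpenAI has not published details. One published approach (Kirchenbauer et al., 2023), assumed here purely for illustration, biases generation toward a pseudorandom "green list" of tokens seeded by the preceding token; anyone who knows the seeding rule can then count green tokens to test for the watermark. The toy sketch below shows only that detection-side counting.

```python
# Toy sketch of green-list watermark *detection* in the style of
# Kirchenbauer et al. (2023). This is an assumed scheme for illustration;
# it is not confirmed to be how OpenAI's watermark would work.
import hashlib

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green"

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by context."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens that fall on the green list given their context."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Watermarked text should score well above GREEN_FRACTION;
# ordinary human-written text should hover near it.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green-token rate: {green_rate(sample):.2f} (baseline {GREEN_FRACTION})")
```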

It is worth noting that tampering with ChatGPT's safeguards to make it generate malware outright still requires a degree of technical skill beyond the reach of most hackers. Nevertheless, the potential for abuse remains a serious concern.

