ChatGPT phishing: Increase in Cyber Attacks using ChatGPT look-alike Sites

Cybersecurity experts at Check Point Research (CPR) have been monitoring the implications of AI, particularly ChatGPT, for the security of online users. Their latest report highlights a concerning trend of cyberattacks leveraging websites associated with the ChatGPT brand.

In December 2022, Check Point Research began raising concerns about the implications of ChatGPT for cybersecurity. In its previous report, the company highlighted a growing trade in stolen ChatGPT Premium accounts, which let cybercriminals bypass OpenAI’s geofencing restrictions and gain unlimited access to ChatGPT.

Now, experts at Check Point Research report that they have recently noticed an increase in cyberattacks using websites associated with the ChatGPT brand. These attacks involve the distribution of malware and phishing attempts through websites that appear to be related to ChatGPT.

The Rise of Malicious ChatGPT Look-alike Websites

Phishing scams often use a technique called “lookalike domains,” which involves creating fake domains that closely resemble legitimate or trustworthy domains. The goal is to trick recipients into thinking they’re interacting with a legitimate entity when, in fact, they’re giving away sensitive information to an attacker.

Check Point Research has identified multiple campaigns that set up websites mimicking ChatGPT’s official site to deceive users into downloading malicious files or disclosing sensitive information. The frequency of these attacks has risen steadily over the past few months, with tens of thousands of attempts to access the malicious sites. From the beginning of 2023 to the end of April, 13,296 new domains related to ChatGPT or OpenAI were registered.

The scale of the surge is alarming. CPR’s analysis of those new domains found that 1 out of every 25 was either malicious or potentially malicious.

Fake Domains: A Common Technique Used in Phishing Schemes

One of the most common techniques used in phishing schemes is the lookalike or fake domain. Cybercriminals register domains designed to appear legitimate or trusted at a casual glance. For instance, instead of the email address boss@company.com, a phishing email may use boss@cornpany.com, substituting ‘rn’ for ‘m’. Such an email may look authentic, but it belongs to a completely different domain that may be under the attacker’s control.
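
To make the trick concrete, here is a minimal Python sketch of the kind of check a mail filter might apply. The substitution table and the normalize/is_lookalike helpers are illustrative assumptions for this example, not part of CPR’s tooling; real filters use far larger homoglyph tables.

    # Minimal sketch: flag senders whose domain only matches a trusted
    # domain after undoing common look-alike substitutions ('rn' -> 'm').
    SUBSTITUTIONS = {
        "rn": "m",   # 'rn' renders like 'm' in many fonts
        "vv": "w",
        "0": "o",
        "1": "l",
    }

    def normalize(domain: str) -> str:
        """Collapse common visual tricks so look-alikes map to the real spelling."""
        d = domain.lower()
        for fake, real in SUBSTITUTIONS.items():
            d = d.replace(fake, real)
        return d

    def is_lookalike(sender: str, trusted_domain: str) -> bool:
        """True when the sender's domain imitates, but does not match, the trusted one."""
        domain = sender.rsplit("@", 1)[-1]
        return domain != trusted_domain and normalize(domain) == normalize(trusted_domain)

    print(is_lookalike("boss@cornpany.com", "company.com"))  # True: 'rn' mimics 'm'
    print(is_lookalike("boss@company.com", "company.com"))   # False: the real domain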

Phishers may also use fake but believable domains in their attacks. For example, an email claiming to be from Netflix may come from help@netflix-support.com, an address that seems plausible but is not necessarily owned by or associated with Netflix.

CPR has identified several malicious websites that imitate the ChatGPT brand, including chat-gpt-pc.online, chat-gpt-online-pc.com, chatgpt4beta.com, chat-gpt-ai-pc.info, and chat-gpt-for-windows.com. Once a victim clicks on these malicious links, they are redirected to these websites and potentially exposed to further attacks.
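
As a rough illustration of how such registrations can be triaged, the following Python sketch flags domains that embed the brand name without belonging to an official domain. The OFFICIAL allowlist here is an assumption made for the example; the test domains are among those CPR named.

    # Sketch: flag domains that carry the ChatGPT/OpenAI brand name
    # but are not on a known-official list.
    OFFICIAL = {"openai.com", "chat.openai.com"}
    BRAND_TOKENS = ("chatgpt", "openai")

    def looks_suspicious(domain: str) -> bool:
        """True when a domain embeds a brand token but is not an official domain."""
        if domain.lower() in OFFICIAL:
            return False
        # Drop separators so 'chat-gpt-pc.online' still reveals 'chatgpt'.
        collapsed = domain.lower().replace("-", "").replace(".", "")
        return any(token in collapsed for token in BRAND_TOKENS)

    for d in ("chat-gpt-pc.online", "chatgpt4beta.com", "chat.openai.com"):
        print(d, "->", "suspicious" if looks_suspicious(d) else "ok")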

Why ChatGPT is a Target for Cybercriminals

ChatGPT’s popularity makes it a lucrative target for cybercriminals. The ability to generate natural and human-like language opens up a range of possibilities for attackers. They can use ChatGPT to generate convincing phishing emails, impersonate legitimate organizations, or even generate fake news.

In addition, the vast amount of data that ChatGPT processes and generates makes it a valuable target for data theft. Attackers can use stolen ChatGPT accounts to access sensitive information or use ChatGPT to generate fake documents or reports.

Here are some tips for staying safe when using ChatGPT:

  • Only use ChatGPT from trusted sources, ideally by accessing it directly through OpenAI’s official website rather than look-alike apps or sites.
  • Be careful about what information you share with ChatGPT.
  • Keep your antivirus software up to date.
  • Be suspicious of any links or attachments claiming to come from ChatGPT (a simple link check is sketched after this list).
  • If you think you have been the victim of a cyberattack, contact your IT department or a cybersecurity expert immediately.
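
For the link-related tips above, here is a minimal Python sketch of an exact-hostname check before following a “ChatGPT” link; the ALLOWED_HOSTS allowlist is an assumption for the example.

    # Sketch: accept a link only if its hostname exactly matches an allowed host.
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"chat.openai.com", "openai.com"}

    def is_trusted_link(url: str) -> bool:
        """True only when the URL's hostname exactly matches an allowed host."""
        host = (urlparse(url).hostname or "").lower()
        return host in ALLOWED_HOSTS

    print(is_trusted_link("https://chat.openai.com/"))        # True
    print(is_trusted_link("https://chat-gpt-pc.online/app"))  # False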

The rise in cyberattacks targeting ChatGPT raises a larger question: what is the future of AI? Is it something to be feared or embraced?

I believe the future of AI holds both anxiety and aid. AI has the potential to help solve some of the world’s most pressing problems, such as climate change and poverty. But it also has the potential to create new ones, such as job losses and the proliferation of autonomous weapons.

The key to a successful future for AI is to ensure that it is used for good, not for evil. We need to develop ethical guidelines for the development and use of AI, and we need to make sure that AI is used in a way that benefits all of humanity, not just a select few.

Here are some specific ways that we can best prepare for a future where AI plays an increasingly prominent role in our lives:

  • Educate ourselves about AI. The more we know about AI, the better equipped we will be to make informed decisions about its use.
  • Develop ethical guidelines for the development and use of AI. These guidelines should be based on principles such as fairness, transparency, and accountability.
  • Support research into the safe and responsible development of AI. This research will help us to identify and mitigate the potential risks of AI.
  • Engage with the public about AI. We need to have a public conversation about the future of AI and how we can best ensure that it is used for good.

The age of AI is upon us. It is up to us to ensure that it is an age of opportunity, not an age of anxiety.
