The cybersecurity company Darktrace has warned that, since the release of ChatGPT, cybercriminals have increasingly been using artificial intelligence to carry out more sophisticated scams. The Cambridge-based firm said AI is also enabling “hacktivist” cyber-attacks and ransomware campaigns that extort money from businesses.
ChatGPT is an example of generative AI, which uses complex algorithms to create original content such as essays, poems, and images in a matter of seconds. Developed by OpenAI and backed by Microsoft, ChatGPT has sparked a conversation about the implications of generative AI, particularly for content creation and its potential misuse.
Despite its many benefits, generative AI also poses significant risks around fraud and cybersecurity. The concern is that these tools can be used to create convincing fake content that tricks people into revealing sensitive information or handing over money.
In the case of ChatGPT, Darktrace has warned that the bot’s ability to produce realistic, authentic-sounding text could make phishing scams markedly more sophisticated.
Since ChatGPT’s launch, Darktrace says it has observed hackers mounting more convincing and complex scams. Across its own customer base, the overall volume of email attacks has held steady, but attacks that rely on tricking victims into clicking malicious links have fallen. At the same time, the linguistic complexity of these scam emails has increased, measured by factors such as text volume, punctuation, and sentence length. Darktrace suggests this indicates that cybercriminals are shifting their focus towards crafting more sophisticated social engineering scams that exploit user trust.
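Darktrace has not published how it measures this shift, but the signals it names (text volume, punctuation, and sentence length) are straightforward to compute. The sketch below is purely illustrative: the linguistic_profile function and the way it combines these metrics are assumptions for demonstration, not Darktrace’s methodology, and it uses only the Python standard library.

```python
import re
from statistics import mean

def linguistic_profile(email_body: str) -> dict:
    """Rough linguistic-complexity profile of an email body.

    Illustrative only: the metrics here (text volume, punctuation
    density, average sentence length) are simply the signals named
    in the article, computed in the most straightforward way.
    """
    # Split into sentences on terminal punctuation, keeping non-empty pieces.
    sentences = [s for s in re.split(r"[.!?]+\s*", email_body) if s.strip()]
    # Count words and punctuation marks with simple regex patterns.
    words = re.findall(r"[A-Za-z']+", email_body)
    punctuation = re.findall(r"[^\w\s]", email_body)

    return {
        "word_count": len(words),  # text volume
        "punctuation_per_100_chars": 100 * len(punctuation) / max(len(email_body), 1),
        "avg_sentence_length_words": (
            mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences)
            if sentences else 0.0
        ),
    }

if __name__ == "__main__":
    sample = (
        "Dear colleague, following our recent audit we have identified a "
        "discrepancy in your payroll record; please review the attached "
        "statement and confirm your details by Friday."
    )
    print(linguistic_profile(sample))
```

In practice such features would only be one input among many; a longer, well-punctuated email is not malicious in itself, but a sudden population-level shift in these statistics is the kind of trend the firm describes.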
Darktrace acknowledges that this phenomenon has not produced a new wave of cybercriminals; instead, existing threat actors are changing their tactics. Although ChatGPT has not yet significantly lowered the barriers to entry for threat actors, Darktrace believes it may have contributed to an increase in the sophistication of phishing emails, enabling adversaries to mount more targeted, personalized and, ultimately, more successful attacks.
With ChatGPT able to generate fluent, natural-sounding text, criminals could use the bot to craft convincing phishing emails that are far harder to spot than those written by humans. Darktrace has urged individuals and businesses to stay vigilant and to keep their cybersecurity protocols up to date and effective.
The rise of generative AI is both exciting and daunting, with its potential to revolutionize content creation and other areas of life. However, it is essential that we recognize the potential risks and take proactive measures to mitigate them.
In its results, Darktrace also warned of a “noticeable” slowdown in business sign-ups for its security products during the final three months of 2022. The company attributed the fall in its operating profits over the last six months of the year to a tax bill related to the vesting of share awards for its CEO, Poppy Gustafsson, and finance boss, Cathy Graham, a charge that also led it to cut its free cash flow forecast for this year.
Despite a barrage of criticism from short-sellers who doubt Darktrace’s potential to become a European superpower in the US-dominated cybersecurity space, the company remains unfazed by the recent slump in new business. Darktrace’s customer base grew 25% year-on-year, from 6,573 to 8,178, in the six months to December.