The cybersecurity company Darktrace has warned that, since the release of ChatGPT, cybercriminals have increasingly been using artificial intelligence to perpetrate more sophisticated scams.
The Cambridge-based firm stated that AI is enabling “hacktivist” cyber-attacks that use ransomware to extort money from businesses.
According to Darktrace, hackers have produced more convincing and complex scams since the launch of the Microsoft-backed AI tool.
Darktrace has found that while the number of email attacks across its own customer base has remained steady since ChatGPT’s release, the proportion that rely on tricking victims into clicking malicious links has declined.
At the same time, the linguistic complexity of these scams has increased, in measures such as text volume, punctuation, and sentence length.
Darktrace suggests that this indicates that cybercriminals are shifting their focus towards crafting more sophisticated social engineering scams that exploit user trust.