Artificial Intelligence (AI) has become a buzzword over the last decade. Every business, and more broadly every aspect of human life, has been deeply affected by the onset of AI-powered technology, in both positive and negative ways. More concerning still, the rapid evolution of AI has turned it into a weapon for cybercriminals. Hackers are leveraging AI to write more sophisticated phishing emails and to create deepfakes that evade detection. The unprecedented rise in AI-powered cybercrime has forced security teams to rethink their strategies and to devise more refined and innovative defenses.
The AI-powered Cybercrime Boom
AI-powered cybercrime has been on the rise lately. Such attacks are now commonplace, and documented cases abound. Here are a few examples:
AI-powered Voicemails and Business Email Compromise (BEC) Attacks:
Another worrying trend in AI-backed cybercrime is the rise of AI-powered BEC attacks. Unlike traditional BEC attacks, which impersonate someone’s email, AI-powered BEC attacks clone a person’s voice and leave a convincing voicemail. Gutsalyuk (2024), in his book “AI in Cybersecurity: An Instrument for Defense and Attack,” highlighted a case in which an AI-driven phishing email combined with a sophisticated deepfake impersonation led to a $243,000 loss for a company. He explained that AI created a highly accurate deepfake voice message from the company’s executive that led employees to believe it was real. Moreover, there were no unusual email addresses and no spelling or grammar errors, because AI polished every aspect of the scam.
AI-powered Malware Attacks:
Another rising trend is the use of language models like ChatGPT, Copilot, and Claude to write malicious code. The wide availability of AI-powered tools poses a serious threat of automated cyberattacks, because research has shown that tools like ChatGPT can be manipulated into producing malicious code. In one such instance, Vitaly Simonovich, a threat intelligence researcher, convinced ChatGPT to write malicious code that could break into Google Chrome’s password manager. He did so through a role-play experiment in which he convinced ChatGPT that he was a superhero named “Jaxon” combating a villain named “Dax” (Tangalakis-Lippert). The episode highlights the vulnerability of widely available AI-powered tools and their potential for misuse.
Sophisticated Phishing Attacks:
Phishing scams are not a novel phenomenon, but their intensity and effectiveness have increased manifold with the advent of Large Language Models (LLMs) like ChatGPT. Attackers now use AI-powered web scraping and widely available public information to collect data about a person, such as their name, job title, and location, from LinkedIn and other corporate websites. This data is used to craft personalized emails. For instance, AI can analyze a person’s social media posts to learn their communication style.
Attackers also bypass traditional security filters by rewording emails in more genuine and unique ways. In 2025, ENISA published the report “ENISA Threat Landscape: Finance Sector,” covering January 2023 to June 2024. It categorizes phishing as one of the most common social engineering attacks and notes that the Latvian State Police reported fraud campaigns in which attackers impersonated bank officials to steal information, leading to financial loss and identity theft. The report also states that 50% of social engineering attacks caused financial losses, 28% led to fraud and large-scale financial crimes, and 19% resulted in the exposure of sensitive information.
How are Security Teams Fighting Back?
Fighting an unknown enemy is always hard, but once the enemy is identified, a solution can be found. The same holds for AI-powered cybercrime. Here is how security teams are fighting back:
Adoption of Zero Trust Architecture (ZTA)
Zero Trust Architecture (ZTA) is a security model built on the basic premise that no user or device should be trusted by default. It relies on continuous verification of identities, stringent access controls, and network segmentation so that attackers cannot move laterally through a system. Major tech companies such as Microsoft and Google have implemented the ZTA model; Google launched BeyondCorp, a ZTA-based framework, after the 2010 Operation Aurora cyberattack.
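The deny-by-default idea is easy to see in a short sketch. The example below is a minimal, illustrative policy engine, not any vendor’s implementation; the segment names, policy table, and checks are hypothetical stand-ins for real identity, device-posture, and segmentation services.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_trusted: bool   # device posture (managed, patched) verified
    mfa_verified: bool     # identity re-verified for this request
    resource_segment: str  # network segment the resource lives in

# Hypothetical policy table: which users may reach which segments.
SEGMENT_POLICY = {
    "finance-db": {"alice"},
    "build-servers": {"alice", "bob"},
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant access only when every check passes."""
    if not req.mfa_verified:    # continuous identity verification
        return False
    if not req.device_trusted:  # trust device posture, not network location
        return False
    allowed = SEGMENT_POLICY.get(req.resource_segment, set())
    return req.user in allowed  # segmentation blocks lateral movement

# Every request is evaluated independently; no implicit trust carries over.
print(authorize(AccessRequest("alice", True, True, "finance-db")))  # True
print(authorize(AccessRequest("bob", True, True, "finance-db")))    # False
```

The key design point is that each request is re-evaluated from scratch; compromising one host or session grants no standing access to anything else.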
In 2022, the U.S. Department of Defense also adopted a Zero Trust strategy to counter attacks from China and Russia. In 2021, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) reported a cyberattack by the DarkSide ransomware group on Colonial Pipeline, a major U.S. fuel supplier. After the attack, Colonial Pipeline adopted a ZTA solution, and no major cyberattack has been reported since.
Agentic AI to Counter Cyberattacks
The technology that powers cybercrime can also serve as its antidote. Security teams are increasingly deploying autonomous AI-driven security systems, i.e., agentic AI, to curb cybersecurity threats. These systems can analyze threats in real time and run pre-approved countermeasures. CrowdStrike, for instance, has developed Charlotte AI, which automates threat investigation and filters out false positives; it handles low-level threat triage and prioritizes critical threats autonomously. In 2024, CrowdStrike Signal was launched, which groups related threats into actionable insights. It is also a self-learning model, which multiplies its effectiveness over time (CrowdStrike Press Release).
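To make the idea concrete, here is a minimal sketch of an agentic triage loop. This is not CrowdStrike’s algorithm; the alert fields, confidence thresholds, and isolate_host action are hypothetical, standing in for a real detection model and a pre-approved response playbook.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    technique: str     # e.g., a MITRE ATT&CK technique ID
    confidence: float  # model-assigned likelihood the alert is real

def isolate_host(host: str) -> None:
    # Pre-approved countermeasure: isolation is reversible and low-risk,
    # so the agent may run it without waiting for a human.
    print(f"[auto] isolating {host} from the network")

def triage(alerts: list[Alert]) -> None:
    # Work through alerts from most to least confident.
    for alert in sorted(alerts, key=lambda a: a.confidence, reverse=True):
        if alert.confidence < 0.3:
            continue                  # filter out likely false positives
        if alert.confidence > 0.9:
            isolate_host(alert.host)  # autonomous, pre-approved response
        else:
            print(f"[queue] {alert.host}: {alert.technique} "
                  f"(confidence {alert.confidence:.2f}) -> analyst review")

triage([
    Alert("ws-042", "T1003 credential dumping", 0.95),
    Alert("ws-107", "T1059 scripting", 0.55),
    Alert("ws-311", "T1021 remote services", 0.12),
])
```

The pattern mirrors what such products advertise: the agent clears low-level noise on its own and reserves analyst time for the ambiguous middle band.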
Threat Intelligence Sharing: A Collaborative Approach
Collaboration is key to combating cyberattacks. The surge in AI-powered attacks has spurred collaboration across governments, industries, and security firms. Initiatives like the Cyber Threat Alliance (CTA) and MITRE ATT&CK facilitate the sharing of threat intelligence, helping organizations stay ahead of attackers. One of the most notable examples involves the U.S. Treasury Department.
In 2024, the Salt Typhoon cyberattack was launched against the U.S. Treasury Department. Cybersecurity agencies like CISA and the NSA, along with private security firms, detected the suspicious activity early thanks to the threat intelligence sharing network, and the threat was mitigated through this proactive approach (Ozeren, 2025).
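In practice, shared intelligence often arrives as machine-readable indicators of compromise (IOCs), typically over standards like STIX/TAXII. The sketch below mocks a shared feed as a plain set and matches it against local connection logs; the addresses (drawn from documentation IP ranges) and log format are invented for illustration.

```python
# A real feed would be fetched from a TAXII server or exchange platform;
# here it is mocked as a plain set of known-bad destination addresses.
SHARED_IOCS = {
    "203.0.113.7",
    "198.51.100.23",
}

# Hypothetical local network telemetry.
local_connections = [
    {"src": "10.0.0.5", "dst": "203.0.113.7"},
    {"src": "10.0.0.9", "dst": "93.184.216.34"},
]

# Match local telemetry against community-shared indicators.
for conn in local_connections:
    if conn["dst"] in SHARED_IOCS:
        print(f"ALERT: {conn['src']} contacted known-bad host {conn['dst']}")
```

The value of sharing is exactly this: an organization can flag infrastructure it has never seen before, because someone else in the network already has.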
AI-Powered Phishing Detection
The same AI that is used to create sophisticated phishing attacks can be used to detect them. Security firms are using AI-powered anomaly detection systems that analyze metadata, contextual inconsistencies and writing patterns to flag suspicious emails. Startups like Abnormal Security and Darktrace use AI-powered behavioral analysis to detect and block phishing emails.
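As a rough illustration of the signals such systems weigh, here is a toy scorer combining a few heuristics: a reply-to domain mismatch, urgency language, and raw-IP links. The keyword list and weights are invented for the example and are far simpler than the behavioral models vendors like Abnormal Security or Darktrace actually use.

```python
import re

# Hypothetical urgency vocabulary; real systems learn these signals.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "wire"}

def phishing_score(sender: str, reply_to: str, body: str) -> float:
    """Combine simple signals into a suspicion score in [0, 1]."""
    score = 0.0
    if sender.split("@")[-1] != reply_to.split("@")[-1]:
        score += 0.4                              # reply-to domain mismatch
    words = set(re.findall(r"[a-z]+", body.lower()))
    score += 0.1 * len(words & URGENCY_WORDS)     # urgency language
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 0.3                              # raw-IP link in the body
    return min(score, 1.0)

email_body = "Urgent: verify your account immediately at http://203.0.113.7/login"
print(phishing_score("ceo@example.com", "ceo@examp1e.net", email_body))  # 1.0
```

Production systems replace these hand-picked rules with models trained on each organization’s normal communication patterns, which is what lets them catch AI-reworded emails that keyword filters miss.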
Employee Training
It is often said that knowledge is power. Even the most accurate, most tested, and most reliable system can fall victim to human error, so the safest bet for security teams is to train their employees on the latest cyberthreats. Security teams should invest in live drills for spotting phishing emails and deepfake voice messages. JPMorgan Chase, for instance, trains its employees to handle deepfake and AI-based phishing attacks; others should follow suit.
Concluding Thoughts
Using Artificial Intelligence (AI) to launch cyberattacks and to detect them is a cat-and-mouse game. The importance of AI is undeniable, and it is here to stay. Since AI plays a central role on both sides, there is hope: if AI strengthens attackers, it can also serve as the antidote. Security teams are deploying innovative, cutting-edge solutions, but there is also a need to raise awareness among employees and to educate the public about the potential threats of cybercrime. The battle against cybercrime is far from over, and those who fail to adapt will be left vulnerable.