Artificial Intelligence has upended many well-established processes and procedures, and its impact is a classic example of market disruption through innovation. Broadly speaking, the arrival of AI is seen as a positive development, aside from concerns about job losses and redundancies.
However, if you look at the cybersecurity industry, this common perception is flipped. Here, AI is often seen as a tool for escalation because malicious actors are increasingly putting it to use. To see an example of this, we need only look at how AI technologies have transformed the phishing landscape, vastly increasing the ease with which bad actors can create and launch a range of cyberattacks.
More specifically, research has drawn attention to the risk that large language models – the artificial intelligence algorithms that power technologies such as ChatGPT – pose when used maliciously to create spear phishing attacks.
How does spear phishing work in a world where AI is accessible to almost everyone?
We’ll answer that question, dive into an exploration of LLMs and spear phishing tactics, and ultimately ask: could AI be the solution as well as the challenge?
[Norton (2023). What is Phishing? How to Spot and Avoid It]
Spear phishing is one type of phishing – a fraudulent messaging tactic that aims to acquire personal information. Other varieties include smishing (SMS phishing), vishing (voice phishing), and standard email phishing, and advancing technologies only open the door to more.
Spear phishing targets a specific individual, group, or organization, aiming to trick the recipient of the message (whether that's an email, text, or voicemail) into providing personal information, such as bank or credit card details or login credentials.
Unlike more generalized forms of phishing, spear phishing is sophisticated in its approach. It emphasizes quality over quantity, approaching a limited number of targets and using personal information about each one to make the attack convincing.
Artificial intelligence can expedite this process, making it easier for bad actors to create more convincing spear phishing attacks.
A recent study by Julian Hazell, Spear Phishing with Large Language Models, explores the effects of large language models (LLMs) on how spear phishing works.
The study analyzes how spear phishing works in the age of AI, detailing how LLMs can make it easier for cybercriminals to launch sophisticated attacks. This highlights a significant cybersecurity issue involving AI technologies, particularly LLMs such as GPT-3.5 and GPT-4.
Tests involving more than 600 British MPs revealed that GPT-3.5 and GPT-4 could generate authentic-looking messages cheaply and efficiently, potentially increasing the volume of effective cyberattacks. The study also shows how easily AI can assist in the collection of personal data and the creation of highly personalized, convincing phishing messages.
LLMs can contribute to the creation of sophisticated spear phishing attacks. Powerful language models such as GPT-4 can be used not only to create the attack itself but also at multiple other stages of the cybercrime process.
The Oxford study highlights three key ways LLMs can be used:
[Hazell, J. (2023). Spear Phishing with Large Language Models.]
LLMs can be used during the reconnaissance stage of spear phishing attacks to carry out investigative work. Replacing labor-intensive manual background checks, generative AI models can be used to gather personal information. These key data points are then used to lend legitimacy to the spear phishing contact – for example, by including the recipient’s place of work, role, location, personal interests, and more. These details can be the difference between a successful spear phishing attack and one that is easily dismissed as spam.
Next, LLMs can be used to generate the text used in spear phishing attacks. While AI models are trained to refuse these kinds of requests, there are ways around this. As the study's author notes, it is possible to circumvent these restrictions by writing a prompt that describes the characteristics of a convincing email without ever labeling it as a phishing email.
Those characteristics can then be combined with biographical information about the target to form a prompt that produces a sophisticated spear phishing email.
Finally, LLMs can also be used to access information on how spear phishing works and tips on how to carry out a successful spear phishing campaign.
While most models will not offer this information in response to a direct prompt, the study notes that prompt engineering can bypass these safeguards and give malicious actors access to guidance on carrying out a variety of cyberattacks.
Overall, this research emphasizes the dual-use nature of AI: a powerful tool for advancement but also a cybercrime instrument.
So, what can be done to prevent spear phishing attacks?
The ease with which AI (and specifically LLMs) can be used to simplify and expedite the spear phishing process calls for a reevaluation of the role of AI in cybersecurity. It highlights the need for advanced AI-driven defense mechanisms to keep up with the evolving threat landscape.
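To make this concrete, here is a minimal sketch of one such AI-driven defense mechanism: a simple text classifier that learns to flag likely phishing emails from labeled examples. The training data and model choice below are hypothetical and purely illustrative; production-grade defenses draw on many more signals (sender reputation, links, attachments) and far larger datasets.

```python
# Minimal sketch of an AI-assisted phishing filter: a text classifier trained on
# labeled email bodies. Illustrative only, not a production defense.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: email bodies labeled 1 (phishing) or 0 (legitimate).
emails = [
    "Your account has been suspended. Verify your login details immediately.",
    "Hi team, the quarterly report is attached for review before Friday's meeting.",
    "We detected unusual activity. Confirm your credit card number to avoid closure.",
    "Lunch is on me tomorrow if anyone wants to join at noon.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

# Score an incoming message: estimated probability that it is phishing.
incoming = ["Urgent: update your bank login information using the link below."]
print(clf.predict_proba(incoming)[0][1])
```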
At Jericho Security, we specialize in AI-driven email simulations, offering an innovative approach to combating these sophisticated threats. Our solutions provide realistic, adaptive simulations and training scenarios that enhance your team's ability to recognize and respond to advanced phishing attempts. By using AI to fight AI, we create a level playing field: LLMs can make phishing attacks easier, but they can also make phishing defense easier. One innovative example is the use of LLMs to build phishing simulations, particularly through topic discovery. Jericho Security's simulations create hyper-realistic "attacks" that train your organization to identify phishing attempts and avoid costly cyber threats, and a thorough analysis of semantic similarity is one of the technological advances that makes this possible.
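As a simplified illustration of that last idea, the sketch below uses sentence embeddings to score how semantically close an incoming message is to known phishing lures. It assumes the open-source sentence-transformers library and hypothetical example text; it is not Jericho Security's actual pipeline.

```python
# Minimal sketch of semantic-similarity scoring with sentence embeddings.
# Assumes the open-source sentence-transformers library and a small pretrained model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical known phishing lures and an incoming message to score against them.
known_lures = [
    "Your mailbox is full. Sign in now to avoid losing messages.",
    "Invoice overdue: confirm your payment details to prevent service interruption.",
]
incoming = "Action required: your inbox storage is almost exceeded, log in to keep receiving mail."

# Embed both sides and compare with cosine similarity.
lure_embeddings = model.encode(known_lures, convert_to_tensor=True)
incoming_embedding = model.encode(incoming, convert_to_tensor=True)
scores = util.cos_sim(incoming_embedding, lure_embeddings)

# The highest score indicates the closest-matching known lure.
print(float(scores.max()))
```

A high score flags a message that echoes a familiar lure even when the wording differs, and the same intuition can be used to group simulation content by theme.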
Ultimately, while AI poses a significant cybersecurity threat, these same technologies can be used to guard against attacks and better equip organizations to defend against spear phishing and other malicious tactics.
With Jericho Security, your organization is prepared to handle current threats and future challenges in the ever-evolving digital landscape. Equip your team with Jericho Security's advanced AI-driven training and spear phishing defense solutions for a secure digital future.
Want to learn more? Dive into the full Oxford study.