Jericho Security | Blog

Buffett's AI Conundrum: The Alarming Potential of AI in Security

Written by Jericho Security Contributor | June 25, 2024

At a recent Berkshire Hathaway shareholders meeting, 93-year-old philanthropist Warren Buffett expressed his concerns about artificial intelligence. The billionaire investor likened the development of AI to the creation of nuclear weapons, describing both as powerful forces that, once unleashed, cannot be contained.


Buffett highlighted the alarming potential of AI, warning that its unintended consequences could be as uncontrollable and dangerous as those of nuclear technology. His remarks underscore the need for cautious and responsible development and implementation of AI systems.


But how exactly has AI expanded the cyber threat landscape?


The Evolution of Cybersecurity Dangers Since the GenAI Boom


To keep this article concise and easy to read, we’ll focus on one common yet highly damaging cyberattack that has benefited greatly from the GenAI boom: phishing.


Phishing, one of the most common and harmful forms of cyberattack, has evolved significantly with the advent of Generative AI (GenAI). Phishing attempts to trick individuals into revealing personal information through deceptive emails, websites, and social engineering. While phishing has been around for decades, the recent proliferation of GenAI has fueled the growth of far more sophisticated forms of these attacks.


First, GenAI allows for the automatic generation of email-based phishing attacks. Instead of a single template being sent to deceive many individuals, GenAI can create a customized phishing email for each potential victim. This customization makes it difficult for security systems to identify phishing emails before they reach the recipient.


GenAI’s ability to create countless variations of customized phishing emails also broadens the range of potential victims, increasing the likelihood that recipients will respond to the phishing attempts. A study published in Harvard Business Review found that the entire phishing process can be automated using LLMs, reducing the cost of phishing attacks by more than 95%.


Second, GenAI can bypass existing defenses against phishing. Traditional defenses against email phishing rely on filtering elements such as URLs, IP addresses, and other information within the email. Mail servers can also detect when identical phishing emails are sent to a large number of recipients in a short period. Once identified as spam, these phishing emails are deleted or moved to the spam folder.
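As a rough illustration of how this kind of traditional filtering works, here is a minimal Python sketch that checks linked domains against a blocklist and flags message bodies seen in bulk. The blocklisted domains, duplicate threshold, and function names are illustrative assumptions, not any particular vendor’s implementation.

```python
import hashlib
from collections import Counter
from urllib.parse import urlparse

# Illustrative blocklist; real filters would pull these from threat-intelligence feeds.
BLOCKED_DOMAINS = {"phish-example.com", "login-verify-example.net"}

# Assumed threshold: how many identical bodies must appear before we treat it as a bulk mailing.
DUPLICATE_THRESHOLD = 100

# Fingerprints of message bodies seen so far.
seen_fingerprints = Counter()


def is_suspicious(body, urls):
    """Flag an email that links to a blocked domain or whose exact body has been seen in bulk."""
    domains = {urlparse(u).hostname for u in urls}
    if domains & BLOCKED_DOMAINS:
        return True

    fingerprint = hashlib.sha256(body.encode("utf-8")).hexdigest()
    seen_fingerprints[fingerprint] += 1
    return seen_fingerprints[fingerprint] >= DUPLICATE_THRESHOLD


print(is_suspicious("Verify your account now.", ["http://phish-example.com/login"]))  # True
```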


However, with GenAI, each phishing email generated is unique, making it difficult for existing email security systems to detect. These emails are sophisticated enough that an untrained human eye can easily fall prey to them, leading to serious financial losses, especially if the victim holds broad access to company resources.


Turning the Assailant into a Protector


Just as GenAI has been used to craft malicious attacks, organizations can turn the same algorithms into potent defenses against phishing emails. While no solution is perfect, several studies have identified discernible differences between human-generated phishing emails and AI-generated ones.


AI-generated emails exhibit specific patterns that can be profiled using machine learning. Through neural networks, topic modeling, and style analysis, AI-generated emails can often be identified with high accuracy. This approach allows cybersecurity systems to stay ahead of evolving threats and provides more robust protection.
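As a simplified sketch of the underlying idea (not Jericho Security’s actual detection stack), the Python example below trains a basic text classifier with scikit-learn. The sample emails, TF-IDF features, and logistic regression model are assumptions made for demonstration; a production system would use the neural networks, topic modeling, and style analysis described above, trained on a large labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = suspected AI-generated phishing, 0 = legitimate, human-written mail.
emails = [
    "Dear valued customer, your account requires immediate verification to avoid suspension.",
    "Your mailbox storage is almost full. Click the secure link below to upgrade now.",
    "Hi team, attached is the Q3 budget we discussed in Monday's meeting.",
    "Thanks for the feedback on the draft - I'll send a revised version tomorrow.",
]
labels = [1, 1, 0, 0]

# Word n-gram TF-IDF features serve as a crude stand-in for the style analysis mentioned above.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(emails, labels)

# Score an incoming message: the probability of class 1 is the model's
# estimate that the email is AI-generated phishing.
incoming = ["Dear customer, please verify your account immediately via the link below."]
print(classifier.predict_proba(incoming)[0][1])
```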


Nonetheless, machine learning systems need to be trained specifically on AI-generated emails, as these differ from manually written ones. Additional protection methods include security awareness training for employees, who remain the most common point of vulnerability for companies.


As we navigate this new technological frontier, the balance between harnessing AI's potential and mitigating its risks will be crucial. Continuous vigilance and adaptation in cybersecurity strategies will be essential to safeguard against the evolving threats posed by AI-driven cyberattacks.