Cybercriminals' new hat trick: AI hallucinations

  • AI hallucinations occur when an AI tool churns out an answer that, “while might sound true, has no factual basis and/or cannot be justified by its training corpus data,” according to Cato’s Etay Maor.

  • Attackers are using large language models (LLMs) to deploy attacks, notably through phishing emails.

  • To guard against AI hallucinations, organizations using LLMs will need to be extremely cautious when relying on AI outputs for critical decisions.

Even artificial intelligence (AI) has its kryptonite. Cybercriminals, notoriously early adopters of new technology, are taking advantage of large language models (LLMs) to create “human-like” content that masks their attacks, according to Etay Maor, Cato’s senior director of security strategy. At the same time, the models themselves suffer from a sci-fi-sounding weakness known as AI hallucinations.

(Etay Maor, Cato Networks)

AI has been in the spotlight for quite some time, and “the fact that AI is now so easily accessible means that anyone with good or bad intentions can utilize this new tool as part of their arsenal,” Maor told Silverlinings. “While AI tools have been around for years, ever since the introduction of ChatGPT, AI has received constant (and growing) focus.”

One way attackers are using LLMs is through phishing emails, which IBM named one of the two most prevalent attack vectors in a recent report. “These emails would have perfect grammar and spelling” and would be very hard to identify as malicious, Maor noted. Attackers will also use LLMs to write scripts or code and to identify vulnerabilities.

The same tools threat actors lean on can also generate AI hallucinations: an answer that, “while might sound true, has no factual basis and/or cannot be justified by its training corpus data,” according to Maor. It is a significant issue that no one fully understands at this point, he admitted, but one that needs to be understood.

AI hallucinations are significant because, with the rapid adoption of AI technologies, organizations and employees must realize that a convincing AI-produced answer can be completely erroneous. And AI tools are already part of day-to-day work: Cato recently reported that 62% of its enterprise customers used ChatGPT within a two-week period.

To guard against AI hallucinations, organizations using LLMs will need to be extremely cautious when relying on AI outputs for critical decisions. “In my opinion, AI hallucinations will be the number one problem with the future adoption of this technology with privacy and prompt injection attacks following behind,” warned Maor.

At this point, there is still no official solution to AI hallucinations. But it’s very clear that while these systems are useful at times, “using their output ‘as is’ can be very risky,” Maor explained.

In the future, Maor predicts we will see fact checkers applied to AI output, and possibly human validation of data before it is used in important decision-making. “We will need to have much more critical thinking when evaluating AI responses and not take it at face value,” he concluded.
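Neither Maor nor Cato spells out a concrete mechanism, but the combination of an automated fact check plus human sign-off he describes could look something like the Python sketch below. Every name in it (get_llm_answer, is_supported_by_sources, reviewed_answer) is a hypothetical placeholder, and the naive substring comparison merely stands in for a real fact-checking step.

```python
# Minimal, hypothetical sketch of "fact checker plus human validation" for LLM output.
# None of these names come from Cato or any real API; they are illustrative placeholders.

def get_llm_answer(question: str) -> str:
    # Placeholder: in practice this would call whichever LLM the organization uses.
    return "Example answer that may or may not be grounded in the source material."

def is_supported_by_sources(answer: str, sources: list[str]) -> bool:
    # Naive automated check: does any trusted document contain the answer text?
    # A real fact checker would need retrieval and entailment, not substring matching.
    return any(answer.lower() in doc.lower() for doc in sources)

def reviewed_answer(question: str, sources: list[str]) -> str | None:
    answer = get_llm_answer(question)

    # Step 1: flag answers the automated check cannot ground in trusted sources.
    if not is_supported_by_sources(answer, sources):
        print(f"Warning: answer is not supported by trusted sources: {answer!r}")

    # Step 2: require explicit human sign-off before the answer feeds a critical decision.
    verdict = input(f"Approve this answer for use? [y/N] {answer!r} ")
    return answer if verdict.strip().lower() == "y" else None

if __name__ == "__main__":
    docs = ["Internal report: phishing volume rose 12% last quarter."]
    print(reviewed_answer("How did phishing volume change last quarter?", docs))
```

The point of the sketch is the workflow, not the checks themselves: AI output is treated as unverified input that must clear both an automated grounding test and a human reviewer before it influences a decision.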