Novel Social Engineering Attacks Exploding: New Report from Darktrace

The dramatic growth of generative AI tools, including ChatGPT, is setting off alarm bells for a range of reasons, one of which is cyber security. Cyber defense company Darktrace recently published a blog post summarizing new data collected from its customer base, which points to a correlation between the increase in social engineering attacks and the rising use of AI software.

“Social engineering attacks pose an increasingly serious threat to organizations as an entry point for the majority of cyber attacks; that’s the first step,” said Osman Erkan, Founder and CEO of DefensX, a cyber security company whose secure web browser solutions protect individuals and organizations from web-borne threats. “Generative AI is pouring fuel on the fire, and it is critical that we rally together to fight this before attacks rage out of control.”

Darktrace saw a 135% increase in these “novel social engineering attacks” among customers from January to February 2023, according to the company’s chief product officer, Max Heinemeyer, who also noted that this same timeframe aligns with the “widespread adoption of ChatGPT.”

Heinemeyer doesn’t think the timing is a coincidence: “The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale,” he wrote.

“It’s important for every Managed Service Provider, every CIO at every enterprise, every government agency, and every cyber security professional to take a step back and review the threat of social engineering immediately,” Erkan warned. “While there are many obvious benefits of tools like ChatGPT and the hundreds of others emerging every day, keep in mind that well-funded and well-organized criminal rings also benefit from the ability to instantly create lifelike content that becomes massive campaigns in minutes. Large enterprises, to some extent, may increase their budgets and stretch their training muscles to counter this trend, but small and medium businesses, which account for half of the US workforce and economy, will suffer the collateral effects.”

Erkan said that leaders must go well beyond beefing up their threat detection and mitigation activities and “prioritize – immediately – defending against this brushfire by not only training their teams and all employees about social engineering attacks and AI’s growing impact on them, but also giving them the software they need to alert every end-user to suspicious emails, text messages, social media messages, and more. There are an infinite number of entry points for a social engineering attack. Defending email is not sufficient. This cannot wait.”

Social engineering attacks like the ones Darktrace reported use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, while carrying no links or attachments.
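To make those signals concrete, the short sketch below computes a few of the linguistic features described above (text volume, punctuation density, average sentence length, and the presence of links) for a plain-text message body. It is a minimal, hypothetical illustration: the function name and feature set are assumptions for this example, not Darktrace’s actual detection logic, which combines far more signals with learned baselines.

```python
import re

def linguistic_features(body: str) -> dict:
    """Compute simple linguistic signals for a plain-text message body.

    Illustrative only: production systems combine many more signals
    with learned baselines per sender and recipient.
    """
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    punctuation = re.findall(r"[,;:!?.]", body)
    return {
        "text_volume": len(words),                                   # overall length in words
        "punctuation_density": len(punctuation) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),  # words per sentence
        "contains_link": bool(re.search(r"https?://", body)),
    }

# A long, fluent, link-free message looks clean to traditional
# link/attachment filters but still registers on text-based measures.
print(linguistic_features(
    "Dear colleague, following our discussion last week, could you "
    "review the revised forecast before Friday? I would value your "
    "judgment on the updated figures; the board meets next Monday."
))
```

Note that none of these measures depends on a link or attachment being present, which is why messages like the ones Darktrace describes can slip past filters tuned to those traditional indicators.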

Erkan echoed Heinemeyer’s assessment: “This alarming trend clearly illustrates that generative AI is providing an avenue for threat actors to craft sophisticated, targeted attacks at speed and scale.”

In March 2023, Darktrace commissioned a global survey, conducted by Censuswide, of 6,711 employees across the UK, US, France, Germany, Australia, and the Netherlands to gather third-party insights into human behavior around email. The goal was to better understand how employees around the world react to potential security threats, how well they understand email security, and how modern technologies are being used to transform the threats against them.

In the UK, key findings from 1,011 respondents included:

  • Nearly 1 in 5 (19%) UK employees have fallen for a fraudulent email or text;
  • 80% of employees are concerned about the amount of personal information available about them online that could be used in phishing and other email scams;
  • 73% of employees are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication; and
  • 58% of employees have noticed an increase in the frequency of scam emails and texts in the last six months.

“Until now,” Erkan explained, “it has been easier to identify social engineering attack messages by their telltale signs: an invitation to click on a link or open an attachment, poor spelling and grammar, messages from an unknown sender, or atypical content from a known sender. The stakes are so much higher now, as generative AI is so well programmed in many cases that it can create impeccable content and, even more dangerous, personalized content so credible that recipients trust and engage.”

Erkan said social media is one of the richest sources of personal information, which can be harvested for Facebook or LinkedIn scams that produce realistic messages appearing to come from friends and business contacts.

“Generative AI attacks are linguistically complex and entirely novel scams that use techniques and reference topics that are completely beyond what we’ve ever seen before,” Erkan warned.

Heinemeyer commented on the findings: “Email security has challenged cyber defenders for almost three decades. Since its introduction, many additional communication tools have been added to our working days, but for most industries and employees, email remains a staple part of everyone’s job. As such, it remains one of the most useful tools for attackers looking to lure victims into divulging confidential information through communication that exploits trust, blackmails, or promises reward, so that threat actors can get to the heart of critical systems every single day.

“The email threat landscape is evolving and expanding to other tools, such as SMS, iMessage, WhatsApp, Slack, and so on. For 30 years, security teams have trained employees to spot spelling mistakes, suspicious links, and attachments. While we always want to maintain a defense-in-depth strategy, there are increasingly diminishing returns in entrusting employees with spotting malicious messages coming from different vectors. At a time when readily available technology allows attackers to rapidly create believable, personalized, novel, and linguistically complex phishing messages, humans are more ill-equipped than ever to verify the legitimacy of malicious content. Except for vishing, all of these attack vectors target the web browser in the last mile. Defensive technology needs to include a secure browser as the last layer to keep pace with the changes in the social engineering threat landscape; we must arm organizations with AI-supported secure browsers that can do that.”

Arti Loftus is an experienced Information Technology specialist with a demonstrated history of working in the research, writing, and editing industry with many published articles under her belt.

Edited by Erik Linask

This article was originally published on MSP Today