When a hedge fund manager opened an innocuous-looking Zoom meeting invite, he had little idea of the corporate carnage that would follow. The invite was booby-trapped with malware, enabling threat actors to hijack his email account. From there they moved swiftly, authorizing money transfers on his behalf for fake invoices they sent to the hedge fund.

In total, they approved $8.7 million worth of invoices in this way. The incident was ultimately the undoing of Levitas Capital, after it forced the exit of one of the firm’s biggest clients.

Unfortunately, targeting of senior execs like this is not uncommon. Why bother with the little fish when whales can elicit such riches?

What is whaling?

Put simply, a whaling cyberattack is one targeted at a high-profile, senior member of the corporate leadership team. It could come in the form of a phishing/smishing/vishing effort, or a business email compromise (BEC) attempt. The main differentiator from a typical spearphishing or BEC attack is the target.

Why are “whales” attractive targets? After all, there are fewer of them to victimize than regular employees. Three key attributes stand out. Senior executives (including the C-suite) are typically:

  • Short on time, meaning they may click through on a phishing email, open a malicious attachment or approve a fraudulent transfer request without looking at it properly. They may also switch off or bypass security controls like multifactor authentication (MFA) to save time
  • Highly visible online. This enables threat actors to harvest information with which to craft convincing social engineering attacks, such as emails spoofed to come from a subordinate or PA
  • Empowered to access highly sensitive and lucrative corporate information (e.g., IP and financial data), and to approve or request big-money transfers

What does a typical attack look like?

Just like a regular spearphishing or BEC attack, whaling requires a certain amount of groundwork to stand a good chance of success. This means threat actors are likely to perform detailed reconnaissance on their target. There should be no shortage of publicly available information to help them, including the target's social media accounts, company website, media interviews and keynote videos.

Aside from the basics, they’ll want to know information on key subordinates and colleagues, or corporate information that could be used as a pretext for social engineering, such as M&A activity or company events. It may also help the threat actor to understand their personal interests, and even communication style if the end goal is to impersonate the “whale.”

Once they have this information, the adversary will usually craft a spearphishing or BEC email. It will most likely be spoofed to appear as if sent from a trusted source. And it will use the classic social engineering tactic of creating urgency, so that the recipient is more likely to rush their decision-making.
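Spoofing like this often leaves traces in the message headers: the address shown to the recipient doesn't match the envelope sender, or SPF/DKIM checks fail. A minimal sketch of such a header check, using Python's standard email library (all addresses and header values here are hypothetical):

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw message; in practice this comes from the mail server.
raw = """\
From: "CEO Jane Doe" <jane.doe@example.com>
Return-Path: <bounce@attacker-mail.test>
Authentication-Results: mx.example.com; spf=fail; dkim=none
Subject: URGENT wire transfer needed today
To: cfo@example.com

Please process the attached invoice immediately.
"""

msg = message_from_string(raw)

# Domain shown to the user vs. domain the mail actually came from.
_, from_addr = parseaddr(msg["From"])
_, return_addr = parseaddr(msg["Return-Path"])
from_domain = from_addr.rsplit("@", 1)[-1]
return_domain = return_addr.rsplit("@", 1)[-1]

auth = (msg.get("Authentication-Results") or "").lower()

suspicious = (
    from_domain != return_domain   # display/envelope mismatch
    or "spf=fail" in auth          # failed sender authentication
    or "dkim=none" in auth
)
print("suspicious" if suspicious else "ok")  # prints "suspicious"
```

Real secure email gateways do far more than this, but the mismatch between the display address and the authenticated sending domain is one of the classic giveaways they look for.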

The end goal is sometimes to trick the victim into divulging their logins, or into unwittingly installing infostealing malware or spyware. These credentials could be used to access monetizable corporate secrets, or to hijack the victim's email account in order to launch BEC attacks at subordinates, impersonating the whale to get a smaller fish to make a big-money transfer. Alternatively, the fraudster may pose as the “whale's” boss, in order to trick them into green-lighting a fund transfer.

AI changes the rules

Unfortunately, AI is making these tasks even easier for the bad guys. Using jailbroken LLMs or open source models, they can harvest large quantities of data on targets, accelerating victim reconnaissance. They can then use generative AI (GenAI) to craft convincing emails or texts in flawless natural language. These tools can even add useful context and/or mimic the writing style of the person being spoofed.

GenAI also supercharges deepfake tech, enabling highly convincing vishing calls and even videos impersonating high-level executives, all designed to convince the target to make a money transfer. With AI, whaling attacks grow in both scale and effectiveness, as sophisticated capabilities are democratized to a wider pool of threat actors.

The big payoff

What’s at stake here should go without saying. A major BEC attack could result in losses running into millions of dollars. And a breach of sensitive corporate data may lead to regulatory fines, class action lawsuits and operational disruption.

The reputational damage can be even worse, as Levitas Capital found out. The hedge fund was, in the end, able to block most of the approved transactions. But that wasn’t enough to stop one of its biggest clients from walking, bringing down the $75 million fund in the process. On a more personal level, duped executives are often scapegoated by their superiors following incidents like these.

Taking out the whalers

There are several ways security teams can help to mitigate the risks of spearphishing and BEC attacks. But these aren’t always successful when faced with a senior executive who might think the rules don’t apply to them. This is why executive-specific training exercises involving simulations are so important. They should be highly personalized and kept to short, manageable lessons incorporating the latest threat actor TTPs, including deepfake video/audio.

These should be backed by improved security controls and processes. This could include a strict approvals process for big-money fund transfers, potentially requiring sign-off by two individuals and/or verification through an alternative, known-good channel.
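The dual-control idea boils down to a simple rule: above a certain amount, no single person can authorize a transfer. A minimal sketch of that rule (the threshold and approver names are hypothetical):

```python
THRESHOLD = 10_000  # hypothetical limit above which dual sign-off is required

def transfer_allowed(amount: float, approvers: set[str]) -> bool:
    """Allow small transfers with one approver; large ones need two distinct people."""
    required = 2 if amount >= THRESHOLD else 1
    return len(approvers) >= required

# A forged "CEO" request for a large sum fails with a single approver...
assert not transfer_allowed(250_000, {"ceo"})
# ...but succeeds once a second, independent person verifies it out-of-band.
assert transfer_allowed(250_000, {"ceo", "cfo"})
```

The point of using a set of distinct approvers is that a compromised executive account on its own can no longer push a large payment through; the attacker would also have to fool the second verifier, ideally over a separate channel such as a phone call to a known number.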

AI tools can also help network defenders. Consider AI-based email security designed to spot suspicious patterns of communication, senders, and content. And deepfake detection software to flag potentially malicious calls in real time. A Zero Trust approach may also provide some useful resilience. By enforcing least privilege and just-in-time access it will minimize what executives can access, and ensure their logins are never trusted by default.   
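Commercial email security products use trained models, but the underlying idea — score each message on sender reputation and content signals, then flag high scorers for review — can be illustrated with a toy heuristic (the allow-list, keywords and weights below are all hypothetical):

```python
# Hypothetical allow-list of previously seen, trusted senders.
KNOWN_SENDERS = {"jane.doe@example.com", "it-help@example.com"}
URGENCY_WORDS = {"urgent", "immediately", "today", "confidential"}

def risk_score(sender: str, subject: str, body: str) -> int:
    """Toy risk score: an unknown sender and urgency cues each add points."""
    score = 0
    if sender.lower() not in KNOWN_SENDERS:
        score += 2  # sender never seen before (or a look-alike domain)
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    return score

score = risk_score("ceo@examp1e.com",  # note the look-alike "examp1e" domain
                   "Urgent: wire transfer",
                   "Please action immediately, keep this confidential.")
print(score)  # 2 (unknown sender) + 3 (urgency words) = 5
```

A real system would learn these signals from historical traffic rather than hard-code them, but the principle is the same: messages that deviate from established communication patterns get escalated instead of silently delivered.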

More generally, your organization may want to start limiting the kind of corporate information it shares publicly. In a world where AI is everywhere, the means to find and weaponize such information is now in the hands of the many, not the few.