Would you fall for a faked call from your CEO asking you to wire money? As our colleague Jake Moore found out, you might. As computers with spare compute cycles get fed more and more training data, deepfakes get better and better. Feed them a long CEO speech and they pick up inflection, tone, and other nuanced speech patterns that can eventually make them quite convincing.

Now, in the US at least, there’s a contest to break the fake and, hopefully, to find ways of arming defensive systems against such attacks. The challenge, spun up by the Federal Trade Commission (FTC), has competitors hacking together whatever tech they can find to thwart voice fakes, with a $25,000 prize on the line.

Contests have been used to spur everything from space suit designs to self-driving vehicles to solar technologies, and this one aims to draw interest and entrants into the deepfake defense arena, aspiring to kick off a race to protect consumers against the harms of AI-enabled voice cloning. Once you sign up through the portal, your ideas compete with other approaches to hopefully bring home the gold.
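The FTC doesn’t prescribe any particular technique, so purely as an illustration of the kind of baseline a contest entrant might start from, here’s a minimal Python sketch: it time-averages each clip’s MFCC spectral features and fits a simple classifier to separate genuine recordings from cloned ones. The real/ and cloned/ directories are hypothetical stand-ins for a labeled dataset; this is a toy, not anyone’s actual entry.

```python
# Illustrative only: a toy real-vs-cloned voice classifier.
# Assumes hypothetical directories of labeled WAV clips: real/ and cloned/.
import glob
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as its time-averaged MFCCs (a crude spectral fingerprint)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape: (20, frames)
    return mfcc.mean(axis=1)                            # shape: (20,)

real_paths = glob.glob("real/*.wav")     # genuine recordings (label 0)
fake_paths = glob.glob("cloned/*.wav")   # cloned/synthetic clips (label 1)

X = np.stack([clip_features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Needless to say, modern voice clones would likely sail past a crude spectral baseline like this; it only sketches the shape of the problem contestants are tackling.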

I still wonder whether that can stop someone like Jake, so I asked him.


Q: Will businesses be able to defend themselves against this threat using current, widely available technologies (and if so, which ones)?

Jake: Sadly, I don’t think the answer yet lies with countermeasure technologies to combat this evolving problem. We are still very much in the infancy of this new AI era, and many businesses are only just getting to grips with its good, bad, and ugly uses and capabilities. Technology is continually being developed to help those in need but, as in the infancy of any new technology, the rapid pace of its development cannot keep up with the shifting demands of criminals using it to scam people and businesses.

Q: Same question for individuals – think retiree scams, romance scams, and the like?

Jake: AI has enabled scammers to carry out their actions at much larger scales than before, meaning the numbers game just got bigger. When more people are targeted, the rewards reaped are far greater, and at no real extra effort for the fraudsters.

However, individuals need to become ever more savvy about all scams – from classic frauds to current trends. In particular, with the birth of voice cloning and deepfaked likenesses of striking accuracy, people need to stop assuming that seeing (and hearing) is believing. Remaining aware that such scams are possible gives those targeted the confidence to scrutinize such communications more closely and not be afraid to question what they are being asked to do.

There is a lot of guidance on the internet offering awareness advice on such scams, and people need to continually refresh their knowledge with the latest information to get – and stay – protected.

Q: What can institutional anti-fraud teams do to blunt attacks on their users/customers, and who should pay if people get scammed?

Jake: By default, anti-fraud teams must offer training at any cost – both annual and ad hoc – for all members of an organization. Simulation attacks and war-gaming also help drive the message home, turning a serious issue into a more engaging dynamic and allowing for failure in a safe environment.

Once people have experienced a simulated modern-day attack and witnessed firsthand the scale of this new technology, they will be more likely to remember the advice when a real attack occurs. AI is still hugely impressive, so seeing deepfakes in action in a safe setting is a chance to showcase the potentially dangerous outcomes they can produce without stoking too much of a fear factor.

Q: This is clearly a cat-and-mouse game, so who will win?

Jake: Since the beginning, there has been a chase between “cops and robbers” where the robbers are usually a couple of steps ahead. However, as technology improves, we must make sure we don’t get left behind and hand an even greater advantage over to the scammers.

It’s vital that people are equipped with the right tools and knowledge to best fight off these inevitable strikes, so that fraudsters don’t always win. By keeping up to date with the latest attack vectors and evolving scams, individuals and businesses stand the best chance of defending against this new wave of technological attacks.

One thing is sure: this is no longer a theoretical threat. Expect 2024 and beyond to be the time when scammers find new automated ways to launch voice-enabled attacks very rapidly, especially in response to cataclysmic events where some trusted official “asks” you to do something. It will all sound convincing, complete with an air of urgency and what seems like auditory multi-factor authentication – but you can still get scammed, even if you “personally heard from an official”.

Thank you for your time, Jake.