Over the past several years, there was no such thing as a security vendor that didn’t have machine learning (ML), usually mischaracterized as artificial intelligence (AI) because bandwagons are so attractive, no matter how misleading. Vendors mostly claimed it was going to fix security for good. But it didn’t.

At Black Hat 2018, the aisles were bustling and activity kept ramping up, not subsiding. Last year there was no shortage of security breaches, and they seem to be continuing unabated, so what happened to the promise of AI?

Turns out security is hard, with or without machine learning, neural networks, deep learning or whatever the current fad or next tool may be. Yes, machine learning has brought new tools to bear on security problems, but it isn’t just “fire and forget”. It takes security experts with data science expertise to have a chance of making machine learning work in a security context.

The internet is filled with AI fail memes, where dogs were mistaken for fried chicken, muffins or mops (really). While machine automation is useful, it doesn’t understand security context. It doesn’t know, for example, what an IP address means, only that it’s made up of a few numeric values near each other. Without a real expert to correct and train it, it makes mistakes that can have real security implications.
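To make that concrete, here is a minimal, purely illustrative Python sketch of how a naive feature pipeline might “see” IP addresses; the encoding is invented for this example and not drawn from any real product:

```python
# Illustrative only: a naive pipeline reduces an IP address to four
# plain numbers, discarding everything a human analyst knows about it
# (public vs. private range, known-good service vs. unknown host).

def naive_ip_features(ip: str) -> list[int]:
    """Split an IPv4 address into its raw octet values."""
    return [int(octet) for octet in ip.split(".")]

# 8.8.8.8 is Google's public DNS resolver; 8.8.9.8 is just some host.
# Numerically the vectors are near-identical, so a context-free model
# treats them as almost the same thing. A human analyst would not.
print(naive_ip_features("8.8.8.8"))  # [8, 8, 8, 8]
print(naive_ip_features("8.8.9.8"))  # [8, 8, 9, 8]

# 10.0.0.5 (private, RFC 1918) and 109.0.0.5 (public internet) mean
# very different things, yet differ in only one raw value.
print(naive_ip_features("10.0.0.5"))   # [10, 0, 0, 5]
print(naive_ip_features("109.0.0.5"))  # [109, 0, 0, 5]
```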

Also, as with any tool or technology, machine learning can be used for good or for bad, depending on the intentions of the user. With standard AI libraries widely available, the heavy lifting of the daunting math is largely handled, and tools can be assembled simply by importing libraries and spooling up an attack.
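As a sketch of just how low that barrier has become, here is a toy text classifier built with scikit-learn, one such widely available library; the messages and labels below are invented for illustration. The library supplies the tokenization, feature weighting and optimization, so a working model takes about a dozen lines:

```python
# Illustrative only: the "daunting math" is handled by the library.
# Toy stand-in data; a real model would train on a large corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice is attached, please review",      # benign
    "URGENT: verify your account or lose access",   # suspicious
    "Meeting moved to 3pm, see updated agenda",     # benign
    "You have won a prize, click here to claim",    # suspicious
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = suspicious

# Tokenization, TF-IDF weighting and optimization all come for free.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["claim your prize now, click this link"]))  # likely [1]
```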

For example, scammers can now use the same widely available AI resources to churn out even more malware mutations, even faster, to try to slip past defenders’ detection.

AI just upped the game; it didn’t kill it!

It’s not just malware. Phishing attacks, too, are set to become much more convincing. If machine learning can train on millions of emails, it should be able to craft phishing emails that often look more convincing than the legitimate messages they imitate.

What about fake social media influence campaigns? Neural networks can help bad actors boost the perceived credibility of fake personas, which can then be used to scam users with far more plausible backstories, even users who do their due diligence and try to verify that requests come from “real people”. There are plenty of memes showing AI generating “virtual people” by modifying a training set of photos of real people. Attackers can do the same with personas, and the results will seem very real.

Turns out AI is just one part of the stack of defenses, not the whole stack as many marketing brochures have promised. Is it good? Yes. Is it important? Yes. But it is not the silver bullet of security.

And while your car, phone and maybe even your refrigerator have neural networks baked in, people who are determined to steal something of value will keep at it, now with new tools to add to their arsenal. But then, that’s been the case since the first bad guy picked up a tool and smacked someone on the head. No, it’s nothing new.

And no, this article wasn’t generated by an algorithm.