The Black Hat keynote trotted out a litany of security problems AI promises to fix, along with a dizzying array of new ones it might cause unwittingly. Or, really, it described a huge new attack surface created by the very thing that was supposed to “fix” security.

But if DARPA has its way, its AI Cyber Challenge (AIxCC) will fix that by pouring millions of dollars in prize money into solving AI security problems, with the competition rolling out over the coming years at DEF CON. That’s enough for some aspiring teams to spin up their own skunkworks of the willing and focus on the issues that DARPA, along with its industry collaborators, thinks are important.

The top five teams at next year’s DEF CON stand to haul in US$2 million each in the semifinal round – no small sum for budding hackers – with more than $8 million in total prize money up for grabs in the finals. That’s not chump change, even if you don’t live in your mom’s basement.

Issues with AI

One major issue with some current AI (like large language models) is that it’s public. By gorging itself on as much of the internet as it can slurp up, it tries to build an increasingly accurate zeitgeist of everything useful: the relationships between the questions we ask and the answers we expect, inferring context, making assumptions, and ultimately building a prediction model.
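
To make that a little more concrete, here is a minimal sketch of the core trick – next-token prediction – using the small, open GPT-2 model from Hugging Face’s transformers library purely as a stand-in. Production LLMs are vastly larger and more capable, but the underlying idea is the same.

```python
# Minimal sketch of next-token prediction, the heart of an LLM's "prediction model".
# GPT-2 is used here only as a small, open stand-in for far larger commercial models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # a score for every token in the vocabulary

# Turn the scores for the next position into probabilities and show the top guesses
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {p:.3f}")
```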

But few companies want to trust a public model that may scoop up their sensitive internal data to feed the beast and spill it back out into the open. There is no chain of trust governing what large language models puke into the public sphere. Is there reliable redaction of sensitive information, or a model that can attest to its own integrity and security? No.
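
If you do have to push text through a public model, one partial mitigation is to scrub obviously sensitive strings out of the prompt before it ever leaves your network. The sketch below is illustrative only: the patterns and the redact() helper are my own, and real data-loss-prevention tooling goes well beyond a handful of regexes.

```python
# Rough sketch: scrub obvious sensitive strings from a prompt before sending it
# to a public model. The patterns and redact() helper are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this ticket: jane.doe@example.com reports card 4111 1111 1111 1111 was declined."
print(redact(prompt))
# -> Summarize this ticket: [EMAIL REDACTED] reports card [CARD REDACTED] was declined.
```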

And what about protecting copyrighted works such as books, pictures, code, music, and the like from being pseudo-assimilated into the giant ball of goo used to train LLMs? One could argue the model makers aren’t using the works themselves improperly, but they certainly are using them to train products for commercial success in the marketplace. Is that proper? Legal wonks haven’t exactly figured that out.

ChatGPT – a sign of things to come?

I attended a session on ChatGPT phishing, which promises to become a newly supercharged menace: LLMs can assimilate photos, along with related conversations and other data, to synthesize the tone and nuance of an individual and then craft an email you’d be hard-pressed to detect as bogus. Which seems like bad news, really.

The good news, though, is that with multimodal LLM functionality coming out soon, you could send your bot to a Zoom meeting to take notes for you, determine intent from participants’ interactions, judge the mood, ingest the content of documents shown during screen-sharing, and then tell you what, if anything, you should respond to so you still seem like you were there. That might actually be a useful feature, if an incredibly tempting one.
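
For the benign, text-only slice of that idea – the note-taking, not the mood-reading – a back-of-the-envelope sketch might look like the one below. It assumes you already have a plain-text transcript of the meeting, the openai Python package (0.28-era API), and an API key exported as OPENAI_API_KEY; the model choice, prompt, and triage_meeting() helper are placeholders of my own, not a blueprint of any vendor’s feature.

```python
# Back-of-the-envelope sketch: feed a meeting transcript to a chat model and ask
# what actually needs a reply. Model name, prompt, and file path are placeholders.
import openai

def triage_meeting(transcript: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You summarize meeting transcripts and flag items that need a reply."},
            {"role": "user",
             "content": "Summarize this meeting and list anything I should respond to:\n\n" + transcript},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("meeting_transcript.txt") as f:
        print(triage_meeting(f.read()))
```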

But what will the end result of this AI LLM trend actually be? Will it be for the betterment of humanity, or will it burst like the crypto blockchain bubble did a while ago? And, if nothing else, are we prepared to face the real consequences, of which there can be many, head-on?

Related reading: Will ChatGPT start writing killer malware?