Sunday, June 22, 2014

The Race to Artificial Intelligence

Here are some things I know:

  • In computer security, there are two groups: the good guys and the bad guys.
  • People are trying to develop artificial intelligence.
  • Machines at the intelligence singularity will be able to modify and adapt themselves.

It hit me, while thinking about these things, that the race to artificial intelligence matters for much more than which people or companies will get super rich. I believe that the first infosec "side" to acquire real, self-adapting AI will be able to claim total victory.

It's very well established by now that every piece of software contains bugs. In fact, entropy pretty much guarantees that something will go awry if you run a process long enough. And as we saw with Heartbleed, little issues can explode into big trouble when exploited. One vulnerability opens another, and pretty soon everything is compromised. (The Target credit card stealing malware got in through a noncritical system.)

Artificial intelligence is going to be very good at understanding binaries, even obfuscated ones, if it's smart enough. It's also going to be very good at crunching large amounts of information, correlating one dataset with another in nontrivial ways (understanding context), and checking every contingency.

One way issues are found (both bugs in a running process and errors in code) is by trying as many inputs and execution paths as possible and watching for anything the program shouldn't do; fuzzing is the brute-force version of this, and a crude sketch of it follows below. Eventually, we will have AI sufficiently advanced to analyze a piece (or a whole stack) of software and figure out all of its bugs, including blind spots in monitoring: it will work out exactly what lets a malicious program slip past a virus scanner. This is what the race is toward.
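Here's a minimal sketch of that brute-force idea in Python. The parse_record target, its length-byte bug, and the input sizes are all hypothetical; real fuzzers are far more sophisticated, but the loop is the essence:

    import random

    def parse_record(data: bytes) -> int:
        # Hypothetical buggy target: it trusts a length byte from the
        # input, Heartbleed-style, and reads past the end of the payload.
        if not data:
            return 0
        declared_len = data[0]
        payload = data[1:]
        if declared_len == 0:
            return 0
        return payload[declared_len - 1]  # IndexError if declared_len > len(payload)

    def fuzz(trials: int = 100_000) -> None:
        # Throw random inputs at the target and watch for misbehavior.
        for i in range(trials):
            data = bytes(random.randrange(256) for _ in range(random.randrange(16)))
            try:
                parse_record(data)
            except Exception as exc:
                print(f"trial {i}: bug found with input {data!r}: {exc}")
                return
        print(f"no bug found in {trials} trials")

    if __name__ == "__main__":
        fuzz()

An AI worthy of this post would replace the random loop with something that reasons about which inputs to try next, but the goal is the same: find the input that makes the program break its own rules.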

If the good guys get it first, they'll get to analyze every program for holes that would let an attacker in where it shouldn't be possible. Now, since bugs are always going to happen, things are still going to be tough for the heroes. The only way I see them winning is perfect integration between the user's definition of "working properly" and virus detection. This is a different application of AI: it needs to be able to tell when a change will produce a state the user does not want. The brute-force aspect of decision analysis comes in again: it has to figure out what a change in state will do to every aspect of the system. A toy version of that check is sketched below.
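As a toy illustration of what that integration might look like (all the state fields and policy rules here are invented): the user's definition of "working properly" becomes a set of predicates over system state, and any proposed change is simulated and vetted against them before it's allowed.

    from typing import Any, Callable, Dict, List

    State = Dict[str, Any]
    Policy = Callable[[State], bool]

    # The user's definition of "working properly", as predicates over
    # system state. Both rules are invented examples.
    USER_POLICY: List[Policy] = [
        lambda s: s["outbound_connections"] <= 10,          # not beaconing to a botnet
        lambda s: "/etc/shadow" not in s["files_written"],  # not touching credentials
    ]

    def desirable(state: State) -> bool:
        return all(rule(state) for rule in USER_POLICY)

    def vet_change(state: State, change: State) -> bool:
        # Simulate the proposed change on a copy of the state and only
        # approve it if the result is still a state the user wants.
        return desirable({**state, **change})

    # Example: a process suddenly wants 500 outbound connections.
    current = {"outbound_connections": 2, "files_written": set()}
    print(vet_change(current, {"outbound_connections": 500}))  # False

The hard part, of course, is exactly what this sketch waves away: an AI that can enumerate what a change will do to every aspect of the system before vetting it.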

There's also the in-between case of good guys who will be good by virtue of eliminating the bad guys. Basically, fighting fire with fire: use the AI to find security holes, exploit them to get into the bad guys' systems, and make it impossible for any computer to ever run another AI. It sounds totalitarian, but it might be the only way to win.

If the bad guys get it first, they'll be able to (again) analyze all the things, but they're not interested in fixing any of it. Instead, they'll set up vast networks of zombified computers and analyze every program ever written in order to break into everything. Once every vulnerability is cataloged, the AI can create programs to exploit each one. Mass information theft, large-scale denial of service, and general badness ensue. Since they got AI first, they can also analyze the flaws that will inevitably be in the good guys' AI, thereby staying a step ahead. (I'm assuming computational power is equal.) This is why time matters: AI needs time to adapt.

Alright, stop reading my blog and go develop AI for the good guys. Time is of the essence!
