How deep learning will transform automation

Managing computer systems, finding bugs, and plugging security holes are all problems that AI will solve


Are you getting tired of hearing about artificial intelligence? It seems we must be reaching peak hype cycle around AI when almost every article written about it rehashes the same tropes around self-driving cars, the latest game that has been mastered by computers, or the next house appliance to get speech recognition.

There is so much noise around AI that it’s hard to find the signal. Meanwhile, the real work of AI is happening behind the scenes of mainstream press coverage. After all, some of the places where machine learning can have the most impact, like automation, have barely been touched by AI.

But wait, I hear your moans already. Isn’t artificial intelligence almost the same as automation? No.

Computer automation is as old as the computer itself. The Turing machine is by definition a mathematical automaton, and a universal Turing machine can even simulate any other Turing machine, including itself. In other words, it is an algorithm that can run algorithms, itself included. But who supplies the program on the tape? We do.

Humans are behind the overwhelming majority of computer automation. Take, for example, the lowly, underappreciated cron job, arguably the bedrock of computer automation. You specify a time (every five minutes) and a command (empty the trash), and the job gets done. Still, the computer didn’t decide to empty the trash on its own. There was no computer sentience. You told it what to do.
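For the curious, a crontab entry for that example might look like the following. The trash path is illustrative; the real location varies by system:

```
# Every five minutes, empty the current user's trash directory.
# (illustrative path; adjust for your system)
*/5 * * * * rm -rf ~/.local/share/Trash/files/*
```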

But times are changing, and machine learning has opened opportunities for computers to automate and program themselves in more advanced ways than ever before.

Applying deep learning to automate IT operations

One of the biggest macro trends in IT operations today is the movement away from humans running and managing software systems to computers running and managing software systems.

Consider one of the most successful and popular web server technologies in history: Apache. In the 1990s, the running and management of Apache was almost purely done by humans. If a server crashed, a human IT operations pro would have to fix it.

The next step forward was monitoring systems like Nagios, which would watch Apache for crashes and restart it as necessary. But Nagios wasn’t smart enough to get under the hood and actually manage the configuration of the software it monitored.

Devops and PaaS represented the next leap, as configuration management software like Puppet and Chef could not only start and stop Apache but manage the settings as well.

Still, with devops and PaaS, the systems were being built piecemeal. If you needed to run a new application, you ad-hoc deployed it into the system. And what if the whole system crashed? Could you re-create the entire thing with all of the applications and dependencies intact? Not easily.

Right now, the state of the art for IT operations is declarative (as opposed to ad-hoc) systems. What do I mean by declarative? I mean creating an entire IT topology that is described in a single file or set of files. Then it’s the declarative system’s job to sync the real IT topology with the description specified in the files.

How is that different from devops or anything before? Because there is no ad-hoc adding to or removing from the system without modifying the declared system architecture. You don’t simply deploy a few more websites or applications into the system (from the bottom up). You declare them (top down) and let the system become consistent with your top-down declaration.
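The reconciliation idea at the heart of a declarative system can be sketched in a few lines. The state model here (app name mapped to replica count) and the action names are hypothetical simplifications, not any real system’s API:

```python
def reconcile(declared, observed):
    """Compute the actions needed to make the observed topology match
    the declared one. Both arguments map app name -> replica count."""
    actions = []
    for app, want in declared.items():
        have = observed.get(app, 0)
        if have < want:
            actions.append(("scale_up", app, want - have))
        elif have > want:
            actions.append(("scale_down", app, have - want))
    for app, have in observed.items():
        if app not in declared:  # running but not declared: remove it
            actions.append(("remove", app, have))
    return actions

declared = {"web": 3, "api": 2}
observed = {"web": 1, "legacy": 1}
print(reconcile(declared, observed))
# → [('scale_up', 'web', 2), ('scale_up', 'api', 2), ('remove', 'legacy', 1)]
```

A real declarative system runs a loop like this continuously, so any drift between declaration and reality is corrected automatically.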

Some leaders in declarative IT include the Kubernetes project from Google and the Docker container platform. These technologies represent a paradigm shift in the operation of complex systems, but fundamentally there is still a human creating the architecture itself.

If the system needs to scale up or down, the architecture file needs to be modified to reflect that (or a human needs to specify the scaling rules).

Still, the holistic, encapsulating nature of declarative IT systems creates a new opportunity. Because the total state of the complex system is defined in a single place, changes of state can be tracked over time, before and after every improvement. Every scale up and down is logged in a centralized way.

The next step is to train a neural network on the topology and its changes over time. A recurrent neural network would then be able to predict and react to the state of the IT system based on its own assessments.

At first, the output of the learning system might be used as mere recommendations (“it looks like your systems are about to be overloaded”), but eventually, as the recommendations become more reliable than even human intuition, the deep learning system might take control and put all of IT management on autopilot.

Applying deep learning to automate programming

We all have blind spots. Some things are so obvious to us that we can’t see them. It astounds me how many resources computer programmers have put into teaching computers to understand human languages when there is something so much more obvious: teaching computers to understand code itself.

If you’re a programmer, you’re probably rolling your eyes at this statement. After all, a compiler’s job is to understand and execute code. But simply following code instructions is not what I mean. I mean understanding the intentions behind code, understanding the difference between good code and bad code, intuitively being able to fix buggy code to do what it was supposed to.

Impossible? Maybe, maybe not. After all, GitHub has more than 20 million active repositories and more than 66 million pull requests. Every pull request (in theory) is a change of state from bad code to good code, from featureless code to featured code, from security-vulnerable code to patched code.

The structure and volume of GitHub’s data set are ideal for training a neural network. A recurrent neural network could serve as the ultimate code autocompletion tool. A Kohonen self-organizing map could automatically figure out the purpose of the program you are trying to code as you write it, the ultimate code pattern identifier.

“Oh, is that an abstract factory you are trying to build there? You really shouldn’t be doing it that way. I also noticed you are trying to build a web server. May I finish it off for you?”
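One hypothetical way to frame those pull requests as training data is as (before, after) pairs extracted from unified diffs. The sketch below is deliberately simplified: it handles only line additions, removals, and context, ignoring everything else in a real patch:

```python
def diff_to_pair(diff_lines):
    """Split a unified-diff hunk into (before, after) line sequences."""
    before, after = [], []
    for line in diff_lines:
        if line.startswith(("---", "+++", "@@")):
            continue  # skip file headers and hunk markers
        if line.startswith("-"):
            before.append(line[1:])
        elif line.startswith("+"):
            after.append(line[1:])
        elif line.startswith(" "):  # context lines appear on both sides
            before.append(line[1:])
            after.append(line[1:])
    return before, after

hunk = [
    "--- a/div.py",
    "+++ b/div.py",
    "@@ -1,2 +1,2 @@",
    " def divide(a, b):",
    "-    return a / b",
    "+    return a / b if b else 0",
]
before, after = diff_to_pair(hunk)
print(before)  # → ['def divide(a, b):', '    return a / b']
print(after)   # → ['def divide(a, b):', '    return a / b if b else 0']
```

Millions of such pairs, each one implicitly labeled "worse" and "better" by a human reviewer, are exactly the kind of supervision signal a learning system needs.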

Deep learning applied to computer programming itself is an almost completely unexplored area right now and perfectly ripe for discovery. One of the few examples of work in this direction is the bug predictor system built by Google engineers, which watches code repositories and their associated project management systems. The more frequently a piece of code shows up in a bug report, the higher the score it gets for being problematic. There is also an exponential time decay to make sure the most recent problems show up higher in the ranking. Although this isn’t quite applying deep learning principles to code, it is certainly a step in that direction.
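The scoring idea is simple enough to sketch. The logistic time decay below follows the open source “bugspots” reimplementation of Google’s approach; the timestamp normalization here is my own simplification:

```python
import math

def hotspot_score(fix_times, now, horizon):
    """Score a file by its bug-fix history. Each past fix contributes
    more the closer it is to `now`; t is normalized to [0, 1] so that
    1.0 means "just fixed" and 0.0 means "fixed at the horizon"."""
    score = 0.0
    for ts in fix_times:
        t = 1 - (now - ts) / horizon
        score += 1 / (1 + math.exp(-12 * t + 12))  # logistic decay
    return score

# A file with recent bug fixes outranks one with only old fixes,
# even with the same number of fixes.
recent = hotspot_score([90, 95, 99], now=100, horizon=100)
old = hotspot_score([5, 10, 15], now=100, horizon=100)
print(recent > old)  # → True
```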

Another example is Viv, built by the creators of Siri. A Wired article includes an infographic that shows how Viv builds and executes a computer program based on the speech it receives from the user. This is very different from the way existing natural language processors work. Right now, systems like Siri and Alexa work by matching the language patterns of a request to the available set of skills. Adding new skills means adding new patterns. Once a pattern is matched, a single script is run for that pattern.
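That pattern-matching approach can be caricatured in a few lines of Python. The skills and patterns below are toy examples, not how any real assistant is implemented:

```python
import re

# Each skill is one pattern mapped to one handler script.
SKILLS = [
    (re.compile(r"what time is it", re.I), lambda m: "checking the clock"),
    (re.compile(r"play (?P<song>.+)", re.I),
     lambda m: f"playing {m.group('song')}"),
]

def handle(utterance):
    for pattern, skill in SKILLS:
        m = pattern.search(utterance)
        if m:
            return skill(m)
    return "sorry, I don't know that one"

print(handle("Play Bohemian Rhapsody"))  # → playing Bohemian Rhapsody
print(handle("order a pizza"))           # → sorry, I don't know that one
```

The limitation is visible immediately: anything outside the pattern list falls through, no matter how reasonable the request.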

With Viv, the process resembles a sophisticated compiler more than a simple pattern-matching system. In essence, human language is compiled into a set of algorithms rather than reduced to simple patterns that can be matched to algorithms. The team at Viv seems to be creating a generic compiler from English to machine language.

Applying deep learning to automate network security and compliance

The quantity of information going through IP networks is staggering. In 2013, there were around 200TB of data transferred every single minute. Today that number is almost double and growing. Ninety percent of the world’s network data has been created in the last two years.

The only way to keep up with this kind of scale is through automation. As seen in the previous examples of IT operations and the building of software itself, artificial intelligence is about to play a much more prominent role in the way our networks are managed and run.

Every action you take on the internet leaves a network fingerprint. Those fingerprints create patterns. Even encrypted traffic leaves a trail of which IP addresses were accessed at what times and with what frequency. And wherever there are patterns, artificial intelligence will eventually be able to understand them far better than humans can.
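As a minimal sketch of what those fingerprints look like as data, consider summarizing flow records into per-source access counts. The flow tuples and the naive thresholding rule below are hypothetical stand-ins for what a learned model would do:

```python
from collections import Counter

def fingerprint(flows):
    """Summarize flow records (src_ip, dst_ip, timestamp) into
    per-source access counts: the raw material for pattern analysis."""
    return Counter(src for src, _dst, _ts in flows)

def flag_outliers(counts, threshold=100):
    """Naive rule standing in for a learned model: flag chatty sources."""
    return [ip for ip, n in counts.items() if n > threshold]

flows = [("10.0.0.5", "93.184.216.34", t) for t in range(150)]
flows += [("10.0.0.9", "93.184.216.34", t) for t in range(3)]
print(flag_outliers(fingerprint(flows)))  # → ['10.0.0.5']
```

Note that nothing here requires decrypting payloads; source, destination, and timing alone are enough to build a profile.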

A recent Wired article covered the Darpa Cyber Grand Challenge, where AI-based hackers autonomously attacked one another and defended themselves. As you might expect, AIs can run through a cookbook of penetration testing techniques faster than their human counterparts. The surprise was their success in finding the sorts of subtle and complex security holes that have always required human researchers to discover.

A group of network and security startups has been built from the ground up to address the need to automate the discovery and patching of vulnerabilities. Among them is StackPath, a startup run by the founder of SoftLayer. Traditional security software isn’t built for the age of IoT, where the number of internet-connected devices is exploding.

Site licenses on a per-device basis made sense when the majority of devices were desktop computers, but with the proliferation of smartphones, laptops, tablets, and Raspberry Pis, not to mention thermostats and major appliances, the attack surface for network security is getting out of hand.

After all, the security of your entire network is only as good as the security of its weakest link.

Many neurons make light work 

The applications of machine learning to computer automation itself might be the most valuable, underused, and underhyped application of artificial intelligence to date. Why? Because, more than any other use of AI, it is a force multiplier.

Using AI to assist in the running and managing of computers themselves offers improvements and opportunities to all other forms of computation, including the AI that runs our cars and powers our robo-investments. Improvements in computers playing chess arguably don’t make a direct difference to other applications of AI, but improvements to automation do.

That’s why the next steps in the artificial intelligence revolution will be in automation.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2017 IDG Communications, Inc.
