Starting a discussion on machine learning

Machine learning has tremendous potential to solve complex human problems


As the term ML (machine learning) gains attention and popularity, it begins to lose its true definition and risks becoming poorly used marketing language. ML is much more than a buzzword; it holds promise and profound beauty for our civilization.

Existing computers are merely instruction-following devices: they take explicit instructions and inputs and process them quickly. Their methods are specific and structured, with narrowly limited expectations.

The ML process differs in that the machine is given example inputs along with desired outcomes and is then expected to handle new cases without further instruction. This paradigm assumes a machine can do more with less by applying past experience to different situations.
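To make the contrast concrete, here is a minimal sketch of learning from examples rather than from explicit rules. It assumes Python with scikit-learn, and the features, labels, and classifier choice are purely illustrative, not drawn from any particular system.

```python
# A minimal sketch of learning from examples rather than explicit rules.
# The features (weight in grams, height in centimeters) and the choice of
# scikit-learn's DecisionTreeClassifier are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Example inputs paired with desired outcomes: [weight_g, height_cm]
examples = [[15, 12], [14, 13], [450, 23], [500, 25]]
outcomes = ["can", "can", "bottle", "bottle"]

model = DecisionTreeClassifier()
model.fit(examples, outcomes)      # learn from the examples

# A new, unseen item is classified without any further instruction
print(model.predict([[480, 24]]))  # -> ['bottle']
```

The programmer never writes a rule for what makes a bottle a bottle; the machine derives one from the examples it has seen.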

Decisions based on new information

Imagine a robotic arm over a conveyor belt that sorts bottles and cans all day as they come down the line. The arm is programmed to tell the difference between the shape of a can and the shape of a bottle, placing each into its own bin at the end of the belt without a single mistake.

However, one day a box comes down the belt, and the arm cannot handle it. In fact, the arm will never know how to handle the box shape without reprogramming. A human, on the other hand, would adapt. So our concept is of an improved machine, one able to handle new information.

Humans are designed to take in new information and make judgments. Item sorting is not particularly complex, so let's elevate the concept. Imagine a robot surgeon programmed with detailed knowledge of human anatomy and surgical techniques. This robot is a marvel, but it would run into problems if presented with a different type of animal.

This is a lack of adaptivity. A human doctor might make some assumptions about the similarity and placement of the animal's organs and proceed with experience-based judgment. The ultimate limit of any machine is not its size, speed, or precision, but its ability to create new knowledge from past experience.

Reaching for understanding through ML

So there is a difference between computers that complete instructions and learning machines that generalize from experience. In developing this discipline, pattern matching, searching, goal reduction, rules-based processing, and other logical strategies have yielded incredible discoveries. However, ML's true potential is still unknown, because we have not yet crossed the threshold into machines that truly generalize.

ML is not AI (artificial intelligence), which is both a quest to make machines more useful and a quest to better understand our own intellect. ML is a process that helps us approach a new stage in computing: a machine that can understand by abstraction.

The practical need for ML is to address tasks humans simply cannot do, such as analyzing extremely large and complex data sets. Scientific discovery depends on data analysis, and there is more data available in the world than can be consumed and understood. The mass of data generated by our own networks is a good example. Properly understood, that information could tell us about particular problems within those networks. ML can help, just as long as we do not incorrectly associate certain events with risk.

Applying ML to security while avoiding false positives

Digital security is one particularly challenging problem to try to solve with ML. Security is a process that is supposed to let businesses operate as freely as possible while ensuring the confidentiality, integrity, and availability of data. Traditional approaches to security all work on a basic idea: We can clearly define what good behavior looks like versus malicious actions. Unfortunately, that approach has repeatedly been shown to be incorrect. The security industry has tried to correct course by turning to the statistical methods underlying machine learning.
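To see why the traditional idea falls short, here is a minimal sketch of signature-based detection; the signatures and the sample request are hypothetical, chosen only to illustrate the pattern.

```python
# A minimal sketch of the rule-based approach critiqued above:
# enumerate "known bad" patterns and flag anything that matches.
# The signatures and the sample request below are hypothetical.
KNOWN_BAD_SIGNATURES = ["mimikatz", "powershell -enc", "../../etc/passwd"]

def is_malicious(log_line: str) -> bool:
    """Flag a log line only if it matches a predefined signature."""
    line = log_line.lower()
    return any(sig in line for sig in KNOWN_BAD_SIGNATURES)

# A novel attack that matches no signature slips through unnoticed.
print(is_malicious("GET /login?user=admin' OR '1'='1"))  # False
```

Anything not on the list, including a genuinely novel attack, passes silently, which is the gap statistical methods are meant to close.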

One promising approach has been anomaly detection: teaching a computer to identify a baseline of activity and then flag deviations from it. Deviations can then be further analyzed by human experts to separate the wheat from the chaff. However, experience has shown that in sufficiently complex systems, anomalies are the norm. For example, a diligent employee who always gets to the office early will, from time to time, get sick and answer emails from home during the work day, creating a network anomaly. This is clearly not malicious, but it is very difficult to tell apart from someone using a stolen password to read that employee's email.
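Here is a minimal sketch of that baseline-and-deviation idea, assuming Python with scikit-learn's IsolationForest as a stand-in detector; the features (login hour, data transferred) and the synthetic baseline are illustrative assumptions, not a real network model.

```python
# A minimal sketch of anomaly detection as described above: learn a
# baseline of normal activity, then flag deviations for human review.
# The features and the use of scikit-learn's IsolationForest are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline: logins around 8-9 a.m. with modest transfer sizes
baseline = np.column_stack([
    rng.normal(8.5, 0.5, 500),    # login hour
    rng.normal(50, 10, 500),      # megabytes transferred
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# New events: one routine login and one late-night "sick day" login
new_events = [[8.4, 52], [22.0, 140]]
print(detector.predict(new_events))  # 1 = normal, -1 = flagged anomaly
```

The late-night login is flagged even though it may be entirely benign, which is exactly how harmless deviations end up in an analyst's queue.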

Reliance on anomalies therefore often leads to false positives, which consume valuable human time. The point of a machine is to reduce our workload. False positives are more than a problem; they are a plague, as I will explain in future discussions.
