My first job out of college was as a decision support application analyst, and it allowed me to work with systems using early versions of artificial intelligence. The idea was compelling to me then, and it remains compelling to me today.
The systems learned as they processed information. The objective was to predict the future -- what we now call predictive analytics -- and that remains the objective in larger enterprises today.
The problem with these systems in the past was that to really analyze all the relevant data, they needed a huge amount of processing power and storage. Thus, businesses seeking to use the so-called learning systems for tasks like predictive analytics had to shell out major bucks for hardware and software -- or do without.
The trend today is machine learning, a form of artificial intelligence that uses algorithms to learn from data. These systems build models from incoming transactional data, then find patterns in that data to make predictions. These predictions can be as simple as providing a recommendation to a shopper on an e-commerce website or as complex as determining whether a brand of automobile should be retired.
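As a minimal sketch of that pattern (the product names and purchase history here are hypothetical, and real systems use far more sophisticated models), a recommender can "learn" from past transactions by counting which items co-occur in the same basket, then suggest the most frequent companion of whatever a shopper is viewing:

```python
from collections import Counter

def build_model(transactions):
    """Build a simple co-occurrence model from transactional data:
    for each item, count how often every other item appears in the
    same basket."""
    co_counts = {}
    for basket in transactions:
        for item in basket:
            counter = co_counts.setdefault(item, Counter())
            for other in basket:
                if other != item:
                    counter[other] += 1
    return co_counts

def recommend(model, item, n=1):
    """Recommend the n items most often bought alongside `item`."""
    if item not in model:
        return []
    return [other for other, _ in model[item].most_common(n)]

# Hypothetical purchase history: each inner list is one shopper's basket.
transactions = [
    ["laptop", "mouse", "usb_hub"],
    ["laptop", "mouse"],
    ["laptop", "keyboard"],
    ["mouse", "mousepad"],
]

model = build_model(transactions)
print(recommend(model, "laptop"))  # the most frequent companion of "laptop"
```

The point of the sketch is the shape of the workflow -- model built from transactions, predictions served from the model -- which is the same shape whether the model is a co-occurrence table or a deep neural network running in the cloud.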
As with their learning-system forebears, the overhead of machine-learning systems is typically huge. But today we have the option to place these systems in the cloud. Amazon Web Services, for example, supports machine learning using AWS's algorithms to read data in native AWS stores such as RDS, Redshift, and S3. Google has supported predictive analytics for some time with its Google Prediction API, and Microsoft provides an Azure machine-learning service.
The ability to predict the future for both tactical and strategic purposes has long eluded us because of prohibitive resource requirements. But today, with machine learning available as a cloud service, you can apply this technology far and wide to all the data enterprises have been collecting.