Is your ITOM project AI-friendly?

The ability to stream information from legacy sources and to set up a machine-learning-friendly value chain are key measures of today’s projects, ensuring they aren’t obsolete before the end of the year


Regardless of where you stand with your digital transformation project(s), are you prepared to explain how machine learning and artificial intelligence (AI) will transform your organization? For the vast majority of IT staffers, the answer will range from “kind of … maybe” to “that’s years away, so let’s talk later.” Unfortunately, the must-have projects being worked on today are likely creating crippling barriers for future operations leaders.

While loads of experts offer valuable insights for a successful machine learning experience, most of this advice makes a key assumption: that the AI adventure has already been planned or is well under way. Admittedly, a true AI system is likely years away, so we’ll keep it real by focusing on the near-term realities of machine learning. For most IT operations shops, the machine learning strategy will entail a huge catchup step (“iteration zero”) when considering how to manage the corporation’s critical applications as additional magnitudes of scale are introduced.

Where’s the data flowing, or is it flowing?

Many machine learning conversations start with ensuring quality data is being funneled into your intelligence design, which means dealing with redundant, invalid, noisy, and other types of unwanted data. While completely valid, this won’t be much of a factor if a mechanism to get your data into a streaming pipeline does not exist. Many enterprise IT operations management (ITOM) teams rely on a plethora of tools for managing the various virtual and physical parts of the datacenter. These tools, regardless of whether they were developed in-house or customized from an enterprise vendor, often share one common trait: they store their data in siloed databases.

This creates two big problems for the future. First, correlating this data across all these silos is difficult, and it’s something ITOM teams struggle with every day. Luckily, a machine learning engine will provide some benefit with these correlation challenges. The larger issue is streaming this valuable information onto a data bus where machine learning processing can occur. ETL’ing the data out won’t suffice in the real-time worlds of ephemeral containers and IoT; consequently, to future-proof a project today, it needs to include one or more of the following types of streaming mechanisms: gRPC, Kafka connectors, or plain old websocket push.
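As a sketch of what “getting siloed data onto a bus” might look like, here is a hypothetical publisher that normalizes records from different ITOM silos into uniform JSON events, ready to hand to a Kafka producer or a websocket push channel. The silo names, field names, and in-memory queue are illustrative assumptions, not any specific product’s API:

```python
import json
import time
from queue import Queue

def to_stream_event(source: str, record: dict) -> str:
    """Wrap a siloed monitoring record in a uniform JSON envelope."""
    event = {
        "source": source,                       # which silo produced the data
        "ts": record.get("ts", time.time()),    # event time, defaulting to now
        "payload": record,                      # original record, untouched
    }
    return json.dumps(event)

class StreamPublisher:
    """Stand-in for a real transport (Kafka producer, websocket push)."""
    def __init__(self):
        self.bus = Queue()  # in-memory bus; swap for the real data bus later

    def publish(self, source: str, record: dict) -> None:
        self.bus.put(to_stream_event(source, record))

# Two silos emitting about the same host, now on one stream in one shape:
publisher = StreamPublisher()
publisher.publish("cmdb", {"host": "web-01", "cpu": 0.72})
publisher.publish("netmon", {"host": "web-01", "latency_ms": 14})
```

Because every event carries a `source` and a common envelope, downstream correlation (the first problem above) becomes a join on the payload keys rather than a nightly ETL across databases.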

A tape measure with no numbers

While devops is no longer new for the most part, the vast majority of mature organizations are either just getting started with their rapid release trains or are trying to corral a few random upstarts within the company. Thanks to rapid success, these upstarts are frequently running without significant supervision or integration into the larger operations picture, and much to the chagrin of the established ITOM leadership, the insular nature of these devops activities creates hidden risk to other areas of the IT ecosystem.

Even though it’s possible to fix what you can’t measure (for a while), having applicable data points from all parts of the organization is required to inform the future models where machine learning will be used to see what mere mortals are unable to detect. To avoid this data deficit, processes must be established for ensuring devops teams are implementing key metrics into the code today, such that these internal insights can be obtained via the aforementioned data bus, even if that bus hasn’t yet arrived. This is a tough request, but imagine how hard it will be when the original developers are no longer in the picture.
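One low-friction way to get devops teams emitting those key metrics today is a thin instrumentation wrapper that records call counts and latency into a local buffer, which can be flushed to the data bus once it exists. This is a minimal sketch under that assumption, not any particular metrics library; the metric name and buffer are hypothetical:

```python
import time
from collections import defaultdict
from functools import wraps

# Local buffer standing in for the data bus that "hasn't yet arrived."
METRICS = defaultdict(list)

def instrument(name):
    """Decorator that records the latency of each call under `name`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS[name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@instrument("checkout.total")
def checkout(cart):
    """Example business function a devops team would instrument."""
    return sum(cart)

checkout([5, 10])
checkout([3])
```

Because the instrumentation lives in a decorator rather than in the business logic, the buffer can later be swapped for a real stream publisher without touching the functions being measured, long after the original developers have moved on.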

DIY, you say!

As Abraham Lincoln once said, “You cannot escape the responsibility of tomorrow by evading it today.” Unless you are installing a management tool inside a top-secret nuclear submarine, the reasons for adding/building another self-run, on-premises enterprise tool are suspect. (Yes, your industry has a set of strict, likely obsolete compliance guidelines, and, rest assured, your competitor is already working through these to stay ahead.)

In general, a tool running in a private cloud or, heaven forbid, on bare metal is not likely to withstand the data-hungry requirements of a machine learning ingest. In addition, in-house management of an on-prem machine learning system is like being handed the fob to an F1 racecar—just because you can get it started doesn’t mean you can drive it. Simply said, the current struggles of running management apps are trivial relative to the challenges of a machine learning engine, and the elastic nature of real-time streaming data makes it a perfect fit for the machine learning app creators to run in a cloud-native format.

While a large percentage of ITOM projects completed in 2018 will be based on projects started in 2017 or earlier, the competition is likely working on a digital transformation of their business while simultaneously rejiggering their operations to manage these dynamic business processes. The ability to stream information from legacy sources and to set up a machine-learning-friendly value chain are key measures of today’s projects, ensuring they aren’t obsolete before the end of the year.

Copyright © 2018 IDG Communications, Inc.
