Smart sensors and actuators everywhere to control building access, fuel valves, turbines, heating and ventilation, traffic lights, parts pickers, and much of our infrastructure’s operations—that’s the dream of the industrial internet of things. But there’s also a nightmare: All those devices operate with little oversight, so maybe you don’t have the data you need, maybe a valve won’t turn off when it should, or maybe a terrorist has taken them over to sabotage an oil rig or dam.
The truth is that the industrial internet of things is largely in place already, through thousands of proprietary deployments of sensors and actuators. Industry has been automating for years, and IoT is simply the new form of that automation, adding more connectivity and logic to the mix. Critical devices are already monitored, managed, and secured, but often inefficiently or sporadically, which creates a different kind of risk, as well as high cost, as more IoT devices get deployed.
To make IoT monitoring, management, and security both efficient and consistent, the management industry is proposing an IoT take on the mobile management tools pioneered by Research in Motion's BlackBerry.
There are indeed sensible notions in mobile management that could be applied to IoT management. But there are also key differences that IT organizations have to understand before embarking on an IoT management path.
The key strengths of mobile management
One key strength of the mobile management approach is its reliance on APIs and policies to define what rules should be imposed—encryption, password complexity, use of VPNs, connections to only specific access points, bans on some applications, and so on—and on periodic, local validation of compliance for those policies.
IT doesn’t need to handle every device by hand; it can simply require connected devices to adhere to its policies to gain access, and once devices are connected it can monitor status, enforce changes to compliance rules, change configurations remotely, and disconnect suspect devices.
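The policy-and-compliance loop described above can be sketched in a few lines. Everything here is illustrative: the policy fields, device-state keys, and function names are hypothetical stand-ins, not any vendor's actual schema.

```python
# Hypothetical sketch of API- and policy-driven management: devices
# report their state, the console checks it against policy, and only
# compliant devices are admitted. Policy names are illustrative.

POLICY = {
    "encryption_enabled": True,
    "min_password_length": 8,
    "vpn_required": True,
    "banned_apps": {"filesharer"},
}

def compliance_violations(device: dict) -> list:
    """Return a list of policy violations for a reported device state."""
    violations = []
    if POLICY["encryption_enabled"] and not device.get("encryption_enabled"):
        violations.append("encryption disabled")
    if device.get("password_length", 0) < POLICY["min_password_length"]:
        violations.append("password too short")
    if POLICY["vpn_required"] and not device.get("vpn_active"):
        violations.append("VPN not in use")
    banned = POLICY["banned_apps"] & set(device.get("installed_apps", []))
    if banned:
        violations.append("banned apps installed: %s" % sorted(banned))
    return violations

def admit(device: dict) -> bool:
    """Admit a device to the network only if it passes every policy check."""
    return not compliance_violations(device)
```

The same check runs periodically after admission, which is what lets IT enforce rule changes and disconnect devices that drift out of compliance.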
The other key strength of the mobile management approach is standardizing those APIs and policies where possible, at least at the administration console level to ensure consistent deployment across a range of devices.
Flash back to 2010: BlackBerry’s policies worked only with BlackBerrys, Windows Mobile’s only with certain Microsoft devices, Apple’s only with Apple devices, Google’s only with Android devices, and so on. But mobile management tools could provide a fairly unified front end to most or all of these, and today the APIs and thus policies from Apple, Google, and Microsoft have become more similar, so the single-pane-of-glass management approach is easier and more effective.
Differences among devices, and thus in their management and security capabilities, continue to exist, but the common aspects can now be handled just once for all devices, not in a separate console for each. Those device differences are also more visible to IT when shown in the same console than if each type of device had its own, so the combined front end makes management easier and more effective even where devices differ.
The more IoT management can consolidate APIs and policies under a single pane of glass, the better those IoT devices can be monitored, managed, and secured.
Of course, both IT and mobile management vendors face fewer than a dozen types of devices and related APIs to manage. Yes, for a brief time, they might have encountered any or all of the following: BlackBerry 5, BlackBerry 10, iOS, Android, Windows Mobile, Windows Phone, Windows PC, Macs, Symbian, WebOS, and PalmOS. From 2009 to 2012, IT managers complained about all those platforms, but now we're basically down to four: iOS/MacOS, Android, Windows 10, and legacy Windows. Three of those have significant overlap: iOS/MacOS, Android, and Windows 10. Today it's a manageable level of complexity.
IoT has hundreds, indeed thousands, of platforms. There are no common protocols for transport, management, security, access, or pretty much anything. Each set of devices is highly specific, designed as a proprietary product and deployed as such. Very few were designed to work with anything else, much less with devices from another vendor.
Many existing IoT devices don’t even have radio interfaces; people must go to them in person to read their status or adjust their settings—if they’re lucky, perhaps using some special cable connected to a PC running a configuration app rather than a bunch of DIP switches or arcane physical controls.
Over time, the expectation is that these existing devices will get replaced with smarter ones, just as consumer IoT vendors want us to replace our analog thermostats with digital, connected Nests and Ecobees, and our analog sprinkler controllers with internet-connected RainMachines. Modern IoT devices would be more likely to support the kinds of common, or at least similar, APIs and policies that mobile devices do.
Where IoT needs to find its own way to management
But even if existing sensors and actuators get replaced with modern IoT devices (which will take decades, for the same reason Cobol and Windows NT are still in use in critical systems), the complexity of all those devices' different purposes and protocols will be staggering. How can you have a single-pane-of-glass console that manages building access card readers, heating and ventilation, lighting, elevators, security cameras, natural gas lines, pneumatic distributors, conveyor belts, robot assemblers, part-picking machines, and all the other IoT devices on, say, a factory floor?
One notion, proposed by mobile management vendor MobileIron, is to rethink where the integration happens: closer to the devices than to the datacenter. Remember: Mobile management centralizes the console at the datacenter (or at a cloud service—same diff) for any or all mobile devices and increasingly for computers too. Windows PCs are likewise managed by a centralized console.
MobileIron's other notion is that the proprietary nature of existing IoT devices is a reality, but one that should not be assumed to be permanent. (I should be clear that I'm describing my own take on MobileIron's proposed approach, with a view to offering a more general model for the industry as a whole; you can see MobileIron's own view at its website.)
Both notions are ones I haven't seen in other proposed IoT management platforms, which tend to take the same approach as traditional ERP: a highly proprietary management system designed for a specific set of devices and applications. Although a generic implementation is possible, the norm is a highly custom deployment that's hugely expensive and difficult to maintain. That pretty much describes the current state of sensor and actuator management, so most "IoT management platforms" are more IoT-washing than anything new.
In MobileIron’s proposed architecture, you’d have IoT gateways for sets of related devices. In my factory floor example, there might be an IoT gateway for building access card readers, heating and ventilation, lighting, elevators, and security cameras because they tend to be used by the same people (facilities management) and are likely to have some connectivity among them. A different IoT gateway might handle the IoT devices used on the manufacturing assembly line, for the same reason. If you have multiple factories, you’d have the various IoT gateways at each.
An IoT gateway would use whatever connections are needed for the devices it supervises; devices with no connectivity can’t be connected and will still need to be managed manually. It would have to support whatever protocols, including hard-wired ones, the various devices use to report status, provide and ingest data, manage access and configuration, and validate system integrity.
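One way such a gateway could absorb all those per-device protocols is an adapter layer: one adapter per protocol family, behind a single management interface. This is a minimal sketch under that assumption; the class and method names are hypothetical illustrations, not MobileIron's API.

```python
# Sketch of the adapter pattern an IoT gateway could use to speak each
# device family's own protocol behind one management interface.
# All names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """One adapter per protocol family the gateway supervises."""

    @abstractmethod
    def read_status(self, device_id: str) -> dict: ...

    @abstractmethod
    def apply_config(self, device_id: str, settings: dict) -> None: ...

class ModbusAdapter(DeviceAdapter):
    """Stand-in for a hard-wired industrial protocol family."""
    def __init__(self):
        self._configs = {}

    def read_status(self, device_id):
        return {"id": device_id, "protocol": "modbus",
                "config": self._configs.get(device_id, {})}

    def apply_config(self, device_id, settings):
        self._configs.setdefault(device_id, {}).update(settings)

class Gateway:
    """Routes management calls to whichever adapter owns a device."""
    def __init__(self):
        self._routes = {}  # device_id -> adapter

    def register(self, device_id, adapter: DeviceAdapter):
        self._routes[device_id] = adapter

    def status(self, device_id):
        return self._routes[device_id].read_status(device_id)

    def configure(self, device_id, settings):
        self._routes[device_id].apply_config(device_id, settings)
```

The central console then talks only to gateways in one common shape, while each gateway hides the protocol sprawl behind its adapters.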
In my conversations with MobileIron, it's clear the company would tackle each IoT gateway based on the customers it gets, seeking to apply what it learns from one customer to deployments proposed to other customers in similar industries. That's sensible, but it shows how stepwise IoT management's evolution will need to be. It's also why IoT management will necessarily be less generic than mobile management.
In MobileIron’s model, a central console at the datacenter (on-prem or cloud) would manage those IoT gateways, essentially distributing the management across the ecosystem.
That distributed notion is gaining traction across the IoT industry, with labels such as “edge computing,” Cisco’s “fog computing,” and “mesh computing” bandied about to reflect that control points can’t exist only at the client or only at the center. There’ll be a mix, and that mix will differ based on the actual systems, the networks they use, the degree to which latency matters, the degree to which self-correction is possible, and the risk of overcentralization (such as for a terrorist takeover).
For companies with multiple kinds of industrial IoT uses, the point where the IoT gateways come together is also the opportunity to identify, and perhaps promulgate, possible standards across a broad swath of IoT. As we saw in mobile management, it's reasonable to expect standards on communications protocols, self-identification, system state (including last update, last access, and last self-test), access policies, and data security to emerge at the point where the different systems actually come together.
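As a sketch of what a converged system-state standard might look like, each gateway could translate its vendors' proprietary status formats into one normalized record. The field names and the vendor format below are illustrative assumptions, not an existing standard.

```python
# Sketch of a normalized system-state record at the point where
# gateways converge. Field names and the sample vendor format are
# hypothetical, for illustration only.
from dataclasses import dataclass, asdict

@dataclass
class SystemState:
    device_id: str
    last_update: str       # ISO 8601 timestamp of last firmware/config update
    last_access: str       # ISO 8601 timestamp of last management access
    last_self_test: str    # ISO 8601 timestamp of last self-test
    self_test_passed: bool

def normalize_vendor_a(raw: dict) -> SystemState:
    """Map one hypothetical vendor's status dump onto the common record."""
    return SystemState(
        device_id=raw["dev"],
        last_update=raw["fw_updated"],
        last_access=raw["mgmt_seen"],
        last_self_test=raw["selftest"]["at"],
        self_test_passed=raw["selftest"]["ok"],
    )
```

Each additional vendor format would get its own `normalize_*` mapping, so everything above the gateway sees the same record shape.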
MobileIron's model assumes an automated network of sensors and actuators, with no direct management or configuration by people outside the central management console, and even then only as a last resort for an otherwise algorithmically controlled system. That's not the only IoT model, though. A common alternative is a human-managed IoT system.
An example is Bluetooth beacons, which don’t actually connect to the internet but instead react to Bluetooth devices passing by; that device connects to the internet to look up that beacon’s ID and thus stated location for use in whatever relevant app the device is running. To manage them, a person has to go near each one and establish a Bluetooth connection to make whatever changes are desired.
Another example is all those smart meters the utilities have been deploying at homes, so meter readers can drive by to scan your utility usage rather than tramp up to each and every home and write down the details. To manage those devices, a utility still needs to send someone to the area to connect via a radio or physical interface.
Yet another example is an IT tech managing access points while checking signal strength as she moves through the physical space.
What these have in common is that local conditions, most likely the lack of dedicated power or of internet connectivity, require a person to be on site to interact with the device. That is, they are mediated devices. Access control becomes a bigger deal in that case, because each device must expose a public "door" for that local interaction. You also need a way to record and reconcile changes made locally, so any back-end analytics and transaction processing can account for their effects.
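Recording and reconciling those local changes can be as simple as a timestamped change log per technician or device that the back end merges in time order. This is a sketch only; the last-write-wins merge rule is an illustrative choice, not a standard, and the data shapes are hypothetical.

```python
# Sketch of reconciling locally made changes with the back end: each
# local change is logged as (timestamp, key, value), and the back end
# replays all logs in timestamp order so later changes win.
# Illustrative only; real systems may need conflict resolution policies.

def reconcile(backend_state: dict, change_logs: list) -> dict:
    """Merge per-device change logs of (timestamp, key, value) tuples
    into the back end's last-known state, in timestamp order."""
    merged = dict(backend_state)
    all_changes = [change for log in change_logs for change in log]
    # ISO 8601 timestamps sort correctly as plain strings
    for _, key, value in sorted(all_changes, key=lambda c: c[0]):
        merged[key] = value  # later timestamps win
    return merged
```

With a log like this, analytics and transaction processing can replay exactly what happened at the device even though the changes were made off line.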
IoT management will never be as easy as mobile management. But it could be easier. Adopting some of the key elements of mobile management, such as an API- and policy-driven approach, is a key first step. So is recognizing that IoT is a diverse, often independent collection of environments that should not be tackled as if it were one thing (you’ll never boil that ocean!) but as a series of separate items for which common patterns may emerge.
The IoT label is unfortunate because it suggests there is a single underlying network and set of protocols for various objects to all use. There’s not. But that doesn’t mean we can’t evolve toward clusters—networks of things, not an internet of things.