Data at the edge: the promise and challenge

Massive amounts of unstructured data created by devices will push data storage and processing power to the edge of the network, rather than keeping it in a central data warehouse or cloud

Mobile datacenters bring computing to the edge. (Photo: Stephen Lawson/IDG News Service)

What happens when cloud computing goes away? A bold—and perhaps surprising—question. But one that Peter Levine, general partner at Andreessen Horowitz, didn’t shy away from asking during a presentation at the VC firm’s a16z Summit in 2016.

Just as the distributed client-server model that took off in the 1980s replaced the centralized mainframes of the 1960s and 1970s, distributed edge intelligence will replace today’s centralized cloud computing, Levine predicts. And he believes this change is already underway but will really take off beginning in 2020.

“Everything that’s ever popular in technology always gets replaced by something else,” Levine said. “It always goes away. That’s either the opportunity or the beauty of the business.”

Why does edge computing make sense?

Edge computing means data processing happens at the edge of the network, close to where the data is generated, instead of in a cloud or a central data warehouse. It’s an intriguing prediction, with self-driving cars perhaps the most frequently cited example of why this technology shift is necessary.

The World Economic Forum’s article “The most revolutionary thing about self-driving cars isn’t what you think” notes that most of the hype has been about the novelty of the cars themselves, but the most exciting development is actually the digital technology that powers them.

Because of the massive amounts of unstructured data they collect, self-driving cars must essentially act as datacenters. Sending data to the cloud to be processed and then sent back could cost valuable seconds needed to avoid construction slowdowns and avert traffic accidents. To act quickly on the data, self-driving cars must be able to do so at the source of collection: on the road.
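How quickly do those seconds add up? Here is a rough back-of-envelope sketch in Python. The sensor output rate, uplink speed, and latency below are illustrative assumptions, not figures from the article, but the conclusion is robust to the exact numbers:

```python
# Back-of-envelope: shipping one second of sensor data to the cloud vs.
# processing it on board. All figures are illustrative assumptions.

SENSOR_RATE_GBPS = 1.0       # assumed raw sensor output: ~1 gigabit/second
UPLINK_MBPS = 50.0           # assumed cellular uplink: 50 megabits/second
ROUND_TRIP_S = 0.1           # assumed network round-trip latency: 100 ms
WINDOW_S = 1.0               # seconds of data behind one driving decision

data_bits = SENSOR_RATE_GBPS * 1e9 * WINDOW_S
upload_s = data_bits / (UPLINK_MBPS * 1e6)

print(f"Upload time for {WINDOW_S:.0f}s of data: {upload_s:.1f}s")
print(f"Cloud path, ignoring server time: {upload_s + ROUND_TRIP_S:.1f}s")
# Roughly 20 seconds to move one second of data: the car must decide on board.
```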

Falling sensor costs and advances in machine learning and the internet of things will produce ever more real-world data. How much data? 507.5 zettabytes per year by 2019, according to the “Cisco Global Cloud Index: Forecast and Methodology, 2015–2020” whitepaper.

Sensors will be everywhere, collecting data that must then be processed. The desire for agility means that data collection and processing will happen at the edge of the network instead of being sent to the cloud.

Networks simply aren’t fast enough to handle all that unstructured data. While compute power and storage densities have continued to increase, common network links have been limited in practice, largely to 10Gbps and more recently 25Gbps.

People used to talk about moving data faster via FedEx truck or sailboat than over a network link. That gap isn’t closing quickly: given Cisco’s estimate of internet traffic growing at about 29 percent a year, the network wouldn’t catch up with FedEx until 2040, according to the What If? blog, and even then only if transfer rates grow faster than storage densities.
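For a sense of why the truck wins, here is a rough sketch of that arithmetic, in the spirit of the What If? estimate. The drive count, drive capacity, and transit time are illustrative assumptions:

```python
# Sneakernet arithmetic in the spirit of xkcd's "FedEx Bandwidth" estimate.
# Drive count, capacity, and transit time are illustrative assumptions.

DRIVES = 10_000          # assumed drives loaded onto one truck
TB_PER_DRIVE = 10        # assumed capacity per drive, in terabytes
TRANSIT_HOURS = 24       # assumed coast-to-coast delivery time
LINK_GBPS = 25           # the 25Gbps link mentioned above

payload_bits = DRIVES * TB_PER_DRIVE * 8e12        # 1 TB = 8e12 bits
truck_gbps = payload_bits / (TRANSIT_HOURS * 3600) / 1e9

print(f"Truck throughput: {truck_gbps:,.0f} Gbps effective")
print(f"Network link:     {LINK_GBPS} Gbps")
print(f"The truck moves data ~{truck_gbps / LINK_GBPS:,.0f}x faster")
```

With these assumptions the truck delivers the equivalent of roughly 9,000Gbps, several hundred times the fastest common link, and its “bandwidth” grows with every jump in storage density.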

The promise of edge computing

Promising use cases for edge computing abound. Picture the oxygen sensors in a mine: informed by machine learning, they collect data that helps ensure the safety of the miners. Processing the data locally makes sense, and it could save lives.

On the factory floor, quality control systems collect data from the machinery. If a process is out of control, why wait for that data to be sent to the cloud, processed, and signaled back? In that time, you could have made a change that increases safety and profits, reduces delays, or catches a defective product.

In such instances, public cloud can still play an important role in aggregating data across multiple sites and improving organization-wide machine learning. But why not have local compute make the local decisions?
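As a concrete illustration of that split, here is a minimal sketch of edge-side quality control: a simple three-sigma control rule decides locally on each reading, and only compact summaries go upstream for organization-wide aggregation. The function names, thresholds, and simulated readings are all hypothetical, not any particular vendor’s API:

```python
import random
import statistics
from collections import deque

# Minimal sketch: decide locally on each sensor reading, ship only
# compact summaries to the cloud. Names, the 3-sigma rule, and the
# simulated readings are illustrative choices.

readings = deque(maxlen=200)     # recent readings kept for control limits

def check_reading(value: float) -> bool:
    """Return True if the process looks out of control (3-sigma rule)."""
    if len(readings) < 30:       # build a baseline before judging
        readings.append(value)
        return False
    mean = statistics.fmean(readings)
    sigma = statistics.stdev(readings)
    readings.append(value)
    return abs(value - mean) > 3 * sigma

def stop_machine() -> None:
    """Hypothetical local actuator hook; a real system would halt the line."""
    print("Out of control: halting machine locally, no cloud round trip")

def summarize() -> dict:
    """Compact summary to ship upstream for organization-wide learning."""
    return {"count": len(readings),
            "mean": statistics.fmean(readings),
            "stdev": statistics.stdev(readings)}

for _ in range(300):                      # simulated in-spec sensor stream
    if check_reading(random.gauss(50.0, 1.0)):
        stop_machine()

print("Summary for periodic cloud upload:", summarize())
```

The design point is that the critical path never leaves the device; the cloud only ever sees the aggregates it needs for cross-site learning.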

Edge computing is what’s required to make the most of machine learning and artificial intelligence.

The challenge of data at the edge

When computers are no longer “attached to humans,” as Andreessen Horowitz’s Levine phrases it, the world could be filled with trillions of data-producing devices. This will challenge everything, including networking, storage, compute, programming languages, and management.

Whether your “datacenter” is a factory, a fleet of automobiles, or a scientist’s lab, someone must make certain it is running smoothly, even if that datacenter amounts to a self-driving vehicle or a single machine sensor. Someone must also make certain all the data is being correctly collected and processed.

In this new world of sensors and devices at the edge, managing the infrastructure supporting massive amounts of unstructured and distributed data could overwhelm an enterprise. But enterprises should focus on managing their digital assets, not their infrastructure.

Copyright © 2017 IDG Communications, Inc.