Be careful what you call ‘fog computing’

What to look for in a true fog computing architecture

Fog computing is picking up steam as a buzzword in the tech world, often used in comparison to cloud or confused with edge, both of which have geography built in: either the computer is at the edge, or the computer is in the cloud. The easiest way to understand what is unique about fog is that it is location agnostic. The computers in a fog infrastructure can be anywhere: from edge to cloud and anywhere in between.

In fog, you program against what a service does, not where it is. So the same service that is deployed to the cloud today can be deployed at the edge tomorrow. Think of it as a framework that supports a vast ecosystem of resources. It enables the flexible consumption of computing resources that span a continuum from on-premises, to nearby, to cloud, with each tier chosen for the benefits it provides, such as speed, availability, bandwidth, scalability, and cost.

Fog enables us to look differently at the spare computing power that surrounds us in our daily lives and opens up opportunities to put all computing power to use, regardless of location. As fog’s star continues to rise, people are using the term fog computing to market a variety of products, so if you are truly interested in the benefits it can provide, make sure it meets these two main criteria:

1. Provides a spectrum of computing power that spans a continuum from onsite to cloud

In current cloud-centric computing infrastructures, much of the processing power used is located in the far cloud. But with the number of connected devices skyrocketing and set to reach 20 billion in the next two years, the quantity of data traveling that distance is increasing dramatically.

As a result, there has been a surge in demand for processing power located closer to the devices that need it, achieved through edge computing. Edge typically involves installing servers, often called “edge nodes,” closer to the source of demand for processing power, providing important benefits like reduced latency and bandwidth strain.

Because fog computing can leverage compute everywhere, including on computers that are the most geographically appropriate, it can also provide low-latency compute, and is therefore often sought out for the same reasons as edge computing. As a result, the terms "edge" and "fog" are often used synonymously, despite edge computing being just one aspect of the more comprehensive fog computing infrastructure.

While edge computing is an effective way to reduce latency and bandwidth strain for high-traffic tech like IoT, the services running at any point in a given business or home have varying needs for performance, scalability, uptime, and cost that a single "edge node" cannot address.

An effective fog computing infrastructure should be geographically diverse enough to enable edge-appropriate computing to be done at the edge, cloud-appropriate computing to be done in the cloud, and ideally a spectrum of resources in between for flexibility and resiliency.

Unless the hardware involved is just one component of a much broader spectrum of resources, it is not fog.

2. Dynamically uses optimal computing resources on demand

Fog computing not only encompasses a greater geography than cloud or edge, but that geography can be dynamic. The computer processing data can be anywhere, and its location can change regularly (whether for scaling, optimizing location to better serve demand, or recovering from failures). This is achieved through the use of location-agnostic services.

For engineers deploying a software service, this means they specify what a service needs when deploying to fog architecture, instead of where it will run. If low latency is the requirement, for example, a service will automatically be deployed to the best available match, whether that’s a server in the same room, a regional datacenter, or, if nothing faster is available, perhaps a cloud datacenter.

The ability to broadly specify business requirements through fog computing has the potential to make life far simpler for engineers by relieving the burden of provisioning, scaling, and maintaining fixed computing resources. Through some fog computing platforms, engineers need simply to prioritize features like low latency, low cost, or green energy for each of their services, and the platform will automatically deploy services to the computers that best meet those criteria on demand.
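The placement idea described above can be sketched as a toy scheduler. Everything here is hypothetical and purely illustrative (the node names, the criteria, and the `place` function are not any real fog platform's API): a service declares its priorities in order of importance, and the scheduler ranks available nodes against them.

```python
# Illustrative sketch of requirement-based placement, not a real platform API.
# Three hypothetical nodes spanning the onsite-to-cloud continuum.
NODES = [
    {"name": "onsite-server", "latency_ms": 2,   "cost_per_hr": 0.40, "green": False},
    {"name": "regional-dc",   "latency_ms": 20,  "cost_per_hr": 0.15, "green": True},
    {"name": "cloud-dc",      "latency_ms": 120, "cost_per_hr": 0.05, "green": True},
]

# Each criterion maps a node to a score; lower is better.
CRITERIA = {
    "low_latency":  lambda n: n["latency_ms"],
    "low_cost":     lambda n: n["cost_per_hr"],
    "green_energy": lambda n: 0 if n["green"] else 1,
}

def place(priorities, nodes=NODES):
    """Pick the node that best satisfies the service's ordered priorities.

    Earlier entries in `priorities` matter more; later entries break ties.
    """
    return min(nodes, key=lambda n: tuple(CRITERIA[c](n) for c in priorities))

# A latency-sensitive service lands onsite; a batch job chases cheap compute.
print(place(["low_latency"])["name"])                    # onsite-server
print(place(["low_cost", "green_energy"])["name"])       # cloud-dc
print(place(["green_energy", "low_latency"])["name"])    # regional-dc
```

The key design point this sketch illustrates is that the service never names a machine: change the node inventory and the same declaration of priorities yields a different (but still optimal) placement.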

In summary, an effective fog computing architecture should provide a geographically diverse set of computing resources, and a platform through which you can easily use the best set of those resources at any given time, according to your unique (and changing) business needs.

If you decide that fog computing is the solution you’ve been looking for, do your homework. The number of “fog”-labeled offerings will surely increase as its benefits become even more widely known, but before beginning to adopt fog computing infrastructure yourself, make sure it’s the real deal.

This article is published as part of the IDG Contributor Network.