The top 3 serverless problems and how to solve them

Follow these tips to eliminate noncompute bottlenecks, avoid provider throttling and queueing, and keep your serverless functions responsive


Serverless computing is all the rage. Everyone who is anyone is either investigating it or has already deployed on it. Don’t be last in line or you’ll be sure to miss out!

What’s all the fuss about? Serverless computing gives you an infrastructure in which server resources are applied to a system automatically as it needs to scale, effectively providing compute horsepower as a utility that the system consumes as load demands.

This means that nobody needs to care about individual servers at runtime (frankly, nobody ever did care about them). Economies of scale make it cost effective to outsource managing a fleet of servers to a cloud provider, while the “serverless” interface makes that outsource relationship as simple as possible by minimizing the contract.
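That minimized contract is easy to see in code. A sketch of a serverless function, using the AWS Lambda Python handler convention as one concrete example (other providers differ in detail but not in spirit): the platform hands your function an event and a context, and you hand back a result. There are no servers, processes, or ports in the interface at all.

```python
# Minimal serverless handler sketch (AWS Lambda Python convention).
# The entire "contract" with the provider is this one function signature:
# the platform invokes it with an event payload and a runtime context,
# and scales the number of concurrent invocations for you.

def handler(event, context):
    """Entry point invoked by the platform; no server to manage."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Because the contract is just a function call, you can exercise it locally like any other function, e.g. `handler({"name": "serverless"}, None)`.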

Being human, many people’s immediate reaction is to replace the charts, traffic lights, and alerts they attached to their servers with charts, traffic lights, and alerts for their individual serverless functions. Sadly, this does not fundamentally address the application management challenge, because just as nobody really cared about servers, nobody really cares about serverless functions in isolation either.
