The hidden costs of the data explosion

As data grows out of control, it's easy to overlook the high price of maintaining BC/DR solutions to keep all that data safe

Rampant data growth makes slaves of us all. The effects are well known: massive growth in storage infrastructure, prematurely obsolete storage resources, the endless scramble to stay on top of it all.

Storage vendors have reacted to the onslaught with cheaper, more capable primary storage hardware to sate the data addiction. But as primary storage resources are continuously upgraded in response to growth, disaster recovery and business continuity architectures are often pushed beyond their original design limits -- leaving organizations at significant risk.

A failure to plan is a plan for failure
The dangers of neglecting to expand BC/DR capabilities in lockstep with the primary storage environment are many and varied. The most common examples I've seen can be found in traditional backup infrastructures.

All too often, so much primary data needs to be supported that backup windows start overlapping with production hours. To prevent that encroachment, well-meaning admins often start by trimming "unimportant" data out of the backup rotation. But it usually doesn't end there. Before long, entire servers are being backed up less frequently, then sometimes not at all. From there, you're only a hop, skip, and a jump from neglecting to protect something that is important and living to regret it.
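The window-overlap problem above is simple arithmetic: data volume divided by sustained backup throughput versus the hours available overnight. Here's a minimal sketch of that check; the data sizes, throughput, and window length are illustrative assumptions, not figures from any particular environment.

```python
# Hypothetical illustration: does a nightly full backup still fit its
# window as data grows? All numbers here are assumptions.

def backup_hours(data_tb: float, throughput_tb_per_hour: float) -> float:
    """Hours needed to back up data_tb at a sustained throughput."""
    return data_tb / throughput_tb_per_hour

def fits_window(data_tb: float, throughput_tb_per_hour: float,
                window_hours: float) -> bool:
    """True if the backup completes before production hours resume."""
    return backup_hours(data_tb, throughput_tb_per_hour) <= window_hours

# An assumed 8-hour overnight window at 5 TB/hour of sustained throughput:
print(fits_window(30, 5.0, 8.0))  # 6 hours of backup -> fits
print(fits_window(50, 5.0, 8.0))  # 10 hours -> overlaps production
```

The same growth that adds 20TB of primary data silently turns a comfortable 6-hour job into one that spills into the workday, which is exactly when the trimming of "unimportant" data tends to begin.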

Worse still, storage infrastructures packed to the gills with data not only take longer to back up, they also take much longer to restore in the event that data is lost or corrupted. A resource on which you may have been able to deliver a one-hour RTO a few years ago now might take two, three, or even four times as long to restore.
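The RTO drift described above can be made concrete with the same kind of back-of-the-envelope math: if restore throughput stays flat while the dataset quadruples, the restore time quadruples too. This sketch uses assumed figures purely for illustration.

```python
# Hypothetical illustration of RTO drift: a restore that once met a
# one-hour RTO blows past it as the dataset grows, even though restore
# throughput stays constant. Numbers are assumptions.

def restore_hours(data_tb: float, restore_tb_per_hour: float) -> float:
    """Hours to restore data_tb at a sustained restore throughput."""
    return data_tb / restore_tb_per_hour

def meets_rto(data_tb: float, restore_tb_per_hour: float,
              rto_hours: float) -> bool:
    """True if a full restore completes within the stated RTO."""
    return restore_hours(data_tb, restore_tb_per_hour) <= rto_hours

THROUGHPUT = 2.0  # TB/hour, assumed constant across both years

print(meets_rto(2.0, THROUGHPUT, 1.0))  # the old 2TB volume: 1 hour, meets RTO
print(meets_rto(8.0, THROUGHPUT, 1.0))  # same volume at 8TB: 4 hours, misses it
```

Nothing in the backup software failed here; the RTO commitment simply wasn't re-derived as the data grew, which is why it needs to be revisited every time primary capacity is.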

When you consider more advanced business continuity measures such as hot and warm sites, things get even worse. Not only do you need to make sure you're growing your disaster recovery site's storage in line with that of your primary site, you also need to make sure that the site will be able to withstand the additional transactional storage and compute loads that go along with it. In this case, a healthy dose of disaster recovery plan testing is absolutely crucial to determine where you stand.

Preventing a disaster recovery disaster
The most obvious step you can take to avoid these sorts of scenarios playing out in your own environment is to insist on factoring the cost of enhancing your BC/DR resources into that of your primary storage environment -- as if they were one and the same.
