Facebook founder Mark Zuckerberg has an interesting take on risk:
“The biggest risk is not taking any risk … In a world that’s changing really quickly, the only strategy that is guaranteed to fail is not taking risks.”
The world is changing “really quickly” for IT professionals, with a new and rapidly growing set of mobile, social, and cloud-native apps driving business growth. This new way of doing business requires new technology, and every time you bring new technology into the data center, you add risk – risk that it won’t integrate easily with your existing infrastructure, that silos will make resource management more complex, or that your current team may lack the skill sets needed to manage the new infrastructure.
How do you balance the need to support business change with the need to maintain stability in IT systems? Here’s another piece of advice from Zuckerberg:
“Move fast and break things. Unless you are breaking stuff, you are not moving fast enough.”
Balancing risk with the need for innovation
OK, that might not sound like a good answer. In fact, many of us in IT view that statement as a prelude to career change because we’re paid to make sure things don’t break. Many businesses have spent millions of dollars on their infrastructure; it’s what keeps the business going, and they’re understandably reluctant to risk changing too much too fast. Still, the larger point remains valid – your business needs to move faster to stay competitive.
You see it in your day-to-day operations. Line of business (LOB) developers want to innovate now but are constrained by the limitations of legacy systems. Then a developer thinking outside the box decides to go around IT and push unproven applications onto the network without following protocols. Even worse, they create a shadow IT environment by pushing a new application out to the public cloud without consulting, or even informing, IT. If something breaks in the process, you own it – even if you didn’t put it there in the first place.
Today’s data center is a complex mix of assets spread across operational silos. Complexity equals risk, and each new demand from the business seems to add to that risk profile. So how do you meet the business demands for change and still keep the lights on? How do you enable the business to move fast without breaking things?
Small changes, small gains?
You could stick with what you know – making incremental changes to your on-premises infrastructure, following a standard refresh cycle and deploying next-gen upgrades that are a little more reliable and maybe a little less complex to operate and manage. The upside is that stable IT remains relatively stable, but there’s always the risk that legacy equipment may not integrate well with next-generation systems.
Or you could add a software control layer to your environment to reduce the complexity of managing different resource silos. But what happens when your new software layer isn’t supported by the next-gen hardware upgrade you plan to deploy? If your LOBs are unhappy with the current state, you haven’t really fixed anything.
Back in the early days of Facebook, Zuckerberg told a reporter, “I’m here to build something for the long term. Anything else is a distraction.” For IT, making incremental changes for incremental benefits can become a distraction. It’s not that these aren’t good solutions – for the right use cases they add value and provide incremental improvements. But they don’t resolve your IT fragmentation, don’t operate across generational silos, and often lack compatibility across vendors. That is not building for the long term.
A new hybrid model for infrastructure
Your business is looking for public cloud-like convenience and speed. IT would like to provide that experience, but without the risks and anxiety often associated with public cloud – for example, the dependence on third-party capabilities, and the additional failure points that can be introduced with remote infrastructure. The public cloud can provide significant benefits by taking on certain types of workloads – email and collaboration apps, for example. But applications that process critical information like customer IDs or credit card data aren’t a good fit. Compliance and security are key concerns; indeed, they’re the top drivers of the decision to keep IT services on-premises, according to a 2015 IDG survey, cited by 63% and 60% of respondents respectively (see this IDG Market Pulse white paper: 5 strategies for transforming on-premises infrastructure).
The alternative is a hybrid approach, establishing the right mix of public cloud, private/hybrid cloud, and traditional IT systems in a hybrid infrastructure, controlled by a single management environment. Companies ready to move beyond their existing IT models should consider these options:
Option 1: Hyperconverged infrastructure. These systems bring together compute and storage in a single frame with an easy-to-use software management layer. Hyperconverged systems simplify deployment and management, enabling IT organizations to take a confident first step toward hybrid cloud. By providing cloud-like velocity and convenience, hyperconverged solutions reduce the risk of shadow IT. By tightly integrating resources from a single vendor, they reduce the risk of support glitches and finger-pointing among multiple vendors – they give you “one throat to choke.” They are especially well-suited to branch/remote office solutions and desktop virtualization, and are usually configured to support specific workloads.
IT shops that are considering a hyperconvergence deployment should give some thought to how the solution will integrate with other elements in the data center. Some hyperconverged solutions don’t hook into your current systems easily, creating a long-term risk of silo management and complexity. There’s also a risk that some of the startup vendors in this space may not survive to support your systems for the long haul.
Option 2: Composable infrastructure. This new category of infrastructure eliminates resource silos by providing fluid pools of resources, software-defined intelligence, and a unified API for compute, storage, and fabric. Resources can be provisioned on demand for the needs of a specific workload, then released back into the pool when no longer needed. Composable infrastructure is true infrastructure-as-code, on-premises, with template-driven, fully automated provisioning and frictionless updates that simplify lifecycle management.
As with all options, there’s an element of risk. In this case, the primary risk is short-term disruption as you transition to a new architecture and new tools. But there’s a huge upside. You eliminate silos at the hardware level and simplify application integration at the software level. Your ecosystem can grow without adding complexity or risk. Best of all, you enable the business to move fast – without breaking things.
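To make the compose-and-release idea concrete, here is a minimal sketch in Python of how template-driven provisioning against fluid resource pools might behave. The class and field names are purely illustrative assumptions, not any vendor’s actual API – the point is only the pattern: a template describes a workload’s needs, resources are carved out of shared pools on demand, and they flow back into the pools when the workload is done.

```python
# Hypothetical model of composable provisioning (illustrative names only,
# not a real product API): fluid pools of compute, storage, and fabric,
# composed per workload template and released back when no longer needed.

class ResourcePool:
    def __init__(self, compute, storage_tb, fabric_ports):
        # Free capacity available for composition.
        self.free = {"compute": compute,
                     "storage_tb": storage_tb,
                     "fabric_ports": fabric_ports}

    def compose(self, template):
        """Carve resources out of the pool according to a workload template."""
        if any(template[k] > self.free[k] for k in template):
            raise RuntimeError("insufficient free resources in pool")
        for k, v in template.items():
            self.free[k] -= v
        return dict(template)  # a handle describing the composed node

    def release(self, node):
        """Return a composed node's resources to the shared pool."""
        for k, v in node.items():
            self.free[k] += v


# Usage: provision a database node from a template, then release it.
pool = ResourcePool(compute=32, storage_tb=100, fabric_ports=48)
db_template = {"compute": 8, "storage_tb": 20, "fabric_ports": 4}

node = pool.compose(db_template)   # pool shrinks while the workload runs
pool.release(node)                 # pool is whole again afterward
```

The design choice worth noticing is that the template, not the operator, carries the configuration – which is what lets the same provisioning step be repeated, automated, and audited instead of being rebuilt by hand for each workload.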
Taking risks without creating risk
Hybrid infrastructure is the new reality for IT, providing an effective bridge from traditional IT to the digital enterprise. The right strategy can give your IT team an easier, more reliable way to manage infrastructure so that your business can take risks without putting the business at risk.
Learn more about the many ways Hewlett Packard Enterprise can help you transform to a hybrid infrastructure and achieve cloud-like velocity without compromising stability.
Related posts by Mark Potter:
Ending IT’s Game of Groans: the path to better control
Geared for speed: accelerating IT with hybrid infrastructure