sponsored

Stop the Butterfly Effect in Your Datacenter with Better Infrastructure Visibility

By Bharath Vasudevan, HPE Product Manager, Software-defined and Cloud Group

The butterfly effect is the idea that even small changes can have large consequences. No one understands the butterfly effect better than a datacenter administrator.

IT departments continue to face the challenge of how to set up and maintain numerous applications and services in a hybrid IT environment. One of the main considerations is how any changes in their current environment might disrupt existing applications. Add to that their worry about separate management domains, fault domains, resource monitoring, future capacity, current utilization, and a host of other concerns. Even a small change can cause big problems.

In today’s dynamic datacenter environment, being able to see how the infrastructure is performing is vital. This visibility allows you to keep “eyes” on everything going on in your datacenter and therefore mitigate the possibility of the butterfly effect. When selecting a tool that will help give you this kind of visibility, you should keep in mind five key needs: compliance, inventory, resource status, global reporting, and bandwidth consumption.

1. Simple compliance                  

Many IT departments struggle to ensure compliance with standards and adherence to corporate policy. Compliance can be inspected and tracked manually, but that process is time-consuming and error-prone. It could be automated with scripting, but how does the administrator ensure the script is updated to collect data from new systems added to the environment? What's needed is a simple compliance report that lists each server, its expected firmware revision, and the firmware revision it is actually running.
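As a purely illustrative sketch, a script of this kind might compare each server's reported firmware against an expected baseline and flag mismatches. The inventory CSV, the server models, and the firmware versions below are assumptions for illustration, not HPE OneView data or APIs:

# Hypothetical compliance check: compare reported firmware (from an inventory
# CSV with columns name, model, firmware) against an assumed baseline.
import csv

EXPECTED_FIRMWARE = {  # assumed baseline per server model
    "ProLiant DL380 Gen10": "U30 v2.68",
    "ProLiant DL360 Gen10": "U32 v2.72",
}

def compliance_report(inventory_csv):
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            expected = EXPECTED_FIRMWARE.get(row["model"])
            yield row["name"], expected, row["firmware"], row["firmware"] == expected

if __name__ == "__main__":
    for name, expected, actual, ok in compliance_report("servers.csv"):
        print(f"{name}: expected {expected}, running {actual}, "
              f"{'COMPLIANT' if ok else 'OUT OF COMPLIANCE'}")

The catch described above shows up immediately: any server added to the environment but missing from servers.csv is simply never checked.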

2. Current inventory view

The need to view inventory and generate reports based on it is common in any IT organization. Access to the right reports in a timely manner helps both IT operations and executives make better business decisions. In many environments, these are handcrafted reports in Excel spreadsheets; alternatively, IT administrative staff create and maintain scripts that poll devices and collect the desired information. Either way, maintaining a current view of inventory is a manual process that is time-consuming and may not yield the most up-to-date information. An automated process that tracks current inventory would ensure the information is always accurate and immediately available.
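To make the manual-polling approach concrete, here is a minimal sketch of such a script, assuming each server exposes a Redfish-style management endpoint; the hostnames, credentials, and endpoint path are placeholders, and error handling is omitted:

# Hypothetical inventory poller: query each management processor for basic
# system details over a Redfish-style REST interface (placeholder hosts/creds).
import json
import requests

HOSTS = ["ilo-server01.example.com", "ilo-server02.example.com"]

def collect_inventory(hosts):
    inventory = []
    for host in hosts:
        r = requests.get(f"https://{host}/redfish/v1/Systems/1",
                         auth=("admin", "password"), verify=False, timeout=10)
        r.raise_for_status()
        system = r.json()
        inventory.append({
            "host": host,
            "model": system.get("Model"),
            "serial": system.get("SerialNumber"),
        })
    return inventory

if __name__ == "__main__":
    print(json.dumps(collect_inventory(HOSTS), indent=2))

Every time the script runs it re-queries every host, and the report is only as fresh as the last run, which is exactly the staleness problem described above.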

3. Quick understanding of resource status

Understanding the state of resources under management is one of the primary tasks of a datacenter operator. The admin needs to quickly identify any resources that are having issues, drill down into those resources, and begin to diagnose the root cause of the problem. Viewing an integrated status of resources, along with how they relate to other resources and their connectivity, can be difficult and often requires multiple tools. Multiple tools means multiple places where partial bits of the necessary data live, information that has to be manually pieced together to get the overall picture. And jumping between numerous tools leads to frustration. What's needed is a single, integrated tool that collects and aggregates resource details from multiple systems and provides a high-level overview along with the ability to drill down into the details.
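As a rough sketch of the aggregation idea, a roll-up could keep the most severe state seen for each resource; the get_statuses() call below is a stand-in returning sample data, not a real management API:

# Hypothetical status roll-up: merge per-domain health states into one overview,
# letting the most severe state win for each resource.
SEVERITY = {"OK": 0, "Warning": 1, "Critical": 2}

def get_statuses(endpoint):
    # sample data standing in for a real per-domain status query
    return {"enclosure-1": "OK", "server-42": "Warning"}

def aggregate(endpoints):
    overview = {}
    for endpoint in endpoints:
        for resource, state in get_statuses(endpoint).items():
            current = overview.setdefault(resource, "OK")
            if SEVERITY[state] > SEVERITY[current]:
                overview[resource] = state
    return overview

print(aggregate(["dc-boston", "dc-austin"]))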

4. Automated global reporting

Creating reports of all managed servers (including verification of compliance) across distributed datacenters is also helpful. These reports give executives answers to questions like, “How many Gen8 servers do we still have in our Boston datacenter?” or “How many servers need to be updated to the newest firmware?” Many IT admins build homegrown reports, a manual and time-consuming effort. A tool with automated global reporting frees up that time for more valuable activities.
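For example, given a combined inventory list (the structure below is assumed for illustration), the Gen8-per-site question reduces to a few lines:

# Hypothetical cross-datacenter report: count Gen8 servers per site from a
# combined inventory list (structure assumed for illustration).
from collections import Counter

inventory = [
    {"site": "Boston", "model": "ProLiant DL380 Gen8"},
    {"site": "Boston", "model": "ProLiant DL380 Gen10"},
    {"site": "Austin", "model": "ProLiant DL360 Gen8"},
]

gen8_per_site = Counter(s["site"] for s in inventory if "Gen8" in s["model"])
print(gen8_per_site)  # Counter({'Boston': 1, 'Austin': 1})

The hard part is not the counting; it is keeping that combined inventory current across sites without hand-assembling it, which is what automated global reporting provides.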

5. Minimum bandwidth consumption

When discussing centralized status, views, and reporting, one area of concern is bandwidth consumption on the network links between datacenters. With many traditional systems, there is a direct correlation between how current the data is and the network bandwidth required to maintain it, because keeping the data current requires constant polling of the remote resources. Continuous polling is inefficient and uses network bandwidth needlessly, and the data it gathers can still be stale until the next polling interval elapses. Rather than continuously polling remote resource states, only changes to the remote environment should be pushed out, minimizing bandwidth consumption.
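As an architectural sketch only (the /events path and payload shape are assumptions, not how HPE OneView Global Dashboard communicates), a push model can be as simple as a listener that applies change events to a locally cached copy of remote state:

# Hypothetical push receiver: keep a locally cached copy of remote resource
# state current by applying change events pushed from the remote site.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

cached_state = {}  # resource name -> last reported status

class ChangeEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        event = json.loads(self.rfile.read(length))  # e.g. {"resource": "server-42", "status": "Warning"}
        cached_state[event["resource"]] = event["status"]
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), ChangeEventHandler).serve_forever()

Each event costs a few hundred bytes and arrives only when something actually changes, while a polling loop re-fetches every remote resource's full state on every interval regardless of whether anything changed.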

Solution: HPE OneView Global Dashboard

HPE OneView Global Dashboard helps infrastructure administrators address all of these needs by providing instant access to the health status of resources and a rich user experience with seamless visualization, reporting, and problem resolution. To achieve enterprise scale, administrators can view multiple HPE OneView instances not only within a datacenter, but also across multiple datacenters and geographic locations.

Stop the butterfly effect in your datacenter. By using HPE OneView Global Dashboard, IT administrators can be sure that a small change won’t have a disastrous effect on their current applications. HPE OneView Global Dashboard can help you simplify compliance, view current inventory, quickly understand resource status, automate global reporting and minimize bandwidth across your datacenter.

To get more information on HPE OneView Global Dashboard, watch this 52-second video or read more here.
