The goal of a software-defined environment is to enable business users to describe their expectations of IT in a systematic way, which in turn drives automation of the infrastructure. The infrastructure understands an application's needs through defined policies that control the configuration of compute, storage, and networking, and it optimizes application execution. Through this approach, organizations can respond in real time, improving availability and supporting shifting volumes of work.
For example, in an application such as fraud analytics, workload spikes often occur when large amounts of data must be processed, frequently a mix of unstructured data from social sources and transactional history. A software-defined environment enables the business to allocate compute and storage resources automatically to meet peak demand and prevent performance degradation.
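To make the idea concrete, the sketch below shows how such a policy might be expressed and enforced in code. It is a minimal illustration under assumed interfaces, not a product API: get_queue_depth, current_workers, and scale_to are hypothetical stand-ins for whatever monitoring and orchestration hooks a given environment exposes.

```python
# Minimal sketch of a policy-driven autoscaling loop (illustrative only).
# get_queue_depth(), current_workers(), and scale_to() are hypothetical
# stand-ins for a real monitoring/orchestration API.

import time

POLICY = {
    "max_queue_per_worker": 100,   # target: no more than 100 pending jobs per worker
    "min_workers": 2,
    "max_workers": 50,
}

def desired_workers(queue_depth: int) -> int:
    """Compute how many workers the policy calls for at the current load."""
    needed = -(-queue_depth // POLICY["max_queue_per_worker"])  # ceiling division
    return max(POLICY["min_workers"], min(POLICY["max_workers"], needed))

def reconcile(get_queue_depth, current_workers, scale_to, interval_s: float = 30.0):
    """Periodically compare observed load to the policy and scale to match."""
    while True:
        target = desired_workers(get_queue_depth())
        if target != current_workers():
            scale_to(target)   # add capacity before the spike degrades service
        time.sleep(interval_s)
```

The point of the sketch is that the business intent ("no more than 100 pending jobs per worker") lives in a declared policy, while the loop that enforces it is generic automation.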
Three steps to a software-defined environment
A software-defined environment can't be built in a day. Organizations must develop the architecture over time, step by step. The three most important steps are mandating an open approach to virtualization, creating policies to optimize the infrastructure, and enabling the elastic scale of data.
1. Open virtualization. Opening up hardware capabilities through defined APIs that integrate into open frameworks such as OpenStack is the first step toward building an agile, responsive, and flexible IT infrastructure. A software-defined environment starts with a virtualized data center that includes compute, storage, and networking resources built on open interfaces and an integrated framework. Open interfaces speed the integration of these domains, break down silos of expertise, and give organizations choice. Building software-defined offerings on open standards extends that choice with flexibility and interoperability across the data center.
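As one concrete illustration of driving virtualized compute through an open API, the sketch below uses the Python openstacksdk library to provision a server. It is a minimal sketch, assuming openstacksdk is installed and a cloud named my-cloud is defined in the site's clouds.yaml; the image, flavor, and network IDs are placeholders for values from the site's own catalog.

```python
# Illustrative sketch: provisioning a virtual server through OpenStack's open API
# using the openstacksdk library. The cloud name and the image/flavor/network IDs
# are placeholders for values defined in a site's own clouds.yaml and catalog.

import openstack

conn = openstack.connect(cloud="my-cloud")  # credentials resolved from clouds.yaml

server = conn.compute.create_server(
    name="analytics-worker-01",
    image_id="IMAGE_ID",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "NETWORK_ID"}],
)
server = conn.compute.wait_for_server(server)  # block until the instance is ACTIVE
print(server.name, server.status)
```

Because the same open API is consumed by higher-level automation, the call above could just as easily be issued by a policy engine as by an administrator.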
2. Policy optimization. Organizations need to enhance infrastructure automation with a policy manager that ensures adherence to ongoing service-level agreements and responds to changing workload demands in real time. They also need extensive capability to automate resources at the compute layer and to integrate that optimization with the storage layer.
3. Elastic scaling. In the storage arena, organizations need the ability to store and share large amounts of structured and unstructured data across their data centers quickly, reliably, and efficiently. A high-performance enterprise file management platform that includes a clustered file system brings together multiple file servers and multiple storage controllers to increase reliability and performance.
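One way to picture such a policy manager is as a small control loop that compares observed service levels against the SLA and triggers compute or storage actions when they drift. The sketch below is purely illustrative: observe_latency_ms, add_compute_node, and rebalance_storage are hypothetical hooks standing in for a real orchestration layer, and the thresholds are invented for the example.

```python
# Illustrative sketch of an SLA-driven policy manager (not a specific product API).
# observe_latency_ms(), add_compute_node(), and rebalance_storage() are hypothetical
# hooks into whatever monitoring and orchestration interfaces the environment provides.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SlaPolicy:
    service: str
    max_latency_ms: float      # SLA target for response time
    max_violations: int = 3    # tolerate brief blips before acting

def enforce(policy: SlaPolicy,
            observe_latency_ms: Callable[[], float],
            add_compute_node: Callable[[], None],
            rebalance_storage: Callable[[], None],
            samples: int = 10) -> None:
    """Check recent samples against the SLA and remediate if it is being missed."""
    violations = sum(1 for _ in range(samples)
                     if observe_latency_ms() > policy.max_latency_ms)
    if violations > policy.max_violations:
        add_compute_node()     # grow the compute tier first...
        rebalance_storage()    # ...then spread hot data across the clustered file system
```

The design point is that the SLA itself is declared once as policy, while remediation spans both the compute and storage layers, reflecting the integration the step calls for.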