Data centers have undergone their share of physical, functional, and definitional change over the last 50 years. Business expectations, driven by growing awareness of and reliance on IT, have accompanied this change, steadily raising performance and reliability requirements.
Today the data center carries a number of high expectations. Among them is the need to be highly virtualized, energy efficient, reliable, resilient, high performance, and secure. Massive amounts of on-tap compute power should provide scale as needed, with everything residing in a condensed footprint that is easily managed and poised to deliver mission-critical applications in an instant. Not surprisingly, business continuity is top of mind, as IT downtime can have catastrophic consequences.
While some organizations have made incredible progress toward modernizing their data centers, many are just starting. With technology advancing rapidly, the executive office has cast a bright light on the data center. Fueled by a desire to unearth the value of information from every source, line-of-business executives are looking to their IT counterparts to drive competitive advantage, customer engagement, market opportunity, productivity, and more. Their thirst for real-time delivery of game-changing information, carried by a growing number of bandwidth-intensive, mission-critical applications, shows no sign of slowing down or reversing.
Enterprise data centers have long invested in Fibre Channel storage as a means of ensuring that the requirements of their mission-critical environments are met with low latency, determinism, and performance. The backward-compatible nature of Fibre Channel, and the flexibility it provides in supporting multiple protocols, has further fueled enterprise adoption by making integration easier. Fibre Channel also provides the high-performance network links needed to support the adoption of faster, next-generation storage systems such as solid-state drives (SSDs), and to serve physical servers with growing VM density.
As organizations actively adopt 16 GFC to meet these demands, a point of exposure is the cable plant that carries this traffic. An inadequate cable plant can result in system outages, application latency, higher operational expenses, and an inability to optimize extraordinary investments in next-generation technology.
Some key cabling considerations when designing or refreshing your data center include:
- T11 specifications for dB loss budgets based on distance and fiber type
- Next-generation technology readiness/investment protection
- Risk mitigation
- Operational overhead associated with move, add, and change activity
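The loss-budget consideration above comes down to simple arithmetic: fiber attenuation over the run's length, plus the insertion loss of every mated connection, must stay within the channel's budget. The sketch below illustrates the calculation; the attenuation and connector-loss figures are illustrative assumptions only, and the actual budgets for each speed and fiber type should be taken from the T11 FC-PI specifications.

```python
# Sketch of a fiber link loss-budget check. All numeric values here are
# illustrative assumptions, not figures from the T11 specifications.

def link_loss_db(length_m, attenuation_db_per_km, connector_losses_db):
    """Total insertion loss: fiber attenuation plus each mated connection."""
    fiber_loss = (length_m / 1000.0) * attenuation_db_per_km
    return fiber_loss + sum(connector_losses_db)

def fits_budget(length_m, attenuation_db_per_km, connector_losses_db, budget_db):
    """True if the computed link loss is within the channel budget."""
    return link_loss_db(length_m, attenuation_db_per_km,
                        connector_losses_db) <= budget_db

# Example: a 100 m multimode run with two patch-panel connections,
# assuming 3.0 dB/km fiber attenuation and 0.35 dB per connection.
loss = link_loss_db(100, 3.0, [0.35, 0.35])
print(f"total loss: {loss:.2f} dB")  # prints "total loss: 1.00 dB"
```

Running the same check for each proposed topology makes it easy to see how every added patching point eats into the distance the link can support.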
So what can be done to ensure that your data center meets and exceeds today's service-level requirements while being poised to carry your organization into the future? As far back as the mid-1990s, IBM introduced the first structured cabling system for data centers, known as the Fiber Transport System (FTS). This concept was later adopted into the TIA-942 data center standard, which states that every port on every active device in the data center is represented by a port on the front side of a panel at the Central Patching Location (CPL). This effectively takes all move, add, and change activity away from active equipment, simplifying each process and mitigating the risk of unintended downtime while facilitating accurate documentation.
Inevitably, light loss budgets will continue to contract while the bandwidth needed to support high-demand applications expands. This contrasting dynamic jeopardizes IT's ability to act as a true services organization to the business, while placing customer satisfaction and engagement expectations at risk. Structured connectivity topologies that make effective use of low-loss connectors (such as LCs) help ensure compliance with industry standards while protecting infrastructure investments by supporting new technology adoption.
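The benefit of low-loss connectors can be quantified with the same budget arithmetic: subtract the fiber attenuation from the channel budget, and the remainder determines how many mated connector pairs a topology can afford. The budget and per-connection loss values below are illustrative assumptions, not figures from any standard, but the comparison shows why lower-loss connectors buy design flexibility as budgets contract.

```python
# Sketch: how many mated connector pairs fit within a channel's loss
# budget after fiber attenuation is subtracted. The budget and loss
# values are illustrative assumptions only.
import math

def max_connections(budget_db, fiber_loss_db, per_connection_db):
    """Largest whole number of mated pairs the remaining budget allows."""
    remaining = budget_db - fiber_loss_db
    return max(0, math.floor(remaining / per_connection_db))

fiber_loss = 0.3  # e.g. 100 m at an assumed 3.0 dB/km

print(max_connections(1.9, fiber_loss, 0.75))  # higher-loss connections -> 2
print(max_connections(1.9, fiber_loss, 0.35))  # low-loss connections  -> 4
```

Doubling the number of allowable patching points is what makes structured topologies with a central patching location practical inside a shrinking budget.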