5 data center breakthroughs I'm thankful for

Savor these server, networking, and storage advancements while you're fixing Mom's computer this holiday season

Technology has a habit of evolving at a rate that's difficult to keep up with. New hardware and software with ever-broader capabilities and increased performance hit the market at a dizzying pace. Staying on top of it all -- much less trying to figure out how best to leverage it -- can be a job in itself. Sometimes, though, it's helpful to take a step back and count your blessings.

Fortunately, there's a flip side to rapid technology advancement: When we're not merely striving to keep up, we're working with much better tools than before. In no particular order, here are five data center innovations I simply couldn't do without today.

Infrastructure APIs
If you've worked in a large IT department, or especially an ISP, chances are you've done a ton of scripting over the years. Either that, or you've had to perform a lot of manual, repetitive tasks for lack of a script to automate them. I've written my fair share of kludges, usually built around some combination of Expect and Perl, to take care of tasks that really should never have been addressed that way.

Many years ago, I needed to pull the MAC address table from a collection of switches to build a database of MAC-to-port assignments. My solution (if you can call it that) involved a pile of Perl that would telnet into the console of each switch, manually log itself in, figure out which switch OS it was talking to (some were Cisco CatOS, some Cisco IOS), run the right commands, and scrape the output. Depending upon the OS and version, it would then try to parse the output from a command that was never intended to be consumed programmatically. In the end, the script worked reasonably well -- as long as nothing really changed on the network -- but the code was brimming with regexes from hell. It was impossible to read without cringing.
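
For the curious (or the nostalgic), here's a minimal sketch of that style of screen scraping, using Python's pexpect module in the role Expect and Perl used to play. The hostnames, credentials, prompts, and regex are all illustrative, not lifted from that original script:

    import re
    import pexpect  # Python's take on the classic Expect tool

    def scrape_mac_table(host, user, password):
        # Telnet in and "type" the login by pattern-matching the prompts
        session = pexpect.spawn("telnet " + host, timeout=10)
        session.expect("[Uu]sername:")
        session.sendline(user)
        session.expect("[Pp]assword:")
        session.sendline(password)
        session.expect("[>#]")

        # Disable paging, then dump the MAC table (IOS syntax shown here;
        # a CatOS switch needed a different command entirely)
        session.sendline("terminal length 0")
        session.expect("[>#]")
        session.sendline("show mac address-table")
        session.expect("[>#]")
        output = session.before.decode("ascii", errors="replace")
        session.sendline("exit")

        # The fragile part: scraping MAC/port pairs out of output meant
        # for human eyes -- any change to the format breaks the regex
        row = re.compile(
            r"([0-9a-f]{4}\.[0-9a-f]{4}\.[0-9a-f]{4})\s+\S+\s+(\S+)\s*$",
            re.MULTILINE)
        return row.findall(output)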

Today, thanks to the widespread adoption of the NETCONF standard, I can fire an XML query at each switch over a secure, authenticated session (typically SSH), and the switch will happily cough up the information I want in a standardized XML format. I would have killed for that years ago. Today I take it for granted.
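
Here's roughly what that looks like with the open source ncclient Python library. The host and credentials are placeholders, and the element in the subtree filter assumes a data model that exposes the MAC table -- the real tag varies by vendor:

    from ncclient import manager  # open source NETCONF client library

    # NETCONF usually listens on port 830
    with manager.connect(host="switch1.example.com", port=830,
                         username="admin", password="secret",
                         hostkey_verify=False) as conn:
        # Ask for exactly the subtree we want instead of scraping CLI output
        reply = conn.get(filter=("subtree", "<mac-address-table/>"))
        print(reply.xml)  # well-formed XML, ready for any standard parser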

Multichassis EtherChannel
In a classic redundant design, each aggregation switch (or server) is dual-homed to a pair of core or distribution switches. However, due to the "joys" of Spanning Tree, one of each of those two redundant links will be sitting idle in a Spanning Tree "blocked" state -- necessary to prevent creating a disastrous network loop. (If you've ever seen a rack full of network gear with every port activity light lit solid, you know what I'm talking about.) While stranding half of your uplink capacity was required to make the whole thing work, it was always painful to see those expensive resources wasted.

Today, an increasing number of switches support Multichassis EtherChannel (MEC) technology. MEC allows you to pair your redundant core or distribution switches and build a single logical link down to the devices to which they are both attached, whether that's an aggregation switch or a server. Examples include Cisco's MEC (a feature of VSS on the 6500-series switches and soon the 4500 series), Cisco's vPC (in the Nexus-series switches), and Brocade's MCT.

All of these solutions are great because they allow you to build a single logical link to each edge device served by the dual core and treat that link as a port channel. If one switch or link fails, the available bandwidth is halved, but under normal circumstances both links can be used to their fullest extent. Better yet, failover and failback times are typically far less noticeable than with even an optimally configured Rapid Spanning Tree implementation.
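
If it helps to picture why both links stay busy, here's a toy sketch: the port channel hashes each flow onto one of its live member links, so traffic spreads across both under normal conditions and simply converges onto the survivor when a link dies. Real switches hash on vendor-specific combinations of MAC, IP, and port fields; this only illustrates the idea:

    import zlib

    def pick_member_link(src_mac, dst_mac, links):
        # Each flow hashes deterministically onto one live member link,
        # so packets within a flow never get reordered across links
        key = (src_mac + "-" + dst_mac).encode()
        return links[zlib.crc32(key) % len(links)]

    # One logical port channel to the dual core -- both links forwarding
    links = ["uplink-to-core-A", "uplink-to-core-B"]
    print(pick_member_link("0050.5680.0001", "0050.5680.00ff", links))

    # If core A dies, the channel just shrinks: bandwidth is halved, but
    # there's no Spanning Tree reconvergence to wait out
    links.remove("uplink-to-core-A")
    print(pick_member_link("0050.5680.0001", "0050.5680.00ff", links))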

Converged networking
When you're pressed for time and short on capital, the last thing you want to do is invest in two parallel, high-bandwidth networks: one for "network" traffic and another for storage traffic. But that's just what IT has been doing for years. These days, you can effectively have your cake and eat it too by investing in switching tech that can handle both network and storage traffic -- generally either iSCSI or FCoE -- simultaneously. Better yet, with the increasing prevalence of converged networking adapters in servers of all sizes and shapes, you typically won't need more than a pair of high-bandwidth (generally 10GbE) cables for each server. That should handle all of the connectivity needs the server will ever have.

True thin provisioning
When virtualized block-based primary storage started to replace traditional storage, I was really excited. Gone were the days of building individual RAID sets and trying to allocate volumes over them in such a way that you could get the performance and capacity you needed. Instead, the storage gear would spread blocks out across all of the disks you could make available to the array -- maximizing capacity and performance at the same time.

Thin provisioning allows you to present more storage to a server than you are allocating on the array side. Effectively, you can present more storage than you have -- with the knowledge that when presented storage starts to be consumed, you can add more physical storage before you run out.

That's all well and good, but it doesn't address what happens when storage is consumed by the server and then released. If I create a 10GB file on a disk that is thin-provisioned at the array, the file will obviously consume 10GB of storage from the array's pool of free storage. However, if I delete the file, my operating system will simply remove the association between that file and the blocks that comprise it -- it won't tell the array that those 10GB of disk blocks aren't being used anymore.

Fortunately, newer "true" thin provisioning implementations allow this information to be passed along through the SCSI UNMAP primitive. Although it generally must be triggered manually today, in many cases you can run a reclamation pass that lets the server and array agree on which disk blocks hold live data and which hold deleted data. The effect is that your primary storage array needs only enough capacity for the data that's actually in use -- which can halve or even quarter your storage requirements in some cases (hugely exciting in the era of the enterprise data explosion).
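
To make the bookkeeping in the last two paragraphs concrete, here's a toy model -- not any real array's API -- of a thin-provisioned volume: writes draw blocks from a shared pool, a filesystem delete returns nothing, and only an UNMAP-style call gives the blocks back:

    class Pool:
        """The array's shared pool of physical blocks."""
        def __init__(self, blocks):
            self.free = blocks
        def allocate(self):
            if self.free == 0:
                raise RuntimeError("pool exhausted -- time to add disk")
            self.free -= 1
        def release(self):
            self.free += 1

    class ThinVolume:
        """A thin-provisioned volume: physical blocks are consumed on
        first write, and only an explicit unmap ever returns them."""
        def __init__(self, pool):
            self.pool = pool
            self.mapped = set()  # logical blocks backed by real storage
        def write(self, lba):
            if lba not in self.mapped:
                self.pool.allocate()  # capacity consumed on first write
                self.mapped.add(lba)
        def delete_file(self, lbas):
            # A filesystem delete just forgets the file-to-block mapping;
            # from the array's point of view, every block is still live
            pass
        def unmap(self, lbas):
            # SCSI UNMAP-style reclamation: tell the array they're free
            for lba in lbas:
                if lba in self.mapped:
                    self.mapped.remove(lba)
                    self.pool.release()

    pool = Pool(blocks=1000)
    vol = ThinVolume(pool)
    file_blocks = range(10)
    for lba in file_blocks:
        vol.write(lba)
    print(pool.free)             # 990: ten blocks really consumed
    vol.delete_file(file_blocks)
    print(pool.free)             # still 990: delete alone reclaims nothing
    vol.unmap(file_blocks)
    print(pool.free)             # 1000: UNMAP handed the blocks back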

Virtualization
No list of technologies I'm thankful for would ever be complete without virtualization. Sure, it's pretty much old hat these days and just about everyone is doing it, but the wide swath of life-changing benefits it brings to pros in every corner of IT can't be overlooked. Whether you're excited about consolidating physical servers, increasing reliability (by bringing the benefits of clustering to literally any kind of application), providing an incredibly easy means to get solid image-based backups, or building a large, multitenant cloud solution, virtualization was the key that unlocked the door.

As much as life in IT today runs at breakneck speed and is occasionally thankless, every once in a while it's worth looking back on how things were in the bad old days. For a few minutes at least, you can appreciate how much better they are today.
