Mellanox Introduces InfiniBand to VMware Environments


Mellanox Technologies, a supplier of semiconductor-based server and storage interconnect products, announced that later this year VMware's newest server virtualization platform, VMware ESX Server 3.5, would provide support for Mellanox InfiniBand-based adapters.

The InfiniBand LAN networking and block storage drivers are based on the OpenFabrics Enterprise Distribution (OFED) version 1.2.5 and were developed by Mellanox, VMware and other participants in the VMware Community Source program. Mellanox's industry-leading InfiniBand adapters can replace the multiple Fibre Channel and Gigabit Ethernet adapters typically deployed in virtualized environments, reducing data center power, cooling, capital expenditure and total cost of ownership.

"Higher I/O bandwidth and I/O consolidation in VMware environments are critical needs, further exacerbated by deployment of multi-core CPUs and I/O real estate constraints driven by green data center initiatives," said Thad Omura, vice president of product marketing at Mellanox Technologies. "Mellanox is pleased to have worked with VMware to deliver a solution that marries the high price-performance capabilities on Mellanox I/O adapters to the robust, seamless and easy-to-use virtual infrastructure platform from VMware."

According to the company, when InfiniBand I/O adapters are used with VMware ESX Server 3.5, a single adapter can replace multiple Gigabit Ethernet NICs and Fibre Channel HBAs while maintaining or improving I/O throughput from virtual machines. For example, SAN throughput from a single virtual machine can reach up to 1,500 megabytes per second (MB/s), or that bandwidth can be shared roughly linearly across multiple virtual machines, e.g., about 400 MB/s per virtual machine across four virtual machines on the same VMware ESX Server host. That is equivalent to using four 4 Gb/s Fibre Channel HBAs, one dedicated to each of the four virtual machines.

This I/O consolidation and scale-out, which yields significant cost and power savings, is achieved transparently: operating systems and applications running in the virtual machines continue to use the traditional virtual NIC and HBA interfaces available in VMware virtual machines. Network and storage I/O provisioning for virtual machines, and features such as VMware VMotion, are likewise configured transparently using VMware VirtualCenter 2.5, which presents only the familiar virtual NIC and virtual HBA interfaces over the unified InfiniBand I/O adapter. Similarly, features such as high availability and migration of virtual machines are preserved as if they were running on Ethernet NICs and Fibre Channel HBAs.
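To put the quoted figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The 1,500 MB/s and four-VM numbers come from the article; the assumption that a 4 Gb/s Fibre Channel HBA delivers roughly 400 MB/s of usable payload (due to 8b/10b line encoding) is a common rule of thumb, not a figure supplied by Mellanox or VMware.

    # Back-of-the-envelope comparison of the throughput figures quoted above.
    # Assumption (not from the article): 4 Gb/s Fibre Channel uses 8b/10b
    # encoding, so usable payload bandwidth is roughly 400 MB/s per HBA.

    FC_LINE_RATE_GBPS = 4                            # one 4 Gb/s Fibre Channel HBA
    FC_USABLE_MBPS = FC_LINE_RATE_GBPS * 1000 / 10   # 8b/10b -> ~400 MB/s payload

    IB_SAN_THROUGHPUT_MBPS = 1500                    # SAN throughput quoted for one InfiniBand adapter
    VMS = 4                                          # virtual machines sharing the adapter

    per_vm_share = IB_SAN_THROUGHPUT_MBPS / VMS                 # ~375 MB/s per VM
    fc_hbas_replaced = IB_SAN_THROUGHPUT_MBPS / FC_USABLE_MBPS  # ~3.75 HBAs

    print(f"Usable throughput per 4 Gb/s FC HBA: ~{FC_USABLE_MBPS:.0f} MB/s")
    print(f"InfiniBand share per VM across {VMS} VMs: ~{per_vm_share:.0f} MB/s")
    print(f"One InfiniBand adapter covers roughly {fc_hbas_replaced:.1f} "
          "dedicated 4 Gb/s FC HBAs")

The ~375 MB/s per-VM share this yields is consistent with the article's "about 400 MB/s per virtual machine" claim, which explains the comparison to four dedicated 4 Gb/s Fibre Channel HBAs.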

Solution Benefits

  • LAN and SAN functionality is available over a unified server I/O link using high-bandwidth InfiniBand connectivity, significantly reducing cabling complexity as well as I/O cost (by 50-60%) and power (by 25-30%)

  • LAN performance (from VMs) of close to 3X Gigabit Ethernet has been achieved

  • SAN performance (from VMs) of close to 10X 2Gb/s Fibre Channel has been achieved

  • LAN and SAN performance scales linearly across multiple VMs

  • Completely transparent to VMs, as applications in VMs continue to work over the legacy NIC and HBA interfaces they have already been qualified on

  • All VMware functionality, including migration and high availability, is available over the InfiniBand fabric (just as it is over Ethernet and Fibre Channel fabrics)

  • All VMware ESX Server 3.5 supported guest operating systems are supported

  • Management and configuration using VMware VirtualCenter 2.5 maintains the look and feel of configuring NICs and HBAs, making it easier for IT managers to allocate and manage LAN and SAN resources over InfiniBand

  • InfiniBand-to-Ethernet and InfiniBand-to-Fibre Channel gateways from leading OEMs are supported, ensuring end-to-end connectivity with Ethernet LANs and Fibre Channel SANs, respectively

You can find more information in the company's whitepaper, "I/O Virtualization Using Mellanox InfiniBand and Channel I/O Virtualization (CIOV) Technology."


Copyright © 2007 IDG Communications, Inc.
