Virtualizing the server is great, but what about network I/O virtualization?

The next-generation datacenter is filled with virtualization, offering virtual servers, virtual desktops, virtual storage, and virtual applications -- but what about the network?

When creating the datacenter of tomorrow, sometimes dubbed datacenter 2.0 or even 3.0 (depending on who you ask), we look to certain virtualization technologies to get us there. We focus on server virtualization as the giant of this industry, but we also look to application, storage, and even desktop virtualization to help us carve out and shape that vision. One area that hasn't received much attention is network I/O virtualization.

To find out more about this, I spoke with Rolf Neugebauer, a software engineer who works on virtualization support for Netronome's line of Intelligent Network Processors. Prior to joining Netronome, Rolf worked at Microsoft and Intel Research. While at Intel, he was one of the initial researchers developing the Xen hypervisor in collaboration with academics at Cambridge University. If anyone had the hands-on experience needed to explain this side of the industry, Rolf seemed like the logical choice.

InfoWorld: Can you explain why there is a need for network I/O virtualization and why you think enterprises should care about the technology?

Rolf Neugebauer: As companies grow, their IT infrastructure grows with them, leading to an increase in the number of stand-alone servers, storage devices, and applications. Unmanaged, this growth can lead to enormous inefficiency, higher expense, availability issues, and systems management headaches that negatively impact the company's core business. To address these challenges, organizations are implementing a variety of virtualization solutions for servers, storage, applications, and client environments. These virtualization solutions can deliver real business value through practical benefits, such as decreased IT costs and business risks; increased efficiency, utilization, and flexibility; streamlined management; and enhanced business resilience and agility.

With rising network traffic and the need for application awareness, content inspection, and security processing, the amount of network I/O processing increases exponentially. This increase in network processing, coupled with the need for virtualization, places a huge burden on the network I/O subsystem; [it's] an increasing challenge in the datacenter that negatively impacts overall system performance.

InfoWorld: How then does intelligent network I/O virtualization help?

Neugebauer: As more applications are consolidated onto virtualized server platforms, bandwidth requirements per server and server utilization both increase significantly. The result is that an intelligent network interface card [NIC] is needed to offload network processing from the host, so that the host CPU does not become a network I/O bottleneck that limits application consolidation. This trend requires low overhead delivery of network data directly to a guest virtual machine [VM]. By classifying network traffic into flows, applying security rules, and pinning flows to a specific VM on a specific host core -- or by load balancing flows across VMs -- an intelligent network I/O virtualization co-processor [IOV-P] enables the overall system to achieve full network performance.
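
To make the flow-pinning idea more concrete, here is a minimal Python sketch of the steering step such a card performs in hardware: packets are grouped into flows by their 5-tuple, checked against a simple security rule, and consistently pinned to a VM-specific receive queue. The queue-to-VM mapping, the blocked-port rule, and the sample packets are all hypothetical.

```python
# Simplified software model of the flow steering an intelligent NIC performs
# in hardware: classify each packet into a flow by its 5-tuple, apply a basic
# security rule, then pin the flow to a VM-specific receive queue.
# All names (VM_QUEUES, BLOCKED_PORTS, the sample packets) are illustrative.

import hashlib
from collections import namedtuple

Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port proto")

# Hypothetical mapping of receive queues to guest VMs (and host cores).
VM_QUEUES = {0: "vm-web (core 2)", 1: "vm-db (core 4)", 2: "vm-ids (core 6)"}
BLOCKED_PORTS = {23}          # example security rule: drop telnet
flow_table = {}               # flow 5-tuple -> queue id (sticky pinning)

def classify(pkt: Packet) -> str:
    """Return the queue/VM a packet is steered to, or 'dropped'."""
    if pkt.dst_port in BLOCKED_PORTS:
        return "dropped"
    key = (pkt.src_ip, pkt.dst_ip, pkt.src_port, pkt.dst_port, pkt.proto)
    if key not in flow_table:
        # Hash the 5-tuple so every packet of a flow lands on the same queue.
        digest = hashlib.sha256(repr(key).encode()).digest()
        flow_table[key] = digest[0] % len(VM_QUEUES)
    return VM_QUEUES[flow_table[key]]

if __name__ == "__main__":
    pkts = [
        Packet("10.0.0.5", "10.0.1.9", 40001, 443, "tcp"),
        Packet("10.0.0.5", "10.0.1.9", 40001, 443, "tcp"),   # same flow
        Packet("10.0.0.7", "10.0.1.9", 40002, 23, "tcp"),    # blocked
    ]
    for p in pkts:
        print(p, "->", classify(p))
```

The point of an IOV-P device is that this classification happens on the card rather than in software, so none of these cycles are spent on the host CPU.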

InfoWorld: For those of us who are new to this, what are the current approaches to performing network I/O virtualization?

Neugebauer: Traditional methods include software IOV [SW IOV], multiqueue IOV [MQ IOV], and single-root I/O virtualization [SR-IOV]. The challenge has been that these IOV methods either create a bottleneck at the CPU or offer decreased network device functionality.

InfoWorld: What is network I/O virtualization addressing, and what is the performance overhead of these methods?

Neugebauer: Next-generation datacenters need to address a complex set of issues such as IT consolidation, service continuity, service flexibility, and energy efficiency. Virtualization of servers is already seen as a key factor in moving to a next-generation datacenter; however, without I/O virtualization, the final returns on the effort can be severely limited. That said, depending on the approach used, overhead can be an issue. For SW IOV, the burden falls on the hypervisor or service VM, where up to five times as many cycles can be spent on network processing when compared to non-virtualized environments. Using MQ NICs for IOV, the device performs some of this processing, but the host still spends about twice as many cycles on network processing as a native system. And finally, SR-IOV virtualizes a NIC into several Virtual Functions [VFs], which can be accessed directly by VMs. Since I/O virtualization is performed in hardware on the device, the overhead for the host is significantly decreased, but the approach offers enterprises little network device flexibility.
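
For readers who want to see what the SR-IOV model looks like from the host side, the sketch below enables a handful of Virtual Functions through the standard Linux sysfs attributes sriov_totalvfs and sriov_numvfs. It assumes a Linux host, root privileges, and a PF driver that exposes those attributes; the interface name eth0 is a placeholder.

```python
# Minimal sketch of enabling SR-IOV Virtual Functions on a Linux host.
# Assumes the physical function (PF) driver exposes the standard
# sriov_totalvfs / sriov_numvfs sysfs attributes; "eth0" is a placeholder.
# Must be run as root.

from pathlib import Path
import sys

PF = "eth0"                                   # placeholder physical function
dev = Path(f"/sys/class/net/{PF}/device")

def enable_vfs(count: int) -> None:
    total = int((dev / "sriov_totalvfs").read_text())
    if count > total:
        sys.exit(f"{PF} supports at most {total} VFs")
    # Reset to 0 first: most drivers reject changing a non-zero VF count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(count))
    print(f"Enabled {count} VFs on {PF}")

if __name__ == "__main__":
    if not (dev / "sriov_totalvfs").exists():
        sys.exit(f"{PF} does not advertise SR-IOV support")
    enable_vfs(4)
```

Each resulting VF appears to the host as its own PCIe device that a hypervisor can hand directly to a guest, which is exactly why the host-side overhead drops -- and also why a fixed-function VF offers so little flexibility.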

InfoWorld: How does the approach of the Netronome NFP differ from SR-IOV?

Neugebauer: Since the NFP is a programmable network device, the multiplexing and de-multiplexing of packets to and from Virtual Functions [VFs] is not limited to a fixed-function hardware implementation, as it is in most SR-IOV or MQ NICs. Instead, the NFP can perform extensive packet processing, including flow-based classification and filtering, load balancing to x86 cores, and content and security processing. This provides a more flexible solution than other hardware-based IOV approaches. The flexibility in packet processing offered by the NFP requires equally flexible device support -- the Netronome IOV provides enhanced SR-IOV support, including the ability to dynamically manage Virtual Functions of different kinds at run time.

InfoWorld: What can enterprises do with NFP network IOV capabilities?

Neugebauer: As servers and network appliances in the datacenter are built around commodity x86 multi-core CPUs, implementing IOV over PCIe becomes critical to allowing many VMs to share network I/O devices. NICs with IOV capability are a key ingredient of the virtualized system, as they greatly reduce host CPU utilization for network processing, allowing the system to support a larger number of applications while saving power. Adding IOV capability ensures that each application can be configured with its own virtual NIC, so a number of applications can share a single 10GbE physical NIC while each is guaranteed its own resources over PCIe. At the same time, the IOV-P concept allows a single physical NIC to provide many different "intelligent" functions to the VMs and even to create and refine these functions at run time.
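
As a small illustration of the "one virtual NIC per application" idea, the sketch below enumerates the Virtual Functions carved out of a single physical port via the virtfn* symlinks Linux exposes in sysfs and pairs each with a guest. The interface name and the guest list are placeholders; a real deployment would hand each VF's PCI address to the hypervisor rather than print it.

```python
# Illustrative sketch: list the Virtual Functions carved out of one physical
# port and pair each with a guest VM. The virtfn* symlinks under the PF's PCI
# device directory are standard Linux sysfs; the PF name and guest list are
# placeholders.

from pathlib import Path

PF = "eth0"                                    # placeholder physical function
GUESTS = ["vm-web", "vm-db", "vm-analytics"]   # hypothetical guest VMs

def list_vfs(pf: str) -> list:
    dev = Path(f"/sys/class/net/{pf}/device")
    # Each virtfnN symlink resolves to the VF's PCI address, e.g. 0000:03:10.1.
    return sorted(link.resolve().name for link in dev.glob("virtfn*"))

if __name__ == "__main__":
    vfs = list_vfs(PF)
    if not vfs:
        print(f"No VFs found on {PF}; enable SR-IOV first.")
    for vm, vf in zip(GUESTS, vfs):
        print(f"{vm}: assigned VF {vf} on {PF}")
```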

InfoWorld: What does the NFP offer enterprises that previous network I/O virtualization methods did not?

Neugebauer: Unlike previous network I/O virtualization methods, the NFP enables a single physical interface card to provide multiple virtual NIC types, including a traditional NIC, a NIC with application-specific offload options, a crypto NIC, or a NIC tuned specifically to act as a high-speed inline or passive packet capture [pcap] device.

Datacenters deploy a wide range of advanced network and management technologies within the Ethernet infrastructure, such as extensive use of Access Control Lists [ACLs], sophisticated VLAN setups, Quality of Service, and even some limited Layer 3 and 4 processing. These technologies are readily available in modern network infrastructure equipment such as Top of the Rack [TOR] switches. However, even modern SR-IOV based NICs provide only very limited, fixed-function switching capabilities, creating a disconnect between the sophisticated physical network infrastructure and the virtual network infrastructure implemented on the host. An IOV solution combined with intelligent I/O processing [IOV-P] bridges this gap and extends sophisticated network processing and management into virtualized servers. An IOV-P based NIC can implement the same functionality as modern datacenter switches, including monitoring and policy enforcement [ACLs], within the server, with separate policies applied on a per-VM basis. This enables these policies to be applied even to inter-VM network traffic without it having to pass through the TOR switches. Furthermore, the proliferation of encrypted network traffic [IPSec and SSL] provides the opportunity to offload some or all of the required crypto processing to the NIC, freeing up host CPU cycles for application processing.
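
The per-VM policy enforcement Neugebauer describes can be pictured with a toy model like the one below: a default-deny ACL table keyed by destination VM, of the sort an IOV-P NIC could evaluate in hardware even for VM-to-VM traffic that never reaches the TOR switch. The VM names, rules, and sample flows are purely illustrative.

```python
# Toy model of per-VM ACL enforcement of the kind an IOV-P NIC could apply in
# hardware, including for VM-to-VM traffic that never leaves the server.
# VM names, rules, and sample flows are all illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    dst_port: int
    action: str            # "allow" or "deny"

# Hypothetical per-VM policies, applied independently of the physical switch.
ACLS = {
    "vm-web": [Rule(443, "allow"), Rule(80, "allow")],
    "vm-db":  [Rule(5432, "allow")],
}

def check(dst_vm: str, dst_port: int) -> str:
    """Default-deny ACL lookup for traffic addressed to dst_vm:dst_port."""
    for rule in ACLS.get(dst_vm, []):
        if rule.dst_port == dst_port:
            return rule.action
    return "deny"

if __name__ == "__main__":
    flows = [("vm-web", 443), ("vm-db", 5432), ("vm-db", 22)]
    for vm, port in flows:
        print(f"{vm}:{port} -> {check(vm, port)}")
```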

I'd like to thank Rolf Neugebauer from Netronome for talking to me and educating me a bit more about network I/O virtualization.
