VMware vNIC types and vNIC features – vSphere 5.x
1. Virtual Network Adapters (vNIC) for vSphere VMs
· A standard VM is deployed with one network adapter (vNIC).
· Several types of vNIC adapter are available for a VM.
· The adapter types offered depend on the VM’s virtual hardware version and guest operating system.
· The following vNIC types are available (a short .vmx excerpt showing how the selected type is recorded follows this list):
· Flexible:
· This is the default vNIC type for 32-bit guests in VMs created on ESX 3.x.
· Functions as a Vlance adapter if VMware Tools is not installed.
· Functions as a VMXNET adapter if VMware Tools is installed.
§ Vlance: An emulated version of the AMD 79C970 PCnet32 (LANCE) NIC, an older 10 Mbps NIC with drivers available in most 32-bit guest operating systems except Windows Vista and later. A virtual machine configured with this network adapter can use its network immediately.
§ VMXNET: The VMXNET virtual network adapter has no physical counterpart. VMXNET is optimized for performance in a virtual machine. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
· E1000: An emulated version of the Intel 82545EM Gigabit Ethernet NIC. A driver for this NIC is not included with all guest operating systems. Typically Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later include the E1000 driver.
· E1000e: This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the "e1000e" vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere 5. It is the default vNIC for Windows 8 and newer (Windows) guest operating systems. For Linux guests, e1000e is not available from the UI (e1000, flexible vmxnet, enhanced vmxnet, and vmxnet3 are available for Linux).
· VMXNET 2 (Enhanced): The VMXNET 2 adapter is based on the VMXNET adapter but provides some high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. This virtual network adapter is available only for some guest operating systems on ESXi/ESX 3.5 and later.
VMXNET 2 is supported only for a limited set of guest operating systems:
· 32- and 64-bit versions of Microsoft Windows 2003 (Enterprise, Datacenter, and Standard Editions).
Note: You can use enhanced VMXNET adapters with other versions of the Microsoft Windows 2003 operating system, but a workaround is required to enable the option in the VMware Infrastructure (VI) Client or vSphere Client. If Enhanced VMXNET is not offered as an option, see Enabling enhanced vmxnet adapters for Microsoft Windows Server 2003 (1007195).
· 32-bit version of Microsoft Windows XP Professional
· 32- and 64-bit versions of Red Hat Enterprise Linux 5.0
· 32- and 64-bit versions of SUSE Linux Enterprise Server 10
· 64-bit versions of Red Hat Enterprise Linux 4.0
· 64-bit versions of Ubuntu Linux
In ESX 3.5 Update 4 or higher, these guest operating systems are also supported:
· Microsoft Windows Server 2003, Standard Edition (32-bit)
· Microsoft Windows Server 2003, Standard Edition (64-bit)
· Microsoft Windows Server 2003, Web Edition
· Microsoft Windows Small Business Server 2003
· VMXNET 3: The VMXNET 3 adapter is the latest paravirtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery.
VMXNET 3 is supported only for virtual machines version 7 and later, with a limited set of guest operating systems:
· 32- and 64-bit versions of Microsoft Windows 7, XP, 2003, 2003 R2, 2008, 2008 R2, and Server 2012
· 32- and 64-bit versions of Red Hat Enterprise Linux 5.0 and later
· 32- and 64-bit versions of SUSE Linux Enterprise Server 10 and later
· 32- and 64-bit versions of Asianux 3 and later
· 32- and 64-bit versions of Debian 4
· 32- and 64-bit versions of Ubuntu 7.04 and later
· 32- and 64-bit versions of Sun Solaris 10 and later
Notes:
· In ESXi/ESX 4.1 and earlier releases, jumbo frames are not supported in the Solaris Guest OS for VMXNET 2 and VMXNET 3. The feature is supported starting with ESXi 5.0 for VMXNET 3 only.
· Fault Tolerance is not supported on a virtual machine configured with a VMXNET 3 vNIC in vSphere 4.0, but is fully supported on vSphere 4.1.
· Windows Server 2012 is supported with e1000, e1000e, and VMXNET 3 on ESXi 5.0 Update 1 or higher.
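For reference, the adapter type selected in the vSphere Client ends up as the ethernetX.virtualDev entry in the virtual machine’s .vmx file. A minimal sketch only; the device number, port group name, and values are illustrative, not taken from a real VM:

    # Excerpt from a VM's .vmx file (illustrative values).
    ethernet0.present = "TRUE"
    ethernet0.virtualDev = "vmxnet3"      # e.g. "e1000", "e1000e", or "vmxnet3"; other types use their own values
    ethernet0.networkName = "VM Network"  # port group the vNIC connects to
    ethernet0.addressType = "generated"   # let ESXi generate the MAC address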
2. vNIC Features
TSO (TCP Segmentation Offload)
· TSO reduces the CPU overhead associated with processing network traffic, improving I/O performance.
· To enable TSO at the virtual machine level, you must replace the existing vmxnet or flexible virtual network adapters with enhanced vmxnet virtual network adapters. This replacement might result in a change in the MAC address of the virtual network adapter.
· TSO is enabled on the VMkernel interface by default, but must be enabled at the virtual machine level; the sketch below shows how to check and enable it.
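A quick way to verify TSO is from the ESXi shell and from inside a Linux guest. This is only a sketch; eth0 is a placeholder interface name:

    # ESXi host: hardware TSO support in the VMkernel (1 = enabled, the default).
    esxcfg-advcfg -g /Net/UseHwTSO

    # Linux guest with a vmxnet3/enhanced vmxnet adapter:
    ethtool -k eth0 | grep tcp-segmentation-offload   # show current state
    ethtool -K eth0 tso on                            # enable TSO in the guest driver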
Jumbo Frames
· Jumbo frames allow ESXi to send larger frames out onto the physical network. The network must support jumbo frames end-to-end.
· Jumbo frames up to 9 KB (9000 bytes) are supported.
· Before enabling jumbo frames, check with your hardware vendor to ensure that your physical network adapters support them.
· You enable jumbo frames on a vSphere distributed switch or vSphere standard switch by changing the maximum transmission unit (MTU), as sketched below.
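A sketch of raising the MTU from the ESXi shell and inside a Linux guest; vSwitch0 and eth0 are placeholder names:

    # Standard vSwitch: set the MTU to 9000 (vSphere 5.x esxcli namespace).
    esxcli network vswitch standard set -v vSwitch0 -m 9000

    # Verify the vSwitch and VMkernel interface MTUs.
    esxcli network vswitch standard list
    esxcli network ip interface list

    # Linux guest: the vNIC MTU must match end-to-end.
    ip link set dev eth0 mtu 9000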
SplitRx
· SplitRx allows the ESXi host to use more than one physical CPU to process packets received from one queue.
· When there is intra-host VM traffic, SplitRx helps to increase the throughput.
· If several VMs on a single host are all receiving the same multicast traffic, then SplitRx can increase the throughput and reduce the CPU load.
· vSphere 5.1 automatically enables SplitRx mode on vmxnet3 adapters when inbound external traffic is destined for at least eight VMs or vmknics.
· SplitRx can also be enabled manually for an entire ESXi host or for a single vNIC (see the sketch below).
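This is a sketch based on the option names given in VMware’s performance best-practices guidance; ethernet0 is a placeholder device, and the option names should be verified against your ESXi build:

    # Per-vNIC: enable SplitRx mode for one virtual NIC by adding this line to the
    # VM's .vmx file (or as an advanced configuration parameter); 1 = on, 0 = off.
    ethernet0.emuRxMode = "1"

    # Host-wide: the automatic SplitRx behaviour in ESXi 5.1 is controlled by the
    # advanced VMkernel option NetSplitRxMode (0 disables it for the whole host).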
MSI/MSI-X
· Message signaled interrupts (MSI) are supported by the VMXNET 3 driver, with three levels of interrupt mode: MSI-X, MSI, and INTx.
· The guest driver selects the best interrupt mode available, depending on what the guest’s kernel supports (the sketch below shows how to check the negotiated mode).
· When using passthrough devices with a Linux kernel version 2.6.20 or earlier, avoid MSI and MSI-X modes because these modes have significant performance impact.
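To see which interrupt mode the guest driver actually negotiated, a Linux guest can be inspected directly; eth0 is a placeholder name:

    # One interrupt line per vector (e.g. eth0-rxtx-0, eth0-event) typically
    # indicates MSI-X is in use; a single legacy line indicates INTx.
    grep eth0 /proc/interrupts

    # Confirm which driver (vmxnet3, e1000, etc.) is bound to the interface.
    ethtool -i eth0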
Ring Size
· You can alter the buffer (ring) size in the VM’s VMX configuration file, or from inside the guest.
· With each newer vNIC type, the default receive and transmit buffers have grown. A larger ring provides more buffering, which helps absorb sudden traffic bursts.
· CPU overhead rises slightly as the ring size increases, but this can be justified if your network traffic is bursty; see the sketch below.
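The exact .vmx parameter names vary by adapter type, so a common alternative is to resize the ring from inside the guest. A sketch for a Linux guest with a vmxnet3 adapter (eth0 is a placeholder):

    # Show the current and maximum RX/TX ring sizes for the vNIC.
    ethtool -g eth0

    # Grow the receive ring to absorb bursts (must stay within the reported
    # maximum; uses slightly more memory and CPU).
    ethtool -G eth0 rx 4096

On Windows guests the equivalent settings appear in the adapter’s advanced driver properties.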
RSS (Receive-Side Scaling)
· RSS must be enabled in the guest’s NIC driver settings.
· RSS is available to newer Windows guest operating systems when using the VMXNET 3 adapter.
· It distributes traffic processing across multicore processors to aid scalability and reduces the impact of CPU bottlenecks with 10GbE network cards (see the sketch below).
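A sketch for a Windows guest with a VMXNET 3 adapter; RSS must also be enabled in the adapter’s advanced driver properties:

    rem Windows guest: check and enable RSS in the TCP/IP stack.
    netsh int tcp show global
    netsh int tcp set global rss=enabled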
NAPI (New API)
· NAPI is a feature of Linux-based guests that improves network performance by reducing the overhead of receiving packets.
· Rather than handling each incoming packet individually, it defers processing and handles packets in batches.
· This allows for greater CPU efficiency and better handling of high load.
LRO (Large Receive Offload)
· LRO is another Linux guest technology, which increases inbound throughput by aggregating packets into a larger buffer before processing.
· This reduces the number of packets and therefore reduces CPU overhead.
· With port mirroring, LRO might cause the number of mirrored packets to differ from the number of packets originally received.
· It is not suitable for extremely latency-sensitive, TCP-dependent VMs, because the traffic aggregation adds a small amount of latency.
· When TSO is enabled on a vNIC, the vNIC might send a large packet to a distributed switch; when LRO is enabled on a vNIC, small packets sent to it might be merged into a large packet (see the sketch below).
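A sketch of checking and toggling LRO; the host option names are the documented vmxnet3 LRO settings (verify them on your build), and eth0 is a placeholder guest interface:

    # ESXi host: hardware and software LRO toggles for vmxnet3 vNICs (1 = enabled).
    esxcfg-advcfg -g /Net/Vmxnet3HwLRO
    esxcfg-advcfg -g /Net/Vmxnet3SwLRO

    # Linux guest: check LRO and, for latency-sensitive workloads, disable it.
    ethtool -k eth0 | grep large-receive-offload
    ethtool -K eth0 lro off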
3. Compare vNIC features to vNIC types
vNIC Feature | Flexible | E1000/E1000e | VMXNET2 (Enhanced) | VMXNET3 |
TSO IPv4 | NO | YES | YES | YES |
TSO IPv6 | NO | NO | NO | YES |
Jumbo Frames | NO | YES | YES | YES |
SplitRx | NO | NO | NO | YES |
MSI/MSI-X | NO | NO | NO | YES |
Large Ring Sizes | NO | YES | NO | YES |
RSS | NO | NO | NO | YES |
NAPI | NO | NO | NO | YES |
LRO | NO | NO | YES | YES |