Please be aware that:
- NPAR is not implemented on Intel 10G NICs (X520, X540)
- NPAR is not SR-IOV. More about SR-IOV is here and here.
- Multiple logical interfaces are partitioned from a single physical interface; each one appears in the OS as a normal PCI-e adapter.
- It is a switch-independent solution. I'll explain what that means in a minute.
Let's describe the picture. In it you can see one physical server with an ESXi hypervisor and two CNAs. Each CNA is divided into four logical partitions, where each partition acts as an independent NIC with a unique MAC address. You can see two physical wires interconnecting the CNA ports with switch ports. Inside each physical wire are four "virtual wires" interconnecting the CNA logical interfaces with a single physical switch port. That's important! Four virtual ports on the CNA are connected to a single switch port. You can imagine it like four connectors on one side of the wire and just a single connector on the other side.
That's not common, right?
The benefit of this architecture is switch independence.
The drawback is that Ethernet flows between NPAR interfaces on a single CNA port will fail.
So with this information in mind, let's explain the NPAR architecture behavior in more detail.
A physical switch will never forward an Ethernet frame back out the port on which it arrived. So if the src-mac and dst-mac are learned on the same physical switch port (these are entries in the switch MAC address table), L2 communication between them is broken. That's standard Ethernet switch behavior.
So what happens in the NPAR architecture, where four virtual cables (NPAR interfaces with independent MAC addresses) are connected to a single physical switch port? No communication.
It is shown in the picture below.
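The same-port drop rule can be sketched as a toy learning switch. This is a hypothetical, simplified model (not any real switch implementation), just to show why two NPAR partitions sharing one physical switch port can never reach each other through that switch:

```python
class LearningSwitch:
    """Minimal model of a standard learning Ethernet switch."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def forward(self, src_mac, dst_mac, in_port):
        """Return the egress port, "FLOOD" for an unknown destination,
        or None when the frame is dropped."""
        self.mac_table[src_mac] = in_port  # learn/refresh the source location
        if dst_mac not in self.mac_table:
            return "FLOOD"  # flood out all ports except the ingress port
        out_port = self.mac_table[dst_mac]
        if out_port == in_port:
            return None  # never forward a frame back out its ingress port
        return out_port


sw = LearningSwitch()
sw.forward("npar-a", "ff:ff", in_port=1)          # npar-a learned on port 1
sw.forward("npar-b", "ff:ff", in_port=1)          # npar-b learned on port 1
print(sw.forward("npar-a", "npar-b", in_port=1))  # None: same-port frame dropped
print(sw.forward("server", "npar-a", in_port=2))  # 1: different port, forwarded
```

The MAC names (`npar-a`, `npar-b`, `server`) and the port numbers are illustrative only; the point is that both NPAR partitions are learned on the same switch port, so the switch drops frames between them.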
That's the reason Cisco has VN-TAG (802.1Qbh) and HP has multi-channel VEPA (802.1Qbg).
These solutions multiplex Ethernet on both sides of the wire.
I have hands-on experience with Cisco VN-TAG, so I can confirm it works correctly, but I have never tested HP VEPA.
NPAR is a relatively good technology to separate and prioritize storage and Ethernet traffic on unified (converged) Ethernet networks. It can also be used to separate and prioritize L2 traffic. But it will not work if L2 communication between logical NPAR interfaces is required.
Problematic scenarios include, for example, the following configurations:
- vCenter in a VM <-> ESXi vmkernel management port in the same L2 segment but in different portgroups routed through separate NPAR interfaces (uplinks), as depicted above.
- Cisco Nexus 1000V VSM in a VM <-> ESXi VEM communicating over an L2 protocol routed through separate NPAR interfaces.