Introduction to DCB

Data Center Bridging (DCB) is a group of protocols providing a modern QoS mechanism on Ethernet networks. There are four key DCB protocols (ETS, PFC, DCBX and QCN), described in more detail here. In this blog post I'll show you how to configure DCB ETS, PFC and DCBX on a Force10 S4810.
ETS (Enhanced Transmission Selection) is a bandwidth management mechanism that allows reservation of link bandwidth resources for traffic classes when the link is congested. For example, a 40% ETS reservation on a 10Gb link guarantees that class at least 4Gb under congestion, while unused bandwidth remains available to the other classes. DCB QoS is based on 802.1p CoS (Class of Service), which can distinguish up to 8 classes of service (aka priority levels). Any QoS is always implemented via dedicated queues for the different classes of service and an I/O scheduler that understands the configured priorities.
The S4810 has 4 queues, and 802.1p CoS values are mapped to them by default as shown below.
The command service-class dot1p-mapping can reconfigure this mapping (a sketch of it follows the list below), but let's use the default mapping for our example. Queue/CoS mapping:

DCSWCORE-A#show qos dot1p-queue-mapping
Dot1p Priority : 0  1  2  3  4  5  6  7
Queue          : 0  0  0  1  2  3  3  3
- To Queue 0 are mapped CoS values 0, 1 and 2
- To Queue 1 is mapped CoS 3
- To Queue 2 is mapped CoS 4
- To Queue 3 are mapped CoS values 5, 6 and 7
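For completeness, here is how such a remap could look. The exact service-class dot1p-mapping syntax below is written from memory of the FTOS command reference, so treat it as an assumption and verify it against your FTOS version; in this post we stick with the default mapping.

DCSWCORE-A(conf)#service-class dot1p-mapping dot1p2 1 dot1p3 1

This hypothetical line would move CoS 2 and CoS 3 together into Queue 1; we do not apply it in our design.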
PFC (Priority Flow Control) is nothing other than the classic Ethernet flow control (PAUSE) mechanism, but applied to one specific 802.1p CoS instead of the whole link: the paused class stops transmitting while the other classes keep flowing, which is what gives us lossless Ethernet for that class. The Force10 S4810 supports PFC on up to two queues.
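Once DCB is configured (as shown later in this post), PFC status can be checked from the switch CLI. I recall a show command along the lines below from the FTOS documentation; treat the exact command name as an assumption and verify it in your FTOS command reference.

DCSWCORE-A#show interfaces pfc summary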
Now, let's define the design requirements, constraints and assumptions behind our specific design decision.
Design decision justification

R1: A 4Gb guarantee for iSCSI traffic on each 10Gb converged link is required.
R2: Lossless Ethernet is required for iSCSI traffic.
R3: 1Gb guarantee for Hypervisor Management network on each 10Gb converged link is required.
R4: 2Gb guarantee for Hypervisor Live Migration network on each 10Gb converged link is required.
R5: 3Gb guarantee for production networks on each 10Gb converged link is required.
C1: We have 10Gb links to edge devices (servers and storage)
C2: We have only four switch queues for DCB on DELL Force10 S4810
C3: We have DCB capable iSCSI storage DELL EqualLogic
A1: No storage protocol other than iSCSI is required
A2: No other network traffic type requires QoS
A3: We have iSCSI traffic in 802.1p CoS 4
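The ETS percentages used in the next section follow directly from these requirements: each guarantee is a share of a 10Gb link, and the shares must add up to 100% in a Force10 DCB map.

R3: 1Gb / 10Gb = 10% (hypervisor management)
R4: 2Gb / 10Gb = 20% (hypervisor live migration)
R1: 4Gb / 10Gb = 40% (iSCSI)
R5: 3Gb / 10Gb = 30% (production)
Total: 10% + 20% + 40% + 30% = 100%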
Let's design the best DCB mapping based on the requirements, constraints and assumptions above. The following priority groups reflect all of them.
- PG0 - Hypervisor management; 10% reservation; lossy Ethernet; CoS 0,1,2 -> Switch Queue 0
- PG1 - Hypervisor live migration; 20% reservation; lossy Ethernet; CoS 3 -> Switch Queue 1
- PG2 - iSCSI; 40% reservation; lossless Ethernet; CoS 4 -> Switch Queue 2
- PG3 - Production; 30% reservation; lossy Ethernet; CoS 5,6,7 -> Switch Queue 3
Below is the Force10 configuration snippet defining the DCB map with ETS bandwidth reservations, PFC settings and the mapping of 802.1p CoS values to priority groups.
dcb-map converged
 priority-group 0 bandwidth 10 pfc off
 priority-group 1 bandwidth 20 pfc off
 priority-group 2 bandwidth 40 pfc on
 priority-group 3 bandwidth 30 pfc off
 priority-pgid 0 0 0 1 2 3 3 3

The priority-pgid line assigns 802.1p CoS values 0 through 7 (read left to right) to priority groups, mirroring the default dot1p-to-queue mapping shown earlier.
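The DCB map definition can be reviewed on the switch. I believe FTOS supports a show command similar to the one below, but take its exact form as an assumption and verify it against your FTOS release.

DCSWCORE-A#show qos dcb-map converged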
The DCB map has to be applied to each converged Force10 switch port. A configuration snippet for one particular switch port is below.

interface TenGigabitEthernet 0/6
 no ip address
 mtu 12000
 dcb-map converged
 spanning-tree rstp edge-port
 dcbx port-role auto-downstream

The following technologies are configured on switch port Te 0/6 by the configuration snippet above.
- DCB ETS and PFC as defined in dcb-map converged
- The LLDP/DCBX protocol streaming the DCB configuration down to the attached edge device (dcbx port-role auto-downstream)
- MTU 12000 (the Force10 maximum), because Jumbo Frames are beneficial for iSCSI. iSCSI Jumbo Frames require a 9000-byte payload plus some Ethernet and TCP/IP protocol overhead. MTU 9216 would be enough, but why not set the maximum MTU in the core network? The performance overhead is negligible and we are ready for everything.
- Edge port configuration (spanning-tree rstp edge-port) for faster port transition to the forwarding state
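Finally, it is worth confirming that DCBX actually negotiates with the attached edge device. The command below is the one I recall from the FTOS documentation for inspecting per-interface DCBX status; treat its exact form as an assumption and check it against your FTOS release.

DCSWCORE-A#show interfaces tengigabitethernet 0/6 dcbx detail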