Saturday, August 31, 2013

DELL Force10 S6000 as a physical switch for VMware NSX

Based on this document, the DELL Force10 S6000 is going to be fully integrated with VMware NSX (NSX is VMware's software-defined networking platform).

Dell Networking provides:
  • Data center switches for robust underlays for L2 overlays
  • CLI for virtual and physical networks
  • Network management and automation with Active Fabric Manager
  • S6000 Data Center Switch Gateway for physical workloads to connect to virtual networks
  • Complete end-to-end solutions that include server, storage, network, security, management and services with worldwide support
Dell S6000 use cases:
  • Extend virtual networks to physical servers - the S6000 works as a VXLAN gateway bridging virtual networks to VLANs on the physical network (VXLAN VTEP).
  • Connect physical workloads reachable on a specific VLAN to logical networks via an L2 service
  • Connect physical workloads reachable on a specific port to logical networks via an L2 service
  • Connect to physical workloads during a physical-to-virtual (P2V) migration
  • Migration from existing virtualized environments to public clouds, creating hybrid clouds
  • Access physical router, firewall, load balancer, WAN optimization and other network resources

I cannot wait to test it in my lab or on a customer PoC engagement. After getting hands-on experience I'll share it on this blog.

Tuesday, August 27, 2013

What’s New in vSphere 5.5

In this article I'll try to collect all the important (at least for me) vSphere 5.5 news and improvements announced at VMworld 2013. I wasn't there, so I rely on other blog posts and VMware materials.

Julian Wood reported on vCloud Suite 5.5 news announced at VMworld 2013.

Chris Wahl wrote deep dive blog posts on the vSphere 5.5 improvements.

Cormac Hogan listed the storage improvements in vSphere 5.5.

Thanks Julian, Chris, and Cormac for the excellent blog posts, and for keeping those of us who were not able to attend VMworld 2013 informed.

BTW: VMware also published an official What's New paper.

Here are a few citations with my comments from the above blog posts. I'll mention just the improvements which are important and/or interesting for me. I will concentrate on these topics, and in the near future I plan to find and test more hidden details.
  1. Management: VMware is strongly recommending using a single VM for all vCenter Server core components (SSO, Web Client, Inventory Service and vCenter Server), or using the appliance, rather than splitting things out, which just adds complexity and makes it harder to upgrade in the future. << "This is an excellent approach and I really like it."
  2. Management: The vCenter Appliance has also been beefed up; with its embedded database it supports 300 hosts and 3000 VMs, and if you use an external Oracle DB the supported numbers of hosts and VMs are the same as for Windows. << "Finally"
  3. Storage: vSphere 5.5 now supports VMDK disks larger than 2TB. Disks can be created up to 63.36TB in size on both VMFS and NFS. The max disk size needs to be about 1% less than the datastore file size limit. << "The last vSphere storage limit disappeared; however, how big will the datastores we create be?"
  4. Storage: vSphere Flash Read Cache leverages local SSDs to offload read IO operations from datastores and save storage performance (IOPS) for other purposes (writes, other workloads, etc.). << "Sounds good but looks better."
  5. Storage: vSphere vSAN leverages SSDs and SATA server internal disks and forms them into a shared storage pool. VMware promises it is much better than VSA (VMware Storage Appliance). << "We will see. Have you tested VSA? I still believe real storage is real storage. At least for now. However, if someone considers vSAN, I would recommend investing in really good server disks and SSDs."
  6. Storage: PDL AutoRemove in vSphere 5.5 automatically removes a device in PDL state from the host. PDL stands for Permanent Device Loss and is reported by the storage array as a SCSI sense code. << "It would be beneficial when some storage admin removes an empty LUN; then nothing has to be done on the vSphere side as long as the storage sends the appropriate SCSI sense code. MUST BE CAREFULLY TESTED!!!"
  7. Networking: LACP in 5.5 gives you 22 load balancing algorithms and you are now able to create 32 LAGs per host, so you can bond together all those physical NICs. << "Finally; the Nexus 1000v has had this from the beginning."
  8. Networking: Flow-based marking and filtering provides granular traffic marking and filtering capabilities from a simple UI integrated with the VDS UI. You can provide stateless filtering to secure or control VM or hypervisor traffic. Any traffic that requires specific QoS treatment on physical networks can now be granularly marked with CoS and DSCP markings at the vNIC or port group level. << "A nice improvement, but I have never had such a requirement so far."
  9. High Availability: Someone mentioned to me that VMware announced vSphere 5.5 Multi-processor Fault Tolerance (FT) at VMworld 2013. << "This would be interesting, but it must be validated as I cannot find any official statement or blog post about it. It seems to me it was a Fault Tolerance tech preview like the VMworld 2012 session I attended last year."
  10. Authentication: SSO 2.0 is now a multi-master model. Replication between SSO servers is automatic and built-in. SSO is now site aware. The SSO database is completely removed. << "Finally; the previous SSO 1.0 was a nightmare!!!"
  11. Disaster Recovery: VMware Replication (VR) now supports more VR Server Appliances responsible for replication, more point-in-time instances (aka snapshots), and the ability to use Storage vMotion on protected VMs, and the vSphere Web Client will show you details of your vSphere Replication status when you click on the vCenter object. << "Cool. Good evolution."
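Regarding the larger-than-2TB VMDKs in item 3, here is a sketch of how such a disk could be created from the ESXi shell (assuming standard vmkfstools syntax; the datastore path and VM name are placeholders):

```shell
# Create a 5 TB (5120 GB) thin-provisioned VMDK on a VMFS-5 datastore
vmkfstools -c 5120G -d thin /vmfs/volumes/datastore1/bigvm/bigvm-data.vmdk
```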

Sunday, August 25, 2013

DELL OpenManage Essentials (OME)

OpenManage Essentials (OME) is a systems management console that provides simple, basic Dell hardware management and is available as a free download.

DELL OME, along with patch 1.2.1, can be downloaded from the Dell support site.

For more information look at DELL Tech Center.

Data Center Bridging

The four key DCB protocols:
  •  Priority-based Flow Control (PFC): IEEE 802.1Qbb
  •  Enhanced Transmission Selection (ETS): IEEE 802.1Qaz
  •  Congestion Notification (CN or QCN): IEEE 802.1Qau
  •  Data Center Bridging Capabilities Exchange Protocol (DCBx)
PFC - provides a link-level flow control mechanism that can be controlled independently for each frame priority. The goal of this mechanism is to ensure zero loss under congestion in DCB networks. PFC pauses each traffic priority independently and enables lossless packet buffers/queuing for a particular 802.1p CoS.

ETS - provides a common management framework for assignment of bandwidth to frame priorities. Bandwidth can be dynamic based on congestion and relative ratios between defined flows. ETS provides minimum, guaranteed bandwidth allocation per traffic class/priority group during congestion and permits additional bandwidth allocation during non-congestion.
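To make the ETS arithmetic concrete, here is a minimal shell sketch (my own illustration, not taken from the standard): the minimum bandwidth guaranteed to a priority group during congestion is simply the link speed multiplied by the configured percentage.

```shell
# Illustrative ETS math: minimum guaranteed bandwidth per priority group.
# $1 = link speed in Mb/s, $2 = ETS bandwidth percentage for the group
ets_guarantee() {
  echo $(( $1 * $2 / 100 ))
}

# Example: a 10 Gb/s (10000 Mb/s) link split 50/30/20 between LAN, iSCSI and vMotion
ets_guarantee 10000 50   # LAN     -> 5000 Mb/s guaranteed
ets_guarantee 10000 30   # iSCSI   -> 3000 Mb/s guaranteed
ets_guarantee 10000 20   # vMotion -> 2000 Mb/s guaranteed
```

During non-congestion any group may borrow the unused bandwidth of the others, which is exactly what distinguishes ETS from a hard rate limit.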

CN - provides end-to-end congestion management for protocols that are capable of transmission rate limiting to avoid frame loss. It is expected to benefit protocols such as TCP that do have native congestion management, as it reacts to congestion in a more timely manner.

DCBX - a discovery and capability exchange protocol that is used for conveying capabilities and configuration of the above features between neighbors to ensure consistent configuration across the network. Performs discovery, configuration, and mismatch resolution using Link Layer Discovery Protocol (IEEE 802.1AB - LLDP).

DCBX can be leveraged for many applications.
One DCBX application example is iSCSI application priority - support for the iSCSI protocol in the application priority DCBX Type-Length-Value (TLV). It advertises the priority value (IEEE 802.1p CoS, the PCP field in the VLAN tag) for the iSCSI protocol. End devices identify and tag Ethernet frames containing iSCSI data with this priority value.

Friday, August 23, 2013

DELL Force10 I/O Aggregator 40Gb Port Question

Today I received a question about how to interconnect a DELL Force10 IOA 40Gb uplink with DELL Force10 S4810 top-of-rack switches.

I assume the reader is familiar with DELL Force10 datacenter networking portfolio.

Even if you have a 40Gb<->40Gb twinax cable with QSFPs between the IOA and the Force10 S4810 switch, the IOA side is by default configured as 4x10Gb links grouped in Port-Channel 128.

If you connect it directly into a 40Gb port on the Force10 S4810 switch, that 40Gb port is by default configured as a 1x40Gb interface.

That’s the reason why it doesn’t work out-of-the-box. Port speeds are simply mismatched.

To make it work you have to change the 40Gb switch port into 4x10Gb ports. Here is the S4810 command to change the switch port from 1x40Gb to 4x10Gb:
stack-unit 0 port 48 portmode quad

Here is a snip from the S4810 configuration where 40Gb port 0/48 is configured as 4x10Gb ports (interfaces 0/48-0/51) in port-channel 128:
interface TenGigabitEthernet 0/48
 no ip address
 port-channel-protocol LACP
  port-channel 128 mode active
 no shutdown
interface TenGigabitEthernet 0/49
 no ip address
 port-channel-protocol LACP
  port-channel 128 mode active
 no shutdown
interface TenGigabitEthernet 0/50
 no ip address
 port-channel-protocol LACP
  port-channel 128 mode active
 no shutdown
interface TenGigabitEthernet 0/51
 no ip address
 port-channel-protocol LACP
  port-channel 128 mode active
 no shutdown

interface Port-channel 128
 no ip address
 portmode hybrid
 no shutdown

Tuesday, August 20, 2013

Best Practices for Faster vSphere SDK Scripts

Reuben Stump published an excellent blog post about performance optimization of Perl SDK scripts.

The main takeaway is to minimize the ManagedEntity's Property Set.

So instead of

my $vm_views = Vim::find_entity_views(view_type => "VirtualMachine") ||
  die "Failed to get VirtualMachines: $!";

you have to use

# Fetch all VirtualMachines from SDK, limiting the property set
my $vm_views = Vim::find_entity_views(view_type => "VirtualMachine",
          properties => ['name', 'datastore']) ||
  die "Failed to get VirtualMachines: $!";

This small improvement has a significant impact on performance because it eliminates the generation and transfer of big data (SOAP/XML) between the vCenter service and the SDK script.

It helped me improve the performance of my script from 25 seconds to just 1 second, and the impact is even bigger for larger vSphere environments. My old version of the script was almost useless, and this simple improvement helped me a lot.

Thanks Reuben for sharing this information.

Monday, August 19, 2013

DELL Blade Chassis power consumption analytics in vCenter Log Insight

The DELL blade chassis has the capability to send power consumption information via syslog messages. I never understood how to practically leverage this capability. When VMware released vCenter Log Insight, I immediately realized how to use this tool to visualize blade chassis power consumption.

I prepared a short video showing how to create a blade chassis power consumption graph in vCenter Log Insight.

Wednesday, August 14, 2013

ESXi Advanced Settings for NetApp NFS

Here are the NetApp recommendations:

Enable SIOC, or if you don't have an Enterprise+ license, set NFS.MaxQueueDepth to 64, 32, or 16 based on storage workload and utilization.
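As a sketch (assuming vSphere 5.x esxcli syntax; verify against your ESXi build), the queue depth can be set from the ESXi shell like this:

```shell
# Set the NFS queue depth advanced setting (use 64, 32 or 16 per the guidance above)
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
# Verify the current value
esxcli system settings advanced list -o /NFS/MaxQueueDepth
```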

Sunday, August 11, 2013

Unified Network, DCB and iSCSI challenges

iSCSI SAN is a Storage Area Network over Ethernet/IP. Storage needs a lossless fabric. If, for any reason, a unified fabric must be used, then the quality of the Ethernet/IP network is crucial for problem-free storage operation.

For example, DELL EqualLogic supports and leverages DCB (PFC, ETS and DCBX).
iSCSI-TLV is a part of DCBX. However, the DCB protocol primitives must be supported end to end, so if one member of the chain doesn't support them, it is useless.

How DCB makes iSCSI better is explained in depth here.

So think twice about whether you really want a converged network (aka unified fabric), or whether a dedicated iSCSI network is a better option for you.

Tuesday, August 06, 2013

DELL EqualLogic general recommendations for VMware vSphere ESXi

Below are eight major recommendations for a DELL EqualLogic implementation with vSphere ESXi:

  1. Delayed ACK disabled
  2. LRO disabled
  3. If using Round Robin, set the IOPS limit to 3
  4. If you have an Enterprise or Enterprise+ license (MEM requires Enterprise/Enterprise+), install MEM 1.1.2
  5. Extend login_timeout to 60 seconds.
  6. Don't have multiple VMDKs on a single virtual SCSI controller (a major cause of latency alerts)
  7. Align partitions on 64K boundary
  8. In Windows, format with a 64K cluster (allocation unit) size
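A few of the recommendations above can be applied from the ESXi shell. This is a sketch assuming vSphere 5.x esxcli syntax; the device and adapter names (naa.xxx, vmhba37) are placeholders you must replace with your own:

```shell
# 2. Disable LRO for the TCP/IP stack
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0

# 3. Round Robin: switch path after every 3 I/Os instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxx -t iops -I 3

# 5. Extend the iSCSI login timeout to 60 seconds (supported from ESXi 5.1)
esxcli iscsi adapter param set -A vmhba37 -k LoginTimeout -v 60
```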

Monday, August 05, 2013

CISCO Nexus 1000v - Quality Of Service configuration

class-map type queuing match-any n1kv_control_packet_mgmt_class
 match protocol n1k_control
 match protocol n1k_packet
 match protocol n1k_mgmt

class-map type queuing match-all vmotion_class
 match protocol vmw_vmotion

class-map type queuing match-all vmw_mgmt_class
 match protocol vmw_mgmt

class-map type queuing match-any vm_production
 match cos 0

policy-map type queuing uplink_queue_policy
 class type queuing n1kv_control_packet_mgmt_class
   bandwidth percent 10
 class type queuing vmotion_class
   bandwidth percent 30
 class type queuing vmw_mgmt_class
   bandwidth percent 10
 class type queuing vm_production
   bandwidth percent 40

port-profile type ethernet uplink
 service-policy type queuing output uplink_queue_policy
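Once the queuing policy above is attached to the uplink port-profile, it can be checked from the VSM with standard NX-OS show commands (a minimal sketch; output format varies by release):

```shell
# Display the configured queuing policy and its class bandwidth allocations
show policy-map uplink_queue_policy
# Display the queuing class-maps and their match criteria
show class-map type queuing
```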