Friday, January 31, 2020

VMware vSphere Replication

VMware vSphere Replication is a software-based replication solution for virtual machines running on vSphere infrastructure. It is storage agnostic, so it can replicate VMs from any source storage to any target storage. This flexibility and simplicity are the biggest value of vSphere Replication. It doesn't matter whether you have Fibre Channel, DAS, NAS, iSCSI, or vSAN based datastores; you can simply start a full replica from any datastore to any other, and based on the defined RPO your data are kept in sync between two places.

It is a very simple solution to install and use, and also very cost-effective, as it is included in all vSphere editions higher than vSphere Essentials Plus.

The installation is straightforward:

  1. Download the installation package (ISO)
  2. Mount ISO
  3. Deploy the OVF (Virtual Appliance in Open Virtualization Format)
  4. Configure Virtual Appliance to register into vCenter

The installation package is in the OVF format. You can deploy it by using the Deploy OVF wizard in the vSphere Client.
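If you prefer the command line, the same deployment can be scripted with VMware's ovftool. This is an illustrative sketch; the datastore, network, and vCenter inventory path below are placeholders you would replace with values from your own environment:

```
ovftool --acceptAllEulas \
  --datastore=Datastore01 \
  --network="VM Network" \
  vSphere_Replication_OVF10.ovf \
  'vi://administrator@vsphere.local@vcenter.example.com/DC/host/Cluster/'
```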

The installation package contains:
  • vSphere_Replication_OVF10.ovf - Use this file to install all VR components, including the vSphere Replication Management Server and a vSphere Replication Server.
  • vSphere_Replication_AddOn_OVF10.ovf - Use this file to install an optional additional vSphere Replication Server.
Of course, the solution has some limitations. Let's write down some of them:
  1. vSphere Replication supports an RPO of 5 minutes and higher; it is therefore asynchronous replication, and you cannot use it if you have a strict RPO Zero requirement. However, does your business really require RPO Zero? Do not hesitate to ask and challenge whoever is responsible for the Business Impact Analysis and the costs associated with it. Do not simply assume you cannot lose any data; there should always be a risk management exercise.
  2. vSphere Replication is a one-to-one VM mapping, not one-to-many. While your environment can replicate to multiple vCenters (target sites), any single VM can be replicated to only one target vCenter. Here are further details and supported topologies
  3. vSphere Replication supports up to 24 recovery points. These 24 recovery points can have retention of up to 24 days, or you can keep up to 24 snapshots within a single day. So you can spread these 24 recovery points within these guard rails based on your specific business requirements.
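The guard rails in point 3 can be sketched as a tiny validation helper. This is not a VMware API, just an illustration of the arithmetic: the product of snapshots per day and retention days must not exceed 24 points, and retention cannot exceed 24 days.

```python
# Illustrative sketch (not a VMware API): check a multiple-point-in-time
# retention policy against vSphere Replication's guard rails of at most
# 24 recovery points kept for at most 24 days.

def valid_retention(points_per_day: int, days: int) -> bool:
    """Return True if the policy fits within the 24-point / 24-day limits."""
    if points_per_day < 1 or days < 1 or days > 24:
        return False
    return points_per_day * days <= 24

print(valid_retention(3, 5))    # True  -> 15 recovery points over 5 days
print(valid_retention(24, 1))   # True  -> 24 snapshots within a single day
print(valid_retention(1, 24))   # True  -> one recovery point per day, 24 days
print(valid_retention(2, 24))   # False -> 48 points exceed the limit
```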
As vSAN does not have any native storage replication, vSphere Replication is a great add-on to VMware's HCI solution. If you have higher requirements, you can leverage 3rd party solutions like EMC RecoverPoint, which is, by the way, included in the VxRail hardware appliance.

For more information see the vSphere Replication online documentation at

Saturday, January 18, 2020

How to configure Jumbo Frames not only for vSAN

Not only vSAN but also vMotion, NFS and other types of traffic can benefit from Jumbo Frames configured on an Ethernet network, as the network traffic should consume fewer CPU cycles and achieve higher throughput.

Jumbo Frames must be configured end-to-end, therefore we should start the configuration in the network core on physical switches, then continue to virtual switches, and finish on VMkernel ports (vmk). These three configuration places are depicted in the schema below.

Physical Switch
Jumbo Frames on physical switches can be configured either for the whole switch or per switch port. It depends on the particular physical switch; my Force10 switch supports configuration only per switch port, as shown in the screenshot below. Configuration for the whole switch would be easier, with less to configure, and as far as I know, some Cisco switches support it.

If you have more physical switches, all ports in the path must be configured for Jumbo Frames.

Virtual Switch
In the screenshot below you can see the Jumbo Frames configuration on my VMware vSphere Distributed Switch.

VMkernel port
And last but not least, the configuration on the VMkernel port; in this case, the vmk interface used for vSAN traffic.
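The same settings shown in the screenshots can also be applied from the ESXi command line with esxcli. The switch and interface names below (vSwitch0, vmk5) are examples from my lab; adjust them to your environment. Note that a distributed switch MTU is set in the vSphere Client, as shown above; the standard vSwitch command applies only to hosts using standard switches:

```
# Set MTU 9000 on a standard vSwitch
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Set MTU 9000 on the vSAN VMkernel interface
esxcli network ip interface set -i vmk5 -m 9000

# Verify the configured MTU values
esxcli network ip interface list
```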

Final test
After any implementation, we should test that the implementation was successful and everything is working as expected. Log in to the ESXi host via SSH and use the following ping command:

vmkping -I vmk5 -s 8972 -d

-d                  set DF bit (IPv4) or disable fragmentation (IPv6)
-I                   outgoing interface
-s                   set the number of ICMP data bytes to be sent.
                      The default is 56, which translates to a 64 byte
                      ICMP frame when added to the 8 byte ICMP header.
                      (Note: these sizes do not include the IP header.)

And here is the result when everything is configured correctly.

If the message is longer than the configured MTU, we would see the following ...

You may ask why we use size 8972 and not 9000.
The reason for 8972 on *nix devices is that the ping size parameter does not include the 28 bytes of headers, ICMP (8 bytes) + IP (20 bytes), that are added on top of the payload. Thus we take 9000 and subtract 28 = 8972. [source & credits for the answer]
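The arithmetic is simple enough to verify in a few lines. This sketch just restates the header math from the paragraph above:

```python
# Why vmkping -s 8972 is the right payload size for a 9000-byte MTU.
# The -s value counts only the ICMP data bytes; the IPv4 and ICMP
# headers are added on top, and the resulting IP packet must still
# fit within the MTU for the ping to succeed with -d (do not fragment).

MTU = 9000          # Jumbo Frame MTU configured end-to-end
IP_HEADER = 20      # IPv4 header without options
ICMP_HEADER = 8     # ICMP echo request header

payload = MTU - IP_HEADER - ICMP_HEADER
print(payload)  # 8972
```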

Hope this helps.