Wednesday, July 03, 2019

VMware Skyline

VMware Skyline is a relatively new "phone home" (call home) functionality developed by VMware Global Services. It is a proactive support technology available to customers with an active Production Support or Premier Services contract. Skyline automatically and securely collects, aggregates, and analyzes customer-specific product usage data to proactively identify potential issues and improve time-to-resolution.

You are probably interested in the Skyline Collector System Requirements, which are documented here.

Skyline is packaged as a VMware virtual appliance (OVA), which is easy to install and operate. From a networking standpoint, there are only two external network connections you have to allow from your environment:
  • HTTPS (443) to vcsa.vmware.com
  • HTTPS (443) to vapp-updates.vmware.com
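If you want to verify these firewall openings before deploying the appliance, a quick connectivity check from a Linux machine in the same network segment can look like this (a sketch assuming curl is available; an HTTP error response is fine, what matters is that the TLS connection is established):

curl -v --max-time 10 https://vcsa.vmware.com
curl -v --max-time 10 https://vapp-updates.vmware.com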
Do you have more questions about Skyline? They may already be answered in the Skyline FAQ.

Tuesday, July 02, 2019

vSAN logical design and SSD versus NVMe considerations

I'm currently preparing vSAN capacity planning for a PoC for one of my customers. Capacity planning for traditional and hyper-converged infrastructure is principally the same. You have to understand the TOTAL REQUIRED CAPACITY of your workloads and the USABLE CAPACITY of the vSphere cluster you are designing. Of course, you need to understand how a vSAN hyper-converged system conceptually and logically works, but it is not rocket science. vSAN is conceptually very straightforward, and you can design very different storage systems from a performance and capacity point of view. It is just a matter of the components you will use. You probably understand that performance characteristics differ if you use rotational SATA disks, SSDs, or NVMe devices. For NVMe, a 10 Gb network can be the bottleneck, so you should consider a 25 Gb network or even faster. The figure below shows an example of my particular vSAN capacity planning and the proposed logical specifications.


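Just to illustrate the basic math behind such a plan, here is a simplified, purely hypothetical example (the numbers below are assumptions for illustration, not the customer's real requirements):

  • 8 nodes x 4 capacity SSDs x 3.84 TB = 122.88 TB of RAW capacity
  • a vSAN storage policy with FTT=1 and RAID-1 mirroring consumes 2x the written data, leaving ~61.4 TB
  • keeping roughly 30% free as slack/rebuild space leaves approximately 43 TB of USABLE capacity for workloads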
Capacity planning is part of the logical design phase; therefore, any physical specifications and details should be avoided. However, within the logical design, you should compare multiple options having an impact on infrastructure design qualities such as

  • availability, 
  • manageability, 
  • scalability, 
  • performance, 
  • security, 
  • recoverability, 
  • and last but not least, cost.  

For such considerations, you have to understand the characteristics of the different "materials" your system will eventually be built from. When we talk about magnetic disks, SSDs, NVMe devices, NICs, etc., we are thinking about logical components. So I was just considering the difference between SAS SSD and NVMe flash for the intended storage system. Of course, different physical models will behave differently, but hey, we are in the logical design phase, so we need at least some theoretical estimations. We will see the real behavior and performance characteristics after the system is built and tested before production usage, or we can invest some time into a PoC and validate our expectations.

Nevertheless, cost versus performance is always a hot topic when talking with technical architects. Of course, higher performance costs more. However, I was curious about the current situation on the market, so I quickly checked the prices of SSD and NVMe on the DELL.com e-shop.

Note that these are just indicative, kind of street prices, but they have some informational value.

This is what I found there today:

  • Dell 6.4TB, NVMe, Mixed Use Express Flash, HHHL AIC, PM1725b, DIB - 213,150 CZK
  • Dell 3.84TB SSD vSAS Mixed Use 12Gbps 512e 2.5in Hot-Plug AG drive, 3 DWPD, 21024 TBW - 105,878 CZK

1 TB of NVMe storage costs roughly 33,300 CZK (213,150 CZK / 6.4 TB).
1 TB of SAS SSD storage costs roughly 27,570 CZK (105,878 CZK / 3.84 TB).
This is approximately a 20% price advantage for SAS SSD.

So here are the SAS SSD advantages:

  • ~20% lower cost per TB
  • scalability, because you can put 24 or more SSDs into a 2U rack server, while the same server usually has fewer than 8 PCIe slots
  • manageability, as hot-plug disks are easier to replace than PCIe cards

The NVMe advantage is performance, with a positive impact on storage latency: a SAS SSD has ~250 μs latency while NVMe is around ~80 μs, so you can improve latency and storage service quality roughly by a factor of 3.

So, as always, you have to consider which infrastructure design qualities matter for your particular use case and non-functional requirements, and make the right design decision(s) with justification(s).

Any comments? Real experience? Please leave a comment below the article. 

Monday, June 10, 2019

How to show HBA/NIC driver version

How do you find the version of an HBA or NIC driver on VMware ESXi?

Let's start with HBA drivers. 

STEP 1/ Find the driver name for the particular HBA. In this example, we are interested in vmhba3.

We can use the following esxcli command to see the driver names ...
esxcli storage core adapter list


So now we have the driver name for vmhba3, which is qlnativefc.

STEP 2/ Find the driver version.
The following command will show you the version.
vmkload_mod -s qlnativefc | grep -i version
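If you want to list the driver and its version for all storage adapters at once, a small loop in the ESXi shell can help. Treat it as a sketch only; it assumes the driver name is the second column of the adapter list, so double-check the output format on your ESXi build first.

for drv in $(esxcli storage core adapter list | awk 'NR>2 {print $2}' | sort -u); do
  echo "$drv: $(vmkload_mod -s $drv | grep -i version)"
done

Alternatively, esxcli system module get -m qlnativefc should show the module version as well.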

NIC drivers

The process to get the NIC driver version is very similar.

STEP 1/ Find the driver name for the particular NIC. In this example, we are interested in a NIC using the ntg3 driver.
esxcli network nic list


STEP 2/ Find the driver version.
The following command will show you the version.
vmkload_mod -s ntg3 | grep -i version
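On reasonably recent ESXi versions, you can also ask esxcli directly for the driver details of a NIC; vmnic0 below is just an illustrative name, use your own uplink:

esxcli network nic get -n vmnic0

The Driver Info section of the output shows the driver name, driver version, and firmware version in one place.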


You should always verify that your driver versions are listed in the VMware Compatibility Guide. The process is documented here: How to check I/O device on VMware HCL.

For further information, see VMware KB - Determining Network/Storage firmware and driver version in ESXi 4.x and later (1027206)

Thursday, June 06, 2019

vMotion multi-threading and other tuning settings

When you need to boost overall vMotion throughput, you can leverage Multi-NIC vMotion. This works well when you have multiple NICs, so it is a kind of scale-out solution. But what if you have 40 Gb NICs and would like to scale up and leverage the huge NIC bandwidth (40 Gb) for vMotion?

vMotion by default uses a single thread (aka stream), therefore it does not have enough CPU performance to transfer more than roughly 10 Gb/s of network traffic. If you really want to use higher NIC bandwidth, the only way is to increase the number of threads pushing the data through the NIC. This is where the advanced setting Migrate.VMotionStreamHelpers comes into play.

I was informed about these advanced settings by a VMware customer who saw them in some VMworld presentation. I did not find anything in the VMware documentation, therefore these settings are undocumented and you should use them with special care.

Advanced System Setting      | Default | Tuning | Description
Migrate.VMotionStreamHelpers | 0       | 8      | Number of helpers to allocate for VMotion streams
Net.NetNetqTxPackKpps        | 300     | 600    | Max TX queue load (in thousand packets per second) to allow packing on the corresponding RX queue
Net.NetNetqTxUnpackKpps      | 600     | 1200   | Threshold (in thousand packets per second) for TX queue load to trigger unpacking of the corresponding RX queue
Net.MaxNetifTxQueueLen       | 2000    | 10000  | Maximum length of the TX queue for the physical NICs
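If you decide to experiment with these values in a lab, they can also be changed from the ESXi shell; this is a minimal sketch only, where the option paths follow the usual advanced-settings naming convention, so verify them first with esxcli system settings advanced list:

esxcli system settings advanced set -o /Migrate/VMotionStreamHelpers -i 8
esxcli system settings advanced set -o /Net/NetNetqTxPackKpps -i 600
esxcli system settings advanced set -o /Net/NetNetqTxUnpackKpps -i 1200
esxcli system settings advanced set -o /Net/MaxNetifTxQueueLen -i 10000

Again, these are undocumented settings, so test them thoroughly outside of production.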





Wednesday, June 05, 2019

How to get more IOPS from a single VM?

Yesterday, I got a typical storage performance question. Here it is ...
I am running a test with my customer to see how many IOPS we can get from a single VM working with an HDS all-flash array. The best I could get with IOmeter was 32K IOPS with 3 ms latency at 8 KB blocks. No matter what other block size or outstanding I/Os I choose, I am unable to get more than 32K. On the other hand, I can't find any bottleneck across the paths or the storage. I use the PVSCSI storage controller. Latency and queues look to be OK.
IOmeter is a good storage test tool. However, you have to understand basic storage principles to plan and interpret your storage performance test properly. Storage is the most crucial component of any vSphere infrastructure, therefore I have some experience with IOmeter and storage performance tests in general, and here are my thoughts about this question.

First things first, every shared storage system requires specific I/O scheduling to NOT give the whole performance to a single worker. A storage worker is the compute process or thread sending storage I/Os down to the storage subsystem. If you think about it, it makes perfect sense as it mitigates the problem of a noisy neighbor. When you invest a lot of money into a shared storage system, you most probably want to use it for multiple servers, right? It does not matter whether these servers are physical (ESXi hosts) or virtual (VMs). To get the most performance from shared storage, you must use multiple workers and optimally spread them across multiple servers and multiple storage devices (aka LUNs, volumes, datastores).

IOmeter allows you to use

  • Multiple workers on a single server (aka Manager)
  • Outstanding I/Os within a single worker (asynchronous I/O to a disk queue without waiting for acknowledge)
  • Multiple Managers – the manager is the server generating the storage workload (multiple workers) and reporting results to a central IOmeter GUI. This is where IOmeter dynamos come into play.
To test the performance limits of a shared storage subsystem, it is always a good idea to use multiple servers (IOmeter managers) with multiple workers on each server (nowadays usually VMs) spread across multiple storage devices (datastores / LUNs). This will give you multiple storage queues, which means more parallel I/Os. Parallelism is what gives you more performance, when such performance exists on the shared storage. If such performance does not exist on the shared storage, queueing will not help you boost it. If you want, you can also leverage Outstanding I/Os to fill the disk queue(s) more quickly and put additional pressure on the storage subsystem, but it is not necessary if you use a number of workers equal to the available queue depth. Outstanding I/Os can potentially help you generate more I/Os with fewer workers, but they do not help you get more performance when your queues are full. You will just increase response times without any positive performance gain.
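As a simple, purely illustrative piece of arithmetic: 8 managers x 8 workers x 1 outstanding I/O per worker keeps 64 I/Os in flight against the storage at any given moment.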

Just as an example of an IOmeter performance test, in the image below you can see the results from distributed IOmeter performance tests on a 2-node vSAN cluster I planned, designed, implemented, and tested recently for one of my customers. There is just one disk group (1x SSD cache, 4x SSD capacity).


The storage performance test above was using 8 VMs, and each VM was running 8 storage workers.
I tested different storage patterns (I/O size, R/W ratio, 100% random access). The performance is pretty good, right? However, I would not be able to get such performance from a single VM having a single vDisk. 
Note: vSAN has a significant advantage in comparison to traditional storage because you do not need to deal with LUN queueing (HBA device queue depth), as there are no LUNs. On the other hand, in vSAN storage, you have to think about the total performance available to a single vDisk, and it boils down to the vSAN disk group(s) layout and the distribution of vDisk object components across physical disks. But that's another topic, as the asker is using traditional storage with LUNs.

Unfortunately, using multiple VMs is not a solution for the asker, as he is trying to get all the I/Os from a single VM.

The question states that a single VM cannot get more than 32K IOPS and that the observed I/O response time is 3 ms. The asker is curious why he cannot get more IOPS from the single VM.

Well, there can be multiple reasons, but let's assume the physical storage is capable of providing more than 32K IOPS. I think more IOPS cannot be achieved because only one VM is used and IOmeter is using a single vDisk having a single queue. The situation is depicted in the drawing below.


So, let's do a simple math calculation for this particular situation ...
  • We have a single vDisk queue with the default queue depth of 64 (we use the Paravirtual SCSI adapter; non-paravirtual SCSI adapters have queue depth 32)
  • We have a QLogic HBA with a default LUN queue depth of 64 (other HBA vendors, like Emulex, have a default queue depth of 32, so that would be another bottleneck on the storage path)
  • The storage has an average service time (response time) of around 3 ms
We have to understand the following basic principles:
  • IOPS is the number of I/O operations per second
  • 64 queue depth = 64 I/O operations in parallel = 64 slots for I/O operations
  • Each of these 64 I/Os stays in the vDisk queue until the SCSI response comes back from the LUN
  • All other I/Os have to wait until there is a free slot in the queue.
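Putting these together, the theoretical ceiling of a single queue is roughly: IOPS ≈ queue depth x (1000 / average I/O response time in ms).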
And here is the math calculation ...

Q1: How many I/Os can be delivered in this situation per 1 millisecond?
A1: 64 (queue depth) / 3 (service time in ms)  = 64 / 3 = 21.33333 I/Os per 1 millisecond
 
Q2: How many I/Os can be delivered per 1 second?
A2: It is easy. 1,000 times more than per millisecond. So, 21.33333 x 1,000 = 21,333.33 I/Os per second ~= 21.3K IOPS
 
The asker is claiming he can get 32K IOPS with 3 ms response time, therefore it seems that the response time from the storage is better than 3 ms. The math above tells me that the storage response time in this particular exercise is somewhere around 2 ms. There can be other mechanisms boosting performance, for example I/O coalescing, but let's keep it simple.

If the storage were able to service an I/O in 1 ms, we would be able to get ~64K IOPS.
If the storage were able to service an I/O in 2 ms, we would be able to get ~32K IOPS. 
If the storage were able to service an I/O in 3 ms, we would be able to get ~21K IOPS. 

The math above works only if the END-2-END queue depth is 64. This is the case when a QLogic HBA is used, as it has an HBA LUN queue depth of 64. In the case of an Emulex HBA, the HBA LUN queue depth is 32, therefore the higher vDisk queue depth (64) would not help.
 
Hope the principle is clear now.

So how can you boost storage performance for a single VM? If you really need to get more IOPS from a single VM, you have only the three following options:
  1. increase the queue depth, not only on the vDisk itself but END-2-END (see the configuration sketch after this list). THIS IS GENERALLY NOT RECOMMENDED, as you really must know what you are doing and it can have a negative impact on the overall shared storage. However, if you need it and have the justification for it, you can try to tune the system.
  2. use a storage system with a low service time (response time). For example, a sub-millisecond storage system (say 0.5 ms) will give you more IOPS for the same queue depth than a storage system having a higher service time (for example 3 ms).
  3. leverage multiple vDisks spread across multiple vSCSI controllers and datastores (LUNs). This gives you more (total) queue depth in a distributed fashion. However, this has additional requirements for your real application, as it needs a filesystem or another mechanism supporting multiple storage devices (vDisks).
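For illustration of option 1, here is a rough sketch of how the LUN queue depth of a QLogic native driver could be raised from the ESXi shell. Treat it as a sketch only; the module parameter name differs per driver and driver version, so verify it against VMware KB 1267 and your vendor documentation, and remember that a host reboot is required for the module parameter to take effect:

esxcli system module parameters set -m qlnativefc -p qlfxmaxqdepth=128
esxcli system module parameters list -m qlnativefc | grep -i qdepth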
I hope options 1 and 2 are clear. Option 3 is depicted in the figure below.


CONCLUSION
In a typical VMware vSphere environment, the shared storage system is used by multiple ESXi hosts and multiple VMs having vDisks on multiple datastores (LUNs). That's the reason why the default queue depth usually makes perfect sense, as it provides fairness among all storage consumers. If you have a storage system with, let's say, 2 ms response time and a queue depth of 32, you can still get around 16K IOPS. This should be good enough for any typical enterprise application, and I usually recommend using IOPS limits to restrict some VMs (vDisks) even more. This is how storage performance tiering can be achieved very simply on a VMware SDDC with unified infrastructure. If you need higher storage performance, your application is specific and you should do a specific design and leverage specific technologies or tunings.

By the way, I like Howard Marks' (@DeepStorageNet) statement I heard on his storage-technologies podcast "GrayBeards". It goes something like ...
"There are only two storage performance types - good enough and not good enough." 
This is very true.
 
I hope this write-up helps the broader VMware community.


Friday, May 24, 2019

Syslog.global.logHost is invalid or exceeds the maximum number of characters permitted

I have a customer who has a pretty decent vSphere environment and uses VMware vRealize Log Insight as a central syslog server for advanced troubleshooting and actionable logging. VMware vRealize Log Insight is tightly integrated with vSphere, so it configures the syslog settings on ESXi hosts automatically through the vCenter API. Everything worked fine, but one day the customer realized there was an issue with one, and only one, ESXi host.

He saw the following failed vCenter task in his vSphere Client.


The error message:
setting["Syslog.global.logHost"] is invalid or exceeds the maximum number of characters permitted
seemed very strange to me.

From the ESXi logs collected by Log Insight, it was evident that the ESXi advanced setting could not be configured through the API. However, the same or a similar issue can be reproduced with the esxcli command for setting the advanced parameter.

Command:
esxcli system settings advanced set -o /Syslog/global/logHost -s "udp://1.2.3.4:514"

Output:
Unable to find branch Syslog

The resolution of this problem was to configure syslog through the dedicated syslog namespace (as described in my older blog post here) instead of setting the advanced parameter /Syslog/global/logHost.

The command to configure the remote syslog host is:
esxcli system syslog config set --loghost='tcp://1.2.3.4:514'
This helped the customer resolve the issue.
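After changing the syslog configuration, it is usually also necessary to reload the syslog daemon and, if needed, allow outgoing syslog traffic through the ESXi firewall; a quick sketch:

esxcli system syslog reload
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
esxcli system syslog config get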

I have never seen this issue in any other vSphere environment, so I hope this helps at least one other person from the VMware community.

Thursday, May 16, 2019

The SPECTRE story continues ... now it is MDS

Last year (2018) started with the shocking Intel CPU vulnerabilities Spectre and Meltdown, and two days ago another SPECTRE variant, known as Microarchitectural Data Sampling (MDS), was published. It was obvious from the beginning that this was just the start and that other vulnerabilities would be found over time by security experts and researchers. All these vulnerabilities are collectively known as speculative execution vulnerabilities, aka SPECTRE variants.

Here is the timeline of particular SPECTRE variant vulnerabilities along with VMware Security Advisories.

2018-01-03 - Spectre (speculative execution exploiting bounds-check bypass and branch target injection) / Meltdown (speculative execution exploiting rogue data cache load) - VMSA-2018-0002.3 
2018-05-21 - Speculative Store Bypass (SSB) - VMSA-2018-0012.1
2018-08-14 - L1 Terminal Fault - VMSA-2018-0020
2019-05-14 - Microarchitectural Data Sampling (MDS) - VMSA-2019-0008

I published several blog posts about SPECTRE topics in the past

The last two vulnerabilities, "L1 Terminal Fault (aka L1TF)" and "Microarchitectural Data Sampling (aka MDS)", are related to Intel CPU Hyper-Threading. As per the statement here, AMD is not vulnerable.

When we are talking about L1TF and MDS, a typical question from my customers with Intel CPUs is whether they are safe when Hyper-Threading is disabled in the BIOS. The answer is yes, but you would have to power cycle the physical system to reconfigure the BIOS settings, which can be pretty annoying and time-consuming in larger environments. That's why VMware recommends leveraging the SDDC concept and setting it by a software change - an ESXi hypervisor advanced setting. It is obviously much easier to change the two ESXi advanced settings VMkernel.Boot.hyperthreadingMitigation and VMkernel.Boot.hyperthreadingMitigationIntraVM to the value TRUE and disable hyperthreading in the ESXi CPU scheduler without the need for a physical server power cycle. You can do it with a PowerCLI one-liner in a few minutes, which is much more flexible than BIOS changes.
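If you prefer the ESXi shell over PowerCLI, the same kernel boot options should be reachable via esxcli as well; this is only a sketch, so verify the exact option names on your build with the list command first, and note that a host reboot (not a power cycle) is still needed for the scheduler change to take effect:

esxcli system settings kernel list | grep -i hyperthreadingMitigation
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE
esxcli system settings kernel set -s hyperthreadingMitigationIntraVM -v TRUE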

So that's it from the security point of view, but what about performance?

It is simple and obvious. When Hyper-Threading is disabled, you will obviously lose the CPU performance benefit of Hyper-Threading technology, which can be somewhere between 5 - 20% and heavily depends on the particular type of workload. Let's be absolutely clear here. Until the issue is addressed inside the CPU hardware architecture, there will always be a tradeoff between security and performance. If I understand Intel's messaging correctly, the first hardware solution for their Hyper-Threading is implemented in the Cascade Lake family. You can double-check it yourself here ...
Side Channel Mitigation by Product CPU Model
https://www.intel.com/content/www/us/en/architecture-and-technology/engineering-new-protections-into-hardware.html

You can get the Hyper-Threading performance back, but only in VMware vSphere 6.7 U2. VMware vSphere 6.7 U2 includes new scheduler options that secure it from the L1TF vulnerability while also retaining as much performance as possible. This new scheduler introduced the ESXi advanced setting VMkernel.Boot.hyperthreadingMitigationIntraVM, which allows you to leave it at FALSE (the default) and leverage Hyper-Threading benefits within a virtual machine while still isolating VMs from each other when VMkernel.Boot.hyperthreadingMitigation is set to TRUE. This possibility is not available in older ESXi hypervisors and there are no plans to backport it. For further info, read the paper "Performance of vSphere 6.7 Scheduling Options".

By the way, last year I spent significant time testing the performance impact of the SPECTRE and MELTDOWN remediations. If you want to check the results of the performance tests of the 2018 Spectre/Meltdown variants along with the conclusions, you can read my document published on SlideShare. It would be cool to perform the same tests for L1TF and MDS, but it would require additional time and effort. I'm not going to do so unless sponsored by some of my customers. But anybody can do it themselves, as the test plan is described in the document below.