Wednesday, February 17, 2021

VMware Short URLs

VMware has a lot of products and technologies. Here are a few interesting URL shortcuts to quickly get resources for a particular product, technology, or other information.

VMware HCL and Interop

https://vmware.com/go/hcl - VMware Compatibility Guide

https://vmwa.re/vsanhclc or https://vmware.com/go/vsanvcg - VMware Compatibility Guide for vSAN

https://vmware.com/go/interop - VMware Product Interoperability Matrices

VMware Partners

https://www.vmware.com/go/partnerconnect - VMware Partner Connect

VMware Customers

https://www.vmware.com/go/myvmware - My VMware Overview
 
https://www.vmware.com/go/customerconnect - Customer Connect Overview

https://www.vmware.com/go/patch - Customer Connect, where you can download VMware bits

http://vmware.com/go/skyline - VMware Skyline

http://vmware.com/go/skyline/download - Download VMware Skyline

VMware vSphere

http://vmware.com/go/vsphere - VMware vSphere

VMware CLIs

http://vmware.com/go/dcli - VMware Data Center CLI

VMware Software-Defined Networking and Security

https://vmware.com/go/vcn - Virtual Cloud Network

https://vmware.com/go/nsx - VMware NSX Data Center

https://vmware.com/go/vmware_hcx - Download VMware HCX

VVD

https://vmware.com/go/vvd-diagrams - Diagrams for VMware Validated Design

https://vmware.com/go/vvd-stencils - VMware Stencils for Visio and OmniGraffle

http://vmware.com/go/vvd-community - VVD Community

http://www.vmware.com/go/vvd-sddc - Download VMware Validated Design for Software-Defined Data Center

VCF

https://vmware.com/go/vcfrc - VMware Cloud Foundation Resource Center

http://vmware.com/go/cloudfoundation - VMware Cloud Foundation

http://vmware.com/go/cloudfoundation-community - VMware Cloud Foundation Discussions

http://vmware.com/go/cloudfoundation-docs - VMware Cloud Foundation Documentation

Tanzu Kubernetes Grid (TKG)

http://vmware.com/go/get-tkg - Download VMware Tanzu Kubernetes Grid

Hope this helps at least one person in the VMware community.

Sunday, February 14, 2021

Top Ten Things a VMware TAM should have in mind and use on a daily basis

Readers may or may not know that I work for VMware as a TAM. For those who do not know, TAM stands for Technical Account Manager. A VMware TAM is a billable consulting role available to VMware customers who want an on-site dedicated technical advisor/consultant for long-term cooperation. The VMware TAM organization historically belonged under VMware PSO (Professional Services Organization); however, it has recently been moved under the Customer Success Organization, which makes perfect sense if you ask me, because customer success is the key goal of the TAM role.

How does a TAM engagement work? It is pretty easy. VMware Technical Account Managers have 5 slots (days) per week which can be consumed by one or many VMware customers. There are Tier 1, Tier 2, and Tier 3 offerings, where the Tier 1 TAM service includes one day per week for the customer, Tier 2 has 2.5 days per week, and a Tier 3 TAM is fully dedicated.

The TAM job role is very flexible and customizable based on specific customer demand. I like the figure below, describing TAM Service standard Deliverables and On-Demand Activities.


A VMware TAM delivers standard deliverables like
  • Kickoff Meeting and TAM Business Reviews to continuously align with customer expectations
  • Standard Analytics and Reporting, including a report of the customer estate in terms of VMware products and technologies (we call it CI.Next), and a Best Practices Review report highlighting best practice violations against VMware Health Check’s recommended practices.
  • Technical Advisory Service about VMware Product Releases, VMware Security Advisories, specific TAM Customer Technical Webinars, Events, etc.
However, the most interesting part of the VMware TAM job role, at least for me, is the On-Demand Activities, including
  • Technical Enablements, DeepDives, Roadmaps, etc.
  • Planning and Conceptual Designing of Technical Solutions and Transformation Projects
  • Problem Management and Design Troubleshooting
  • Product Feature Request management
  • Etc.

And this is the reason why I love my job: I like technical planning, designing, coordinating technical implementations, and validating and testing implementations before they are handed over to production. I also like to communicate with operations teams and, after a while, reevaluate the implemented design and take the operational feedback back to architecture and engineering for continuous solution improvement.
That’s the reason why the TAM role is my dream job at one of the best and most impactful IT companies in the world.

During the last One on One meeting with my manager, I was asked to write down the top ten things a VMware TAM should have in mind and use on a daily basis in 2021. To be honest, the rules I will list are not specific to the year 2021; they are very general, apply to any other year, and are easily reusable for any other human activity.

After 25 years in the IT industry, 15 years in professional consulting, and 5 years as a VMware TAM, I immodestly believe the 10 things below are the most important ones for being a valuable VMware TAM for my customers. These are just my best practices, and it is good to know there are no best practices written in stone, so your opinion may vary. Anyway, take it or leave it. Here we go.

#1 Top-Down Approach

I use the top-down approach to be able to split any project or solution into Conceptual, Logical, and Physical layers. I use Abstraction and Generalization. While abstraction reduces complexity by hiding irrelevant detail, generalization reduces complexity by replacing multiple entities that perform similar functions with a single construct. Do not forget, the complexity of modern IT systems can be insane. Check the video “Powers of Ten” to understand other systems’ complexity and how it becomes visible at various levels.

#2 Correct Expectations

I always set correct expectations. Discovering the customer’s requirements, constraints, and specific use cases before going into any details or specific solutions is the key to customer success.

#3 Communication

Open and honest communication is the key to any long-term successful relationship. As a TAM, I have to be a communicator who can break the barriers between various customer silos and teams: VMware, compute, storage, network, security, application, developers, DevOps, you name it. They have to trust you; otherwise, you cannot succeed.

#4 Assumptions

I do not assume. Sometimes we need assumptions to avoid getting stuck and to move forward; however, we should validate those assumptions as soon as possible, because false assumptions lead to risks. And one of our primary goals as TAMs is to mitigate risks for our customers.

#5 Digital Transformation

I leverage online and digital platforms. Nothing compares to personal meetings and whiteboarding; however, tools like Zoom, Miro.com, and Monday.com increase efficiency and help with communication, especially in COVID-19 times. This is probably the only point specific to the year 2021, as COVID-19 challenges will stay with us for some time.

#6 Agile Methodologies

I use an agile consulting approach. Leveraging tools like Miro.com, Monday.com, etc. gives me a toolbox to apply agile software methodologies to technical infrastructure architecture design. In the past, when I worked as a software developer, software engineer, and software architect, I was a follower of Extreme Programming. I apply the same or similar concepts and methods to Infrastructure Architecture Design and Consulting. This approach helps me keep up with the speed of IT and high business expectations.

#7 Documentation

I document everything. Documentation is essential. If it’s not written down, it doesn’t exist! I like "Eleven Rules of Design Documentation" by Greg Ferro.

#8 Resource Mobilization

I leverage resources, internal and external. As TAMs, we have access to a lot of internal resources (GSS, Engineering, Product Management, Technical Marketing, etc.) which we can leverage for our TAM customers. We can also leverage external resources like partners, other vendors from the broader VMware ecosystem, etc. However, we should use resources efficiently. Do not forget, all human resources are shared, thus limited. And time is the most valuable resource, at least for humans, therefore Time Management is important. Anyway, resource mobilization is the true value of the VMware TAM program, so we must know how to leverage these resources.

#9 Customer Advocacy

As a TAM, I work for VMware but also for TAM customers. Therefore, I do customer advocacy within VMware and VMware advocacy within the Customer organization. This is again about the art of communication.

#10 Technical Expertise

Last but not least, I must have technical expertise and competency. I’m a Technical Account Manager, therefore I try to have deep technical expertise in at least one VMware area and broader technical proficiency in a few other areas. This approach is often called Full Stack Engineering. I’m very aware of the fact that expertise and competency are very tricky and subjective. It is worth understanding the Dunning-Kruger effect, which describes the correlation between competence and confidence. In other words, I want to have real competence and not only false confidence about my competence. If I do not feel confident in some area, I honestly admit it and try to find another resource (see rule #8). The best way to gain and validate my competency and expertise is to continuously learn and validate it through VMware advanced certifications.

Hope this write-up will be useful for at least one person in the VMware TAM organization.

Thursday, February 04, 2021

Back to basics - MTU & IP defragmentation

This is just a short blog post, but it can be useful for other full-stack (compute/storage/network) infrastructure engineers.

I have just had a call from my customer with the following problem symptom. 

Symptom:

When an ESXi host (in a ROBO site) is connected to vCenter (in the datacenter), TCP/IP communication overloads the 60 Mbps network link. In such a scenario, IP packets are fragmented and heavy packet retransmission is observed.

Design drawing:

Hypothesis:

IP fragmentation is happening in the physical network and the path MTU is lower than 1280 bytes.

Planned test:

Find the smallest MTU in the end-to-end network path between ESXi and vCenter.

vmkping -s 1472 -d VCENTER-IP

Decrease the -s parameter value until the ping is successful. This is the way to find the smallest MTU in the IP network path.

Back to basics

IP fragmentation is an Internet Protocol (IP) process that breaks packets into smaller pieces (fragments), so that the resulting pieces can pass through a link with a smaller maximum transmission unit (MTU) than the original packet size. The fragments are reassembled by the receiving host. [source]

The vmkping command has some parameters you should know and use in this case:

-s to set the payload size

Syntax: vmkping -s size IP-address

With the parameter -s you can define the size of the ICMP payload. If you have an MTU size of e.g. 1500 bytes and use this size in your vmkping command, you may get a “Message too long” error. This happens because ICMP needs 8 bytes for its header and 20 bytes for the IP header:

The size you need to use in your command will be:

1500 (MTU size) – 8 (ICMP header) – 20 (IP header) = 1472 bytes for ICMP payload

-d to disable IP fragmentation

Syntax: vmkping -d IP-address

Use the command “vmkping -d -s 1472 IP-address” to test your end-to-end network path.

Decrease the -s parameter until the ping is successful.
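
To automate this procedure, below is a minimal shell sketch of the same logic. It assumes ESXi shell access; the vCenter address and the step size are placeholders you should adjust.

VCENTER_IP=192.0.2.10            # placeholder - replace with your vCenter IP
SIZE=1472                        # 1500 (MTU) - 20 (IP header) - 8 (ICMP header)
while ! vmkping -d -s $SIZE $VCENTER_IP > /dev/null 2>&1; do
  SIZE=$((SIZE - 10))            # step down; use a smaller step for a more precise result
done
echo "Largest working ICMP payload: $SIZE bytes (path MTU ~ $((SIZE + 28)) bytes)"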

Monday, January 11, 2021

Server rack design and capacity planning

Our local VMware SE team got a great Christmas present from the regional Intel BU: four rack servers with very nice technical specifications and the latest Intel Optane technology.

Here is the server technical spec: 

Node Configuration                | Description                                                            | Quantity
CPU                               | Intel Platinum 8280L (28 cores, max memory 4.5TB)                      | 2
DDR4 Memory                       | 768GB DDR4 DRAM RDIMM                                                  | 12 x 64GB
Intel Persistent Memory           | 3TB Intel Persistent Memory                                            | 12 x 256GB
Caching Tier                      | Intel Optane SSD DC P4800X Series (750GB, 2.5in PCIe* x4, 3D XPoint™)  | 2
Capacity Tier                     | Intel SSD DC P4510 Series (4.0TB, 2.5in PCIe* 3.1 x4, 3D2, TLC)        | 4
Networking + transceivers, cables | Intel® Ethernet Network Adapter XXV710-DA2 (25G, 2 ports)              | 1


These servers are vSAN Ready and the local VMware team is planning to use them for demonstration purposes of VMware SDDC (vSphere, vSAN, NSX, vRealize), therefore VMware Cloud Foundation (VCF) is a very logical choice.

Anyway, even a Software-Defined Data Center requires power and cooling, so I've been asked to help with the server rack design and proper power capacity planning. To be honest, server rack planning and design is not rocket science. It is just simple math & elementary physics; however, you have to know the power consumption of each computer component. I did some research, and here is the math exercise with the power consumption of each component:

  • CPU - 2x CPU Intel Platinum 8280L (110 W idle, 150 W computational, 360 W peak load)
    • Estimation: 2x 150 W = 300 W
  • RAM - 12x 64 GB DDR4 DRAM RDIMM (768 GB)
    • Estimation: 12x 24 W = 288 W
  • Persistent RAM - 12x 256 GB (3 TB) Intel Persistent Memory
    • Estimation: 12x 15 W = 180 W
  • vSAN Caching Tier - 2x Intel Optane SSD DC P4800X 750 GB
    • Estimation: 2x 18 W => 36 W
  • vSAN Capacity Tier - 4x Intel SSD DC P4510 Series 4 TB
    • Estimation: 4x 16 W => 64 W
  • NIC - 1x Intel® Ethernet Network Adapter XXV710-DA2 (25G, 2 ports)
    • Estimation: 15 W

If we sum the power consumption above, we get 883 Watts per single server.

To validate the estimation above, I used the DellEMC Enterprise Infrastructure Planning Tool available at http://dell-ui-eipt.azurewebsites.net/#/, where you can place infrastructure devices and get the Power and Heating calculations. You can see the IDLE and COMPUTATIONAL consumptions below.

Idle Power Consumption


Computational Power Consumption

POWER CONSUMPTION
Based on the above calculations, the server power consumption ranges between 300 and 900 Watts, so it is good to plan a 1 kW power budget per server. In our case that would be 4 kW / 17.4 Amps (at 230 V) per single power branch, which would mean 1x 32 Amp PDU just for 4 servers.

For a full 45U rack with 21 servers, it would be 21 kW / 91.3 Amps, which would mean 3x 32 Amp branches for a single rack.
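
For the record, here is a minimal shell sketch of the whole calculation. The 230 V single-phase assumption is mine (it matches the Amp figures above), so adjust it for your site.

CPU=300; RAM=288; PMEM=180; CACHE=36; CAPACITY=64; NIC=15
SERVER_W=$((CPU + RAM + PMEM + CACHE + CAPACITY + NIC))               # 883 W per server
echo "Per-server estimate: $SERVER_W W"
awk 'BEGIN {printf "4 servers @ 1 kW budget: %.1f A\n", 4000/230}'    # ~17.4 A per branch
awk 'BEGIN {printf "21 servers @ 1 kW budget: %.1f A\n", 21000/230}'  # ~91.3 A per rack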

HEATING AND COOLING
Heating and cooling are other considerations. Based on the Dell Infrastructure Planning Tool, the temperature in the environment will rise by 9°C (idle load) or even 15°C (computational load). This also requires appropriate cooling and electricity planning.

Conclusion

1 kW per server is a pretty decent consumption. When you design your cool SDDC, do not forget the basics - power and cooling.

Sunday, November 29, 2020

Virtual Machine Advanced Configuration Options

First and foremost, it is worth mentioning that it is definitely not recommended to change any advanced settings unless you know what you are doing and are fully aware of all potential impacts. VMware default settings are the best for general use and cover the majority of use cases; however, when you have specific requirements, you might need to tune the VM and change some advanced virtual machine configuration options. In this blog post, I'm trying to document the advanced configuration options I've found useful in some specific design decisions.

Time synchronization

  • time.synchronize.tools.startup
    • Description: performs a one-time synchronization when the VMware Tools service starts (typically at guest OS boot)
    • Type: Boolean
    • Values:
      • true / 1 (default)
      • false / 0
  • time.synchronize.restore
    • Description: performs a one-time synchronization after reverting the virtual machine to a snapshot
    • Type: Boolean
    • Values:
      • true / 1 (default)
      • false / 0
  • time.synchronize.shrink
    • Description: performs a one-time synchronization after a virtual disk shrink operation
    • Type: Boolean
    • Values:
      • true / 1 (default)
      • false / 0
  • time.synchronize.continue
    • Description: performs a one-time synchronization when the virtual machine continues after a snapshot operation
    • Type: Boolean
    • Values:
      • true / 1 (default)
      • false / 0
  • time.synchronize.resume.disk
    • Description: performs a one-time synchronization after resuming a suspended virtual machine
    • Type: Boolean
    • Values:
      • true / 1 (default)
      • false / 0
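
For illustration, here is a minimal shell sketch that disables all of these one-time synchronization events, the usual approach for VMs taking time from an in-guest NTP source. The datastore path and VM name are placeholders, and the VM should be powered off before editing its .vmx file.

# Append the entries to the VM's .vmx file (placeholder path) while the VM is powered off
cat >> /vmfs/volumes/datastore1/MY-VM/MY-VM.vmx <<'EOF'
time.synchronize.tools.startup = "FALSE"
time.synchronize.restore = "FALSE"
time.synchronize.shrink = "FALSE"
time.synchronize.continue = "FALSE"
time.synchronize.resume.disk = "FALSE"
EOF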

Relevant resources:

Ethernet

Isolation

With the isolation option, you can restrict file operations between the virtual machine and the host system, and between the virtual machine and other virtual machines.

VMware virtual machines can work both in a vSphere environment and on hosted virtualization platforms such as VMware Workstation and VMware Fusion. Certain virtual machine parameters do not need to be enabled when you run a virtual machine in a vSphere environment. Disable these parameters to reduce the potential for vulnerabilities.

The following advanced settings are booleans (true/false) with the default value false. You can disable the respective functionality by changing the value to true.

  • isolation.tools.unity.push.update.disable
  • isolation.tools.ghi.launchmenu.change
  • isolation.tools.memSchedFakeSampleStats.disable
  • isolation.tools.getCreds.disable
  • isolation.tools.ghi.autologon.disable
  • isolation.bios.bbs.disable
  • isolation.tools.hgfsServerSet.disable
  • isolation.tools.vmxDnDVersionGet.disable
  • isolation.tools.diskShrink.disable
  • isolation.tools.guestDnDVersionSet.disable
  • isolation.tools.unityActive.disable
  • isolation.tools.diskWiper.disable
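
If you need to apply these settings across many VMs, a small script beats clicking through the UI. Below is a minimal sketch using the govc CLI (from the govmomi project); the connection variables, the VM name, and the shortened key list are placeholders, so extend the list with any of the settings above.

export GOVC_URL='vcenter.example.com' GOVC_USERNAME='administrator@vsphere.local' GOVC_PASSWORD='***'
for key in isolation.tools.unity.push.update.disable \
           isolation.tools.diskShrink.disable \
           isolation.tools.diskWiper.disable; do
  govc vm.change -vm 'MY-VM' -e "${key}=true"   # true disables the respective functionality
done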

Snapshots

Remote Display

Tuesday, November 24, 2020

vSAN 7 Update 1 - What's new in Cloud Native Storage

vSAN 7 U1 comes with new features also in the Cloud Native Storage area, so let's look at what's new.

PersistentVolumeClaim expansion

Kubernetes v1.11 offered volume expansion by editing the PersistentVolumeClaim object. Please note that volume shrinking is not supported, and the expansion must be done offline. Online expansion is not supported in U1 but is planned on the roadmap.
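
For illustration, an expansion is a single edit of the PVC spec. A minimal sketch (the claim name, namespace, and target size are hypothetical placeholders) could look like this:

kubectl patch pvc my-claim -n my-namespace \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
# The PVC's StorageClass must have allowVolumeExpansion: true, and because only
# offline expansion is supported here, pods using the claim must be stopped first.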

Static Provisioning in Supervisor Cluster

This feature allows exposing an existing storage volume within a K8s cluster integrated into the vSphere hypervisor cluster (aka Supervisor Cluster, vSphere with K8s, Project Pacific).

vVols Support for vSphere K8s and TKG Service

Supporting external storage deployments on vK8s and TKG using vVols.

Data Protection for Modern Applications

vSphere 7.0 U1 comes with support for Dell PowerProtect and Velero backup for Pacific Supervisor and TKG clusters. With Velero, the only option is to initiate snapshots from the supervisor Velero plugin and store them on S3.


vSAN Direct

vSAN Direct is a feature introducing Direct Attached Storage (typically physical HDDs) for object storage solutions running on top of vSphere.


There will not be a shared vSAN datastore like typical vSAN has; instead, vSAN Direct datastores allow connecting physical disks directly to virtual appliances or containers on top of a vSphere/vSAN cluster, providing object storage services and bypassing the traditional vSAN data path.

Hope you find it useful.

Monday, November 23, 2020

Why is HTTPS faster than HTTP?

Recently, I was planning, preparing, and executing a network performance test plan, including TCP, UDP, HTTP, and HTTPS throughput benchmarks. The intention of the test plan was a network throughput comparison between two particular NICs:

  • Intel X710
  • QLogic FastLinQ QL41xxx

There was a reason for such an exercise (reproduction of specific NIC driver behavior) and I will probably write another blog post about it, but today I would like to raise another topic. During the analysis of the test results, I observed very interesting HTTPS throughput results in comparison to HTTP throughput. These results were observed on both types of NICs; therefore, it should not be a benefit of specific NIC hardware or a driver.

Here is the Test Lab Environment:

  • 2x ESXi hosts
    • Server Platform: HPE ProLiant DL560 Gen10
    • CPU: Intel Cascade Lake based Xeon
    • BIOS: U34 | Date (ISO-8601): 2020-04-08
    • NIC1: Intel X710, driver i40en version: 1.9.5, firmware 10.51.5
    • NIC2: QLogic QL41xxx, driver qedentv version: 3.11.16.0, firmware mfw 8.52.9.0 storm 
    • OS/Hypervisor: VMware ESXi 6.7.0 build-16075168 (6.7 U3)
  • 1x Physical Switch
    • 10Gb switch ports << a network bottleneck on purpose, because the customer is using 10Gb switch ports as well

Below are the observed interesting HTTP and HTTPS results.

HTTP


HTTPS


OBSERVATION, EXPLANATION, AND CONCLUSION

We have observed

  • HTTP throughput between 5 and 6 Gbps
  • HTTPS throughput between 8 and 9 Gbps

which means 50% higher throughput of HTTPS over HTTP. Normally, we would expect HTTP transfers to be faster than HTTPS, as HTTPS requires encryption, which should end up in some CPU overhead. The encryption overhead is questionable, but nobody would expect HTTPS to be significantly faster than HTTP, right? That's the reason I was asking myself,

why did HTTPS outperform HTTP in the HPE lab with the latest Intel CPUs?

Here is my process of troubleshooting the “issue” or, better said, performing the root cause analysis.

Conclusion

  • In my home lab, I have old Intel CPU models (Intel Xeon CPU E5-2620 0 @ 2.00GHz); that's the reason HTTP and HTTPS throughputs are identical there.
  • In the HPE test lab, there are the latest Intel CPU models; therefore, HTTPS can be offloaded, and client/server communication can leverage asynchronous advantages for web servers using Intel® QuickAssist Technology, introduced in the Intel Xeon E5-2600 v3 product family.
  • It is worth mentioning that it is not only about CPU hardware acceleration but also about software, which must be written in a form that hardware acceleration can leverage for a positive impact on performance. This is the case with OpenSSL 1.1.0 and NGINX 1.10, which boost HTTPS server efficiency.

Lesson learned

When you are virtualizing network functions, it is worth considering the latest CPUs, as they can have a significant impact on overall system performance and throughput. It does not matter whether such network function virtualization is done by VMware NSX or by other virtualization or containerization platforms.

Investigation continues

To be honest, I do not know if I really fully understand the root cause of this behavior. I still wonder why HTTPS is 50% faster than HTTP, and whether CPU offloading is the only factor in such a performance gain.

I'll try to run the test plan on other hardware platforms, compare results, and do some further research to understand it much more deeply. Unfortunately, I do not have direct access to the latest x86 servers of other vendors, so it can take a while. If you have access to some modern x86 hardware and want to run my test plan by yourself, you can download the test plan document from here. If you invest some time into the testing, please share your results in the comments below this article or simply send me an e-mail.
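
If you just want a quick sanity check before running the full test plan, here is a minimal sketch comparing HTTP and HTTPS download throughput with curl; the URLs are placeholders and should point to the same large file on your web server.

for url in http://server.example.com/1g.bin https://server.example.com/1g.bin; do
  bps=$(curl -s -o /dev/null -w '%{speed_download}' "$url")     # bytes per second
  echo "$url: $(awk -v b="$bps" 'BEGIN {printf "%.2f", b*8/1e9}') Gbps"
done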

Hope this blog post is informative, and as always, any comment or idea is very welcome.