Welcome back to Runecast Academy. We hope that you've enjoyed the series so far, and maybe learned something new along the way. In this chapter, we take a look at the technology that underpins a lot of the enterprise features of VMware’s vSphere: clustering.
A few of the Clustering topics covered here include:
- Why Resource Pools and VMs should not be siblings in the hierarchy
- vMotion, High Availability (HA), Fault Tolerance (FT), and Distributed Resource Scheduler (DRS)
- A few words about vSAN
Runecast Academy Series 1 – Part 9. Clustering
So what do we mean by clustering? Clustering is the act of pooling the resources in your datacenter. So let’s say that you have ten servers, each with 2 CPUs and 12 cores per CPU, and 512 GB of RAM per server. You now have 240 CPU cores and 5 TB of RAM at your disposal! That’s a simplified way of looking at it, since no single VM can consume more compute than the resources available on a single host, but once you pool those resources, you can do some cool stuff.
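The arithmetic above is just summation, and a tiny sketch makes it concrete (the host specs are the ones from the example; this is back-of-the-envelope capacity planning, not a real vSphere API):

```python
# A back-of-the-envelope sketch of resource pooling: the cluster's capacity
# is simply the sum of its hosts' resources.

def cluster_capacity(hosts):
    """Sum CPU cores and RAM across a list of (cores, ram_gb) host specs."""
    total_cores = sum(cores for cores, _ in hosts)
    total_ram_gb = sum(ram for _, ram in hosts)
    return total_cores, total_ram_gb

# Ten hosts, each 2 CPUs x 12 cores and 512 GB RAM.
hosts = [(2 * 12, 512)] * 10
cores, ram_gb = cluster_capacity(hosts)
print(cores, ram_gb)  # 240 5120  (i.e. 240 cores and 5 TB of RAM)
```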
What does Clustering enable?
We’re glad you asked! As we hinted above, clustering brings the fun, enterprise tools to the table. Let’s take a look at what that means.
Resource Pools are a construct that allows you to carve up the resources you just pooled together into a cluster. You can use them to prioritize access to available resources for specific VMs. The easiest way to think of these is as a series of pies. The top-level pie is the cluster, and if you create no further resource pools then all VMs have equal access to the available resources in the cluster. If you hit resource contention (where the demand for one or more resources is higher than the available pool of resources), all VMs will be impacted equally. You can carve up CPU and RAM using this construct, and you can also nest Resource Pools inside one another.
Two things to bear in mind... Resource Pools and VMs should not be siblings in the hierarchy, as this can have unintended consequences. Also, Resource Pools are not folders and absolutely should not be used to organize VMs in the same way that you would use Folders. Use Folders for that, and don’t be this guy.
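The pie analogy maps naturally onto proportional shares. Here’s a deliberately simplified model (not the real vSphere scheduler) of how share values slice up a contended resource; the pool names are illustrative, and 8000 and 4000 roughly correspond to vSphere’s High and Normal share presets for resource pools:

```python
# A simplified proportional-share model: under contention, each Resource Pool
# gets a slice of the parent "pie" proportional to its share value.

def allocate(parent_capacity, pools):
    """pools: pool name -> shares. Returns pool name -> allocated capacity."""
    total_shares = sum(pools.values())
    return {name: parent_capacity * shares / total_shares
            for name, shares in pools.items()}

# 240 cores contended between a "Production" pool (High shares) and a
# "Test" pool (Normal shares): Production gets twice Test's slice.
alloc = allocate(240, {"Production": 8000, "Test": 4000})
print(alloc)  # {'Production': 160.0, 'Test': 80.0}
```

Note that shares only matter during contention; while resources are plentiful, every VM gets what it asks for.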
To this day we recall the first time we saw vMotion in action. For those of you who haven’t heard of vMotion, it’s essentially magic. It allows you to take a VM running on one host and throw it across a network to another host, without needing to power off that VM. Nowadays, this feels like ‘table stakes’ (minimum requirements for market entry), but back in the day, this was amazing. It meant that if we needed to take down a host for patching, or even replace the host entirely, we could do this easily. It also means that we can scale out the infrastructure supporting the environment if our workloads require it.
While historically the cluster was the limit of vMotion actions, they can now be performed between clusters, and even between datacenters.
HA stands for High Availability, but don’t confuse this with FT (Fault Tolerance). High Availability monitors the hosts in a cluster and, when a host fails, restarts the VMs that were running on it elsewhere within the cluster.
But what if your vCenter Server Appliance is one of those machines on the downed host? No problem!
Creating an HA cluster requires vCenter to deploy an agent (known as the FDM, or “fault domain manager”) to each host in the cluster. Once this completes, an election process takes place between the hosts themselves. The FDM agent (delivered as a VIB, or vSphere Installation Bundle) actively monitors the availability of the hosts in the cluster. If a host is deemed to have failed, the VMs that were running on it are restarted elsewhere in the cluster. The best thing about this? Because the hosts handle it among themselves, it works without issues even if vCenter Server was on the failed host.
So your VMs still experience a little downtime. Still, service is restarted quickly, rather than needing to wait for your monitoring tool to detect that a bunch of VMs and a host are no longer available and assign a ticket to someone to investigate.
Two things to bear in mind for HA... because the VM will not have been cleanly shut down, it’s possible that some extra things may need to happen, such as a filesystem check. Secondly, HA requires shared storage of some sort, since a surviving host must have immediate access to the files that make up the VM.
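To make the restart behaviour concrete, here’s a toy model of the HA idea, nothing like FDM’s real election and heartbeat protocol: the surviving hosts adopt a failed host’s VMs (round-robin here, purely for illustration; host and VM names are made up):

```python
# A toy sketch of HA failover: VMs registered on a failed host are
# re-registered (and would be restarted) on the surviving hosts.

def failover(cluster, failed_host):
    """cluster: host name -> list of VM names. Mutates cluster in place."""
    orphans = cluster.pop(failed_host, [])
    survivors = list(cluster)
    if not survivors:
        raise RuntimeError("no hosts left to restart VMs on")
    # Spread the orphaned VMs across survivors round-robin.
    for i, vm in enumerate(orphans):
        cluster[survivors[i % len(survivors)]].append(vm)
    return cluster

cluster = {"esx01": ["vcsa", "db01"], "esx02": ["web01"], "esx03": []}
failover(cluster, "esx01")
print(cluster)  # {'esx02': ['web01', 'vcsa'], 'esx03': ['db01']}
```

Note the shared-storage caveat from above is implicit here: re-registering a VM elsewhere only works because every surviving host can already see its files.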
As mentioned in the previous section, FT stands for Fault Tolerance. If vMotion seems like it’s magic, then FT is something akin to voodoo. It spins up a second, identical VM on another host, and this VM is kept in lockstep over a dedicated FT Logging network. If the host with the primary VM fails, the guest OS simply keeps on ticking (becoming the new primary), and a new secondary VM spins up on another host. As you can imagine, this can provide a tool to keep things ticking when the hardware is failing. It’s not without its caveats, though. As this requires a second VM to be running at the same time, it doubles resource consumption. It also requires a dedicated fault tolerance network, and there are limits as to the number of vCPUs, and also the number of FT VMs per host. Finally, several vSphere features are incompatible with Fault Tolerance, such as snapshots, encryption, and more.
DRS is the Distributed Resource Scheduler, which is used to balance workloads between hosts in a cluster. It leverages vMotion (and as such, a vMotion network is necessary) to migrate workloads from one host to another, in line with a set of algorithms baked into the product. When you enable DRS on a cluster, you specify an automation level:
- Manual: DRS makes recommendations as to where you should move a workload.
- Partially Automated: DRS automatically selects a host to run a VM on when you power it on, but then makes recommendations as to where you should move it.
- Fully Automated (the most commonly used): DRS balances workloads automatically, with no human interaction required.
Atop this, a migration threshold is available. Tweaking the migration threshold allows you to tune DRS so that it’s not vMotion-ing workloads all of the time (as the act of vMotion may cause performance degradation of some workloads, it’s best to keep this to a minimum).
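The threshold idea can be sketched with a greatly simplified stand-in for DRS’s balancing logic (real DRS weighs far more factors than raw load, and the host/VM names and load numbers here are invented): while the spread between the busiest and least-busy host exceeds the threshold, “vMotion” one VM from the hottest host to the coolest.

```python
# A toy DRS-like balancer: migrate VMs until the load spread between the
# hottest and coolest host is within the migration threshold.

def balance(hosts, threshold):
    """hosts: host name -> {vm name: load}. Returns list of migrations."""
    migrations = []
    while True:
        loads = {h: sum(vms.values()) for h, vms in hosts.items()}
        hot = max(loads, key=loads.get)
        cool = min(loads, key=loads.get)
        spread = loads[hot] - loads[cool]
        if spread <= threshold:
            return migrations
        vm = min(hosts[hot], key=hosts[hot].get)  # candidate: smallest VM
        load = hosts[hot][vm]
        if abs(spread - 2 * load) >= spread:      # move wouldn't narrow the gap
            return migrations
        hosts[cool][vm] = hosts[hot].pop(vm)      # the "vMotion"
        migrations.append((vm, hot, cool))

hosts = {"esx01": {"web01": 30, "db01": 20, "app01": 10},
         "esx02": {"dev01": 5}}
moves = balance(hosts, threshold=10)
print(moves)  # [('app01', 'esx01', 'esx02'), ('db01', 'esx01', 'esx02')]
```

The guard that refuses a migration that wouldn’t narrow the gap mirrors, in miniature, why the migration threshold exists: moving VMs has a cost, so you only want moves that meaningfully improve the balance.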
While vSAN isn’t included in the regular vSphere license, it’s a cool technology that also leverages clustering. For the uninitiated, vSAN takes advantage of local storage presented in each ESXi host and creates a storage pool. This storage is then presented back to all hosts in the cluster, even if some hosts aren’t presenting any storage themselves. vSAN is a critical component of VMware Cloud on AWS, VMware Cloud Foundation, and Dell EMC’s VxRail offerings and, as such, is a major player in the HCI (HyperConverged Infrastructure) space – VMware HCI owns 41.1% of the market, according to IDC. To learn more about vSAN, why not check out VMware’s Virtual Blocks blog?
So that just about brings this chapter to a close. Hopefully, this brings some clarity to what we mean when we say “cluster” and the cool features that are made available to you when you do it. You can pool all kinds of resources, and use Resource Pools to carve those resources up, giving priority where needed. You can also leverage DRS to balance workloads across hosts in a cluster to deal with the noisy neighbour effect. With all of these tools set up and working correctly, your SDDC can be somewhat self-healing: automatically recovering your workloads from host failures using HA and FT, and using DRS to balance workloads across available resources for best performance.
That’s all for this chapter. Join us next time, where we’ll be looking into Automation, CLIs, and Developer Interfaces! We hope that you’ve found this chapter of Runecast Academy helpful, and we’d welcome all feedback. Reach out to us on Twitter!