Proper VM Provisioning for Efficiency and Savings
A virtual machine (VM) is analogous to a plot of farmland with its own microclimate and resource requirements that differ from those of adjacent plots. A farmer knows (perhaps contrary to the landlord's notion) that supplying more resources than the crop needs will not push the yield beyond the plot's capacity. The farmer's efficiency therefore lies in allocating exactly the resources the crop requires.
Though virtualization creates the illusion of 'many' systems atop a physical infrastructure, it remains bound by a law of conservation: nothing can be created out of nothing, because clouds are ultimately large server farms operating together to host virtual servers. Even if a virtual enclosure could ever house an artificially intelligent 'genie' of the kind philosopher Nick Bostrom warns about, it would still need resources allocated to it. The vCPUs allocated to a VM are queued and scheduled, waiting for a physical CPU before they can process the VM's instructions and data.
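The queuing behavior can be illustrated with a toy model (not any real hypervisor's scheduler): vCPUs take turns on a limited pool of physical cores, and time spent runnable-but-waiting is the "ready time" discussed later in this article.

```python
from collections import deque

def simulate_ready_time(num_vcpus, num_cores, work_slices=10):
    """Toy round-robin model: each vCPU needs `work_slices` time slices on a
    physical core. Slices where a vCPU is runnable but no core is free count
    as ready time. Returns the average ready slices per vCPU."""
    remaining = [work_slices] * num_vcpus
    ready = [0] * num_vcpus
    queue = deque(range(num_vcpus))
    while queue:
        # As many vCPUs run as there are physical cores
        running = [queue.popleft() for _ in range(min(num_cores, len(queue)))]
        for v in queue:              # everyone still queued waits this slice
            ready[v] += 1
        for v in running:
            remaining[v] -= 1
            if remaining[v] > 0:     # unfinished vCPUs rejoin the queue
                queue.append(v)
    return sum(ready) / num_vcpus
```

With as many cores as vCPUs, no vCPU ever waits; once vCPUs outnumber cores, ready time appears and grows with the over-commitment ratio.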
Over-provisioning Is Often a Poor Practice
IT administrators often over-allocate computing resources to VMs as a safe bet against shortages. This is usually a poor practice that can cost companies hefty amounts, especially when they run large numbers of VMs. For instance, while allocating an additional vCPU may seem free, software license terms are often bound to processor counts, which can trigger unforeseen license fees. Likewise, adding memory to a VM lowers the total number of VMs a server can support, which limits workload consolidation and balancing schemes; businesses end up buying more servers or storage than required and paying for their maintenance as well.
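A back-of-envelope sketch makes the two cost channels concrete. All numbers below are hypothetical; real per-core license pricing and host core counts vary widely.

```python
def overprovision_cost(num_vms, extra_vcpus_per_vm, license_cost_per_core,
                       host_cores, avg_vcpus_per_vm):
    """Estimate (with hypothetical pricing) the hidden cost of padding every
    VM with extra vCPUs: per-core license fees for unneeded cores, plus the
    drop in how many VMs fit on one host."""
    # Channel 1: licenses billed per allocated core/vCPU
    license_overhead = num_vms * extra_vcpus_per_vm * license_cost_per_core
    # Channel 2: consolidation impact -- VMs per host, right-sized vs. padded
    fit_right = host_cores // avg_vcpus_per_vm
    fit_padded = host_cores // (avg_vcpus_per_vm + extra_vcpus_per_vm)
    return license_overhead, fit_right, fit_padded

# 100 VMs, 2 spare vCPUs each, $50/core licensing, 64-core hosts, 4-vCPU VMs
overhead, fit_right, fit_padded = overprovision_cost(100, 2, 50, 64, 4)
# $10,000 in license overhead; 16 VMs per host shrinks to 10
```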
Many organizations treat the vendor's recommendation as the final verdict on resource allocation, assuming the vendor knows the application and its requirements best. But vendors merely provide the stage and the spotlight; the performance must be choreographed by the organization. Proper testing, combined with IT staff expertise, establishes the right resource levels before the workload is deployed.
Achieving Proper Resource Allocation
Where resource allocation is concerned, VMs depend on their allocated vCPUs, memory, and storage. From a CPU usage perspective, consistent usage above 90 percent (as opposed to temporary spikes) points to over-provisioning at the host level, with a probable cause being high ready time (10-20 percent) resulting from too many vCPUs, too many VMs, or a poorly configured CPU limit on the troubled VM. Remedies include reducing the number of vCPUs allocated to the VM, setting CPU reservations that give the VM's vCPUs more reliable access to the physical CPUs, and workload balancing (migrating the troubled VM to another server with more free resources), all of which reduce contention among the vCPUs running on the server.
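The diagnosis above can be sketched as a small rule-of-thumb function. The thresholds are the ones quoted in this article, not hard limits from any vendor.

```python
def diagnose_cpu(usage_pct, ready_pct):
    """Map the article's CPU thresholds to suggested remedies.
    usage_pct: sustained CPU usage; ready_pct: CPU ready time percentage."""
    actions = []
    if usage_pct > 90:                    # sustained, not a momentary spike
        actions.append("investigate: sustained CPU saturation")
    if 10 <= ready_pct <= 20:             # contention band from the article
        actions += ["reduce vCPU count",
                    "set a CPU reservation",
                    "migrate VM to a less loaded host"]
    return actions
```

A healthy VM (say 50 percent usage, 2 percent ready time) yields no actions; a saturated one with 15 percent ready time triggers all four suggestions.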
It is recommended that a VM have a bit more memory than it strictly requires, since a lack of free memory degrades performance through excessive disk swapping. That does not justify over-provisioning memory, which offers no notable benefit, and calculating the right memory limit is a challenging task. Memory reclamation techniques include smart paging (using disk space to supplement a shortfall of physical memory), ballooning (retrieving unused memory from some VMs and lending it to those in need), and removing manual memory limits so that the hypervisor's built-in memory optimization tools (as in vSphere) can take control.
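Ballooning can be pictured with a toy redistribution model. This is a simulation of the idea only; real balloon drivers run inside the guest OS and coordinate with the hypervisor.

```python
def balloon_reclaim(allocated, demand):
    """Toy ballooning model: reclaim memory that VMs hold but do not use,
    then redistribute it to VMs whose demand exceeds their allocation.
    allocated: {vm: MB currently assigned}; demand: {vm: MB actually needed}."""
    alloc = dict(allocated)
    pool = 0
    for name in alloc:                    # "inflate" balloons in idle VMs
        spare = alloc[name] - demand[name]
        if spare > 0:
            alloc[name] -= spare
            pool += spare
    for name in alloc:                    # "deflate" for memory-hungry VMs
        need = demand[name] - alloc[name]
        if need > 0:
            grant = min(need, pool)
            alloc[name] += grant
            pool -= grant
    return alloc

# An idle VM holding 4 GB while needing 1 GB can donate to a starved neighbor
result = balloon_reclaim({"a": 4096, "b": 2048}, {"a": 1024, "b": 4096})
```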
Although storage capacity seldom impacts VM performance directly, it is still good practice to review the Logical Unit Number (LUN) volumes assigned to VMs. Thin provisioning, which allocates disk space flexibly among multiple VMs based on the minimum each VM actually needs at a given time, is a good strategy for tackling storage allocation issues. Monitoring usage and disk performance factors such as latency keeps VMs resistant to storage performance problems.
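The essence of thin provisioning is allocate-on-first-write: the VM is promised a large virtual disk, but backing blocks come out of the shared datastore only when written. The sketch below is a simplified model (1 GB "blocks", a plain dict as the shared pool), not a real datastore implementation.

```python
class ThinDisk:
    """Toy thin-provisioned disk: a large virtual size is promised, but
    backing space is only drawn from the shared pool on first write."""
    def __init__(self, virtual_gb, pool):
        self.virtual_gb = virtual_gb
        self.pool = pool              # shared datastore: {"free_gb": ...}
        self.allocated = set()        # blocks that are actually backed

    def write(self, block):
        if block >= self.virtual_gb:
            raise ValueError("write beyond virtual disk size")
        if block not in self.allocated:
            if self.pool["free_gb"] < 1:
                # the classic thin-provisioning risk: over-committed datastore
                raise RuntimeError("datastore out of space")
            self.pool["free_gb"] -= 1
            self.allocated.add(block)
```

A 500 GB virtual disk on a 100 GB pool consumes nothing until written, and rewriting an already-backed block consumes no new space; this is why usage monitoring matters, since the sum of promised virtual sizes can exceed the physical pool.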
Remote monitoring and management (RMM) tools such as Kaseya VSA and SolarWinds Virtualization Manager can alert staff whenever VM resources need to change. Hypervisor vendors (a hypervisor being the software, firmware, or hardware that creates and runs VMs) offer similar tools, such as vRealize for VMware's vSphere, which gives insight into over-provisioned and under-provisioned systems. The hypervisor platform itself can include performance counters and monitoring features, such as vSphere's performance charts, host health dashboard, reporting, and alerting, along with tools such as VMware's esxtop command-line utility.
An approach similar to thin provisioning is a good strategy: start by allocating the minimum resources determined for the workload, then relieve inadequate or strained resources through small increments. Resource recovery and workload balancing tools, such as the memory ballooning mentioned earlier and vendor offerings like VMware's Distributed Resource Scheduler (DRS), prove handy here.
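The start-small-and-grow policy can be written down as a simple sizing rule. The thresholds below are illustrative placeholders drawn from the CPU discussion above, not vendor guidance.

```python
def right_size(current_vcpus, ready_pct, usage_pct, step=1, max_vcpus=16):
    """Incremental sizing policy: shrink when contention signals appear,
    grow by a small step only when demand is real, otherwise hold steady."""
    if ready_pct > 10 and current_vcpus > 1:
        return current_vcpus - step              # contention: too many vCPUs
    if usage_pct > 90 and ready_pct < 5:
        return min(current_vcpus + step, max_vcpus)   # genuine demand: grow
    return current_vcpus                         # no change needed
```

Run periodically against monitoring data, such a rule converges on a right-sized allocation instead of a padded one: a VM at 95 percent usage with low ready time gains a vCPU, while one showing 15 percent ready time gives one back.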
Knowing When to Over-commit
Over-commitment per se is not a disadvantage, thanks to virtualization's key ability, abstraction, which in effect enables server consolidation. “If you cannot afford to over-commit, resources are likely being used ineffectively. Over-commitment is safe; just use CPU Ready Times as your guide and take care to place VMs so that you can pull additional resources as needed,” writes IT architect Brian Kirsch. Over-commitment levels are fluid, depending on the type of application, the timing of workload demands, and consolidation goals.
CPU Ready Time is a vSphere metric that records the amount of time a virtual machine is ready to use the CPU but cannot be scheduled because all CPU resources are busy. A CPU Ready Time below three percent indicates that VMs have reliable access to CPU resources, while a measure above five percent reveals increased demand for the resource; a value above ten percent amounts to a red alert. Rising CPU Ready Time is a prompt to make use of shares and resource pools. The achievable VM-per-core average depends on the type of CPU (dual core, quad core, and so on) as well as on the type of workload.
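vSphere's performance charts report ready time as a summation in milliseconds per sampling interval, so it must be converted to a percentage before comparing against the thresholds above. The conversion below uses the commonly cited formula (ready milliseconds divided by the interval in milliseconds); 20 seconds is the default real-time chart interval, so adjust for other chart views.

```python
def ready_percent(ready_summation_ms, interval_s=20):
    """Convert a vSphere ready-time summation (ms) to a percentage.
    20 s is the default real-time chart sampling interval."""
    return ready_summation_ms / (interval_s * 1000) * 100

def classify(ready_pct):
    """Buckets from the article: <3% healthy, 3-5% watch,
    5-10% rising contention, >10% red alert."""
    if ready_pct < 3:
        return "healthy"
    if ready_pct <= 5:
        return "watch"
    if ready_pct <= 10:
        return "contention"
    return "red alert"
```

For example, a summation of 1,000 ms over a 20-second interval works out to 5 percent, right at the edge of the comfortable range.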
It is good practice to plan ahead for some over-commitment of CPU resources. Not all VMs are created equal; it is therefore necessary to balance CPU-intensive VMs against those with lower demand, so that production servers can pull resources from the latter during times of contention. Mixing VMs with different resource demands on the same server makes it possible to squeeze out additional resources in times of need.
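One naive way to get such a mix is to sort VMs by expected demand and deal them out across hosts, so no host ends up with only heavy VMs. This greedy sketch is a placement heuristic for illustration, not how DRS actually works.

```python
def mix_placement(vms, hosts):
    """Greedy sketch: sort VMs by expected CPU demand (highest first) and
    deal them out round-robin, so every host gets a mix of heavy and light
    VMs and retains headroom to borrow from the light ones under contention.
    vms: list of (name, expected_demand) pairs; hosts: list of host names."""
    ordered = sorted(vms, key=lambda v: v[1], reverse=True)
    placement = {h: [] for h in hosts}
    for i, (name, _demand) in enumerate(ordered):
        placement[hosts[i % len(hosts)]].append(name)
    return placement

# Two heavy and two light VMs across two hosts: each host gets one of each
plan = mix_placement([("a", 90), ("b", 80), ("c", 10), ("d", 5)], ["h1", "h2"])
```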