Check out this week’s update to the Virtualization comparison, where we’ve reflected all changes introduced with Windows Server 2019, thanks to our community expert Romain Serre from Tech-coffee – all kudos to him!
You can now select this independent evaluation of Windows Server 2019 from the online Virtualization Comparison.
We thought it would be great to provide some context with a few different views and who better to give us an opinionated view than Andrea Mauro from vinfrastructure.it 😉 Happy reading!
(check back here for an upcoming second view!)
Windows Server 2019 vs VMware vSphere 6.7
Comparing two different products is not always easy. One needs to find a homogeneous set of criteria to make the comparison, at least on the technical level. Sometimes the “numbers” are easy to compare, but not enough to describe the differences: for example, memory management is still really different (VMware implements several technologies, Hyper-V only Dynamic Memory, and only on some guest OSes), so what you can do is not exactly “the same”, even with the same “amount” of memory.
VMware vSphere 6.7 introduced several enhancements (especially on the security side) and slightly improved some scalability aspects. Most of the real improvements in scalability are on the management layer (the vCenter Server). The latest version is VMware vSphere 6.7 Update 2, released on April 4th 2019.
Microsoft Windows Server 2019 with Hyper-V is a new milestone after the Windows Server 2016 version, released just two years earlier!
Realistically, a big part of the feature gap was already closed by the 2016 version, and the new release does not change much in the Hyper-V role or in scalability. On the security side, it extends shielded VM support to Linux VMs.
Hardware requirements have become broadly similar, considering that VMware also requires hardware-assisted virtualization. Hyper-V now also makes the memory virtualization assist (SLAT) mandatory. Note that both drop compatibility with several older processors and hardware platforms, so be sure to plan your upgrade or deployment carefully.
Hypervisor disk footprints are completely different (ESXi can be installed on a 1 GB USB stick or SD card; Hyper-V cannot be installed on an SD card, although a Nano Server installation finally requires less than 1 GB of disk space), and the minimum memory requirements differ as well.
Scalability
As stated before, scalability has not changed much compared to the previous product versions, and most of the configuration maximums remain the same.
| System | Resource | Microsoft Hyper-V 2019 | vSphere 6.7 Free Hypervisor | vSphere 6.7 Essential Plus | vSphere 6.7 Enterprise Plus |
|---|---|---|---|---|---|
| Host | Logical processors | 512 | 768 | 768 | 768 |
| Host | Physical memory | 24 TB | 4 TB? | 4 TB? | 16 TB |
| Host | Virtual CPUs per host | 2048 | 4096 | 4096 | 4096 |
| Host | VMs per host | 1024 | 1024 | 1024 | 1024 |
| Host | Nested hypervisor | Yes (only some OSes) | Yes | Yes | Yes |
| VM | Virtual CPUs per VM | 240 (Generation 2) / 64 (Generation 1) | 8 | 128 | 128 |
| VM | Memory per VM | 12 TB (Generation 2) / 1 TB (Generation 1) | 6128 GB | 6128 GB | 6128 GB |
| VM | Maximum virtual disk | 64 TB (VHDX) / 2040 GB (VHD) | 62 TB | 62 TB | 62 TB |
| VM | Number of disks | 256 (SCSI) | 256 (SCSI) | 256 (SCSI) | 256 (SCSI) |
| Cluster | Maximum nodes | 64 | N/A | 64 | 64 |
| Cluster | Maximum VMs | 8000 | N/A | 8000 | 8000 |
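One row worth calling out is nested virtualization: on Hyper-V it must be enabled per VM while that VM is powered off. A minimal sketch using the built-in Hyper-V PowerShell module (the VM name “LabVM” is a hypothetical placeholder):

```powershell
# Expose the hardware virtualization extensions to a powered-off VM
# so it can run Hyper-V (or another hypervisor) itself.
Set-VMProcessor -VMName "LabVM" -ExposeVirtualizationExtensions $true

# Nested guests need MAC address spoofing (or NAT) on the virtual NIC
# to get network connectivity.
Get-VMNetworkAdapter -VMName "LabVM" | Set-VMNetworkAdapter -MacAddressSpoofing On
```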
As stated, memory management is really different and cannot be directly compared, because VMware ESXi has several unique optimization techniques.
But some features are disappearing or becoming less relevant. For example, VMware’s Transparent Page Sharing has limitations with newer guest OSes (note also that it uses page hashes to find candidate pages) and, starting with vSphere 6.0, inter-VM page sharing has been disabled by default. Still, both products offer some kind of dynamic memory management, plus the ability to hot-add memory to a running workload.
So is Microsoft Dynamic Memory better or worse compared to VMware memory management? For a supported guest OS it is, in my opinion, an interesting approach (and VMware could implement something similar, considering it already has the RAM hot-add feature), but of course having more physical memory is always the better option. In any case, for business-critical applications the most common configuration is to pre-allocate memory (rather than rely on sophisticated optimization techniques) and perhaps just use the hot-add feature.
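To make the Hyper-V side concrete, here is a minimal sketch of configuring Dynamic Memory with the Hyper-V PowerShell module; the VM name and the size values are purely illustrative, and the VM must be powered off:

```powershell
# Enable Dynamic Memory on a (powered-off) VM with a startup value
# plus a floor and ceiling the host can balloon between.
Set-VMMemory -VMName "AppVM" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 2GB `
    -MinimumBytes 1GB `
    -MaximumBytes 8GB `
    -Buffer 20  # percentage of extra memory Hyper-V tries to keep available
```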
For more information on scalability, see also the vendors’ official configuration maximums documentation.
VM features
The main difference is that in Hyper-V all features of the Standard edition are also available in the free edition (a big change from the previous version, where the free edition had the same features as the Datacenter edition), and there are some differences between the Standard and Datacenter editions (such as Nano Server support, Storage Replica, and so on). In VMware each edition has a different feature set, and the free edition remains limited on the backup capabilities (no VADP support).
For example, Live Migration and Storage Live Migration are pretty much the same on both platforms, with communication encryption (added in vSphere 6.5), multichannel support, and a dedicated network for VM migration across hosts. Formally, Hyper-V does not have a geo-vMotion equivalent, only replication across clouds.
Also, Hyper-V historically had some limitations in live-migrating VMs across hosts with different versions, but starting with the 2016 release this became possible at least between 2012 R2 and 2016 hosts.
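As an illustration of the Hyper-V side, a shared-nothing live migration can be driven entirely from PowerShell. A minimal sketch with hypothetical VM, host, and path names:

```powershell
# Enable live migration on the host and pick an authentication type
# (Kerberos, with constrained delegation configured in AD, avoids
# having to initiate the move from the source host).
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Move the running VM, including its storage, to another host.
Move-VM -Name "AppVM" -DestinationHost "HV-HOST2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\AppVM"
```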
| Feature | Microsoft Hyper-V 2019 | vSphere 6.7 Free Hypervisor | vSphere 6.7 Essential Plus | vSphere 6.7 Enterprise Plus |
|---|---|---|---|---|
| VM host live migration | Yes | No | Yes | Yes |
| VM storage live migration | Yes | No | No | Yes |
| Storage/Network QoS | Yes | No (just disk shares) | No (just disk shares at host level) | Yes |
| Hardware passthrough | Discrete Device Assignment | PCI VMDirectPath, USB redirection | PCI VMDirectPath, USB redirection | PCI VMDirectPath, USB redirection |
| Hot-add | Disks/vNIC/RAM | Disks/vNIC/USB | Disks/vNIC/USB | Disks/vNIC/USB/CPU/RAM |
| Hot-remove | Disks/vNIC/RAM | Disks/vNIC/USB | Disks/vNIC/USB | Disks/vNIC/USB/CPU |
| Disk resize | Hot-grow and shrink | Hot-grow | Hot-grow | Hot-grow |
| VM encryption | Yes | No | No? | Yes |
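On the Hyper-V side, the per-disk storage QoS from the table above can be set directly on a virtual hard disk. A minimal sketch (the VM name and the IOPS limits are illustrative):

```powershell
# Reserve and cap normalized IOPS on a specific virtual disk of a VM.
Set-VMHardDiskDrive -VMName "AppVM" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 1000
```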
VMware vSphere has added various options to the VM configuration, e.g. different controller types, including NVMe for disks and RDMA for networking (the latter only for Linux VMs). Version 6.7 also added support for the emerging persistent memory technology.
Guest Clustering
A guest cluster is a cluster between VMs, which usually means installing and configuring a Microsoft Failover Cluster across two or more VMs. Both VMware vSphere and Microsoft Hyper-V support guest clustering, but with different configurations, requirements, and limitations.
Microsoft Hyper-V uses a specific type of virtual disk (a shared VHDX) to implement shared storage across VM cluster nodes. And starting with Windows Server 2016 it supports interesting features on the shared disks (a minimal configuration sketch follows the list):
- Dynamic resize (resizing while VMs are running)
- Host level backup
- Hyper-V Replica support
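As a rough illustration, creating a shared virtual disk and attaching it to two guest-cluster nodes can look like the sketch below. The paths and VM names are hypothetical, and it uses the newer VHD Set (.vhds) format, the Windows Server 2016+ evolution of the shared VHDX that enables the features listed above:

```powershell
# Create a VHD Set on shared cluster storage.
New-VHD -Path "C:\ClusterStorage\Volume1\SQLData.vhds" -SizeBytes 100GB -Dynamic

# Attach it to both guest-cluster nodes with persistent reservations,
# so the guest failover cluster can arbitrate disk ownership.
Add-VMHardDiskDrive -VMName "SQLNode1" `
    -Path "C:\ClusterStorage\Volume1\SQLData.vhds" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQLNode2" `
    -Path "C:\ClusterStorage\Volume1\SQLData.vhds" -SupportPersistentReservations
```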
In VMware vSphere the required configuration can vary depending on the cluster type:
- For Microsoft Failover Cluster see KB 2147661
- For Oracle RAC see KB 2121181
Management
Management capabilities are also difficult to compare, because Hyper-V does not require System Center VMM to implement many of the cluster features (such as VM templates and better resource provisioning). VMware vCenter, on the other hand, is mandatory and is needed to implement several core features; the vCSA has finally been improved enough to become the “first choice” deployment form.
With Windows Server 2019 there is finally Windows Admin Center, formerly known as “Project Honolulu”, to support web-oriented management, and both products now have a powerful HTML5 UI tool to manage them.
Both can be controlled from the command line (PowerShell is the first choice for Microsoft, while PowerCLI is also gaining traction for VMware).
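For a feel of the two CLIs, here is a minimal sketch listing powered-on VMs on each platform; the vCenter server name is a placeholder, PowerCLI must be installed separately, and each snippet is assumed to run on its respective management machine:

```powershell
# Hyper-V: the module ships with the role/management tools.
Get-VM | Where-Object State -eq 'Running'

# vSphere: connect to vCenter first with VMware PowerCLI.
Connect-VIServer -Server "vcenter.example.local"
Get-VM | Where-Object PowerState -eq 'PoweredOn'
```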
Most of the changes are in the vSphere 6.7 vCSA, which delivers great performance improvements (all metrics measured at cluster scale limits, versus vSphere 6.5):
- 2X faster performance in vCenter operations per second
- 3X reduction in memory usage
- 3X faster DRS-related operations (e.g. power-on virtual machine)
As you realise, comparing vCSA and System Center VMM is not always straightforward; they have slightly different purposes (with VMM you can really build a private cloud). It would be more appropriate to compare VMM with “vCenter plus vRA”, but even in this case it would not be an “apples to apples” comparison. The most appropriate comparison would be between Azure Stack and the entire vCloud Suite (you can compare the private cloud capabilities of the two vendors here).
Looking at cluster resiliency, Microsoft has finally done major work on failover clustering in Windows Server 2019: it has become much more resilient and less dependent on the Active Directory structure, and it adds new capabilities to easily move an entire failover cluster from one domain to another. Windows Server 2019 also brings a new technology called “true two-node” clusters: a greatly simplified architecture for a Windows Server Failover Clustering topology.
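For the cross-domain move, Windows Server 2019 adds new PowerShell-driven steps. The following is a compressed sketch of the documented flow with placeholder names, not a complete runbook:

```powershell
# On a cluster node: remove the cluster's Active Directory objects,
# switching the cluster to certificate-based (workgroup-style) auth.
Remove-ClusterNameAccount -Cluster "Cluster1" -DeleteComputerObjects

# Then each node leaves the old domain and joins the new one, e.g.:
# Add-Computer -DomainName "new.domain.local" -Restart
# ...after which the cluster name account is recreated in the new domain.
```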
Cost
VMware ESXi remains licensed per socket (physical CPU); Microsoft, starting with Windows Server 2016, licenses Windows Server per core, and again there are feature differences between the Standard and Datacenter editions.
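As a worked example of the per-core model (based on Microsoft’s published licensing rules at the time of writing): a dual-socket host with 16 cores per socket must have all 32 cores licensed; since licenses are sold in 2-core packs with a minimum of 8 cores per processor and 16 per server, that host needs 16 two-core packs of Standard or Datacenter licenses.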
VMware has different editions (Essential, Essential Plus, Standard, Enterprise Plus and now also Platinum) with different feature sets, and the free edition is very limited.
For Hyper-V, a zero-license-cost option is Hyper-V Server (the free version of Hyper-V, but with the complete feature set). Of course, you still need the guest OS licenses (the same applies to ESXi) and the licenses for the rest of the physical infrastructure (for Hyper-V, a physical domain controller can be useful).
On the management side, VMware vCenter is mandatory (if you want cluster features) but no longer requires a Windows license (not even for VUM). For Hyper-V, SCVMM can be useful (like the rest of the System Center suite) but is not mandatory.
For a comparison between the different Windows and vSphere editions, see the vendors’ edition comparison pages.
HCI
Both vendors offer a hyper-converged infrastructure (HCI) solution integrated at the kernel level: vSAN for vSphere and Storage Spaces Direct (S2D) for Microsoft (the latter is a feature of Windows Server, so it can also be used outside hyper-converged deployments).
Both solutions can build a two-node cluster: vSAN requires an external witness host (a virtual ESXi appliance) for quorum, while S2D is simply based on Windows Failover Clustering, so an external witness (file share or cloud) is enough.
With Windows Server 2019, there are a lot of improvements in the S2D area: it now supports a maximum of 4 PB of storage per cluster. Performance also seems improved: during the last Ignite event, Microsoft demonstrated an 8-node S2D cluster reaching 13 million IOPS (although this kind of demo does not mean much without a detailed workload specification).
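To illustrate the two-node S2D setup with an external witness, a minimal sketch; the cluster, node, and storage-account names are placeholders:

```powershell
# Create a two-node failover cluster without claiming any storage yet.
New-Cluster -Name "S2D-Cluster" -Node "Node1","Node2" -NoStorage

# Use an Azure cloud witness for quorum (a file share witness works too).
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<key>"

# Enable Storage Spaces Direct to pool the nodes' local disks.
Enable-ClusterStorageSpacesDirect
```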
Please note that this article was first posted (with adjusted content) by Andrea Mauro at vinfrastructure.it
See our SDS & HCI comparison to compare specifically the HCI offerings of both vendors.