Addon
|
|
|
|
|
|
|
custom |
|
|
Unique Feature 1
|
Add-On not supported by this product
|
Add-On not supported by this product
|
Add-On not supported by this product
|
|
|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
|
- + mature and feature-rich offering
- + great ecosystem & support
- + skills prevalent
|
|
|
Cons
|
|
- - needs clear strategy for public cloud future
- - comprehensive capability but can become expensive
- - NSX leading SDN solution but skills scarce
|
|
|
|
|
Content |
|
|
|
|
THE VIRTUALIST
Content created by THE VIRTUALIST
|
Content created by Virtualizationmatrix; (Contributors: Jon Benedict, Yaniv Dary, Raissa Tona, Larry W. Bailey)
Content created by Virtualizationmatrix
Thanks to Jon Benedict, Yaniv Dary, Sean Cohen, and Larry W. Bailey for content contribution and review.
|
|
|
|
Assessment |
|
|
|
vSphere 6.0 with Operations Management - Acceleration Kit Enterprise Plus - Click Here For Overview
|
vSphere 6.7 Enterprise Plus - Click Here For Overview
vSphere is the collective term for VMware's virtualization platform; it includes the ESXi hypervisor as well as the vCenter management suite and associated components. vSphere is considered by many the industry's most mature and feature-rich virtualization platform, with its origins in the initial ESX releases around 2001/2002.
|
RHV 4.1 - Click Here For Details
Red Hat Virtualization is Red Hat's flagship server and workstation virtualization platform. It consists of the smart management product RHV-M (Red Hat Virtualization Manager), which can manage both RHV-H (Red Hat Virtualization Host), a purpose-built image for easy management, and RHEL-H (Red Hat Enterprise Linux Hypervisor). Both host types contain KVM-based virtualization capabilities. All listed features apply to both RHV-H and RHEL hosts unless stated otherwise.
RHV is a complete virtualization management solution for virtualized servers and workstations that aims to provide performance advantages, competitive pricing, and a trusted, stable environment. It is built to work best with mission-critical and high-performance Linux workloads on x86 and Power architectures. It can also run Windows guests and is Microsoft SVVP (Server Virtualization Validation Program) certified. RHV is co-engineered with Red Hat Enterprise Linux and inherits its characteristics of reliability, performance, security, and scalability.
RHV is derived from oVirt, the community open virtualization management project, and is a strategic virtualization alternative to proprietary virtualization platforms. Red Hat Virtualization is also integrated with OpenStack for a smooth transition into private and public clouds.
With Red Hat Virtualization, you can:
* Take advantage of existing people skills and investment.
* Decrease TCO and accelerate ROI.
* Automate time-consuming and complicated manual tasks.
* Standardize storage (tech preview), infrastructure, and networking services on OpenStack.
|
|
|
|
Release Dates:
vSphere 6.7 : April 17th 2018
vSphere 6.7 is the 6th generation of VMware's bare-metal enterprise virtualization software, evolving from ESX 1.x (2001/02) and 2.x (2003) through Virtual Infrastructure 3 (2006) to vSphere 4.x in May 2009. The small-footprint ESXi architecture became available in Dec 2007. vSphere 5 was announced in July 2011 (GA August 2011) and was the first vSphere release converged on ESXi only. vSphere 5.1 was released on Sep 10th 2012, vSphere 5.5 on Sep 22nd 2013, vSphere 6.0 on Feb 2nd 2015, vSphere 6.5 on November 15th 2016, and vSphere 6.7 on April 17th 2018.
|
RHV 4.1 released in April 2017
NEW
RHV 4.1 is the 11th major release of Red Hat's enterprise virtualization management software based on the KVM hypervisor.
In 2008 Red Hat acquired Qumranet, the technology startup that began the development of KVM. Red Hat's first release of RHV was v2.1 in 2009. The v3.0 release in 2012 was a major milestone, porting the RHV-M manager from .NET to Java (and fully open-sourcing it). RHV 3.1 removed all requirements for Windows-based infrastructure while still supporting Microsoft Active Directory and Windows guests. Since RHV 3.2, Red Hat has provided many feature enhancements, improvements in scale, enhanced reliability, and integration points to other Red Hat offerings based on cutting-edge KVM development.
Previous releases:
- RHV 4.0 - Aug 2016
- RHEV 3.6 - March 2016
- RHEV 3.5 - Feb 2015
- RHEV 3.4 - June 2014
- RHEV 3.3 - January 2014
- RHEV 3.2 - June 2013
- RHEV 3.1 - Dec 2012
- RHEV 3.0 - Jan 2012
- RHEV 2.2 - Aug 2010
- RHEV 2.1 - Nov 2009
|
|
|
|
Pricing |
|
|
|
n/a
Virtualization and management are priced together for this kit
Details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf
http://www.vmware.com/products/vsphere/pricing.html
|
Ent+ :
$3,305/socket + S&S 1Y: $693 (B) or $825 (Prod)
vSphere is licensed per physical CPU (socket, not core), without restrictions on the amount of physical cores or virtual RAM configured. There are also no license restrictions on the number of virtual machines that a (licensed) host can run.
S&S Basic or Production (1-year example) - Production (P): 24 hours/day, 7 days/week, 365 days/year; Basic (B): 12 hours/day, Monday-Friday. Subscription and Support is mandatory. Other packages (Acceleration Kits and Essentials Kits) are available. Details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf and here http://www.vmware.com/products/vsphere/pricing
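As a worked example of the per-socket math above, a rough first-year cost sketch using the list prices quoted here (illustrative only; real quotes vary by reseller and term):

```python
# List prices quoted above; real quotes vary by reseller and term.
ENT_PLUS_LICENSE = 3305   # $/socket, vSphere Enterprise Plus
SNS_BASIC = 693           # $/socket, 1-year Basic S&S
SNS_PRODUCTION = 825      # $/socket, 1-year Production S&S

def first_year_cost(hosts, sockets_per_host, production=True):
    """First-year list cost: per-socket license plus the mandatory
    1-year S&S. Cores, vRAM, and VM counts do not affect the price."""
    sockets = hosts * sockets_per_host
    sns = SNS_PRODUCTION if production else SNS_BASIC
    return sockets * (ENT_PLUS_LICENSE + sns)

# Example: four 2-socket hosts with Production support.
print(first_year_cost(4, 2))  # 8 * (3305 + 825) = 33040
```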
|
Included in hypervisor subscription
The RHV-M management component is included in the RHV subscription model (i.e. single part number for both, hypervisor and management).
|
|
|
n/a
Virtualization and management are priced together for this kit
Details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf
http://www.vmware.com/products/datacenter-virtualization/vsphere/pricing.html
|
$5,835(Std) + $1,224(B) or $1,458 (P)
vCenter Server
Centralized visibility, proactive management and extensibility for VMware vSphere from a single console
VMware vCenter Server provides a centralized platform for managing your VMware vSphere environments, so you can automate and deliver a virtual infrastructure with confidence.
http://www.vmware.com/products/vcenter-server.html
|
Yes, combined RHEL and RHV offering 26% savings
Red Hat offers Red Hat Enterprise Linux with Smart Virtualization (a combined solution of Red Hat Enterprise Linux and Red Hat Virtualization) at a 26% savings over buying each product separately. Red Hat Enterprise Linux with Smart Virtualization is the ideal platform for virtualized Linux workloads, enabling organizations to virtualize mission-critical applications while delivering unparalleled performance, scalability, and security features. See details here: http://red.ht/1Tzr9pq
|
|
Bundle/Kit Pricing
Details
|
AK OM Ent+ enables 6 CPUs:
$23,495 + 7,615 (Prod) or $6,395 (B)
VMware vSphere with Operations Management Enterprise Plus Acceleration Kit - convenience bundles that include six processor licenses for vSphere with Operations Management plus one vCenter Server Standard license (six processor licenses of vSphere Data Protection Advanced are included with the Enterprise and Enterprise Plus Acceleration Kits only).
Unlike the Essentials Kits that function as a single entity, vSphere with Operations Management Acceleration Kits decompose into their individual kit components after purchase. This allows customers to upgrade and renew S&S for each individual component. Acceleration Kits can be scaled by adding vSphere licenses.
S&S - Subscription and Support listed for 1 year (Production (P): 24 hours/day, 7 days/week, 365 days/year; Basic (B): 12 hours/day, Monday-Friday.)
Details here: http://www.vmware.com/files/pdf/vsphere_pricing.pdf
http://www.vmware.com/products/vsphere/pricing.html
|
yes
Kits:
- VMware vSphere Remote Office Branch Office Editions
- VMware vSphere Essentials Kits
- VMware vSphere and vSphere with Operations Management Acceleration Kits
|
RHV: Not included (RHEL: 4/unlimited)
Customers can buy Red Hat Enterprise Linux with Smart Virtualization, which includes both RHV and unlimited RHEL guests for use as the guest operating system. http://red.ht/1Tzr9pq
RHV stand-alone subscriptions include the RHV-H hypervisor and RHV-Manager; they do not include the rights to use RHEL as the guest operating system in the virtual machines being managed by RHV.
The customer would purchase this separately by buying a RHEL for Virtual Datacenter subscription.
Please note that RHEL hosts generate additional subscription costs that are not included with RHV (see https://www.redhat.com/apps/store/server/ for details). RHEL hosts are priced by socket pairs (can be stacked to the number of sockets in the server) and subscription levels (Standard/Premium) and are allowed to run up to 4 virtual guests in case they are not managed by RHV.
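The socket-pair stacking rule above can be sketched as follows (illustrative only; actual entitlement counting is defined by the Red Hat subscription terms):

```python
import math

def rhel_host_subscriptions(sockets):
    """RHEL host subscriptions are priced per socket pair and can be
    stacked, so a host needs ceil(sockets / 2) subscriptions."""
    return math.ceil(sockets / 2)

# A 2-socket host needs 1 subscription; a 4-socket host needs 2.
```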
|
|
Guest OS Licensing
Details
|
|
No
|
Yes (RHV-M)
NEW
RHV-M - the Red Hat Virtualization Manager with a web-driven UI - is the central management console. It is based entirely on open source software, with no dependencies on proprietary server infrastructure or web browsers. It is a centralized management system with a search-driven graphical interface supporting up to hundreds of hosts and thousands of virtual machines. This fully featured enterprise management system enables customers to centrally manage their entire virtual environment, including virtual datacenters, clusters, hosts, guest virtual servers and technical workstations, networking, and storage.
RHV-M is also localized in various languages including: French, German, Japanese, Simplified Chinese, Spanish and English.
(NEW) The Cockpit Web UI is also available for use as a hypervisor manager, providing deeper insight into resource usage. It also provides console access as well as a means to interact with hypervisor services for troubleshooting purposes.
|
|
|
VM Mobility and HA
|
|
|
|
|
|
|
VM Mobility |
|
|
Live Migration of VMs
Details
|
|
Yes (vMotion)
vMotion
- Cross-vCenter vMotion with mixed-version (new)
- Cross vSwitch vMotion (all versions)
- Cross vCenter vMotion (n/a for Standard)
- Long Distance vMotion (n/a for Standard)
- Cross Cloud vMotion (n/a for Standard)
- Encrypted vMotion (n/a for Standard)
|
Yes
Each cluster is configured with a minimal CPU type that all hosts in that cluster must support (you specify the CPU type in the RHV-M GUI when creating the cluster). Guests running on hosts within the cluster all run on this CPU type, ensuring that every guest can be live migrated to any host within the cluster. This cannot be changed after creation without significant disruption. All hosts in a cluster must run the same CPU type (Intel or AMD).
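The cluster CPU-type compatibility rule above can be illustrated with a small sketch (host names and feature flags are made up for illustration):

```python
def live_migration_targets(cluster_cpu_features, hosts):
    """Hosts whose CPUs expose every feature flag of the cluster's
    minimal CPU type - i.e. hosts any guest in the cluster can be
    live migrated to.

    cluster_cpu_features: set of feature flags of the cluster CPU type.
    hosts: dict mapping host name -> set of CPU feature flags.
    """
    return sorted(name for name, features in hosts.items()
                  if cluster_cpu_features <= features)

# A host missing a baseline feature is not a valid migration target.
baseline = {"sse4_2", "aes"}
fleet = {"kvm1": {"sse4_2", "aes", "avx2"}, "kvm2": {"sse4_2"}}
print(live_migration_targets(baseline, fleet))  # ['kvm1']
```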
|
|
Migration Compatibility
Details
|
|
Yes (EVC)
EVC can be enabled per-VM (new)
Enhanced vMotion Compatibility - enabled at the vCenter cluster level, it utilizes the Intel FlexMigration or AMD-V Extended Migration functionality available in most newer CPUs (but cannot migrate between Intel and AMD). Details here: http://kb.vmware.com/kb/1005764
|
Yes
|
|
|
|
Yes
Maintenance mode is a core feature to prepare a host to be shut down safely.
vSphere Quick Boot is a new innovation that restarts the ESXi hypervisor without rebooting the physical host. (new)
|
Yes - built-in (CPU/memory) and pluggable scheduler
A policy engine determines the specific host on which a virtual machine runs. The policy engine decides which server will host the next virtual machine based on whether load balancing criteria have been defined, and which policy is being used for that cluster. RHV-M will use live migration to move virtual machines around the cluster as required.
A scheduler handles virtual machine placement, allowing users to create new scheduling policies, and also write their own logic in Python and include it in a policy.
- The scheduler serves scheduling requests for running or migrating virtual machines according to a policy.
- The scheduling policy also includes load balancing functionality.
- Scheduling is performed by applying hard constraints and soft constraints to find the optimal host for a given request at a given point in time.
- The infrastructure that allows users to extend the scheduler is based on a service called ovirt-scheduler-proxy. The service's purpose is to let RHV admins extend the scheduling process with custom Python filters, weight functions, and load balancing modules.
- Every cluster has a scheduling policy. Administrators can create their own policies or use the built-in policies which were extended to support new capabilities such as shutting down servers for power saving policy.
The load balancing process runs once every minute for each cluster in a data center. You can disable automatic migration for individual VMs or pin them to specific hosts.
You can choose to set the policy to either even distribution or power saving, but NOT both.
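A minimal sketch of a custom hard-constraint filter in the spirit of ovirt-scheduler-proxy; the `do_filter` name, arguments, and host/VM shapes are assumptions for illustration, not the real external-scheduler contract:

```python
# Assumed, simplified host/VM shapes; the real external scheduler
# exchanges richer objects over the ovirt-scheduler-proxy service.
def do_filter(hosts, vm, reserve_mb=1024):
    """Hard-constraint filter: keep only hosts with enough free memory
    for the VM plus a safety reserve. Hosts surviving all filters are
    then ranked by the policy's weight functions."""
    needed = vm["memory_mb"] + reserve_mb
    return [h for h in hosts if h["free_memory_mb"] >= needed]
```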
|
|
Automated Live Migration
Details
|
|
Yes (DRS) - CPU, Mem
VM Distribution: Enforce an even distribution of VMs.
Memory Metric for Load Balancing: DRS uses Active memory + 25% as its primary metric.
CPU over-commitment: an option to enforce a maximum vCPU:pCPU ratio in the cluster.
Network-Aware DRS - the system looks at network bandwidth on the host when considering migration recommendations.
Storage I/O Control configuration is now performed using storage policies, and I/O limits are enforced using vSphere APIs for IO Filtering (VAIO).
With the Storage Policy Based Management (SPBM) framework, administrators can define policies with different I/O limits and then assign VMs to those policies.
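The vCPU:pCPU over-commitment check can be sketched as a simple admission test (the 4:1 default ratio below is illustrative, not a DRS default):

```python
def cpu_overcommit_ok(total_vcpus, physical_cores, max_ratio=4.0):
    """Admission check in the spirit of the DRS CPU over-commitment
    option: reject a cluster state whose vCPU:pCPU ratio exceeds the
    configured maximum (the 4.0 default here is illustrative)."""
    return total_vcpus / physical_cores <= max_ratio

# 64 vCPUs on 16 cores is exactly 4:1 and passes; 80 vCPUs does not.
```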
|
Yes
When power saving is enabled in a cluster, it distributes the load in a way that consolidates virtual machines on a subset of available hosts. This enables surplus hosts that are not in use to be powered down, saving power. You can set the thresholds in the RHV-M GUI to specify the minimum service level a host is permitted to have.
You must also specify the time interval in minutes that a host is permitted to run below the minimum service level before remaining virtual machines are migrated to other hosts in the cluster - as long as the maximum service level set also permits this.
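The power-saving consolidation logic described above can be sketched as a pure function (the data shapes and per-minute sampling are assumptions for illustration):

```python
def power_down_candidates(host_load_pct, min_service_level_pct, interval_minutes):
    """Return hosts whose per-minute load samples stayed below the
    minimum service level for the whole configured interval - these
    are candidates to be evacuated and powered down.

    host_load_pct: dict mapping host name -> list of per-minute
    load percentages (an illustrative shape, not RHV's internal one).
    """
    candidates = []
    for host, samples in host_load_pct.items():
        recent = samples[-interval_minutes:]
        if len(recent) >= interval_minutes and all(
                s < min_service_level_pct for s in recent):
            candidates.append(host)
    return candidates
```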
|
|
|
|
Yes (DPM)
Distributed Power Management - enables consolidation of virtual machines onto fewer hosts and powering down of unused capacity, reducing power and cooling costs. This can be fully automated: servers are powered off when not needed and powered back on when workload increases.
|
Yes
Storage Live Migration is supported and allows migration of virtual machine disks to different storage devices while the virtual machine is running. There is also an option to move an entire storage domain between datacenters or even between setups.
|
|
Storage Migration
Details
|
|
Yes (Live Storage vMotion)
Storage vMotion allows live migration of virtual machine disk files (e.g. across heterogeneous storage arrays) without VM downtime. Storage DRS handles initial VMDK placement and gives migration recommendations to avoid I/O and space-utilization bottlenecks on the datastores in the cluster. The migration is performed using Storage vMotion.
|
200 hosts/cluster
That is the supported maximum number of hosts per RHV datacenter and also per cluster (the theoretical KVM limit is higher).
|
|
|
|
HA/DR |
|
|
|
|
Max 64 nodes / 8000 VMs per cluster
Up to 64 nodes can be in a DRS/HA cluster, with a maximum of 8000 VMs per cluster
|
Yes
High availability is an integrated feature of RHV and allows for virtual machines to be restarted in case of a host failure.
HA has to be enabled at the virtual machine level. You can specify priority levels for VMs (e.g. if resources are constrained, only high-priority VMs are restarted). Hosts that run highly available VMs have to be configured for power management (to ensure accurate fencing in case of host failure).
Fencing Details: When a host becomes non-responsive it potentially retains the lock on the virtual disk images for virtual machines it is running. Attempting to start a virtual machine on a second host could cause data corruption. Fencing allows RHV-M to safely release the lock (using a fence agent that communicates with the power management card of the host) to confirm that a problem host has truly been rebooted.
RHV-M gives a non-responsive host a grace period of 30 seconds before any action is taken in order to allow the host to recover from any temporary errors.
Note: the RHV-M manager needs to be running for HA to function (unlike e.g. VMware HA or Hyper-V HA, which do not rely on vCenter / VMM for failover capability); also, HA cannot be enabled at the cluster level.
|
|
Integrated HA (Restart vm)
Details
|
|
Yes (VMware HA)
vSphere 6.5 Proactive HA detects hardware conditions and evacuates a host before failure (plugins provided by OEM vendors)
Quarantine mode - a host is placed in quarantine mode if it is considered to be in a degraded state
Simplified vSphere HA Admission Control - 'Percentage of Cluster Resources' admission control policy
vSphere 6.0
- Support for Virtual Volumes – With Virtual Volumes a new type of storage entity is introduced in vSphere 6.0.
- VM Component Protection – This allows HA to respond to a scenario where the connection to the virtual machine’s datastore is impacted temporarily or permanently.
“Response for Datastore with All Paths Down”
“Response for Datastore with Permanent Device Loss”
- Increased scale – Cluster limit has grown from 32 to 64 hosts and to a max of 8000 VMs per cluster
- Registration of “HA Disabled” VMs on hosts after failure.
VMware HA restarts virtual machines according to defined restart priorities and monitors capacity needs required for defined level of failover.
vSphere HA in vSphere 5.5 has been enhanced to conform with virtual machine-virtual machine anti-affinity rules. Application availability is maintained by controlling the placement of virtual machines recovered by vSphere HA without migration. This capability is configured as an advanced option in vSphere 5.5.
vSphere 5.5 also improved the support for virtual Microsoft Failover Clustering (cluster nodes in virtual machines) - note that this functionality is independent of VMware HA and requires the appropriate Microsoft OS license and configuration of a Microsoft Failover Cluster. Microsoft clusters running on vSphere 5.5 now support Microsoft Windows Server 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
While not obvious to the user - with vSphere 5, HA has been re-written from ground-up, greatly reducing configuration and failover time. It now uses a one master - all other slaves concept. HA now also uses storage path monitoring to determine host health and state (e.g. useful for stretched cluster configurations).
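The anti-affinity-aware HA restart behavior mentioned above can be illustrated with a small placement checker (a sketch of the rule, not VMware's algorithm):

```python
def restart_placement_ok(placement, anti_affinity_rules):
    """Return True if a proposed HA restart placement respects every
    VM-VM anti-affinity rule (no two VMs from one rule share a host).

    placement: dict mapping VM name -> host name.
    anti_affinity_rules: iterable of sets of VM names that must be
    kept on distinct hosts.
    """
    for rule in anti_affinity_rules:
        hosts_used = [placement[vm] for vm in rule if vm in placement]
        if len(hosts_used) != len(set(hosts_used)):
            return False
    return True

# Two VMs under one anti-affinity rule must land on different hosts.
```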
|
Yes (HA, WatchDog)
RHV supports a watchdog device for Linux guests that restarts virtual machines in case of OS failure. High availability (in addition to monitoring physical hosts) also monitors all virtual machines: if a virtual machine's operating system crashes, a signal is sent to automatically restart the virtual machine, though it may come back on a different host.
|
|
Automatic VM Reset
Details
|
|
Yes (VMware HA)
vSphere 6.5 - Orchestrated Restart - VMware enforces VM-to-VM dependency chains for multi-tier applications installed across multiple VMs.
Uses heartbeat monitoring to reset unresponsive virtual machines.
|
No
There is no live lock-step mirroring support in RHV, although the theoretical capability is available in KVM. Red Hat tends to point out that the limitations of this technology (inability to take snapshots or perform a live storage migration, limited guest vCPU support, high bandwidth/processing requirements) can make it inappropriate for enterprise implementation.
|
|
VM Lockstep Protection
Details
|
|
Yes (Fault Tolerance) - 8 vCPUs, 128 GB RAM
NEW
vSphere 6.7 now supports 8 vCPUs and 128 GB RAM per VM.
vSphere 6.5 FT gained deeper integration with DRS and enhanced networking (lower network latency).
Fault Tolerance brings continuous availability protection for VMs; in vSphere 6.5 the limits were 4 vCPUs in Enterprise Plus and 2 vCPUs in Standard.
It uses a shadow secondary virtual machine running in lock-step with the primary virtual machine to provide zero-downtime protection in case of host failure.
|
No (native);
Yes (with Vendor Add-On: Red Hat Cluster Suite)
There is no integrated application-level monitoring or restart of services/VMs in case of application failures; RHV supports watchdogs and HA.
Application-level HA is possible using the Red Hat Cluster Suite, a fee-based add-on.
|
|
Application/Service HA
Details
|
|
App HA
vSphere 6.5 Proactive HA detects hardware conditions and evacuates a host before failure (plugins provided by OEM vendors)
Quarantine mode - a host is placed in quarantine mode if it is considered to be in a degraded state
Simplified vSphere HA Admission Control - 'Percentage of Cluster Resources' admission control policy
VMware HA restarts virtual machines according to defined restart priorities and monitors capacity needs required for defined level of failover.
|
No - See Details
There is no natively provided site failover capability in RHV, but Red Hat does provide the tools needed to build a disaster recovery solution.
This is possible via 3rd-party partner integrations (such as Veritas, Acronis, SEP, Commvault, and vProtect via IBM Spectrum Protect).
|
|
Replication / Site Failover
Details
|
|
Yes (vSphere Replication)
DR Orchestration (Vendor Add-On: VMware SRM)
vSphere Replication is VMware's proprietary hypervisor-based replication engine designed to protect running virtual machines from partial or complete site failures by replicating their VMDK disk files.
This version extends support for the 5-minute RPO setting to the following datastores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVol, and vSAN 6.5.
|
Yes
You are able to update both RHV-H and RHEL-H via the management UI. The manager raises events for updates pending on the hosts and on the manager machine.
Updates can also be managed via Red Hat Satellite: http://red.ht/1Oxs20B
|
|
|
Management
|
|
|
|
|
|
|
General |
|
|
Central Management
Details
|
|
Yes (vCenter Server Standard)
vCenter Server Standard
Centralized visibility, proactive management and extensibility for VMware vSphere from a single console
VMware vCenter Server provides a centralized platform for managing your VMware vSphere environments, so you can automate and deliver a virtual infrastructure with confidence.
Available as a Windows install or as the appliance (VCSA) with an embedded or separate PSC.
vCenter with embedded platform services controller now supports enhanced linked mode and vCenter Server Hybrid Linked Mode (new)
- Simplified architecture (integrated vCenter Server Appliance, Update Manager included all-in-one), no Windows/SQL licensing
- Native High Availability (HA) for the vCenter appliance is built in – automatic failover (the Web Client may require re-login)
- Native backup and restore of the vCenter appliance (now supports a scheduler) – simplified backup and restore with a new native file-based solution. Restore the vCenter Server configuration to a fresh appliance and stream backups to external storage using HTTP, FTP, or SCP protocols.
- New HTML5-based vSphere Client that is both responsive and easy to use (based on the new Clarity UI)
|
Third-party plug-in framework
NEW
RHV focuses on managing the virtual infrastructure and can also manage Red Hat Gluster Storage nodes.
Also RHV-M integrates with 3rd party applications including:
- BMC connector for RHV-M REST API to collect data for managing RHV boxes without having to install an agent.
- HP OneView for Red Hat Virtualization (OVRHV), a UI plug-in that allows you to seamlessly manage your HP ProLiant infrastructure from within RHV Manager and provides actionable, valuable insight into the underlying HP hardware (an HP Insight Control plug-in is also available).
- Veritas Storage Foundation, which delivers storage Quality of Service (QoS) at the application level and maximizes storage efficiency, availability, and performance across operating systems. This includes Veritas Cluster Server, which provides automated disaster recovery functionality to keep applications up and running; it enables application-specific failover and significantly reduces recovery time by eliminating the need to restart applications after a failure.
- Tenable Network Security's Nessus Audit for RHV-M, which queries the RHV API and reports that information within a Nessus report.
- Ansible RHV module that allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the RHV platform.
- (NEW) External Partner Network API - allows third party SDN providers to integrate with RHV.
|
|
Virtual and Physical
Details
|
|
Limited (plug-ins)
vCenter and associated components focus on management of virtual infrastructure - physical (non-virtualized) infrastructure will typically require separate management.
However, one can argue that there are increasingly aspects of physical management (bare-metal host deployment, vCenter Operations capabilities, vCenter monitoring of physical hosts, etc.), although the core focuses on the virtual aspects.
Additionally, VMware encourages 3rd party vendors to provide management plug-ins for the vCenter Client (classic or web) that can manage peripheral components of the environment (3rd party storage, servers etc.).
|
Yes
RHV offers the choice to integrate with many LDAP servers (Microsoft Active Directory, Red Hat Directory Server, Red Hat Enterprise IPA, OpenLDAP, iPlanet Directory Server and more) with support for simple or Kerberos based authentication, centrally managed identity, single sign-on services, high availability directory services.
RHV also provides a complete solution for user/group management using a PostgreSQL database as a backend, which can be used in RHV the same way as users/groups from LDAP.
RHV provides a range of pre-configured or default roles, from the Superuser or system administration of the platform, to an end user with permissions to access a single virtual machine only. Additional roles can be added and customized to suit the end user environment.
|
|
RBAC / AD-Integration
Details
|
|
Yes (vCenter and ESXi hosts)
NEW
Platform Services Controller (PSC) deals with identity management for administrators and applications that interact with the vSphere platform.
vCenter with an embedded Platform Services Controller now supports enhanced linked mode (new)
|
No (native)
Yes (with Vendor Add-On: CloudForms)
No - RHV exclusively manages Red Hat based environments.
With Red Hat CloudForms, users can manage multiple hypervisor vendors and reduce the training costs of switching over to RHV. Details here: http://red.ht/I8JG3E (additional cost, not included in the RHV subscription)
|
|
Cross-Vendor Mgmt
Details
|
|
vRealize Automation (Vendor Add-On)
VMware vRealize Automation automates the delivery of personalized infrastructure, applications and custom IT services.
This cloud automation software lets you deploy across a multi-vendor hybrid cloud infrastructure, giving you both flexibility and investment protection for current and future technology choices.
http://www.vmware.com/mena/products/vrealize-automation.html
|
Yes (RHV-M, Power User Portal)
Yes, RHV-M is Java based and is accessed through a web browser GUI, a RESTful API with session support, a Linux CLI, a Python SDK, and a Java SDK.
RHV also offers a Power User Portal, a web-based access portal for users (Red Hat positions it as an entry-level Infrastructure-as-a-Service (IaaS) user portal). It allows users to: create, edit, and remove virtual machines; manage virtual disks and network interfaces; assign user permissions to virtual machines; create and use templates to rapidly deploy virtual machines; monitor resource usage and high-severity events; and create and use snapshots to restore virtual machines to a previous state.
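A small illustration of the REST search pattern exposed by RHV-M; the /ovirt-engine/api path prefix follows the RHV 4.x convention (older releases used /api), and the manager host name below is hypothetical:

```python
from urllib.parse import quote

def vm_search_url(base_url, query):
    """Build a RHV-M REST API URL that searches VMs with the given
    search-engine query (e.g. "status = up"). The /ovirt-engine/api
    prefix follows the RHV 4.x convention; older releases used /api."""
    return "{}/ovirt-engine/api/vms?search={}".format(
        base_url.rstrip("/"), quote(query))

# Hypothetical manager host:
print(vm_search_url("https://rhvm.example.com", "status = up"))
```

The same query syntax drives the search bar in the RHV-M GUI, so queries can be prototyped interactively before scripting them.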
|
|
Browser Based Mgmt
Details
|
|
Yes (vSphere Web Client, HTML5 Web Client)
vSphere Client - new version of the HTML5-based vSphere Client that will run alongside the vSphere Web Client. The vSphere Client is built right into vCenter Server (both Windows and Appliance) and is enabled by default.
vSphere Web Client - improvements will help with the overall user experience. (Home screen reorganized, Performance improvements, Live refresh for power states, tasks.)
|
Yes (extended functionality with CloudForms - Fee-Based Add On)
RHV has a comprehensive data warehouse with a stable API.
It provides a system monitoring dashboard in the UI to monitor the system at different levels.
Red Hat Virtualization includes deeper integration with Red Hat Satellite that allows querying of errata information for the RHV Manager's operating system and provides a complete view into critical updates for infrastructure lifecycle management. The release also includes the ability to modify the health status of Host, Storage Domain, or Virtual Machine objects based on external factors such as hardware failure or OS monitoring alerts. Users can quickly perform an impact analysis of their environment in the event an object beyond RHV's normal visibility is at risk of failure.
CloudForms offers cloud and virtualization operations management advance capabilities.
Features of the cloud and virtualization operations management capabilities:
- delivering IaaS with self-service
- service catalogs, automated provisioning and life cycle management
- monitoring and optimization of infrastructure resources and workloads
- metering, resource quotas, and chargeback
- proactive management, advanced decision support, and intelligent automation through predictive analytics
- provides visibility and reporting for governance, compliance, and management insight
- Enforces enterprise policies in real-time, ensuring cloud security, reliability, and availability
- IT process, task, and event automation.
Note that CloudForms is an additional Fee-Based offering not covered by the RHV subscription.
Details here: www.redhat.com/en/technologies/cloud-computing/cloudforms
|
|
Adv. Operation Management
Details
|
|
Limited (native) - vCenter Operations
Full (with Vendor Add-On: vRealize Operations)
VMware vRealize Operations. Optimize resource usage through reclamation and right sizing, cross-cluster workload placement and improved planning and forecasting. Enforce IT and configuration standards for an optimal infrastructure.
http://www.vmware.com/products/vrealize-operations.html
|
Yes
NEW
Yes, live migration is fully supported, with unlimited concurrent migrations (limited only by available resources on other hosts and network speed). RHV 4.0 added the ability to use compression and auto-convergence to complete migration of heavier workloads faster.
(NEW) Advanced live migration policies - the virtualization administrator can tune the migration policy, greatly reducing both the live migration time and the actual cutover time.
|
|
|
|
Updates and Backup |
|
|
Hypervisor Upgrades
Details
|
|
Yes (Update Manager)
VMware Update Manager (VUM) is now part of the vCenter Server Appliance.
VUM uses its own PostgreSQL database and can benefit from VCSA native HA and backup.
|
Yes (Red Hat Network)
Updates to the virtual machines are typically performed as in the physical environment. For Red Hat virtual machines updates can be downloaded from the Red Hat Network. For Windows virtual machines you would apply the relevant MS update mechanisms. There is no specific integrated function in RHV-M to update virtual machines or templates.
Centralized patching mechanism for Red Hat machines is possible via Satellite. RHV also shows errata information on updates for RHEL hosts and guests OS.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
|
|
=AU39
|
Limited (Update Manager)
With vSphere 5, Update Manager discontinued patching of guest operating systems. It does, however, provide upgrades of VMware Tools, upgrades of the virtual machine hardware for virtual machines, and upgrades of virtual appliances.
|
Yes; Including RAM
Live VM snapshot with or without memory and live removal of snapshots is supported.
|
|
|
=AU40
|
Yes
VMware snapshots can be taken and committed online (including a snapshot of the virtual machine memory).
|
Yes
There is an API set for third-party tools that offer backup, restore, and replication.
|
|
Backup Integration API
Details
|
=AU41
|
Yes (vStorage API Data Protection)
vStorage API for Data Protection: Enables integration of 3rd party backup products for centralized backup.
|
No
There is no natively provided backup capability in RHV. Red Hat does provide the tools needed to build a backup solution. This is possible via 3rd-party partner integrations (such as Veritas, Acronis, SEP, Commvault and vProtect via IBM Spectrum Protect).
|
|
Integrated Backup
Details
|
=AU42
|
Yes (vSphere Data Protection) - Replication of backup data, granular backup and scheduling
vSphere® Data Protection is a backup and recovery solution designed for vSphere environments. Powered by EMC Avamar, it provides agent-less, image-level virtual machine backups to disk. It also provides application-aware protection for business-critical Microsoft applications (Exchange, SQL Server, SharePoint) along with WAN-efficient, encrypted backup data replication. vSphere Data Protection is fully integrated with vCenter Server and vSphere Web Client.
|
No (native);
Yes (with Vendor Add-On: Satellite 6)
RHV-H or RHEL hosts can be installed using traditional methods, either interactively (from ISO, USB flash media) or automated (PXE). There is, however, no integrated capability to deploy RHV centrally to bare-metal hosts using the RHV management.
This is possible with Satellite 6, which uses Foreman. RHV allows bare-metal provisioning via Satellite in a single UI.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
|
|
|
Deployment |
|
|
Automated Host Deployments
Details
|
=AU43
|
Yes (Auto Deploy)
Auto Deploy is now part of the vCenter Server Appliance.
Integration with VCSA 6.5 can benefit from native HA and backup.
Configurable through the GUI interface of the web client.
Can manage 300+ hosts.
Post-boot scripts allow for additional automation.
|
Yes
RHV allows creation and management of templates. RHV also supports integration with a Glance image provider used in an OpenStack environment.
|
|
|
=AU44
|
Yes (Content Library)
Content Library – Provides simple and effective management for VM templates, vApps, ISO images and scripts for vSphere Admins – collectively called “content” – that can be synchronized across sites and vCenter Servers.
|
No (native);
Yes (with Vendor Add-On: CloudForms, Ansible)
There is no integrated functionality in RHV that allows you to deploy a multi-VM construct from a single template.
CloudForms supports tiered VM templates and an ordering portal to deploy them.
This is a Fee-based Add-On; Details - https://www.redhat.com/en/technologies/cloud-computing/cloudforms
Ansible dynamic inventory allows deployment of n-tier applications using Ansible playbooks, treating VMs as inventory items for playbooks focused on application deployment.
|
|
Tiered VM Templates
Details
|
=AU45
|
Yes (vApp/OVF)
Open Virtualization Format (OVF)
vApp is a collection of virtual machines (VMs) and sometimes other vApps that host a multi-tier application, its policies and service levels.
|
Yes (limited native);
Advanced options with Vendor Add-On: Satellite
When adding a host to a cluster, it is automatically configured to match storage, network and other settings in the RHV manager. State is also monitored for changes in network and storage that can have an impact on service.
More complex configuration can be done via Satellite 6 using Foreman.
This is a Fee-based Add-On; Details - http://www.redhat.com/en/technologies/linux-platforms/satellite
|
|
|
=AU46
|
Yes (Host Profiles)
Host profiles eliminate per-host, manual, or UI-based host configuration and maintain configuration consistency using a reference configuration, which can be applied or used to check compliance status.
|
No
While RHV supports different types of storage, there is no integrated ability in RHV that allows classification of storage (e.g. by performance or other properties).
|
|
|
=AU47
|
Yes (Storage Based Policy Management)
Using the Storage Based Policy Management (SPBM) framework, administrators can define different policies with different IO limits, and then assign VMs to those policies. This simplifies the ability to offer varying tiers of storage services and provides the ability to validate policy compliance.
The policy-driven control plane is the management layer of the VMware software-defined storage model, which automates storage operations through a standardized approach that spans across the heterogeneous tiers of storage in the virtual data plane.
Storage Policy-Based Management (SPBM) is VMware’s implementation of the policy-driven control plane which provides common management over:
- vSphere Virtual Volumes - external storage (SAN/NAS)
- Virtual SAN – x86 server storage
- Hypervisor-based data services – vSphere Replication or third-party solutions enabled by the vSphere APIs for IO Filtering.
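A minimal sketch of the policy idea described above - policies carry IO limits, VMs are assigned to a policy, and compliance can be validated. The data model (`StoragePolicy`, `VM`, `compliance_report`) is hypothetical, not the actual SPBM API:

```python
# Hedged sketch of policy-based storage management: define tiers with IO
# limits, assign VMs to a tier, then check each VM against its policy.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    iops_limit: int          # maximum IOPS allowed per VM under this policy

@dataclass
class VM:
    name: str
    policy: StoragePolicy
    observed_iops: int = 0

def compliance_report(vms):
    """Map each VM name to True when it stays within its policy's IO limit."""
    return {vm.name: vm.observed_iops <= vm.policy.iops_limit for vm in vms}

gold = StoragePolicy("gold", iops_limit=10_000)
bronze = StoragePolicy("bronze", iops_limit=1_000)
vms = [VM("db01", gold, observed_iops=8_500),
       VM("web01", bronze, observed_iops=1_500)]
report = compliance_report(vms)   # web01 exceeds its bronze tier limit
```

The same pattern generalizes to the other policy dimensions SPBM covers (capacity, availability, replication), which is what makes offering tiers of storage service straightforward to validate.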
|
Yes
RHV includes quota support and service level agreement (SLA) controls for storage I/O bandwidth, network interfaces and CPU QoS\shares:
- Quota provides a way for the administrator to limit resource usage in the system. It gives the administrator a logical mechanism for managing resource allocation for users and groups in the Data Center, allowing the administrator to manage, share and monitor Data Center resources from the engine core point of view.
- vNIC profiles allow the user to limit inbound and outbound network traffic at the virtual NIC level.
- CPU profiles limit the CPU usage of a virtual machine.
- Disk profiles limit bandwidth usage to better allocate bandwidth on limited connections.
- CPU shares is a user-defined number that represents a relative metric for allocating CPU capacity. It defines how often a virtual machine will get a time slice of a CPU when there is no CPU idle time.
- Host network QoS can define limits on network usage on the physical NIC.
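The CPU-shares idea above is a generic proportional-share model; the small sketch below (hypothetical function, not RHV scheduler code) shows how contended CPU time would divide between two VMs with different share values:

```python
# Toy illustration of relative CPU shares: when there is no idle CPU
# time, each VM receives a slice of the contended capacity proportional
# to its configured shares.

def cpu_time_slices(shares, capacity_ms):
    """Split capacity_ms of CPU time proportionally to each VM's shares."""
    total = sum(shares.values())
    return {vm: capacity_ms * s / total for vm, s in shares.items()}

# A VM with twice the shares gets twice the CPU time under contention.
slices = cpu_time_slices({"vm_high": 2048, "vm_low": 1024}, capacity_ms=300)
```

Note that shares only matter under contention; with idle CPU available, both VMs can consume whatever they need.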
|
|
|
|
Other |
|
|
|
=AU48
|
Yes
vSphere supports hierarchical resource pools (parent and child pools) for CPU and memory resources on individual hosts and across hosts in a cluster. They allow for resource isolation and sharing between pools and for access delegation of resources in a cluster. Please note that DRS must be enabled for resource pool functionality if hosts are in a cluster. If DRS is not enabled the hosts must be moved out of the cluster for (local) resource pools to work.
|
V2V, P2V
Whilst there is no integrated capability to perform physical-to-virtual migrations in RHV itself, Red Hat provides p2v tools to customers to export existing physical machines to a virtual infrastructure whilst ensuring that relevant changes are made to the new guest, such as paravirtualization drivers.
RHV also provides ability to use virt-v2v tool via the manager to migrate workloads from VMWare vSphere in a simple and easy wizard based flow.
RHV provides the virt-v2v CLI tool as well, enabling you to convert and import virtual machines created on other systems such as Xen, KVM and VMware ESX.
|
|
|
=AU49
|
Yes (vCenter Converter)
VMware vCenter Converter transforms your Windows- and Linux-based physical machines and third-party image formats to VMware virtual machines.
http://www.vmware.com/products/converter.html
|
User Portal
RHV's web-based Power User Portal is positioned by Red Hat as an entry-level Infrastructure as a Service (IaaS) user portal.
It allows the user to: create, edit and remove virtual machines, manage virtual disks and network interfaces, assign user permissions to virtual machines, create and use templates to rapidly deploy virtual machines, monitor resource usage and high-severity events, create and use snapshots to restore virtual machines to a previous state. In conjunction with the quota functionality in RHV administrators can restrict resources consumed by the users (but there is no integrated request approval or granular resource assignment based on e.g. subsets of the resources through private clouds).
|
|
Self Service Portal
Details
|
=AU50
|
No (native)
Yes (with Vendor Add-On: vCloud Suite, vRealize Automation)
Self-Service Portal functionality is primarily provided by components in VMware's vCloud Suite or vRealize Automation - a comprehensive cloud portfolio enabled with a single purchase and a per-processor licensing metric.
http://www.vmware.com/products/vcloud-suite.html
http://www.vmware.com/products/vrealize-automation.html
|
RHV supports the most common server and technical workstation OSs as well as PPC guests, current support includes:
For X86_64 hosts:
- Microsoft Windows 10, Tier 1, 32\64 bit
- Microsoft Windows 7, Tier 1, 32\64 bit
- Microsoft Windows 8, Tier 1, 32\64 bit
- Microsoft Windows 8.1, Tier 1, 32\64 bit
- Microsoft Windows Server 2008, Tier 1, 32\64 bit
- Microsoft Windows Server 2008 R2, Tier 1, 64 bit
- Microsoft Windows Server 2012, Tier 1, 64 bit
- Microsoft Windows Server 2012 R2, Tier 1, 64 bit
- Microsoft Windows Server 2016, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 3, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 4, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 5, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 6, Tier 1, 32\64 bit
- Red Hat Enterprise Linux 7, Tier 1, 32\64 bit
- SUSE Linux Enterprise Server 10, Tier 2, 32\64 bit
- SUSE Linux Enterprise Server 11, Tier 2, 32\64 bit
- SUSE Linux Enterprise Server 12, Tier 2, 32\64 bit
For PPC hosts:
- Red Hat Enterprise Linux 7, Tier 1, LE\BE
- Red Hat Enterprise Linux 6, Tier 1, BE
- SUSE Linux Enterprise Server 12, Tier 2, LE
- SUSE Linux Enterprise Server 11 SP4, Tier 2, BE
There is no orchestration tool/engine provided with RHV.
This functionality is achieved with Red Hat CloudForms (a fee-based vendor add-on), now part of RHCI. With CloudForms, resources are automatically and optimally used via policy-based workload and resource orchestration, ensuring service availability and performance. You can simulate allocation of resources for what-if planning and gain continuous insights into granular workload and consumption levels to allow chargeback, showback, and proactive planning and policy creation. For details: http://red.ht/1h7DR9T.
|
|
Orchestration / Workflows
Details
|
=AU51
|
Yes (vRealize Orchestrator)
vRealize Orchestrator is included with vCenter Server Standard and allows admins to capture often executed tasks/best practices and turn them into automated workflows (drag and drop) or use out of the box workflows. An increasing number of plug-ins is available to enable automation of tasks related to related products.
http://www.vmware.com/products/vrealize-orchestrator.html
|
sVirt, SELinux, iptables, VLANs, Port Mirroring
The RHV Hypervisor has various security features enabled. Security Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. SELinux and sVirt add security policy in the kernel for effective intrusion detection, isolation and containment. (SELinux is essentially a set of patches to the Linux kernel and some utilities that incorporate a strong, flexible mandatory access control architecture into the major subsystems of the kernel. For example, with SELinux you can give each qemu process a different SELinux label to prevent a compromised qemu from attacking other processes, and you can label the set of resources that each process can see, so that a compromised qemu can only attack its own disk images.)
Advanced network security features like VLAN tagging and port mirroring are part of RHV, but there are no additional security-specific add-ons included with RHV (e.g. to address advanced fire-walling, edge security capabilities or Anti-Virus APIs).
|
|
|
=AU52
|
Yes (ESXi Firewall, vShield Endpoint, VM Encryption)
NEW
new in vSphere 6.7:
- encrypted vMotion across different vCenter instances as well as versions
- simplifies workflows for VM Encryption
- support Microsoft’s Virtualization Based Security technologies
- support for Trusted Platform Module (TPM) 2.0
- comprehensive built-in security for secure SDDC products such as vSAN, NSX and vRealize Suite
|
Agent-based, CIM, SNMP
CIM management is available in RHV-H and RHEL-H. It is possible to use OEM vendor supplied tools, e.g. hardware monitoring utilities / agents. Red Hat uses the open source libcmpiutil as the default CIM provider in RHV-H.
Red Hat Virtualization Manager can send Simple Network Management Protocol traps to one or more external SNMP managers. SNMP traps contain system event information; they are used to monitor your Red Hat Virtualization environment. The number and type of traps sent to the SNMP manager can be defined within the Red Hat Virtualization Manager.
|
|
Systems Management
Details
|
=AU53
|
vSphere’s REST API, PowerCLI, vSphere CLI, ESXCLI, Datacenter CLI
vSphere’s REST APIs have been extended to include VCSA and VM based management and configuration tasks. There’s also a new way to explore the available vSphere REST APIs with the API Explorer. The API Explorer is available locally on the vCenter server.
PowerCLI is now 100% module based, the Core module now supports cross vCenter vMotion by way of the Move-VM cmdlet.
The VSAN module has been bolstered to feature 13 different cmdlets which focus on trying to automate the entire lifecycle of VSAN.
ESXCLI now features several new storage-based commands for handling VSAN core dump procedures, utilizing VSAN's iSCSI functionality, managing NVMe devices, and other core storage tasks, as well as NIC-based commands such as queuing, coalescing, and basic FCoE tasks.
Datacenter CLI (DCLI), which is also installed as part of vCLI, can make use of all the new vSphere REST APIs!
|
RHV-H or RHEL with KVM - details here
With RHV 4.0, virtualization hosts must run version 7.2 or later of either the full Red Hat Enterprise Linux Hypervisor (RHEL-H) with KVM enabled, or Red Hat Virtualization Hypervisor (RHV-H), an image-based, purpose-built hypervisor with a minimized security footprint. RHV supports both x86 and Power deployments from a single x86 manager.
|
|
|
Network and Storage
|
|
|
|
|
|
|
Storage |
|
|
Supported Storage
Details
|
=AU65
|
DAS, NFS, FC, iSCSI, FCoE (HW&SW), vFRC, SDDC
NEW
vSphere 6.7 adds:
- Virtual SAN 6.7
- NVDIMM controllers
- 64 virtual SCSI targets per virtual SCSI adapter (256 per virtual machine)
- 4096 total paths per server
- 1024 LUNs per server
- 1024 volumes per host
vSphere 6.5 adds:
- Virtual SAN 6.5
- Virtual Volumes 2.0 (VVOL)
- VMFS 6:
  - Support for 4K Native Drives in 512e mode
  - SE Sparse Default
  - Automatic Space Reclamation
  - Support for 512 devices and 2000 paths (versus 256 and 1024 in previous versions)
  - CBRC aka View Storage Accelerator
vSphere 6.0 adds:
- Virtual SAN 6.0
- Virtual Volumes (VVOL)
- NFS 4.1 client
- NFS and iSCSI IPV6 support
- Storage Based Policy Management (SPBM) now available in all vSphere editions
- SIOC IOPS Reservations
- vSphere Replication
- Support for 2000 virtual machines per vCenter Server
|
Yes (FC, FCoE, iSCSI)
NEW
Multipathing in the RHV manager provides:
1) Redundancy (provides failover protection).
2) Improved performance, which spreads I/O operations over the paths, by default in a round-robin fashion, with support for other methods including Asymmetric Logical Unit Access (ALUA). This applies to block devices (FC, FCoE, iSCSI), although the equivalent functionality can be achieved with a sufficiently robust network setup for network-attached storage.
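The round-robin behavior above can be sketched in a few lines. This is an illustrative model (hypothetical `MultipathDevice` class, not the device-mapper multipath implementation): I/O rotates across paths, and a failed path is simply skipped, which is the failover part:

```python
# Hedged sketch of round-robin path selection with failover: requests
# rotate across healthy paths; a failed path is skipped until repaired.
import itertools

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._cycle = itertools.cycle(self.paths)

    def next_path(self):
        """Return the next healthy path in round-robin order."""
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path not in self.failed:
                return path
        raise IOError("all paths down")

dev = MultipathDevice(["sda", "sdb", "sdc"])
order = [dev.next_path() for _ in range(3)]           # rotates over all paths
dev.failed.add("sdb")                                 # simulate a path failure
after_failover = [dev.next_path() for _ in range(2)]  # sdb is skipped
```

An ALUA policy would differ only in weighting: preferred (optimized) paths are chosen ahead of non-optimized ones instead of strict rotation.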
|
|
|
=AU66
|
Yes (enhanced APD and PDL; PDL AutoRemove)
vSphere uses natively integrated multi-path capability or can take advantage of vendor specific capabilities using vStorage APIs for Multipathing.
By default, ESXi provides an extensible multipathing module called the Native Multipathing Plug-In (NMP). Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides a default path selection algorithm based on the array type. The NMP associates a set of physical paths with a specific storage device, or LUN. The specific details of handling path failover for a given storage array are delegated to a Storage Array Type Plug-In (SATP). The specific details for determining which physical path is used to issue an I/O request to a storage device are handled by a Path Selection Plug-In (PSP). SATPs and PSPs are sub plug-ins within the NMP module. With ESXi, the appropriate SATP for an array you use will be installed automatically. You do not need to obtain or download any SATPs.
PDL AutoRemove: Permanent device loss (PDL) is a situation that can occur when a disk device either fails or is removed from the vSphere host in an uncontrolled fashion. PDL detects if a disk device has been permanently removed. When the device enters this PDL state, the vSphere host can take action to prevent directing any further, unnecessary I/O to this device. With vSphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state.
|
Yes
For shared file systems RHV supports LVM for block storage and POSIX, NFS or GlusterFS for file storage.
|
|
Shared File System
Details
|
=AU67
|
Yes (VMFS v6)
VMware's clustered file system, allowing concurrent access by multiple hosts for live migration, file-based locking (to ensure data consistency), dynamic volume resizing, etc.
New in VMFS 6:
- Support for 4K Native Drives in 512e mode
- SE Sparse Default
- Automatic Space Reclamation
- Support for 512 devices and 2000 paths (versus 256 and 1024 in previous versions)
- CBRC aka View Storage Accelerator
|
Yes
Booting from SAN is possible.
|
|
|
=AU68
|
Yes (FC, iSCSI, FCoE and SW FCoE)
Boot from iSCSI, FCoE, and Fibre Channel boot are supported
|
Yes
The hypervisor can be installed onto USB storage devices or solid state disks. (The initial boot/install USB device must be a separate device from the installation target).
|
|
|
=AU69
|
Yes
Boot from USB is supported
|
RAW, Qcow2
RHV supports two storage formats: RAW and QCOW2.
In an NFS data center, the Storage Pool Manager (SPM) creates the virtual disk on top of a regular file system as a normal disk in preallocated (RAW) format. Where sparse allocation is chosen, additional layers on the disk will be created in thinly provisioned Qcow2 (sparse) format.
For iSCSI and SAN (block), the SPM creates a Volume group (VG) on top of the Logical Unit Numbers (LUNs) provided. During the virtual disk creation, either a preallocated format (RAW) or a thinly provisioned Qcow2 (sparse) format is created.
Background:
QCOW (QEMU copy on write) decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. This enables advanced features like snapshots. Creating a new snapshot creates a new copy-on-write layer, either a new file or logical volume, with an initial mapping that points all logical blocks to the offsets in the backing file or volume. When writing to a QCOW2 volume, the relevant block is read from the backing volume, modified with the new information and written into the new snapshot QCOW2 volume. Then the map is updated to point to the new place.
Benefits QCOW2 offers over using RAW representation include:
- Copy-on-write support (volume only represents changes to a disk image).
- Snapshot support (volume can represent multiple snapshots of the image's history).
The RAW storage format has a performance advantage over QCOW2, as no formatting is applied to images stored in the RAW format (reading and writing images stored in RAW requires no additional mapping or reformatting work on the host or manager; when the guest file system writes to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume). Note: the RAW format requires that the entire space of the defined image be preallocated (unless using externally managed thin-provisioned LUNs from a storage array).
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (Qcow2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-IO intensive virtual machines.
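The copy-on-write layering described in the background section can be modeled very simply. This is an illustrative simplification (hypothetical `CowImage` class, not the on-disk QCOW2 format): reads fall through to the backing layer, writes always land in the top layer:

```python
# Hedged sketch of copy-on-write image layering: each snapshot layer maps
# only the logical blocks written through it; unmapped reads fall through
# to the backing layer, and the backing layer is never modified.

class CowImage:
    def __init__(self, backing=None):
        self.blocks = {}          # logical block -> data held in this layer
        self.backing = backing    # lower layer (base image or older snapshot)

    def read(self, block):
        if block in self.blocks:
            return self.blocks[block]
        return self.backing.read(block) if self.backing else b"\x00"

    def write(self, block, data):
        self.blocks[block] = data  # copy-on-write: backing layer untouched

base = CowImage()
base.write(0, b"base")
snap = CowImage(backing=base)   # taking a snapshot adds a new COW layer
snap.write(0, b"new")           # the change lands only in the top layer
```

This also shows why RAW is faster: with no mapping layer, every guest write goes straight to the same offset on the backing device.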
|
|
Virtual Disk Format
Details
|
=AU70
|
vmdk, raw disk (RDM)
VMware Virtual Machine Disk Format (vmdk) and RAW Disk Mapping (RDM) - essentially a raw disk mapped to a proxy (making it appear like a VMFS file system)
|
Default max virtual disk size is 8TB (but it's configurable in the RHV DB)
The default maximum supported virtual disk size in RHV is 8TB (but it is configurable in the RHV database).
With virtio-scsi support, Red Hat now also supports 16,384 logical units per target.
File level disk size remains unlimited by the hypervisor, the limits of the underlying filesystem do however apply.
|
|
|
=AU71
|
64TB
vSphere is increasing the maximum size of a virtual machine disk file (VMDK) from 2TB - 512 bytes to the new limit of 64TB. The maximum size of a virtual Raw Device Mapping (RDM) is also increasing, from 2TB - 512 bytes to 64TB in physical compatibility and 62TB in virtual compatibility. Virtual machine snapshots also support this new size for delta disks that are created when a snapshot is taken of the virtual machine.
|
Yes
During the virtual disk creation, either a preallocated format (RAW) or a thinly provisioned Qcow2 (sparse) format can be specified.
A preallocated virtual disk has reserved storage of the same size as the virtual disk itself. The backing storage device (file/block device) is presented as is to the virtual machine with no additional layering in between. This results in better performance because no storage allocation is required during runtime. On SAN (iSCSI, FCP) this is achieved by creating a block device with the same size as the virtual disk. On NFS this is achieved by filling the backing hard disk image file with zeros. Pre-allocating storage on an NFS storage domain presumes that the backing storage is not Qcow2 formatted and zeroes will not be deduplicated in the hard disk image file. (If these assumptions are incorrect, do not select Preallocated for NFS virtual disks).
For sparse virtual disks backing storage is not reserved and is allocated as needed during runtime. This allows for storage over commitment under the assumption that most disks are not fully utilized and storage capacity can be utilized better. This requires the backing storage to monitor write requests and can cause some performance issues. On NFS backing storage is achieved simply by using files. On SAN this is achieved by creating a block device smaller than the virtual disks defined size and communicating with the hypervisor to monitor necessary allocations. This does not require support from the underlying storage devices.
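The sparse behavior described above - backing storage allocated only as blocks are first written - is what makes overcommitment possible. A toy model (hypothetical `ThinDisk` class, not RHV code) makes the accounting concrete:

```python
# Hedged sketch of thin (sparse) disk allocation: the virtual size is
# fixed up front, but real backing storage grows only on first write to
# each block, allowing the storage pool to be overcommitted.

class ThinDisk:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks
        self.allocated = set()      # blocks that have real backing storage

    def write(self, block):
        if not 0 <= block < self.virtual_blocks:
            raise ValueError("write past end of virtual disk")
        self.allocated.add(block)   # allocate backing storage on first write

disk = ThinDisk(virtual_blocks=1000)   # 1000-block virtual size
for b in (0, 1, 2):
    disk.write(b)
used = len(disk.allocated)             # only 3 blocks actually consumed
```

The monitoring cost mentioned above corresponds to the allocate-on-write check: every write must consult the allocation map, which preallocated (RAW) disks avoid entirely.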
|
|
Thin Disk Provisioning
Details
|
=AU72
|
Yes
Thin provisioning allowing for disk space saving through allocation of space based on usage (not pre-allocation).
VMFS6 - Automatic Space Reclamation
|
Yes (Limited)
NEW
It is possible to consume NPIV devices for the creation of storage domains and the use of directly attached LUNs. Future releases are planned to include support for passthrough of NPIV devices to virtual machines.
|
|
|
=AU73
|
Yes (RDM only)
NPIV requires RDM (Raw Disk Mapping), it is not supported with VMFS volumes. NPIV requires supported switches (not direct storage attach).
|
Yes
In RHV, virtual machines created from templates use thin provisioning by default. In the context of templates, thin provisioning of a VM means copy-on-write (aka a linked clone or difference disk) rather than a growing file system that only takes up the storage space it actually uses (usually referred to as thin provisioning). All virtual machines based on a given template share the same base image as the template and must remain on the same data domain as the template.
You can, however, choose to deploy the VM from the template as a clone, which means that a full copy of the VM will be deployed. When selecting to clone, you can then select thin (sparse) or preallocated provisioning of the full clone. Deploying from a template as a clone results in independence from the base image, but the space savings associated with copy-on-write approaches are lost.
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (Qcow2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-IO intensive virtual machines.
|
|
|
=AU74
|
No (native)
Yes (with Vendor Add-On: vCloud Suite, vRealize Automation)
VMware's virtual image sharing technology (View Composer or linked clones) is supported with VMware's virtual desktop solution (Horizon View).
This functionality has been extended to vCloud Director and vRealize Automation, but it is not included in the vSphere editions without vCD.
Both Horizon View and vCD (as part of the vCloud Suites) are fee-based Add-Ons.
|
No (native);
Yes (with Vendor Add-On: Red Hat Gluster Storage)
There is no native software based replication included in the base RHV product.
However, there is support for managing Red Hat Gluster Storage volumes and bricks using Red Hat Virtualization Manager. Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible unstructured data storage for the enterprise.
Red Hat Storage Console (RHS-C) of Red Hat Storage Server (RHS) for On-Premise provides replication via the native capabilities of RHS, with integration in the RHV-M interface.
RHS-C extends RHV-M 3.x and oVirt Engine technology to manage Red Hat Trusted Storage Pools with management via the Web GUI, REST API and (future) remote command shell.
Note that Red Hat Gluster Storage is a fee-based add-on.
|
|
SW Storage Replication
Details
|
=AU75
|
Yes (vSphere Replication)
vSphere Replication is VMware’s proprietary hypervisor-based replication engine designed to protect running virtual machines from partial or complete site failures by replicating their VMDK disk files.
This version extends support for the 5 minute RPO setting to the following new data stores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL and VSAN 6.5.
|
FS-Cache
FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over the network and cache it on local disk.
This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS). Users can use this feature with mount options for NFS\POSIX.
|
|
|
=AU76
|
Yes (vSphere Flash Read Cache)
vSphere 5.5 introduced the vSphere Flash Read Cache that enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource.
vSphere hosts can use the vSphere Flash Resource as vSphere Flash Swap Cache, which replaces the Swap to SSD feature previously introduced with vSphere 5.0. It provides a write-through cache mode that enhances virtual machine performance without modification of applications and OSs.
At its core Flash Cache enables the offload of READ I/O from the shared storage to local SSDs, reducing the overall I/O requirements on your shared storage.
Documented maxima with vSphere 6.5:
- Virtual flash resource per host: 1
- Maximum cache for each virtual disk: 400GB
- Cumulative cache configured per host (for all virtual disks): 2TB
- Virtual disk size: 16TB
- Virtual host swap cache size: 4TB
- Flash devices (disks) per virtual flash resource: 8
|
Yes
This is possible via the local datacenter feature, but it is limited to a single host with reduced management features.
|
|
|
=AU77
|
No (native)
Yes (with Vendor Add-On: vSAN 6.7)
VMware vSAN extends virtualization to storage with an integrated hyper-converged solution.
|
Yes (Limited)
RHV's REST API does allow storage actions and storage provisioning calls via software storage actions, and a backup API can also be leveraged with array cloning & replication for DR. It doesn't have vendor-specific offloading abilities.
|
|
Storage Integration (API)
Details
|
=AU78
|
Yes (VASA, VAAI and VAMP)
VMware provides various storage related APIs in order to enhance storage functionality and integration between storage devices and vSphere.
|
Yes (Quota, Storage I/O SLA)
RHV 4 includes quota support and Service Level Agreement (SLA) controls for storage I/O bandwidth:
- Quota provides a way for the administrator to limit resource usage in the system, including vDisks. It gives the administrator a logical mechanism for managing disk size allocation for users and groups in the Data Center.
- Disk profiles limit bandwidth usage to better allocate bandwidth on limited connections.
|
|
|
=AU79
|
Yes (Storage IO Control )
In vSphere 6.5 Storage IO Control has been reimplemented by leveraging the VAIO framework. You will now have the ability to specify configuration details in a VM Storage Policy and assign that policy to a VM or VMDK. You can also create a Storage Policy Component yourself and specify custom shares, limits and a reservation.
|
Various new enhancements + Neutron Integration (Tech Preview) - click for details
At present RHV allows you to:
- Do simplified management network setup that includes host-level management.
- Assign migration\management\VM\host network roles.
- Create profiles for virtual machine NICs with specific parameters.
- Network QoS on host NICs and on virtual NIC profiles.
- Multiple network gateways per host (define a gateway for each logical network on a host).
- Refresh and automatically sync host network configuration (allows the administrator to obtain and set updated network configuration).
- Improved bond support (add new bonds from the administration portal, in addition to the five predefined bonds for each host).
- Network labels to ease complex hypervisor networking configurations, comprising many networks.
- Predictable vNIC ordering inside guest OS for newly-created VMs.
- Hypervisors now recognize hotplugged network interfaces.
- Notifications in case of bond/NIC changing link state (e.g. link failure).
- Ability to configure custom properties on hypervisor network devices; specifically configuring advanced bridge and ethtool options.
- Dedicated network connectivity log on hypervisors to ease investigation in case of 'disaster'.
- Properly display arbitrarily-named hypervisor VLAN devices in the management console.
- Use SR-IOV NICs by enabling you to create virtual functions and assign them to VMs.
- Report total network use of a VM.
- Get info on out of sync hosts from a network definition aspect and allowing to sync them.
- Support for Cisco UCSM VM-FEX hook.
OpenStack Neutron integration can provide networking capabilities for consumption by hosts and virtual machines.
The integration includes:
- Advanced engine for network configuration.
- Open vSwitch distributed virtual switching support.
- Ability to centralize network configurations with Red Hat Enterprise Linux OpenStack Platform (not included).
|
|
|
|
Networking |
|
|
Advanced Network Switch
Details
|
=AU80
|
Yes (vDS)
vSphere Distributed Switch (VDS) spans multiple vSphere hosts and aggregates networking to a centralized datacenter-wide level, simplifying overall network management (rather than managing switches at the individual host level), allowing e.g. the port state/settings to follow the VM during a vMotion (Network vMotion), and facilitating various other advanced networking functions - including 3rd-party virtual switch integration.
Each vCenter Server instance can support up to 128 VDSs, each VDS can manage up to 2000 hosts.
|
Yes
RHV supports four bonding modes, with easy management of their definition on the host.
Details:
Mode 1 (active-backup policy) sets all interfaces to the backup state while one remains active. Upon failure on the active interface, a backup interface replaces it as the only active interface in the bond. The MAC address of the bond in mode 1 is visible on only one port (the network adapter), to prevent confusion for the switch. Mode 1 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 2 (XOR policy) selects the interface used to transmit packets based on the result of an XOR operation on the source and destination MAC addresses, modulo the slave count. This calculation ensures that the same interface is selected for each destination MAC address used.
Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Virtualization.
Mode 4 (IEEE 802.3ad policy) creates aggregation groups for which included interfaces share the speed and duplex settings. Mode 4 uses all interfaces in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Virtualization.
Mode 5 (adaptive transmit load balancing policy) ensures the outgoing traffic distribution is according to the load on each interface and that the current interface receives all incoming traffic. If the interface assigned to receive traffic fails, another interface is assigned the receiving role instead. Mode 5 is
supported in Red Hat Virtualization.
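The mode 2 (XOR policy) selection rule above can be sketched in a few lines. This is a simplified illustration of the layer-2 hash using only the last byte of each MAC address, not the kernel bonding driver's exact implementation:

```python
def xor_hash_slave(src_mac: str, dst_mac: str, slave_count: int) -> int:
    """Pick a bond slave index the way balance-xor (mode 2) does in its
    default layer-2 policy: XOR the source and destination MAC addresses
    (simplified here to their last byte), modulo the number of slaves."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % slave_count

# The same source/destination pair always maps to the same slave:
print(xor_hash_slave("52:54:00:aa:bb:01", "52:54:00:aa:bb:02", 2))  # 1
```

Because the hash is deterministic per MAC pair, traffic to a given destination never reorders across slaves - the trade-off being that a single flow cannot use more than one link's bandwidth.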
|
|
|
=AU81
|
Yes (up to 32 NICs)
vSphere has integrated NIC teaming capabilities. To utilize NIC teaming, two or more network adapters must be uplinked to a virtual switch (standard or distributed).
The key advantages of NIC teaming are:
- Increased network capacity for the virtual switch hosting the team.
- Passive failover in the event one of the adapters in the team goes down
There are various NIC load balancing (e.g. based on originating port, source MAC or IP hash) and failover detection algorithms (link status, Beacon probing).
|
Yes
In RHV, the Network Interfaces tab of the details pane shows VLAN information for the edited network interface. Newly created VLAN devices are shown in the VLAN column, with names based on the network interface name and VLAN tag.
Background: RHV is VLAN-aware and able to tag and redirect VLAN traffic; however, a VLAN implementation requires a switch that supports VLANs.
At the switch level, ports are assigned a VLAN designation. A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry the same VLAN tag. A VLAN can extend across multiple switches. VLAN tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent to a single port, to be deciphered using software on the machine that receives the traffic.
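The VLAN device naming described above can be sketched as follows. The `<iface>.<tag>` form is the common Linux convention; RHV's exact display names may differ:

```python
def vlan_device_name(iface: str, tag: int) -> str:
    """Build a VLAN device name from a parent NIC and an 802.1Q tag,
    following the common Linux "<iface>.<tag>" convention (assumption)."""
    if not 0 <= tag <= 4094:  # valid 802.1Q VLAN IDs
        raise ValueError("VLAN tag must be between 0 and 4094")
    return f"{iface}.{tag}"

def parse_vlan_device(name: str) -> tuple[str, int]:
    """Split a VLAN device name back into (parent interface, tag)."""
    iface, _, tag = name.rpartition(".")
    return iface, int(tag)

print(vlan_device_name("eth0", 100))   # eth0.100
print(parse_vlan_device("bond0.42"))   # ('bond0', 42)
```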
|
|
|
=AU82
|
Yes
Support for VLANs and VLAN tagging with the distributed or standard switch. Private VLANs (sub-VLANs) are supported with the virtual distributed switch only.
|
Yes (via host hooks)
Currently, PVLAN support in RHV is provided via host hooks.
Background: Hooks are scripts executed on the host when key events occur. VDSM hooks can be created and used to trigger modification of virtual machines based on custom properties specified in the Administration Portal.
|
|
|
=AU83
|
Yes
Private VLANs (sub-VLANs) are supported with the virtual distributed switch.
|
Guests fully, hypervisors partially
RHV currently uses IPv4 for internal communications and does not use/support IPv6 there. Using IPv6 at the virtual machine level is fully supported, though, provided that the guest operating system is compatible.
As of RHV 4.1, RHV can assign IPv6 addresses to host interfaces.
|
|
|
=AU84
|
Yes
vSphere supports IPv6 for all major traffic types.
|
Yes
RHV has added the ability to pass through host PCI and USB devices (including GPUs) to virtual machines, to use SR-IOV via VFIO, and to use high-speed PCI Express-based SSD storage. In addition, automatic NUMA (Non-Uniform Memory Access) balancing moves tasks (threads or processes) closer to the memory they are accessing.
|
|
|
=AU85
|
Yes (SR-IOV and VMDirectPath)
vSphere 6.5 SR-IOV support for 1024 Virtual Functions.
|
Yes
The network management UI in RHV allows you to set the MTU for network interfaces (jumbo frames).
|
|
|
=AU86
|
Yes
vSphere supports jumbo frames for network traffic including iSCSI, NFS, vMotion and FT
|
TOE
Currently, RHV hypervisors support TOE. Due to the way that RHV provides networking access to the virtual machines, using technologies such as TSO/LRO/GRO is currently unsupported. With OVN-based overlay networks (tech preview in 4.1), a NIC can carry VM traffic while having offloading enabled.
|
|
|
=AU87
|
Yes (TSO, NetQueue, iSCSI)
Supports TCP Segmentation Offloading, NetQueue (VMware's implementation of Intel's VMDq), and iSCSI HW offload (for a limited number of HBAs).
No TOE support (you can use TOE-capable adapters in vSphere, but the TOE function itself will not be used).
|
Yes (vNIC Profile)
RHV has the ability to control Network QoS using virtual NIC profiles through the RHV-M interface.
Users can limit the inbound and outbound network traffic on a virtual NIC level by applying profiles which define attributes such as port mirroring, quality of service (QoS) or custom properties.
|
|
|
=AU88
|
Yes (NetIOC)
vSphere 6.x Network I/O Control (NIOC) Version 3
- Ability to reserve bandwidth at a VMNIC
- Ability to reserve bandwidth at a vSphere Distributed Switch (VDS) Portgroup
Network I/O Control enables you to specify quality of service (QoS) for network traffic in your virtualized environment. NetIOC requires the use of a virtual distributed switch (vDS). It allows prioritizing network traffic by type and creating custom network resource pools.
|
Yes (Port Mirroring)
RHV has port mirroring capabilities.
It is now possible to configure the virtual Network Interface Card (vNIC) of a virtual machine to run in promiscuous mode. This allows the virtual machine to monitor all traffic to other vNICs exposed by the host on which it runs. Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.
RHV also adds the ability to create a Virtual Network Interface Controller (vNIC) profile to toggle port monitoring. There are also vendor-supplied UI plug-ins for RHV-M, e.g. the Nagios community plugin.
|
|
Traffic Monitoring
Details
|
=AU89
|
Yes (Port Mirroring)
Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network-monitoring device connected to another switch port. Port mirroring is also referred to as Switch Port Analyzer (SPAN) on Cisco switches. Distributed Switch provides a port mirroring capability similar to that available on a physical network switch. After a port mirror session is configured with a destination (a virtual machine, a vmknic, or an uplink port), the Distributed Switch copies packets to the destination.
|
Yes (virtio), Mem Balloon optimization and error messages
The virtio-balloon driver allows guests to express to the hypervisor how much memory they require. The balloon driver allows the host to efficiently allocate memory to the guest and allows free memory to be allocated to other guests and processes. Guests using the balloon driver can mark sections of the guest's RAM as not in use (balloon inflation). The hypervisor can then free that memory and use it for other host processes or other guests on that host. When the guest requires the freed memory again, the hypervisor can reallocate RAM to the guest (balloon deflation).
This includes:
- Memory balloon optimization (users can now enable virtio-balloon for memory optimization on clusters; all virtual machines on cluster level 3.2 and higher include a balloon device, unless it is specifically removed. When memory balloon optimization is set, MoM will start ballooning to allow memory overcommitment, with the limitation of the guaranteed memory size on each virtual machine.)
- Ballooning error messages (when ballooning is enabled for a cluster, appropriate messages now appear in the Events tab)
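The inflate/deflate accounting described above can be illustrated with a toy model. This is an illustration only; the real virtio-balloon driver works at page granularity inside the guest kernel:

```python
class BalloonedGuest:
    """Toy model of virtio-balloon accounting: inflating the balloon hands
    guest memory back to the host, deflating returns it to the guest."""

    def __init__(self, total_mb: int):
        self.total_mb = total_mb
        self.balloon_mb = 0  # memory currently reclaimed by the host

    @property
    def usable_mb(self) -> int:
        # What the guest can actually use right now.
        return self.total_mb - self.balloon_mb

    def inflate(self, mb: int) -> None:
        if mb > self.usable_mb:
            raise ValueError("guest cannot give up memory it is using")
        self.balloon_mb += mb

    def deflate(self, mb: int) -> None:
        # Cannot return more than the balloon currently holds.
        self.balloon_mb -= min(mb, self.balloon_mb)

g = BalloonedGuest(4096)
g.inflate(1024)      # host reclaims 1 GB from the guest
print(g.usable_mb)   # 3072
g.deflate(512)       # guest gets 512 MB back
print(g.usable_mb)   # 3584
```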
|
|
|
Hypervisor
|
|
|
|
|
|
|
General |
|
|
Hypervisor Details/Size
Details
|
=AU54
|
Virtual Hardware version 14
VMware-developed proprietary bare-metal hypervisor, which from vSphere 5 onwards is only available as ESXi (small footprint without Console OS). The hypervisor itself is < 150MB. Device drivers are provided with the hypervisor (not with the Console OS or dom0/parent partition as with Xen or Hyper-V technologies).
ESX is based on binary translation (full virtualization) but also uses aspects of para-virtualization (device drivers, VMware Tools and the VMI interface for para-virtualization) and supports hardware-assisted virtualization.
|
No limit stated
The RHV Hypervisor is not limited by a fixed technological (or marketed) restriction. Red Hat lists no limit for the maximum ratio of virtual CPUs per host.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-Overcommitting_with_KVM.html
In reality, the specifications of the underlying hardware, the nature of the workload in the VM, and the overall restriction of 240 logical CPUs per host will determine the limit. RHV has been publicly demonstrated (see SPECvirt) to run over 550 VMs with a mix of SMP vCPU VMs.
|
|
|
|
Host Config |
|
|
Max Consolidation Ratio
Details
|
=AU55
|
1024 VMs
NEW
vSphere 6.7 Maximums
- 64 hosts per cluster
- 8000 VMs per cluster
- 768 CPUs
- 16 TB of RAM
- 1024 VMs per host
|
x86: 288
PPC: 192
The maximum supported number of CPUs with hyper-threading enabled. Depends on the particular version of RHEL-H or RHV-H. For details please check https://access.redhat.com/articles/rhel-limits
|
|
|
=AU56
|
768 physical
NEW
Hosts will support up to 768 physical CPUs (Dependent on hardware at launch time).
|
Unlimited
There is no license restriction for the max number of cores per CPU.
|
|
Max Cores per CPU
Details
|
=AU57
|
unlimited
unlimited (since vSphere 5)
|
x86: 12TB
PPC: 2TB
Max amount of physical RAM installed in the host and recognized by RHV is 12TB (12000GB) for x86 hosts and 2TB (2000GB) for PPC hosts.
|
|
Max Memory / Host
Details
|
=AU58
|
16TB
NEW
vSphere 6.7 Maximums
- 64 hosts per cluster
- 8000 VMs per cluster
- 768 CPUs
- 16 TB of RAM
- 1024 VMs per host
|
240 vCPUs per VM
The maximum supported number of virtual CPUs per vm (please note that the actual number depends on the type of the guest operating system).
|
|
|
|
VM Config |
|
|
|
=AU59
|
128 vCPU
Maximum number of virtual CPUs configurable for the VM and presented to the guest operating system - supported numbers vary greatly with specific guest OS version, please check!
|
4TB
Maximum amount of configured virtual RAM for an individual VM is 4TB.
|
|
|
=AU60
|
6TB
Amount of vRAM configurable in a virtual machine (presented to the guest OS).
|
Yes
RHV exposes serial console access through SSH. You can configure host UNIX domain sockets or named pipes to be attached to a virtual machine's serial ports, using hooks.
|
|
|
=AU61
|
32 ports
Virtual machine serial ports can connect to a physical host port, an output file, named pipes, or the network.
|
Yes
You can pass through any host USB device directly to a virtual machine. You can also use the SPICE protocol capabilities to redirect a USB device from a client computer to a virtual machine.
|
|
|
=AU62
|
Yes (USB 1.x, 2.x and 3.x) with max 20 USB devices per vm
vSphere supports a USB (host) controller per virtual machine. USB 1.x, 2.x and 3.x supported. One USB host controller of each version 1.x, 2.x, or 3.x can be added at the same time.
A maximum of 20 USB devices can be connected to a virtual machine (Guest operating systems might have lower limits than allowed by vSphere)
|
Yes
RHV has the ability to hot-add network interface cards, virtual disk storage, vCPUs and memory. Hot-unplug is not currently supported for vCPUs and memory, but it is planned for future versions. Support is dependent on guest OS support.
|
|
|
=AU63
|
Yes (CPU, Mem, Disk, NIC, PCIe SSD)
vSphere adds the ability to perform hot-add and remove of SSD devices to/from a vSphere host.
VMware Hot add (Memory and CPU) and hot plug (NIC, disks) requires the guest OS to support these functions - please check for specific support.
|
Yes (RHV 4.2 full support)
Starting with RHEL 7.4 based hypervisors, vGPU (mdev) support was added to the hosts.
RHV 4.1 still needs hooks to enable these devices; RHV 4.2 is planned to have native vGPU support in RHV.
Please note that the host requires supported devices (NVIDIA Tesla GRID cards) to utilize this feature.
Intel GPU support is planned.
|
|
Graphic Acceleration
Details
|
=BI64
|
Yes (NVIDIA vGPU)
GRID vGPU is a graphics acceleration technology from NVIDIA that enables a single GPU (graphics processing unit) to be shared among multiple virtual desktops. When NVIDIA GRID cards (installed in an x86 host) are used in a desktop virtualization solution running on VMware vSphere® 6.x, application graphics can be rendered with superior performance compared to non-hardware-accelerated environments. This capability is useful for graphics-intensive use cases such as designers in a manufacturing setting, architects, engineering labs, higher education, oil and gas exploration, clinicians in a healthcare setting, as well as for power users who need access to rich 2D and 3D graphical interfaces.
|
DAS, iSCSI, NFS, GlusterFS, FC, POSIX;
Virtio SCSI support
RHV storage is logically grouped into storage pools, which are comprised of three types of storage domains: data (VM disks and snapshots), export (a temporary storage repository used to copy and move images between data centers and RHV instances), and ISO.
The data storage domain is the only one required by each data center and is exclusive to a single data center. Export and ISO domains are optional, but require NFS or POSIX.
Storage domains are shared resources and can be implemented using NFS, GlusterFS, POSIX, iSCSI or the Fibre Channel Protocol (FCP). On NFS, all virtual disks, templates, and snapshots are simple files. On SAN (iSCSI/FCP), block devices are aggregated into a logical entity called a Volume Group (VG). This is done using the Logical Volume Manager (LVM) and provides high-performance I/O.
LUNs can be directly attached to VMs as disks, but some features, such as snapshotting, are not supported when this option is used.
Local storage can be used to create a non-shared local data center, which allows only a single host.
|
|
|
|
Memory |
|
|
Dynamic / Over-Commit
Details
|
=AU90
|
Yes (Memory Ballooning)
vSphere uses several memory optimization techniques, mainly to over-commit memory and reclaim unused memory: Ballooning, memory compression and transparent page sharing. Last level of managing memory overcommit is hypervisor swapping (not desired).
When physical host memory is over-committed (e.g. the host has a total of 128GB of RAM but a total of 196GB are allocated to virtual machines), the memory balloon driver (vmmemctl) collaborates with the server to reclaim pages that are considered least valuable by the guest operating system. When memory is tight (i.e. all virtual machines are requesting their maximum memory allocation to be used), the guest operating system determines which pages to reclaim and, if necessary, swaps them to its own virtual disk.
|
Yes (KSM)
Kernel SamePage Merging (KSM) reduces references to memory pages from multiple identical pages to a single page reference. This helps with optimization for memory density.
|
|
Memory Page Sharing
Details
|
=AU91
|
Yes (Transparent Page Sharing)
vSphere uses several memory techniques: Ballooning, memory compression and transparent page sharing. Last level of managing memory overcommit is hypervisor swapping (not desired).
A good example is a scenario where several virtual machines are running instances of the same guest operating system, have the same applications or components loaded, or contain common data. In such cases, a host uses a proprietary transparent page sharing technique to securely eliminate redundant copies of memory pages. As a result, higher levels of over-commitment can be supported.
|
Yes
RHV has a feature called transparent huge pages, where the Linux kernel dynamically creates large memory pages (2MB versus 4KB) for virtual machines, improving performance for VMs that require them (newer OS generations tend to benefit less from larger pages).
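The benefit of 2MB pages over 4KB pages is easy to quantify: fewer pages means fewer page-table entries and less TLB pressure. A quick sketch of the arithmetic:

```python
def pages_needed(mem_mb: int, page_kb: int) -> int:
    """Number of pages required to map a memory region of mem_mb megabytes
    with the given page size, illustrating why 2MB large pages shrink
    page-table pressure versus 4KB pages."""
    total_kb = mem_mb * 1024
    return -(-total_kb // page_kb)  # ceiling division

# Mapping a 4GB guest:
print(pages_needed(4096, 4))     # 1048576 pages at 4KB
print(pages_needed(4096, 2048))  # 2048 pages at 2MB
```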
|
|
|
=AU92
|
Yes
Large Memory Pages for Hypervisor and Guest Operating System - in addition to the usual 4KB memory pages ESX also makes 2MB memory pages available.
|
Yes
RHV supports Intel EPT and AMD-RVI
|
|
HW Memory Translation
Details
|
=AU93
|
Yes
Yes, vSphere leverages AMD RVI and Intel EPT technology for the MMU virtualization in order to reduce the virtualization overhead associated with page-table virtualization
|
Yes
The Red Hat Virtualization Manager interface allows you to import and export virtual machines (and templates) stored in Open Virtual Machine Format (OVF).
This feature can be used in multiple ways:
- Moving virtual resources between Red Hat Virtualization environments.
- Moving virtual machines and templates between data centers in a single Red Hat Virtualization environment.
- Backing up virtual machines and templates.
|
|
|
|
Interoperability |
|
|
|
=AU94
|
Yes
vCenter can export and import vm, virtual appliances and vApps stored in OVF. vApp is a container comprised of one or more virtual machines, which uses OVF to specify and encapsulate all its components and policies
|
Comprehensive
RHV takes advantage of the native hardware certification of the Red Hat Enterprise Linux OS. The RHV Hypervisor (RHV-H) is certified for use with all hardware that has passed Red Hat Enterprise Linux certification, except where noted in the Requirements chapter of the installation guide http://red.ht/1hOZEDA
|
|
|
=AU95
|
Very Comprehensive (see link)
vSphere has a very comprehensive and well documented set of hardware components.
For compatible systems and devices see http://www.vmware.com/resources/compatibility/search.php
|
Limited
RHV supports the most common server and technical workstation OSs as well as PPC guests; current support includes:
For x86_64 hosts:
- Microsoft Windows 10, Tier 1, 32/64 bit
- Microsoft Windows 7, Tier 1, 32/64 bit
- Microsoft Windows 8, Tier 1, 32/64 bit
- Microsoft Windows 8.1, Tier 1, 32/64 bit
- Microsoft Windows Server 2008, Tier 1, 32/64 bit
- Microsoft Windows Server 2008 R2, Tier 1, 64 bit
- Microsoft Windows Server 2012, Tier 1, 64 bit
- Microsoft Windows Server 2012 R2, Tier 1, 64 bit
- Red Hat Enterprise Linux 3, Tier 1, 32/64 bit
- Red Hat Enterprise Linux 4, Tier 1, 32/64 bit
- Red Hat Enterprise Linux 5, Tier 1, 32/64 bit
- Red Hat Enterprise Linux 6, Tier 1, 32/64 bit
- Red Hat Enterprise Linux 7, Tier 1, 32/64 bit
- SUSE Linux Enterprise Server 10, Tier 2, 32/64 bit
- SUSE Linux Enterprise Server 11, Tier 2, 32/64 bit
- SUSE Linux Enterprise Server 12, Tier 2, 32/64 bit
For PPC hosts:
- Red Hat Enterprise Linux 7, Tier 1, LE/BE
- Red Hat Enterprise Linux 6, Tier 1, BE
- SUSE Linux Enterprise Server 12, Tier 2, LE
- SUSE Linux Enterprise Server 11 SP4, Tier 2, BE
|
|
|
=AU96
|
Very Comprehensive (see link)
Very comprehensive - vSphere 6.5 is compatible with various versions of: Asianux, Canonical, CentOS, Debian, FreeBSD, OS/2, Microsoft (MS-DOS, Windows 3.1, 95, 98, NT4, 2000, 2003, 2008, 2012 incl. R2, XP, Vista, Win7, 8), Netware, Oracle Linux, SCO OpenServer, SCO Unixware, Solaris, RHEL, SLES and Apple OS X server.
Details: http://www.vmware.com/resources/compatibility/search.php?action=base&deviceCategory=software
|
Yes (Limited);
Yes (with Vendor Add-On: Red Hat OpenShift Container Platform)
RHV offers support for Red Hat Enterprise Linux Atomic Host as a guest.
The RHV guest agent runs as a system container and provides the management system with a notion of the containers running on this VM.
Container management platform support is offered with Red Hat OpenShift Container Platform, which can be layered on top of RHV.
|
|
Container Support
Details
|
|
Yes
vSphere Integrated Containers
|
REST API, Python CLI, Hooks,
Python/Java/Ruby SDKs, Ansible
RHV exposes several interfaces for interacting with the virtualization environment. These interfaces are in addition to the user interfaces provided by the Red Hat Virtualization Manager Administration, User, and Reports Portals. Some of the interfaces are supported only for read access or only when their use has been explicitly requested by Red Hat Support.
Supported Interfaces (Read and Write Access):
- Representational State Transfer (REST) API: With the release of RHV-3 Red Hat introduced a new Representational State Transfer (REST) API. The REST API is useful for developers and administrators who aim to integrate the functionality of a Red Hat Virtualization environment with custom scripts or external applications that access the API via standard HTTP. The REST API exposed by the Red Hat Virtualization Manager is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Python Software Development Kit (SDK): This SDK provides Python libraries for interacting with the REST API. The Python SDK provided by the RHVm-sdk-python package is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Java Software Development Kit (SDK): This SDK provides Java libraries for interacting with the REST API. The Java SDK provided by the RHVm-sdk-java package is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Ruby Software Development Kit (SDK): This SDK provides Ruby libraries for interacting with the REST API. The Ruby SDK provided by the RHVm-sdk-ruby package is a fully supported interface for interacting with Red Hat Virtualization Manager.
- Ansible Modules: These modules provide an Ansible interface for interacting with Red Hat Virtualization Manager.
- Linux Command Line Shell: The command line shell provided by the RHVm-cli package is a fully supported interface for interacting with the Red Hat Virtualization Manager.
- VDSM Hooks: The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Virtualization Hypervisor is not currently supported.
Additional Supported Interfaces (Read Access)
Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support:
- Red Hat Virtualization Manager History Database
- Libvirt on Virtualization Hosts
Unsupported Interfaces
Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support:
- The vdsClient Command
- Red Hat Virtualization Hypervisor Console
- Red Hat Virtualization Manager Database
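A minimal sketch of what a call against the REST API looks like, assembled with the standard library only. The engine host name and credentials are placeholders; the /ovirt-engine/api entry point and HTTP Basic authentication follow the documented API conventions:

```python
import base64

def rhv_api_request(base_url: str, resource: str, user: str, password: str) -> dict:
    """Build the pieces of a Red Hat Virtualization REST API GET call
    (no network I/O here - this only assembles URL and headers)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "method": "GET",
        "url": f"{base_url}/ovirt-engine/api/{resource}",
        "headers": {
            "Authorization": f"Basic {token}",
            "Accept": "application/xml",  # the API also supports JSON
        },
    }

req = rhv_api_request("https://rhvm.example.com", "vms", "admin@internal", "secret")
print(req["url"])  # https://rhvm.example.com/ovirt-engine/api/vms
```

In practice the same request would be issued with any HTTP client (curl, requests, or one of the language SDKs listed above, which wrap these calls).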
|
|
|
=AU98
|
Web Services API/SDK, CIM, Perl, .NET, Java SDKs, Client Plug-In API, vSphere Clip, vMA
VMware provides several public API and Software Development Kits (SDK) products. You can use these products to interact with the following areas:
- host configuration, virtualization management and performance monitoring (vSphere Web Services API provides the basis for VMware management tools - available through the vSphere Web Services SDK). VMware provides language-specific SDKs (vSphere SDKs for Perl, .NET, or Java)
- server hardware health monitoring and storage management (CIM interface compatible with the CIM SMASH specification, storage management through CIM SMI-S and OEM/IHV packaged CIM implementations)
- extending the vSphere Client GUI (vSphere Client Plug-In API)
- access and manipulation of virtual storage - VMware Virtual Disk Development Kit (VDDK with library of C functions and example apps in C++)
- obtaining statistics from the guest operating system of a virtual machine (vSphere Guest SDK is a read-only programmatic interface for monitoring virtual machine statistics)
- scripting and automating common administrative tasks (CLIs that allow you to create scripts to automate common administrative tasks; the vSphere CLI is available for Linux and Microsoft Windows and provides a basic set of administrative commands, while vSphere PowerCLI is available on Microsoft Windows and has over 200 commonly used administrative commands)
Details Here: https://communities.vmware.com/community/vmtn/developer/
|
REST API
RHV provides its RESTful API for external integration into cloud platforms, for example, ManageIQ's cloud management interface.
|
|
|
=AU99
|
vCloud API
vCloud API - provides support for developers who are building interactive clients of VMware vCloud Director using a RESTful application development style.
VMware provides a comprehensive vCloud API Programming Guide. vCloud API clients and vCloud Director servers communicate over HTTP, exchanging representations of vCloud objects. These representations take the form of XML elements. You use HTTP GET requests to retrieve the current representation of an object, HTTP POST and PUT requests to create or modify an object, and HTTP DELETE requests to delete an object.
The guide is intended for software developers who are building VMware Ready Cloud Services, including interactive clients of VMware vCloud Director. The guide discusses Representational State Transfer (REST) and RESTful programming conventions, the Open Virtualization Format Specification, and VMware Virtual machine technology.
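The verb-to-operation mapping described in the guide can be captured in a small helper. The object href below is a made-up example:

```python
# RESTful convention used by the vCloud API: each operation on an object's
# representation maps to an HTTP verb.
VERBS = {"read": "GET", "create": "POST", "update": "PUT", "delete": "DELETE"}

def plan_request(operation: str, href: str) -> tuple[str, str]:
    """Return the (HTTP verb, URL) pair for an operation on a vCloud object."""
    return VERBS[operation], href

print(plan_request("read", "https://vcloud.example.com/api/vApp/vapp-1"))
# ('GET', 'https://vcloud.example.com/api/vApp/vapp-1')
```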
|
RHCI: CloudForms, OpenStack, RHV; Satellite, OpenShift (Fee-Based Add-Ons)
Comment: Due to the variation in actual cloud requirements, different deployment models (private, public, hybrid) and use cases (IaaS, PaaS etc.), the matrix will only list the available products and capabilities. It will not list evaluations (green, amber, red), instead providing the information that will help you evaluate them for YOUR environment.
Overview:
IaaS (private and hybrid)
Red Hat offers Red Hat Cloud Infrastructure (RHCI) -a single-subscription offering that bundles and integrates the following products:
- RHV - Datacenter virtualization hypervisor and management for traditional (ENTERPRISE) workloads
- Cloud-enabled Workloads: RHEL OpenStack - scalable, fault-tolerant platform for developing a managed private or public cloud for CLOUD-ENABLED workloads
- Red Hat CloudForms - Cloud MANAGEMENT and ORCHESTRATION across multiple hypervisors and public cloud providers
- Red Hat Satellite - A system management platform that provides lifecycle management for Red Hat Enterprise Linux for both host and tenant operating systems within Red Hat Cloud Infrastructure. This includes provisioning, configuration management, software management, and subscription management. - http://red.ht/1oKMZsP
- Red Hat Insights - automated resolution of critical issues for Red Hat products.
PaaS:
Red Hat also offers OpenShift (PaaS), as an on-premise technology as well as an online (public cloud) offering by Red Hat. Details here - http://red.ht/1LRn7ol
There are a number of public and hybrid (on-premise or cloud) offerings that Red Hat positions as complementary like Red Hat Storage Server (scale-out storage servers both on-premise and in the Amazon Web Services public cloud). Details are here: http://red.ht/1ug7XTY
|
|
|
Extensions
|
|
|
|
|
|
|
Cloud |
|
|
|
=AU100
|
VMware Cloud Foundation
VMware Cloud Foundation is the unified SDDC platform that brings together VMware’s vSphere, vSAN and NSX into a natively integrated stack to deliver enterprise-ready cloud infrastructure for the private and public cloud.
https://www.vmware.com/products/cloud-foundation.html
|
VDI Included in RHV; HTML5 support (Tech Preview)
There is one single SKU for RHV that includes server and technical workstation virtualization.
Red Hat Virtualization includes an integrated connection broker as well as the ability to manage VDI users via external (LDAP-based) directory services. The same interface is used to manage both server and technical workstation images (unlike most other solutions, e.g. VMware View or Citrix XenDesktop).
Please note that VDI is an additional charge to the server product and cannot be purchased separately (i.e. without purchasing RHV for servers).
Red Hat Virtualization for technical workstations (RHV-D) consists of:
- Red Hat Hypervisor
- Red Hat Virtualization Manager (RHV-M) as centralized management console with management tools that administrators can use to create, monitor, and maintain their virtual technical workstations (same interface as for server management)
- SPICE (Simple Protocol for Independent Computing Environments) - remote rendering protocol. Initial support for the SPICE-HTML5 console client is offered as a technology preview. This feature allows users to connect to a SPICE console from their browser using the SPICE-HTML5 client.
- Integrated connection broker - a web-based portal from which end-users can log into their virtual technical workstations.
Note: VDI related capabilities are NOT listed as Fee-Based Add-Ons (no purchase of additional VDI management software is required or licenses involved to enable the VDI management capability).
However, you will require relevant client access licensing to run virtual machines with Windows OSs, see http://bit.ly/1cBdgAm for details
|
|
|
|
Desktop Virtualization |
|
|
|
=AU101
|
VMware Horizon 7 (Vendor Add-On)
VMware Horizon 7
Deliver virtual or hosted desktops and applications through a single platform with VMware Horizon 7
http://www.vmware.com/products/horizon.html
|
Yes
This is possible via the local data center feature, but it is limited to a single host with reduced management features.
|
|
|
|
1 |
|
|
|
=BI102
|
vSAN 6.7 (Vendor Add-On)
VMware vSAN 6.7 extends virtualization to storage with an integrated hyper-converged solution.
http://www.vmware.com/products/virtual-san.html
comparison: https://www.whatmatrix.com/comparison/SDS-and-HCI
|
3rd Party
At this time RHV focuses on the management of the virtual and cloud infrastructure.
Partner solutions offer insight into the services running on top of the virtual infrastructure, with integration into RHV.
|
|
|
|
2 |
|
|
Application Management
Details
|
=AU103
|
App Volumes
App Volumes is a portfolio of application and user management solutions for Horizon, Citrix XenApp and XenDesktop, and RDSH virtual environments.
https://www.vmware.com/products/appvolumes.html
|
sVirt & Security Partnerships
RHV includes sVirt, a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtual machines. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed toward the host or another virtual machine. To learn more about sVirt, visit this link: http://red.ht/1oZPMkf. Red Hat's RHV partner ecosystem also has security partnerships with SourceFire, Catbird, and other security-focused products and solutions.
|
|
|
|
3 |
|
|
|
=AU104
|
NSX (Vendor Add-On)
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
Ansible 2.3 oVirt modules
The Ansible 2.3 release includes integration with RHV in the form of oVirt Ansible modules.
Workflows can now be automated by using playbooks based on those modules.
http://docs.ansible.com/ansible/list_of_cloud_modules.html#ovirt
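A minimal playbook sketch of the workflow automation described above, using the `ovirt_auth` and `ovirt_vms` modules shipped with Ansible 2.3 (the engine URL, credentials, and VM name `web01` are placeholder assumptions, not values from this comparison):

```yaml
# Hedged example: ensure a VM is running on RHV via the oVirt Ansible modules.
# Engine URL, username, password variable, and VM name are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Obtain an SSO token from the RHV engine
      ovirt_auth:
        url: https://rhv-engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Ensure the VM is running
      ovirt_vms:
        auth: "{{ ovirt_auth }}"
        name: web01
        state: running

    - name: Revoke the SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
```

The same module family also covers clusters, storage domains, networks, and templates, so most day-to-day RHV operations can be expressed as idempotent playbook tasks.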
|
|
|
|
4 |
|
|
Workflow / Orchestration
Details
|
=AU105
|
vRealize Orchestrator
vRealize Orchestrator is included with vCenter Server Standard and allows admins to capture frequently executed tasks and best practices and turn them into automated workflows (drag and drop), or use out-of-the-box workflows.
http://www.vmware.com/products/vrealize-orchestrator.html
|
No - See Details
There is no natively provided site failover capability in RHV, but Red Hat does provide the tools needed to build a disaster recovery solution.
This is possible via 3rd-party partner integrations (such as Veritas, Acronis, SEP, Commvault, and vProtect via IBM Spectrum Protect).
|
|
|
|
5 |
|
|
|
=AU106
|
Site Recovery Manager (Vendor Add-On)
VMware Site Recovery Manager
Perform frequent non-disruptive testing to ensure IT disaster recovery predictability and compliance. Achieve fast and reliable recovery using fully automated workflows and complementary Software-Defined Data Center (SDDC) solutions.
http://www.vmware.com/products/site-recovery-manager.html
|
Vendor Add-On: CloudForms
RHV includes enterprise reporting capabilities through a comprehensive management history database, which any reporting application can use to generate a range of reports at the data center, cluster, and host levels.
For charge-back, RHV has 3rd-party integrated solutions, e.g. IBM's Tivoli Usage and Accounting Manager (TUAM), which can convert the metrics from RHV's enterprise reports into fiscal chargeback numbers.
Red Hat CloudForms, a fee-based add-on also sold as part of Red Hat Cloud Infrastructure (RHCI), provides enterprises with operational management tools including monitoring, chargeback, governance, and orchestration across virtual and cloud infrastructures such as Red Hat Virtualization, Amazon Web Services, Microsoft, VMware, and OpenStack. It provides cost allocation with usage and chargeback, determining who is using which resources in order to allocate costs and to create and implement chargeback models.
|
|
|
|
6 |
|
|
|
=AU107
|
vRealize Business (Vendor Add-On)
VMware vRealize Business Enterprise is an IT financial management (ITFM) tool that provides transparency and control over the costs and quality of IT services, enabling the CIO to align IT with the business and to accelerate IT transformation.
http://www.vmware.com/products/vrealize-business.html
|
Vendor Add-Ons: Load Balancer, High Performance
- Red Hat has networking-related products like the Load Balancer Add-On for RHEL (http://www.redhat.com/f/pdf/rhel/RHEL6_Add-ons_datasheet.pdf) and the RHEL High Performance Network Add-On (which delivers remote direct memory access (RDMA) over Converged Ethernet (RoCE)) that can add value to virtualization and cloud environments.
- Common SDN solution integration is available with Neutron OVS integration.
- Cisco UCS (VM-FEX) integration is available.
|
|
|
|
7 |
|
|
Network Extensions
Details
|
=AU108
|
NSX (Vendor Add-on)
VMware NSX is the network virtualization platform for the Software-Defined Data Center.
http://www.vmware.com/products/nsx.html
NSX embeds networking and security functionality that is typically handled in hardware directly into the hypervisor. The NSX network virtualization platform fundamentally transforms the data center’s network operational model like server virtualization did 10 years ago, and is helping thousands of customers realize the full potential of an SDDC.
With NSX, you can reproduce in software your entire networking environment. NSX provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.
|
|
|