|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Extensive platform support
- + Native file and object services
- + Manageability
|
- + Fast streamlined deployment
- + Strong VMware integration
- + Policy-based management
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Complex solution design
- - No QoS
- - Complex dedup/compr architecture
|
- - Single hypervisor and server hardware
- - No bare-metal support
- - Very limited native data protection capabilities
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of these offerings; because the core software is the same, the portfolio has since been marketed collectively as DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution aimed specifically at file virtualization. This additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm; together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify managing, storing, and protecting data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access from any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. Compared to the other SDS/HCI vendors on the WhatMatrix, DataCore has by far the longest track record in software-defined storage.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: Enterprise Cloud Platform (ECP)
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2011
NEW
Nutanix was founded in early 2009 and began to ship its first Hyper Converged Infrastructure (HCI) solution, Virtual Computing Platform (VCP), in 2011. The core of the Nutanix solution is the Nutanix Operating System (NOS). In 2015 Nutanix rebranded its solution to Xtreme Computing Platform (XCP), mainly because Nutanix had developed its own hypervisor, Acropolis Hypervisor (AHV), which is based on KVM. In 2015 Nutanix also rebranded its operating system to Acropolis (AOS). In 2016 Nutanix rebranded its solution to Enterprise Cloud Platform (ECP). In September 2018 Nutanix rebranded Acropolis File Services (AFS) to Nutanix Files, Acropolis Block Services (ABS) to Nutanix Volumes, Object Storage Services to Nutanix Buckets and Xi Cloud DR Services to Nutanix Leap.
At the end of October 2020 the company had a customer install base of approximately 18,000 customers worldwide and more than 6,100 employees working for Nutanix.
|
Name: VxRail
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: feb 2016
VCE was founded late 2009 by VMware, Cisco and EMC. The company is best known for its converged solutions VBlock and VxBlock. VCE started to ship its first hyper-converged solution, VxRack, late 2015, as part of the VMware EVO:RAIL program. In February 2016 VCE launched VxRail on Quanta server hardware. After completion of the Dell/EMC merger, however, VxRail became part of the Dell EMC portfolio and the company switched to Dell server hardware.
VMware was founded in 1998 and began to ship its first software-defined storage solution, Virtual SAN (vSAN), in 2014. The vSAN solution is fully integrated into the vSphere hypervisor platform. In 2015 VMware released major updates in the second iteration of the product and has continued to improve the software ever since.
In August 2017 VxRail had an install base of approximately 3,000 customers worldwide.
At the end of May 2019 the company had a customer install base of more than 20,000 vSAN customers worldwide. This covers both vSAN and VxRail customers. At the end of May 2019 there were over 30,000 employees working for VMware worldwide.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
10th Generation software. Of the SDS/HCI platforms compared, DataCore currently has the most experience with SDS/HCI technology.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
AOS 5.19: dec 2020
AOS 5.18: aug 2020
AOS 5.17: may 2020
AOS 5.16: jan 2020
AOS 5.11: aug 2019
AOS 5.10: nov 2018
AOS 5.9: oct 2018
AOS 5.8: jul 2018
AOS 5.6.1: jun 2018
AOS 5.6: apr 2018
AOS 5.5: dec 2017
AOS 5.1.2 / 5.2*: sep 2017
AOS 5.1.1.1: jul 2017
AOS 5.1: may 2017
AOS 5.0: dec 2016
AOS 4.7: jun 2016
AOS 4.6: feb 2016
AOS 4.5: oct 2015
NOS 4.1: jan 2015
NOS 4.0: apr 2014
NOS 3.5: aug 2013
NOS 3.0: dec 2012
NEW
5th Generation software. Of the SDS/HCI platforms compared, Nutanix currently offers the most all-round package when it comes to advanced functionality.
Nutanix has adopted an LTS/STS life cycle approach towards AOS releases:
5.5, 5.10, 5.15 (LTS)
5.6, 5.8, 5.9, 5.11, 5.16, 5.17, 5.18 (STS)
LTS are released annually, maintained for 18 months and supported for 24 months.
STS are released quarterly, maintained for 3 months and supported for 6 months.
*5.2 is an IBM Power Systems-only release; for all other platforms version 5.6 applies.
Release dates of AOS Add-on components:
Nutanix Files 3.7.1: sep 2020
Nutanix Files 3.7: jul 2020
Nutanix Files 3.6.1: dec 2019
Nutanix Files 3.6: oct 2019
Nutanix Files 3.5.2: aug 2019
Nutanix Files 3.5.1: jul 2019
Nutanix Files 3.5: mar 2019
AFS 3.0.1: jun 2018
AFS 2.2: aug 2017
AFS 2.1.1: jun 2017
AFS 2.1: apr 2017
AFS 2.0: jan 2017
Nutanix File Analytics 2.0: aug 2019
Nutanix Objects 3.1: nov 2020
Nutanix Objects 3.0: oct 2020
Nutanix Objects 2.2: jun 2020
Nutanix Objects 2.1: may 2020
Nutanix Objects 1.1: nov 2019
Nutanix Objects 1.0.1: oct 2019
Nutanix Objects 1.0: aug 2019
Nutanix Karbon 2.1: jul 2020
Nutanix Karbon 2.0: feb 2020
Nutanix Karbon 1.0.4: dec 2019
Nutanix Karbon 1.0.3: oct 2019
Nutanix Karbon 1.0.2: oct 2019
Nutanix Karbon 1.0.1: may 2019
Nutanix Karbon 1.0: mar 2019
Nutanix Calm 3.1: oct 2020
Nutanix Calm 3.0: jun 2020
Nutanix Calm 2.9.7: jan 2020
Nutanix Calm 2.9.0: nov 2019
Nutanix Calm 2.7.0: aug 2019
Nutanix Calm 2.6.0: feb 2019
Nutanix Calm 2.4.0: nov 2018
Nutanix Calm 5.9: oct 2018
Nutanix Calm 5.8: jul 2018
Nutanix Calm 5.7: dec 2017
Nutanix Era 2.0: sep 2020
Nutanix Era 1.3: jun 2020
Nutanix Era 1.2: jan 2020
Nutanix Era 1.1: jul 2019
Nutanix Era 1.0: dec 2018
NOS = Nutanix Operating System
AOS = Acropolis Operating System
LTS = Long Term Support
STS = Short Term Support
AFS = Acropolis File Services
|
GA Release Dates:
VxRail 7.0.100 (vSAN 7.0 U1): nov 2020
VxRail 7.0 (vSAN 7.0): apr 2020
VxRail 4.7.410 (vSAN 6.7.3): dec 2019
VxRail 4.7.300 (vSAN 6.7.3): sep 2019
VxRail 4.7.212 (vSAN 6.7.2): jul 2019
VxRail 4.7.200 (vSAN 6.7.2): may 2019
VxRail 4.7.100 (vSAN 6.7.1): mar 2019
VxRail 4.7.001 (vSAN 6.7.1): dec 2018
VxRail 4.7.000 (vSAN 6.7.1): nov 2018
VxRail 4.5.225 (vSAN 6.6.1): oct 2018
VxRail 4.5.218 (vSAN 6.6.1): aug 2018
VxRail 4.5.210 (vSAN 6.6.1): may 2018
VxRail 4.5 (vSAN 6.6): sep 2017
VxRail 4.0 (vSAN 6.2): dec 2016
VxRail 3.5 (vSAN 6.2): jun 2016
VxRail 3.0 (vSAN 6.1): feb 2016
NEW
7th Generation VMware software on 14th Generation Dell server hardware.
VxRail is fueled by vSAN software. vSAN's maturity has increased with every iteration since the first release, expanding its range of features with advanced functionality.
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
Depends on specific model/subtype/resource configuration
|
Per Node
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). Except for the Business edition, which has a fixed price per TB, the price per TB decreases as the capacity used by an end-user in each edition increases.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost-neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
|
Per Core + Flash TiB (AOS)
Per Node (AOS, Prism Pro/Ultimate)
Per Concurrent User (VDI)
Per VM (ROBO)
Per TB (Files)
Per VM (Calm)
Per VM (Xi Leap)
In September 2018 Nutanix introduced a 'per core + per Flash TB' (capacity) licensing model alongside the existing 'per node' (appliance) licensing model for AOS. This means software license cost is tightly coupled with the number of physical cores (compute) as well as the amount of flash TBs (storage) in the Nutanix nodes acquired.
In May 2019 Nutanix introduced 'per concurrent user' licensing for VDI use cases and 'per VM' licensing for ROBO use cases. Both VDI and ROBO bundle AOS, AHV and Prism. Both must run on dedicated clusters. ROBO is designed for sites running typically up to 10 VMs.
Capacity-based and VDI-based software licensing are sold in 1 to 7 year terms.
ROBO-based software licensing is sold in 1 to 5 year terms.
Appliance-based software licensing is sold for the lifetime of the hardware and is non-transferable.
AOS Editions: Starter, Pro, Ultimate
Prism Central Editions: Starter, Pro
AOS Editions:
Starter limits functionality, for example: No IBM Power, Cluster size restricted to 12, Replication Factor restricted to 2; lacks post-process deduplication, post-process compression, Availability Domains, Self Service Restore, Cloud Connect, VSS Integration, Intelligent VM Placement, Virtual Network Configuration, Host Profiles.
Ultimate exclusively offers VM Flash Mode, Multiple Site DR (1-to many, many-to 1), Metro Availability, Disaster Recovery with NearSync or Sync Replication, On-premises Leap, Data-at-Rest Encryption and Native KMS. All except VM Flash Mode can be purchased as an add-on license for Pro edition.
Prism Central Editions:
Prism Starter is included with every edition of Acropolis for single and multiple site management. It enables registration and management of multiple Prism Element clusters, 1-click upgrades of Prism Central through Life Cycle Manager (LCM), and monitoring and troubleshooting of managed clusters. Prism Pro is available as an add-on subscription. Pro adds customizable dashboards, capacity planning and analysis tools, advanced search capabilities, low-code/no-code automation, and reporting. Prism Ultimate adds application discovery and monitoring, budgeting/chargeback and cost metering for resources, and a SQL Server monitoring content pack. Every Prism Central deployment includes a 90-day trial version of this license tier.
AOS Add-ons require separate licensing:
Nutanix Files:
Nutanix Files is licensed separately and is sold under two different capacity licenses. Nutanix Files Add-on License for HCI is for Nutanix Files running on mixed mode clusters. Nutanix Files Dedicated License is for Nutanix Files running on dedicated clusters.
Nutanix Calm:
Nutanix Calm is licensed separately and is sold as an annual subscription on a per virtual-machine (VM) basis. Calm licenses are required only for VMs managed by Calm, running in either the Nutanix Enterprise cloud or public clouds. Nutanix Calm is sold in 25 VM subscription license packs. Both Prism Starter and Prism Pro include perpetual entitlement for the first 25 VMs managed by Calm.
Nutanix Microsegmentation:
Nutanix Microsegmentation is licensed separately and is sold as an annual subscription on a per node basis. Licenses are needed for all nodes in a cluster where microsegmentation functionality will be used. This option requires a Nutanix cluster managed by Prism Central and using the AHV virtualization solution. Licenses are sold in 1 to 5 year subscription terms. Prism Central with Starter license is required to manage microsegmentation policies.
Xi Services:
Nutanix provides several subscription Plans: Pay-As-You-Go, 1 year and 3 year.
Xi Leap:
Xi Leap is licensed separately and is sold as an annual subscription on a per virtual-machine (VM) basis. Each VM protected by Xi Leap falls into one of three pricing levels: Basic Protection (RPO = 24+ hours; 2 snapshots; 1 TB/VM), Advanced Protection (RPO = 4+ hours; 1 week of snapshots; 2 TB/VM), Premium Protection (RPO = 1+ hours; 1 month of snapshots; 5 TB/VM). Allowed capacity includes space allocated by snapshots.
Xi Frame:
Xi Frame is licensed separately and is sold as an annual subscription on a named-user or concurrent-user basis.
|
Per Node
NEW
Every VxRail 7.0.100 node comes bundled with:
- Dell EMC VxRail Manager 7.0.100
- VxRail Manager Plugin for VMware vCenter
- VMware vCenter Server Virtual Appliance (vCSA) 7.0 U1
- VMware vSphere 7.0 U1
- VMware vSAN 7.0 U1
- VMware vRealize Log Insight 8.1.1.0
- ESRS 3.46
As of VxRail 3.5 VMware vSphere licenses have to be purchased separately. VxRail nodes come pre-installed with VMware vSphere 6.7 U3 Patch01 and require a valid vSphere license key to be entered. VMware vSphere Data Protection (VDP) 6.1 is included as part of the vSphere license and is downloadable through the VxRail Manager.
VMware vSAN licenses have to be purchased separately as well. As of VxRail 4.7 there is a choice of either vSAN 6.7 U3 Standard, Advanced or Enterprise licenses.
Dell EMC VxRail 7.0.100 supports VMware Cloud Foundation (VCF) 4.1. VMware Cloud Foundation (VCF) is a unified SDDC platform that brings together VMware ESXi, VMware vSAN, VMware NSX, and optionally, vRealize Suite components, VMware NSX-T, VMware Enterprise PKS, and VMware Horizon 7 into one integrated stack.
Dell EMC VxRail 7.0 does not support VMware vLCM; vLCM is disabled in vCenter.
Dell EMC VxRail 7.0 does not support appliances based on the Quanta hardware platform.
Dell EMC VxRail 7.0 does not support RecoverPoint for Virtual Machines (RP4VM).
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
Subscriptions: Basic, Production, Mission Critical
Most notable differences:
- Basic offers 8x5 support, Production and Mission Critical offer 24x7 support.
- Basic and Production target 2/4/8 hour response times depending on severity level, whereas Mission Critical targets 1/2/4 hour response times.
- Basic and Production provide Next Business Day hardware replacement, whereas Mission Critical provides 4 Hour hardware replacement.
- Mission Critical exclusively offers direct routing to senior level engineers.
|
Per Node
Dell EMC offers two types of VxRail Appliance Support:
- Enhanced provides 24x7 support for production environments, including around-the-clock technical support, next-business-day onsite response, proactive remote monitoring and resolution, and installation of non-customer-replaceable units.
- Premium provides mission critical support for fastest resolution, including 24x7 technical support and monitoring, priority onsite response for critical issues, installation of operating environment updates, and installation of all replacement parts.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony Software-Defined Storage Services are focused on variable deployment models. The range covers classical storage virtualization, converged and hybrid-converged setups, and hyperconverged deployments, including seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
Nutanix is stack-oriented.
With the ECP platform Nutanix aims to provide all functionality required in a Private Cloud ecosystem through a single platform.
|
Hypervisor
Compute
Storage
Network (limited)
Data Protection (limited)
Management
Automation&Orchestration
VMware is stack-oriented, whereas the VxRail platform itself is heavily storage-focused.
With the vSAN/VxRail platforms VMware aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 25, 40 GbE
Nutanix hardware models include redundant ethernet connectivity using SFP+ or Base-T. Nutanix recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
1, 10, 25 GbE
VxRail hardware models include redundant ethernet connectivity using SFP+ or Base-T. Dell EMC recommends at least 10GbE to avoid the network becoming a performance bottleneck.
VxRail 4.7 added automatic network configuration support for select Dell top-of-rack (TOR) switches.
VxRail 4.7.211 added support for Qlogic and Mellanox NICs, as well as SmartFabric support for Dell EMC S5200 25Gb TOR switches.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use cases because of its flexible technical architecture; however, this also means there are a lot of design choices to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, which keeps the design process relatively simple. However, because many features in SANsymphony are optional and can be turned on/off, each one needs to be taken into consideration when preparing a detailed design.
|
High
Nutanix ECP is able to meet many different use cases, but each requires specific design choices in order to reach an optimal end state. This is not limited to choosing the right building blocks and the right software edition, but extends to the use (or non-use) of some of its core data protection and data efficiency mechanisms. In addition, the end-user's hypervisor choice prohibits the use of some advanced functionality, as these features are only available on Nutanix's own hypervisor, AHV. As Nutanix continues to add functionality to its already impressive array of capabilities, designing the solution could grow even more complex over time.
|
Medium
Dell EMC VxRail was developed with simplicity in mind, both from a design and a deployment perspective. VMware vSAN's uniform platform architecture, running at the core of VxRail, is meant to be applicable to all virtualization use cases and seeks to provide important capabilities either natively or by leveraging features already present in the VMware hypervisor, vSphere, on a per-VM basis. Because there is no tight integration involved, especially with regard to data protection, choices need to be made on whether to incorporate 1st-party or 3rd-party solutions into the overall technical design.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
Login VSI (May 2017)
ESG Lab (Feb 2017; Sep 2020)
SAP (Nov 2016)
ESG Lab (Sep 2020)
Title: 'Nutanix Architecture and Performance Optimization'
Workloads: Synthetic, High-perf database, OLTP, Postgres Analytics
Benchmark Tools: Nutanix X-Ray (FIO), Silly Little Oracle Benchmark (SLOB), Pgbench
Hardware: Nutanix All-NVMe NX-8170-G7, 4-node cluster, AOS 5.18
Login VSI (May 2017)
Title: 'Citrix XenDesktop on AHV'
Workloads: Citrix XenDesktop VDI
Benchmark Tools: Login VSI (VDI)
Hardware: Nutanix All-flash NX-3460-G5, 6-node cluster, AOS 5.0.2
ESG Lab (Feb 2017)
Title: 'Performance Analysis: Nutanix'
Workloads: MSSQL OLTP, Oracle OLTP, MS Exchange, Citrix XenDesktop VDI
Benchmark Tools: Benchmark Factory (MSSQL), Silly Little Oracle Benchmark (Oracle), Jetstress (MS Exchange), Login VSI (Citrix XenDesktop)
Hardware: Nutanix All-flash NX-3460-G5, 4 node-cluster, AOS 5.0
SAP (Nov 2016)
Title: 'SAP Sales and Distribution (SD) Standard Application Benchmark'.
Workloads: SAP ERP
Benchmark Tools: SAPSD
Hardware: All-Flash Nutanix NX8150 G5, single-node, AOS 4.7
|
StorageReview (Dec 2018)
Principled Technologies (Jul 2017, Jun 2017)
StorageReview (Dec 2018)
Title: 'Dell EMC VxRail P570F Review'
Workloads: MySQL OLTP, MSSQL OLTP, Generic profiles
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL), Vdbench (generic)
Hardware: All-Flash Dell EMC VxRail P570F, vSAN 6.7
Principled Technologies (Jul 2017)
Title: 'Handle more orders with faster response times, today and tomorrow'
Workloads: MSSQL OLTP
Benchmark Tools: DS2 (MSSQL)
Hardware: All-Flash Dell EMC VxRail P470F, 4-node cluster, VxRail 4.0 (vSAN 6.2)
Principled Technologies (Jun 2017)
Title: 'Empower your databases with strong, efficient, scalable performance'
Workloads: MSSQL OLTP
Benchmark Tools: DS2 (MSSQL)
Hardware: All-Flash Dell EMC VxRail P470F, 4-node cluster, VxRail 4.0 (vSAN 6.2)
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16 TB) and time (30 days); all of these limits can be expanded upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Community Edition (forever)
Hyperconverged Test Drive in GCP
Proof-of-Concept (POC)
Partner Driven Demo Environment
Xi Services Free Trial (60-days)
AOS Community Edition (CE) is freely downloadable after registering online and offers limited platform support (Acropolis Hypervisor = AHV) and scalability (4 nodes). AOS CE can be installed on all commodity hardware platforms that meet the hardware requirements. AOS CE use is not time-restricted. AOS CE is not for production environments.
A small running setup of AOS Community Edition (CE) can also be accessed instantly in Google Cloud Platform (GCP) by registering at nutanix.com/test-drive-hyperconverged-infrastructure. The Test Drive is limited to 2 hours.
Ravello Smart Labs on AWS/GCE provides nested virtualization and offers a blueprint to run AOS CE in the public cloud. Using Ravello Smart Labs requires a subscription.
Nutanix also offers instant online access to live demo environments for partners to educate/show their customers.
Nutanix Xi Services include a Free Plan that provides a free trial for 60 days. At the end of the 60-day trial, end-user organisations can choose to switch to a Paid Plan, or decide later. A Free Plan includes either DR for 100 VMs (100GB each) or running VMs (2vCPU, 4GB RAM each), and includes professional support.
AWS = Amazon Web Services
GCE = Google Compute Engine
|
Proof-of-Concept (POC)
vSAN: Free Trial (60-days)
vSAN: Online Lab
VxRail 7.0 runs VMware vSAN 7.0 software at its core. vSAN Evaluation is freely downloadable after registering online. Because it is embedded in the hypervisor, the free trial includes vSphere and vCenter Server. vSAN Evaluation can be installed on all commodity hardware platforms that meet the hardware requirements. vSAN Evaluation use is time-restricted (60 days). vSAN Evaluation is not for production environments.
VMware also offers a vSAN hosted hands-on lab that lets you deploy, configure and manage vSAN in a contained environment, after registering online.
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer in the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: Nutanix ECP is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Nutanix ECP can also serve in a dual-layer model by providing storage to non-Nutanix hypervisor hosts, bare metal hosts and Windows clients (Please view the compute-only scale-out option for more information).
|
Single-Layer
Single-Layer: VMware vSAN is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
VMware vSAN can partially serve in a dual-layer model by providing storage also to other vSphere hosts within the same cluster that do not contribute storage to vSAN themselves or to bare metal hosts. However, this is not a primary use case and also requires the other vSphere hosts to have vSAN enabled (Please view the compute-only scale-out option for more information).
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Nutanix, customer deployments can be executed in hours instead of days.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Dell EMC, customer deployments can be executed in hours instead of days.
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM, as this introduces virtualization-layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
The Nutanix Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the Nutanix storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor is not impeding the I/O flow. AOS 5.5 Controller VMs were running CentOS-7.3 with Python 2.7.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Kernel Integrated
Virtual SAN is embedded into the VMware hypervisor. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.0U1A-7.0U1
Microsoft Hyper-V 2012R2/2016/2019*
Microsoft CPS Standard
Nutanix Acropolis Hypervisor (AHV)
Citrix XenServer 7.0.0-7.1.0CU2**
NEW
Nutanix currently supports 4 major hypervisor platforms, where many others support only 1 or 2. Nutanix offers its own hypervisor called Acropolis Hypervisor (AHV), which is based on Linux KVM. Using different hypervisors and/or hypervisor clusters within the same Nutanix cluster is supported.
Nutanix has official support for Microsoft Cloud Platform System (CPS), which bundles Windows Server 2012 R2, System Center 2012 R2 and Windows Azure Pack for easier hybrid cloud configuration. Nutanix offers the CPS Standard version pre-installed on nodes.
*Hyper-V 2016 is not supported on NX platform nodes with IvyBridge or SandyBridge processors (motherboard designator starts with X9). Hyper-V 2016 requires G5 hardware and is not supported on G3 and G4 hardware.
**Citrix Hypervisor 8.x is not supported on Nutanix server nodes by Citrix.
AOS 5.1 introduced general availability (GA) support for the Citrix XenServer hypervisor for use with XenApp and XenDesktop in Nutanix clusters.
AOS 5.2 exclusively introduced general availability (GA) support for the AHV hypervisor for use on IBM Power Systems. Only Linux is currently supported as Guest OS.
AOS 5.5 introduced support for Microsoft Hyper-V 2016 and provides 1-click non-disruptive upgrades from Hyper-V 2012 R2. AOS 5.5 also introduces support for virtual hard disks (.vhdx) that are 2 TB or greater in size in Hyper-V clusters.
AOS 5.17 introduces support for Microsoft Hyper-V 2019.
|
VMware vSphere ESXi 7.0 U1
NEW
VMware Virtual SAN is an integral part of the VMware vSphere platform; as such it cannot be used with any other hypervisor platform.
Dell EMC VxRail and vSAN support a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
NFS
SMB3
iSCSI
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
vSAN (incl. WSFC)
VMware uses a proprietary protocol for vSAN.
vSAN 6.1 and upwards support the use of Microsoft Failover Clustering (MSFC). This includes MS Exchange DAG and SQL Always-On clusters when a file share witness quorum is used. The use of a failover clustering instance (FCI) is not supported.
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 introduced native support for SCSI-3 Persistent Reservations (PR), which enables Windows Server Failover Clusters (WSFC) to be directly deployed on native vSAN VMDKs. This capability enables migrations from legacy deployments on physical RDMs or external storage protocols to VMware vSAN.
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Microsoft Windows Server 2008R2/2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2
SLES 11/12
Oracle Linux 6.7/7.2
AIX 7.1/7.2 on POWER
Oracle Solaris 11.3 on SPARC
ESXi 5.5/6 with VMFS (very specific use-cases)
Nutanix Volumes, previously Acropolis Block Services (ABS), provides highly available block storage as iSCSI LUNs to clients. Clients can be non-Nutanix servers external to the cluster or guest VMs internal or external to the cluster, with the cluster block storage configured as one or more volume groups. This block storage acts as the iSCSI target for client Windows or Linux operating systems running on a bare metal server or as guest VMs using iSCSI initiators from within the client operating systems.
AOS 5.5 introduced support for Windows Server 2016.
IBM AIX: PowerHA cluster configurations are not supported as AIX requires a shared drive for cluster configuration information and that drive cannot be connected over iSCSI.
Nutanix Volumes supports exposing LUNs to ESXi clusters in very specific use cases.
|
N/A
Dell EMC VxRail does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
Block storage acts as one or more targets for client Windows or Linux operating systems running on a bare metal server or as guest VMs using iSCSI initiators from within the client operating systems.
ABS does not require multipath I/O (MPIO) configuration on the client but it is compatible with clients that are currently using or configured with MPIO.
|
N/A
Dell EMC VxRail does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
Built-in (native)
Nutanix provides its own software plugins for container support (both Docker and Kubernetes).
Nutanix also developed its own container platform software called 'Karbon'. Karbon provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
Nutanix Karbon is not a hard requirement for running Docker containers and Kubernetes on top of ECP, however it does make it easier to use and consume.
Nutanix Karbon can only be used in combination with Nutanix native hypervisor AHV. VMware vSphere and Microsoft Hyper-V are not supported at this time. As Nutanix Karbon leverages Nutanix Volumes, it is not available for the Starter edition.
|
Built-in (Hypervisor-based, vSAN supported)
VMware vSphere Docker Volume Service (vDVS) technology enables running stateful containers backed by storage technology of choice in a vSphere environment.
vDVS comprises a Docker plugin and a vSphere Installation Bundle (VIB) which bridge the Docker and vSphere ecosystems.
vDVS abstracts underlying enterprise-class storage and makes it available as Docker volumes to a cluster of hosts running in a vSphere environment. vDVS can be used with enterprise-class storage technologies such as vSAN, VMFS, NFS and VVol.
vSAN 6.7 U3 introduces support for VMware Cloud Native Storage (CNS). When Cloud Native Storage is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker EE 1.13+
Node OS CentOS 7.5
Kubernetes 1.11-1.14
Nutanix software plugins support both Docker and Kubernetes.
Nutanix Docker Volume plugin (DVP) supports Docker 1.13 and higher.
Nutanix Karbon 1.0.3 supported the following OS images:
- Node OS CentOS 7.5.1804-ntxnx-0.0, CentOS 7.5.1804-ntxnx-0.1
- Kubernetes v1.13.10, v1.14.6, v1.15.3
The current version of Nutanix Karbon is 2.1
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker containers to use storage persistently; in other words, it enables SANsymphony data volumes to persist beyond the lifetime of a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a Bare Metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
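To make the consumption model concrete, below is a minimal sketch (Python, using the Docker SDK) of how a container host would request a persistent volume through a certified Docker volume plugin such as the DataCore SDS DVP. The driver alias and the driver option key shown are illustrative assumptions only; the authoritative names are listed on the plugin's Docker Hub page.
# Minimal sketch: provisioning a persistent volume through a Docker volume
# plugin and attaching it to a container (Docker SDK for Python).
import docker

client = docker.from_env()

# Ask the volume plugin (instead of the default 'local' driver) to provision
# a volume on the backing SANsymphony storage cluster.
volume = client.volumes.create(
    name="app-data",
    driver="datacore/sds-volume-plugin",   # hypothetical plugin alias
    driver_opts={"size": "10GB"},          # hypothetical option key
)

# The volume now persists independently of any single container or host.
container = client.containers.run(
    "postgres:13",
    detach=True,
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.id, volume.name)
Because the plugin provides storage over iSCSI/FC directly, a volume created this way can later be re-attached from another container host connected to the same storage cluster.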
|
Docker Volume plugin (certified)
The Nutanix Docker Volume plugin (DVP) enables Docker containers to use storage persistently; in other words, it enables Nutanix data volumes to persist beyond the lifetime of a container or a container host. Nutanix leverages Acropolis Block Services (ABS) to provide storage to containers through in-guest iSCSI connections. This effectively means that the hypervisor layer is bypassed.
The Nutanix Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the online Docker Store. The plug-in is installed inside the Docker host, which can be either a VM or a Bare Metal host connected to a Nutanix storage cluster.
The Nutanix Docker Volume plugin (DVP) is supported with AOS 4.7 and later.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The Nutanix native plug-ins are container-host centric and as such can be used across all Nutanix-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, Nutanix AHV) as well as on bare metal platforms.
Nutanix Karbon can only be used in combination with Nutanix native hypervisor AHV. VMware vSphere and Microsoft Hyper-V are not supported at this time.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03+ or higher can be used.
|
CentOS 7
Red Hat Enterprise Linux (RHEL) 7.3
Ubuntu Linux 16.04.2
The Nutanix Docker Volume Plugin (DVP) has been qualified for the mentioned Linux OS versions. However, the plug-in may also work with older OS versions.
Container hosts running the Windows OS are not (yet) supported.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes
Nutanix Karbon 2.1 supports Kubernetes.
|
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS).
When Cloud Native Storage (CNS) is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS, vVols) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
VCP = vSphere Cloud Provider
CSI = Container Storage Interface
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The Kubernetes Container Storage Interface (CSI) plugin integrates SANsymphony storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
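As an illustration of that configuration step, the sketch below creates a persistent volume claim against a CSI-backed storage class using the official Kubernetes Python client rather than raw YAML. The storage class name is a placeholder assumption; in practice the class is defined by the YAML shipped with the CSI plugin.
# Minimal sketch: requesting dynamically provisioned block storage from a
# CSI-backed StorageClass with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="sansymphony-block",   # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

# The CSI driver provisions a matching PersistentVolume and binds it to the claim.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
A pod can then reference demo-claim in its volume definition like any other PersistentVolumeClaim.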
|
Kubernetes Volume plugin
The Nutanix Container Storage Interface (CSI) Volume Driver for Kubernetes uses Nutanix Volumes and Nutanix Files to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree Nutanix CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using Nutanix Files and Nutanix Volumes storage.
When Nutanix Files is used for persistent storage, applications on multiple pods can access the same storage and also have the benefit of multi-pod read-and-write access.
The Nutanix CSI Volume Driver requires Kubernetes v1.13 or later and AOS 5.6.2 or later. When using Nutanix Volumes, Kubernetes worker nodes must have the iSCSI package installed.
Nutanix also developed its own container platform software called 'Karbon'. Karbon provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
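To illustrate the multi-pod read/write access described above, the sketch below (Kubernetes Python client) requests a ReadWriteMany volume from a Files-backed storage class. The class name is a placeholder assumption; the real name is defined when the Nutanix CSI driver is configured.
# Minimal sketch: a ReadWriteMany claim against a (hypothetical) Nutanix Files
# backed StorageClass, so several pods can mount the same volume for shared
# read/write access. With a Volumes-backed class the access mode would
# typically be ReadWriteOnce instead.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

shared_claim = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-content"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],            # multi-pod read/write
        storage_class_name="nutanix-files",        # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=shared_claim)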
|
Kubernetes Volume Plugin
The VMware vSphere Container Storage Interface (CSI) Volume Driver for Kubernetes leverages vSAN block storage and vSAN file shares to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree VMware vSphere CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using vSAN block storage and vSAN native file shares dynamically provisioned by VMware vSAN File Services.
The VMware vSphere CSI Volume Driver requires Kubernetes v1.14 or later and VMware vSAN 6.7 U3 or later. vSAN File Services requires VMware vSAN/vSphere 7.0.
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, vVol, VMFS or NFS datastores.
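As a sketch of how those primitives fit together, the snippet below (Kubernetes Python client) defines a StorageClass that maps claims to a vSAN storage policy via the vSphere CSI provisioner. The storage policy name and parameter key are assumptions that depend on the environment and driver version.
# Minimal sketch: a StorageClass for dynamic provisioning on vSAN through the
# vSphere CSI driver. Claims referencing this class get volumes placed
# according to the named vSAN storage policy.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsan-policy-based"),
    provisioner="csi.vsphere.vmware.com",                              # vSphere CSI driver
    parameters={"storagepolicyname": "vSAN Default Storage Policy"},   # assumed policy name
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)
storage.create_storage_class(body=sc)
Claims that reference this class are then provisioned dynamically on the vSAN datastore.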
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop (certified)
Citrix Cloud (certified)
Parallels RAS
Xi Frame
Nutanix has published Reference Architecture whitepapers for VMware Horizon and Citrix XenDesktop platforms.
Nutanix Acropolis Hypervisor (AHV) is qualified as Citrix Ready. The Citrix Ready Program showcases verified products that are trusted to enhance Citrix solutions for mobility, virtualization, networking and cloud platforms. The Citrix Ready designation is awarded to third-party partners that have successfully met test criteria set by Citrix, and gives customers added confidence in the compatibility of the joint solution offering.
Nutanix qualifies as a Citrix Workspace Appliance: Nutanix and Citrix have worked side-by-side to automate the provisioning and integration of the entire Citrix stack from cloud service to on-premises applications and desktops. Nutanix Prism automatically provisions Citrix Cloud Connectors VMs for instantly connecting the on-premises Nutanix cluster to the XenApp and XenDesktop Service, and registers all nodes in the Nutanix cluster as a XenApp and XenDesktop Service Resource Location.
Parallels Remote Application Server (RAS) is a Nutanix-ready desktop virtualization solution. A Parallels RAS and Nutanix on VMware reference architecture white paper was published in July 2016.
In May 2019 Nutanix introduced Xi Frame as a new option to use Frame Desktop-as-a-Service (DaaS) with apps, desktops, and user data hosted on Nutanix on-premises infrastructure. Xi Frame only supports the Nutanix AHV hypervisor with AOS 5.10 or later. At this time Windows 10, Windows Server 2016 and Windows Server 2019 guest OSes are supported. Linux support (Ubuntu, CentOS) will be added in the second half of 2019.
|
VMware Horizon
Citrix XenDesktop
Dell EMC has published Reference Architecture whitepapers for both VMware Horizon and Citrix XenDesktop platforms.
Dell EMC VxRail 4.7.211 supports VMware Horizon 7.7
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010, when a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 170 virtual desktops/node
Citrix: up to 175 virtual desktops/node
VMware Horizon 7.2: Load bearing is based on Login VSI tests performed on hybrid Lenovo ThinkAgile HX3320 Series using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.15: Load bearing is based on Login VSI tests performed on hybrid Lenovo ThinkAgile HX3320 Series using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepaper, dated July 2019.
|
VMware: up to 160 virtual desktops/node
Citrix: up to 140 virtual desktops/node
VMware Horizon 7.7: Load bearing number is based on Login VSI tests performed on hybrid VxRail V570F appliances using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.15: Load bearing number is based on Login VSI tests performed on hybrid VxRail V570F-B appliances using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding reference architecture whitepapers.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Super Micro (Nutanix branded)
Super Micro (source your own)
Dell EMC (OEM)
Lenovo (OEM)
Fujitsu (OEM)
HPE (OEM)
IBM (OEM)
Inspur (OEM)
Cisco UCS (Select)
Crystal (Rugged)
Klas Telecom (Rugged)
Many (CE only)
When end-user organizations order a Nutanix solution from Nutanix channel partners they get the Nutanix software on Nutanix branded hardware. Nutanix is sourcing this hardware from SuperMicro. End-user organizations also have the option to source their own SuperMicro hardware and buy licensing and support from Nutanix.
Dell and Nutanix reached an OEM agreement in 2015. Lenovo and Nutanix reached an OEM agreement in 2016. IBM and Nutanix reached an OEM agreement in 2017. Customers and prospects should note that these hardware platforms should not be mixed in one cluster (technically possible but not supported).
In July 2016 Nutanix and Crystal Group partnered to provide Enterprise Cloud solutions for extreme environments in energy, mining, hospitality, military, government and more, combining the highly reliable, ruggedized Crystal RS2616PS18 Rugged 2U server platform and the award-winning Nutanix Enterprise Cloud Platform for use in tactical environments.
As of August 2016 Nutanix completed independent validation of running Nutanix software on Cisco Unified Computing System (UCS) C-Series servers. Nutanix for Cisco UCS rack-mount servers is available through select Cisco and Nutanix partners worldwide.
In February 2017 Nutanix and Klas Telecom forged a partnership to transform tactical data center solutions for government and military operations. The Klas Telecom Voyager Tactical Data Center system running the Nutanix Enterprise Cloud Platform allows data center operations to be carried out in the field via a single, airline carry-on-sized case.
As of July 2017 Nutanix completed independent validation of running Nutanix software on Cisco Unified Computing System (UCS) B-Series servers. Nutanix for Cisco UCS blade servers is available through select Cisco and Nutanix partners worldwide.
As of July 2017 Nutanix completed independent validation of running Nutanix software on HPE Proliant servers. Nutanix for HPE Proliant rack-mount servers is available through select HPE and Nutanix partners worldwide.
In September 2017 Nutanix released a version specifically for the IBM Power platform.
In May 2019 Fujitsu announced the availability of Fujitsu XF-series, combining Nutanix software with Fujitsu PRIMERGY servers.
In October 2019 HPE announced the availability of HPE ProLiant DX solution and HPE GreenLake for Nutanix.
The Nutanix Community Edition (CE) can be run on almost any x86 hardware but comes with community support only. Nutanix CE can also be run on AWS and Google Cloud using Ravello Systems for testing and training purposes.
A Nutanix node can run from AWS, backed with S3 storage as a backup target.
|
Dell
Dell EMC uses a single brand of server hardware for its VxRail solution. Since completion of the Dell/EMC merger, Dell EMC has shifted from Quanta to Dell PowerEdge server hardware. This coincided with the VxRail 4.0 release (December 2016).
In November 2017 Dell refreshed its VxRail hardware base with 14th-generation Dell PowerEdge server hardware.
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
5 Native Models (SX-1000, NX-1000, NX-3000, NX-5000, NX-8000)
15 Native Model sub-types
Different models are available for different workloads.
Nutanix Native G6 and G7 Models (Super Micro):
SX-1000 (SMB, max 4 nodes per block and max 2 blocks)
NX-1000 (ROBO)
NX-3000 (Mainstream; VDI GPU)
NX-5000 (Storage Heavy; Storage Only)
NX-8000 (High Performance)
Dell EMC XC Core OEM Models (hardware support provided through Dell EMC; software support provided through Nutanix):
XC640-4 (ROBO 3-node cluster)
XC640-4i (ROBO 1-node cluster)
XC640-10 (Compute Heavy, VDI)
XC740xd-12 (Storage Heavy)
XC740xd-12C (Storage Heavy - up to 80TB of cold tier expansion)
XC740xd-12R (Storage Heavy - up to 80TB single-node replication target)
XC740xd-24 (Performance Intensive)
XC740xd2 (Storage Heavy - up to 240TB for file and object workloads)
XC940-24 (Memory and Performance Intensive)
XC6420 (High-Density)
XCXR2 (Remote, harsh environments)
Lenovo ThinkAgile HX OEM Models (VMware ESXi, Hyper-V and Nutanix AHV supported):
HX 1000 Series (ROBO)
HX 2000 Series (SMB)
HX 3000 Series (Compute Heavy, VDI)
HX 5000 Series (Storage Heavy)
HX 7000 Series (Performance Intensive)
Fujitsu XF OEM Models (VMware ESXi and Nutanix AHV supported):
XF1070 4LFF (ROBO)
XF3070 10SFF (General Virtualized Workloads)
XF8050 24SFF (Compute Heavy Workloads)
XF8055 12LFF (Storage Heavy Workloads)
HPE ProLiant DX OEM Models:
DX360-4-G10
DX380-8-G10
DX380-12-G10
DX380-24-G10
DX560-24-G10
DX2200-DX170R-G10-12LFF
DX2200-DX190R-G10-12LFF
DX2600-DX170R-G10-24SFF
DX4200-G10-24LFF
IBM POWER OEM Models (AHV-only):
IBM CS821 (Middleware, DevOps, Web Services)
IBM CS822 (Storage Heavy Analytics, Open Source Databases)
Cisco UCS C-Series Models:
C220-M5SX (VDI, Middleware, Web Services)
C240-M5L (Storage Heavy, Server Virtualization)
C240-M5SX (Exchange, SQL, Large Databases)
C220-M4S (VDI, Middleware, Web Services)
C240-M4L (Storage Heavy, Server Virtualization)
C240-M4SX (Exchange, SQL, Large Databases)
Cisco UCS B-Series Models:
B200-M4 in 5108 Blade Chassis within 6248UP/6296UP/6332 Fabrics
HPE Proliant Models (ESXi, Hyper-V and Nutanix AHV supported):
DL360 Gen10 8SFF (VDI, Middleware, Web Services)
DL380 Gen10 2SFF + 12LFF (Storage Heavy, Server Virtualization)
DL380 Gen10 26SFF (High Performance, Exchange, SQL, Large Databases)
HPE Apollo Models (ESXi, Hyper-V and Nutanix AHV supported):
XL170r Gen 10 6SFF (VDI, Middleware, Web Services, Storage Heavy, Server Virtualization, High Performance, Exchange, SQL, Large Databases)
All models support hybrid and all-flash disk configurations.
All models can be deployed as storage-only nodes. These nodes run AHV and thus do not require additional hypervisor licenses.
Citrix XenServer is not supported on the NX-6035C-G5 (storage-only or light-compute) model.
LFF = Large Form Factor (3.5")
SFF = Small Form Factor (2.5")
|
5 Dell (native) Models (E-, G-, P-, S- and V-Series)
Different models are available for different workloads and use cases:
E Series (1U-1Node) - Entry Level
G Series (2U-4Node) - High Density
V Series (2U-1Node) - VDI Optimized
P Series (2U-1Node) - Performance Optimized
S Series (2U-1Node) - Storage Dense
E-, G-, V- and P-Series can be acquired as Hybrid or All-Flash appliance. S-Series can only be acquired as Hybrid appliance.
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro also offers a 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
Native:
1 (NX1000, NX3000, NX6000, NX8000)
2 (NX6000, NX8000)
4 (NX1000, NX3000)
3 or 4 (SX1000)
nodes per chassis
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1 or 4 nodes per chassis
The VxRail Appliance architecture uses a combination of 1U-1Node, 2U-1Node and 2U-4Node building blocks.
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specifications are broadly comparable there is no reason to insist on one hardware vendor over another. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
Yes
Nutanix now allows mixing of all-flash (SSD-only) and hybrid (SSD+HDD) nodes in the same cluster. A minimum of 2 all-flash (SSD-only) nodes are required in mixed all-flash/hybrid clusters.
|
Yes
Dell EMC allows mixing of different VxRail Appliance models within a cluster, except where doing so would create highly unbalanced performance. The first 4 cluster nodes do not have to be identical anymore (previous requirement). All G Series nodes within the same chassis must be identical. No mixing is allowed between hybrid and all-flash nodes within the same storage cluster. All nodes within the same cluster must run the same version of VxRail software.
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: up to 10 options
Depending on the hardware model Nutanix offers one or multiple choices on CPU per Node: 8, 12, 16, 20, 24, 28, 32, 36, 44 cores. None of the hardware platforms offer all of the above processor configurations. Nutanix's current hardware platforms provide up to 5 different CPU configurations.
The NX-1000 series (NX-1065-G7 and NX-1175-G7 submodels), the NX-3000 series (NX-3060-G7 submodel) and the NX-8000 series (NX-8035-G7 and NX-8170-G7 submodels) currently support 2nd generation Intel Xeon Scalable processors (Cascade Lake). The NX-5000 series still lacks a submodel that supports 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
Nutanix still provides multiple native models and submodels that offer a choice of previous generation Intel Xeon Scalable processors (Skylake).
Lenovo ThinkAgile HX (Nutanix OEM) models were the first to ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors. In July 2019 Nutanix native models (G7) and Dell EMC XC (Nutanix OEM) models also started to ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
AOS 5.17 introduces support for AMD processors across hypervisors (ESXi/Hyper-V/AHV). The first hardware partner to release an AMD platform for AOS is HPE with more to follow.
|
Flexible
VxRail offers multiple CPU options in each hardware model.
E-Series can have single or dual socket, and up to 28 cores/CPU.
G-Series can have single or dual socket, and up to 28 cores/CPU.
P-Series can have dual or quad socket, and up to 28 cores/CPU.
S-Series can have single or dual socket, and up to 28 cores/CPU.
V-Series can have dual socket, and up to 28 cores/CPU.
VxRail appliances on Dell PowerEdge 14G servers are equipped with Intel Xeon Scalable processors (Skylake and Cascade Lake).
Dell EMC VxRail 4.7.211 introduced official support for the 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
|
|
Flexible
|
Flexible: up to 10 options
Depending on the hardware model Nutanix offers one or multiple choices on memory per Node: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB, 512GB, 640GB, 768GB, 1TB, 1.5TB or 3.0TB. None of the model sub-types offers all of the above memory configurations. Nutanix's current hardware platforms provide up to 10 different memory configurations.
|
Flexible
The amount of memory is configurable for all hardware models. Depending on the hardware model Dell EMC offers multiple choices on memory per Node, maxing at 3TB for E-, P-, S-, V-series and 2TB for G-Series. VxRail Appliances use 16GB RDIMMS, 32GB RDIMMS, 64GB LRDIMMS or 128GB LRDIMMS.
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: capacity (up to 7 options per disk type); number of disks (Dell, Cisco)
Fixed: Number of disks (hybrid, most all-flash)
Some All-Flash models offer 2 or 3 options for the number of SSDs to be installed in each appliance.
AOS supports the replacement of existing drives by larger drives when they become available. Nutanix facilitates these upgrades.
|
Flexible: number of disks (limited) + capacity
A 14th generation VxRail appliance has 24 disk slots.
E-Series supports 10x 2.5" SAS drives per node (up to 2 disk groups: 1x flash + 4x capacity drives each).
G-Series supports 6x 2.5" SAS drives per node (up to 1 disk group: 1x flash + 5x capacity drives each).
P-Series supports 24x 2.5" SAS drives per node (up to 4 disk groups: 1x flash + 5x capacity drives each).
V-Series supports 24x 2.5" SAS drives per node (up to 4 disk groups: 1x flash + 5x capacity drives each).
S-Series supports 12x 2.5" + 2x 3.5" SAS drives per node (up to 2 disk groups: 1x flash + 6x capacity drives each).
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: up to 4 add-on options
Depending on the hardware model Nutanix by default includes 1GbE or 10GbE network adapters and one or more of the following network add-ons: dual-port 1 GbE, dual-port 10 GbE, dual-port 10 GBase-T, quad-port 10 GbE, dual-port 25 GbE, dual-port 40GbE.
AOS 5.9 introduces support for Remote Direct Memory Access (RDMA). RDMA provides a node with direct access to the memory subsystems of other nodes in the cluster, without needing the CPU-bound network stack of the operating system. RDMA allows low-latency data transfer between memory subsystems and so helps to improve network latency and lower CPU use.
With two RDMA-enabled NICs per node installed, the following platforms support RDMA:
- NX-3060-G6
- NX-3155G-G6
- NX-3170-G6
- NX-8035-G6
- NX-8155-G6
RDMA is not supported for the NX-1065-G6 platform as this platform includes only one NIC per node.
|
Flexible (1, 10 or 25 Gbps)
E-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
G-Series supports 2x 25 GbE (SFP28) or 2x 10GbE (RJ45/SFP+) per node.
P-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
S-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
V-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) per node.
1GbE configurations are only supported with 1-CPU configurations.
Dell EMC VxRail 4.7.300 provides more network design flexibility in creating VxRail clusters across multiple racks. It also provides the ability to expand a cluster beyond one rack using L3 networks and L2 networks.
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla (specific appliance models only)
Currently Nutanix and OEM partners support the following GPUs in a single server:
NVIDIA Tesla M60 (2x in NX-3155G-G5; 1x in NX-3175-G5; 2x in HX3510-G; 3x in XC740xd-24)
NVIDIA Tesla M10 (1x in NX-3170-G6; 2x in NX-3155G-G5; 1x in NX-3175-G5; 2x in XC740xd-24)
NVIDIA Tesla P40 (1x in NX-3170-G6; 2x in NX-3155G-G5; 1x in NX-3175-G5; 3x in XC740xd-24)
NVIDIA Tesla V100 (AHV clusters running AHV-20170830.171 and later only)
Nutanix and OEM partners do not support Intel and/or AMD GPUs at this time.
AOS 5.5 introduced vGPU support for guest VMs in an AHV cluster.
|
NVIDIA Tesla
Dell EMC offers multiple GPU options in the VxRail V-series Appliances.
Currently the following GPUs are provided as add-on in PowerEdge Gen14 server hardware (V-model only):
NVIDIA Tesla M10 (up to 2x in each node)
NVIDIA Tesla M60 (up to 3x in each node)
NVIDIA Tesla P40 (up to 3x in each node)
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
Memory
Storage
GPU
AOS supports the replacement of existing drives by larger drives when they become available. Nutanix facilitates these upgrades.
Empty drive slots in Cisco and Dell hardware models can be filled with additional physical disks when needed.
AOS 5.1 introduced general availability (GA) support for hot plugging virtual memory and vCPU on VMs that run on top of the AHV hypervisor from within the Self-Service Portal only. This means that the memory allocation and the number of CPUs on VMs can be increased while the VMs are powered on. However, the number of cores per CPU socket cannot be increased while the VMs are powered on.
AOS 5.11 with Foundation 4.4 and later introduces support for up to 120 Tebibytes (TiB) of storage per node. For these larger capacity nodes, each Controller VM requires a minimum of 36GB of host memory.
|
Memory
Storage
Network
GPU
At the moment CPUs are not upgradeable in VxRail.
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only (NFS; SMB3)
Storage-only
Storage+Compute: Existing Nutanix clusters can be expanded by adding additional Nutanix nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because Nutanix leverages file-level protocols (NFS/SMB), storage can be presented to hypervisor hosts not participating in the Nutanix cluster. This is also beneficial to migrations, since it allows for online storage vMotions between Nutanix and non-Nutanix storage platforms.
Storage-only: Any Nutanix appliance model can be designated as a storage-only node. The storage-only node is part of the Nutanix storage cluster but does not actively participate in the hypervisor cluster. The storage-only node is always installed with AHV, which economizes on vSphere and/or Hyper-V licenses.
AOS 5.10 introduces the ability to add a never-schedulable node if you want to add a node to increase data storage on a Nutanix AHV cluster, but do not want any VMs to run on that node. In this way compliance and licensing requirements of virtual applications that have a physical CPU socket-based licensing model can be met when scaling out a Nutanix AHV cluster.
|
Compute+storage
Storage+Compute: Existing VxRail clusters can be expanded by adding additional VxRail nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: VMware does not allow non-VxRail hosts to become part of a VxRail cluster. This means it is not an option at this time to install and enable the vSAN VMkernel on hosts that do not contribute storage to the VxRail cluster in order to present vSAN datastores to those hypervisor hosts as well.
Storage-only: N/A; A VxRail node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
3-Unlimited nodes in 1-node increments
The hypervisor cluster scale-out limits still apply, e.g. 64 hosts for VMware vSphere and Microsoft Hyper-V in a single cluster. Nutanix AHV clusters have no scaling limit.
SX1000 clusters scale to a maximum of 4 nodes per block and a maximum of 2 blocks is allowed, making for a total of 8 nodes.
|
3-64 storage nodes in 1-node increments
At minimum a VxRail deployment consists of 3 nodes. From there the solution can be scaled one or more nodes at a time. The first 4 cluster nodes must be identical.
Scaling beyond 32 nodes no longer requires a Request for Product Qualification (RPQ). However, an RPQ is required for Stretched Cluster implementations.
If using 1GbE only, a storage cluster cannot expand beyond 8 nodes.
For the maximum node configuration hypervisor cluster scale-out limits still apply: 64 hosts for VMware vSphere.
VxRail 4.7 introduces support for adding multiple nodes in parallel, speeding up cluster expansions.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In the case where SANsymphony servers are fully operational but cannot see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically and data corruption is prevented by freezing the vDisk and waiting for user input to solve the conflict. Conflict resolutions could be to declare one side of the mirror the new active data set and discard all tracked changes on the other side, or to split the mirror and merge the two data sets into a third vDisk manually.
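The sketch below illustrates, in simplified form, how a mirror resynchronization based on tracked changes could work. It is not DataCore code; the 4 MiB region size, the class and method names, and the duck-typed disk objects (assumed to expose read/write) are assumptions made purely for illustration.

```python
REGION_SIZE = 4 * 1024 * 1024  # assumed change-tracking granularity (4 MiB regions)

class MirroredVirtualDisk:
    def __init__(self, local_disk, remote_disk):
        self.local = local_disk          # primary path (always written)
        self.remote = remote_disk        # alternate path (mirror copy)
        self.link_up = True
        self.dirty_regions = set()       # regions changed while the mirror link was down

    def write(self, offset, data):
        self.local.write(offset, data)                     # primary path stays writable
        if self.link_up:
            self.remote.write(offset, data)                # synchronous mirror update
        else:
            self.dirty_regions.add(offset // REGION_SIZE)  # only track the change

    def on_link_restored(self):
        """Automatic recovery: replay only the tracked regions to the mirror copy."""
        self.link_up = True
        for region in sorted(self.dirty_regions):
            data = self.local.read(region * REGION_SIZE, REGION_SIZE)
            self.remote.write(region * REGION_SIZE, data)
        self.dirty_regions.clear()
```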
|
3 Node minimum (data center)
1 or 2 Node minimum (ROBO)
1 Node minimum (backup)
Nutanix's smallest deployment for Remote Office and Branch Office (ROBO) environments is a 1-node cluster based on the NX-1175S-G5 hardware model. The Starter Edition license is often best suited for this type of deployment.
Nutanix also provides a single-node configuration for off-cluster on-premises backup purposes by leveraging native snapshots. The single node supports compression and deduplication, but cannot be used as a generic backup target. It uses Replication Factor 2 (RF2) to protect data against magnetic disk failures.
AOS 5.6 introduces support for 2-node ROBO environments. These deployments require a Witness VM in a 3rd site as tie-breaker. This Witness is available for VMware vSphere and AHV. The same Witness VM is also used in Stretched Cluster scenarios based on VMware vSphere.
Nutanix's smallest deployment for data center environments contains 3 nodes in a Nutanix Xpress 2U SX-1000 configuration that is specifically designed for small-scale environments. The Starter Edition license is often best suited for this type of deployment.
|
2 Node minimum
VxRail 4.7.100 introduced support for 2-node clusters:
- The deployment is limited to VxRail Series E-Series nodes.
- Only 1GbE and 10GbE are supported. Inter-node VxRail traffic utilizes a pair of network cables linked between the physical nodes.
- A customer-supplied external vCenter is required that does not reside on the 2-node cluster.
- A Witness VM that monitors the health of the 2-node cluster is required and does not reside on the 2-node cluster.
2-node clusters are not supported when using the VxRail G410 appliance.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Distributed File System (ADSF)
Nutanix is powered by the Acropolis Distributed Storage Fabric (ADSF), previously known as Nutanix Distributed File System (NDFS).
|
Object Storage File System (OSFS)
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but DataCore allows such tasks to be automated and integrated with VM orchestration, for example using PowerShell.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that today choose not to use data locality advocate that the additional network latency is negligible.
|
Full
Data follows the VM, so if a VM is moved to another server/node, when data is read that data is copied to the node where the VM resides. New/changed data is always written locally and to one or more remote locations for protection.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that today choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example. Apart from the read cache, VxRail only uses data locality in stretched clusters to avoid high latency.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that today choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
The software takes ownership of the unformatted physical disks available in the host.
|
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
NEW
The software takes ownership of the unformatted physical disks available inside the host.
VMware vSAN 7.0 U1 introduces the HCI Mesh concept. With VMware HCI Mesh a vSAN cluster can leverage the storage of remote vSAN clusters for hosting VMs without sacrificing important features such as HA and DR. Up to 5 remote vSAN datastores can be mounted by a single vSAN cluster. HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols. It is fully software-based and does not require any specialized hardware.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
Hybrid (Flash+Magnetic)
All-Flash
|
Hybrid (Flash+Magnetic)
All-Flash
Hybrid hosts cannot be mixed with All-Flash hosts in the same VxRail cluster.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SuperMicro (G3,G4,G5): DOM
SuperMicro (G6): M2 SSD
Dell: SD or SSD
Lenovo: DOM, SD or SSD
Cisco: SD or SSD
|
SSD
Each VxRail Gen14 node contains 2x 240GB SATA M.2 SSDs with RAID1 protection to host the VMware vSphere hypervisor software.
|
|
|
|
Memory |
|
|
|
DRAM
|
DRAM
|
DRAM
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
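A minimal sketch of this watermark behaviour is shown below. It is not DataCore code: the 70%/50% thresholds, the class and method names, and the backend object are assumptions chosen only to illustrate how a write cache could switch between write-back and write-through under pressure.

```python
class WriteCache:
    def __init__(self, capacity_bytes, high_watermark=0.70, low_watermark=0.50):
        self.capacity = capacity_bytes
        self.high = high_watermark          # above this ratio: switch to write-through
        self.low = low_watermark            # below this ratio: switch back to write-back
        self.dirty_bytes = 0
        self.write_through = False

    def write(self, data, backend):
        if self.write_through:
            backend.write(data)             # bypass the cache until pressure drops
        else:
            self.dirty_bytes += len(data)   # buffer in DRAM, destage to disk later
        self._adjust_mode()

    def destaged(self, nbytes):
        """Called when dirty data has been flushed to the persistent disk layer."""
        self.dirty_bytes = max(0, self.dirty_bytes - nbytes)
        self._adjust_mode()

    def _adjust_mode(self):
        ratio = self.dirty_bytes / self.capacity
        if not self.write_through and ratio > self.high:
            self.write_through = True       # too much dirty write data: protect the cache
        elif self.write_through and ratio < self.low:
            self.write_through = False      # balance regained: resume write-back
```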
|
Read Cache
|
Read Cache
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Configurable
The memory of the Virtual Storage Controllers (VSCs) can be tuned to allow for a larger read cache by allocating more memory to these VMs. The amount of read cache is assigned dynamically; everything that is not used by the OS is used for read caching. Depending on other active internal processes, the amount of read cache can be larger or smaller.
|
Non-configurable
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
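As a quick worked example of that sizing rule (illustrative only; the function name and the sample host memory sizes are assumptions):

```python
def memory_read_cache_gb(host_memory_gb):
    # 0.4% of host memory, capped at 1 GB
    return min(host_memory_gb * 0.004, 1.0)

for mem in (128, 256, 512):
    print(f"{mem} GB host -> {memory_read_cache_gb(mem):.2f} GB read cache")
# 128 GB host -> 0.51 GB read cache
# 256 GB host -> 1.00 GB read cache
# 512 GB host -> 1.00 GB read cache
```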
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
Nutanix supports NVMe drives in: Nutanix Community Edition (NCE), NX-3060-G6, NX-3170-G6, NX-8035-G6 and NX-8155-G6, as well as respective OEM models.
AOS 5.17 introduced All-NVMe drive support.
AOS 5.18 introduces Nutanix Blockstore that streamlines the I/O stack, taking the file system out of the equation altogether. With Blockstore Nutanix ECP effectively bypasses the OS (=Linux) kernel context switches and thus manages to accelerate AOS storage processing. Leveraging Nutanix Blockstore together with NVMe/Optane storage media delivers significant IOPS and latency improvements with no changes to the applications.
|
SSD; NVMe
VxRail Appliances support a variety of SSDs.
Cache SSDs (SAS): 400GB, 800GB, 1.6TB
Cache SSDs (NVMe): 800GB, 1.6TB
Capacity SSDs (SAS/SATA): 1.92TB, 3.84TB, 7.68TB
Capacity SSDs (NVMe): 960GB, 1TB, 3.84TB, 4TB
VxRail does not support mixing SAS/SATA SSDs in the same disk group.
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Read/Write Cache
Storage Tier
|
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
In all VxRail configurations 1 separate SSD per disk group is used for caching purposes. The other disks in the disk group are used for persistent storage of data.
For All-flash configurations, the flash device(s) in the cache tier are used for write caching only (no read cache) as read performance from the capacity flash devices is more than sufficient.
Two different grades of flash devices are used in an All-flash VxRail configuration: Lower capacity, higher endurance devices for the cache layer and more cost effective, higher capacity, lower endurance devices for the capacity layer. Writes are performed at the cache layer and then de-staged to the capacity layer, only as needed.
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
Hybrid: 1-4 SSDs per node
All-Flash: 3-24 SSDs per node
NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node
Hybrid SSDs: 480GB, 960GB, 1.92TB, 3.84TB, 7.68TB
All-Flash SSDs: 480GB, 960GB, 1.92TB, 3.84TB, 7.68TB
NVMe: 1.6TB, 2TB, 4TB
|
Hybrid: 4 Flash devices per node
All-Flash: 2-24 Flash devices per node
Each VxRail node always requires 1 high-performance SSD for write caching.
In Hybrid VxRail configurations the high-performance SSD in a disk group is also used for read caching. Per disk group 3-5 HDDs can be used as persistent storage (capacity drives).
In All-Flash VxRail configurations the high-performance SSD in a disk group is only used for write caching. Per node 1-5 SSDs can be used for read caching and persistent storage (capacity drives).
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
Hybrid: SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
Hybrid: SAS or SATA
VxRail Appliances support a variety of HDDs.
SAS 10K: 1.2TB, 1.8TB, 2.4TB
SATA 7.2K: 2.0TB, 4.0TB
VMware vSAN supports the use of 512e drives. 512e magnetic hard disk drives (HDDs) use a physical sector size of 4096 bytes, but the logical sector size emulates a sector size of 512 bytes. Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.
VMware vSAN 6.7 introduces support for 4K native (4Kn) mode.
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Persistent Storage
|
Persistent Storage
|
Persistent Storage
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw storage device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
2-20 SATA HDDs per node
Hybrid HDDs: 1TB, 2TB, 4TB, 6TB, 8TB, 12TB
|
3-5 SAS/SATA HDDs per disk group
In the current configurations there is a choice between 1.2/1.8/2.4TB 10k SAS drives and 2.0/4.0TB 7.2k NL-SAS drives.
The current configuration maximum for a single host/node is 4 disk groups consisting of 1 NVMe drive + 5 HDDs for hybrid configurations or 1 NVMe drive + 5 capacity SSDs for all-flash configurations = total of 24 drives per host/node.
Since a single VxRail G-Series chassis can contain up to 4 nodes, there is a total of 6 drives per node.
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, the applications experience much faster write behavior.
Per default, the limit of dirty-write-data allowed per Virtual Disk is 128MB. This limit could be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is directly written to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
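The following sketch summarizes the acknowledgement path and the UPS-triggered mode switch described above. It is not DataCore code; the class, the node objects and their cache_store/disk_write methods are illustrative assumptions only.

```python
class MirroredWriteBuffer:
    def __init__(self, local_node, partner_node):
        self.local = local_node
        self.partner = partner_node
        self.write_through = False                         # toggled by the UPS signal

    def host_write(self, vdisk, offset, data):
        if self.write_through:
            # power loss signalled: bypass DRAM and write straight to persistent disk
            self.local.disk_write(vdisk, offset, data)
            self.partner.disk_write(vdisk, offset, data)
        else:
            # normal operation: hold the write in DRAM on two separate nodes
            self.local.cache_store(vdisk, offset, data)
            self.partner.cache_store(vdisk, offset, data)  # synchronous mirror copy
        return "ACK"                                       # only now acknowledge the host

    def on_ups_signal(self, power_lost):
        # power lost -> write-through for all virtual disks; restored -> write-back again
        self.write_through = power_lost
```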
|
Flash Layer (SSD; NVMe)
Nutanix supports NVMe drives in: Nutanix Community Edition (NCE), Native G6 and G7 hardware models, as well as respective OEM models.
All mentioned Nutanix models are available in Hybrid (SSD+Magnetic), All-Flash (SSD-only) and All-Flash with NVMe (NVMe+SSD) configurations.
|
Flash Layer (SSD, NVMe)
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirrors active copies is synchronous and bi-directional. Updating of the mirrors backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nutanix's implementation of replicas is called Replication Factor, or RF in short (RF2 = 2N; RF3 = 3N). Maintaining replicas is the default method for protecting data that is written to the Nutanix cluster. An implementation of erasure coding, called EC-X by Nutanix, can be optionally enabled for protecting data once it is write-cold (not overwritten for a certain amount of time). RF and EC-X are enabled on a per-container basis, but are applied at the individual VM/file level.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on a different node in the cluster. The latter happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired replication factor.
Erasure Coding: Nutanix EC-X was introduced in AOS 5.0 and is used for protecting write-cold data only. Nutanix EC-X provides more efficient protection than Nutanix RF, as it does not use full copies of data extents. Instead, EC-X uses parity for protecting data extents and distributes both data and parity across all nodes in the cluster. The number of nodes within a Nutanix cluster determines the EC-X stripe size. When a disk fails, it is marked offline and data needs to be rebuilt in-flight using the parity as data is being read, incurring a performance penalty. At the same time, data re-replication is initiated in order to restore the desired EC-X protection. Nutanix EC-X requires at least a 4-node setup. When Nutanix EC-X is enabled for the write-cold data, it automatically uses the same resiliency level as is already in use for the write-hot data, so 1 parity block per stripe for data with RF2 protection enabled and 2 parity blocks per stripe for data with RF3 protection enabled. Nutanix EC-X is enabled on the Storage Container (=vSphere Datastore) level.
AOS 5.18 introduces EC-X support for Object Storage containers.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-Flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduces the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirrors active copies is synchronous and bi-directional. Updating of the mirrors backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nutanix's implementation of replicas is called Replication Factor, or RF in short (RF2 = 2N; RF3 = 3N). Maintaining replicas is the default method for protecting data that is written to the Nutanix cluster. An implementation of erasure coding, called EC-X by Nutanix, can be optionally enabled for protecting data once it is write-cold (not overwritten for a certain amount of time). RF and EC-X are enabled on a per-container basis, but are applied at the individual VM/file level.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on a different node in the cluster. The latter happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired replication factor.
Erasure Coding: Nutanix EC-X was introduced in AOS 5.0 and is used for protecting write-cold data only. Nutanix EC-X provides more efficient protection than Nutanix RF, as it does not use full copies of data extents. Instead, EC-X uses parity for protecting data extents and distributes both data and parity across all nodes in the cluster. The number of nodes within a Nutanix cluster determines the EC-X stripe size. When a disk fails, it is marked offline and data needs to be rebuilt in-flight using the parity as data is being read, incurring a performance penalty. At the same time, data re-replication is initiated in order to restore the desired EC-X protection. Nutanix EC-X requires at least a 4-node setup. When Nutanix EC-X is enabled for the write-cold data, it automatically uses the same resiliency level as is already in use for the write-hot data, so 1 parity block per stripe for data with RF2 protection enabled and 2 parity blocks per stripe for data with RF3 protection enabled. Nutanix EC-X is enabled on the Storage Container (=vSphere Datastore) level.
AOS 5.18 introduces EC-X support for Object Storage containers.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-Flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduces the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per-Virtual Disk level.
|
Block Awareness (integrated)
Nutanix intelligent software features include Block Awareness. A 'block' represents a physical appliance: Nutanix refers to a 'block' as the chassis, which contains either one, two, or four 'nodes'. The reason for distributing roles and data across blocks is to ensure that if a block fails or needs maintenance the system can continue to run without interruption.
Nutanix Block Awareness can be broken into a few key focus areas:
- Data (The VM data)
- Metadata (Cassandra)
- Configuration Data (Zookeeper)
Block Awareness is automatic (always-on) and requires a minimum of 3 blocks to be activated; otherwise the system defaults to node awareness. The 3-block requirement is there to ensure quorum.
AOS 5.8 introduced support for erasure coding in a cluster where block awareness is enabled. In previous versions of AOS, block awareness was lost when implementing erasure coding. Minimums for erasure coding support are 4 blocks in an RF2 cluster and 6 blocks in an RF3 cluster. Clusters with erasure coding that have fewer blocks than specified will not regain block awareness after upgrading to AOS 5.8.
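The placement idea behind block awareness can be illustrated with the small sketch below: copies of a piece of data are spread across nodes in different blocks, so losing a whole chassis never removes all copies. This is not Nutanix code; the round-robin node choice, the function name and the example topology are simplifying assumptions.

```python
from itertools import cycle

def place_replicas(blocks, rf):
    """blocks: dict of block_id -> list of node names; rf: replication factor."""
    if len(blocks) < 3:
        raise ValueError("block awareness needs at least 3 blocks; fall back to node awareness")
    node_iters = {b: cycle(nodes) for b, nodes in blocks.items()}
    chosen_blocks = list(blocks)[:rf]               # one copy per distinct block
    return [next(node_iters[b]) for b in chosen_blocks]

blocks = {"block-A": ["A1", "A2"], "block-B": ["B1", "B2"], "block-C": ["C1", "C2"]}
print(place_replicas(blocks, rf=2))   # ['A1', 'B1'] -- the two copies land in different blocks
```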
|
Failure Domains
Block failure protection can be achieved by assigning nodes in the same appliance to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Rack Fault Tolerance
Rack Fault Tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, redundant copies of data are made and placed on nodes that are not in the same rack. When rack fault tolerance is enabled, the cluster has rack awareness and the guest VMs can continue to run with the failure of one rack (RF2) or two racks (RF3). The redundant copies of guest VM data and metadata exist on other racks when one rack fails.
AOS 5.17 introduces Rack Fault Tolerance support for Microsoft Hyper-V. Now all three hypervisors (ESXi, Hyper-V and AHV) have node, block, and rack level failure domain protection.
|
Failure Domains
Rack failure protection can be achieved by assigning nodes within the same rack to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
RF2 (2N) (primary): 100%
RF3 (3N) (primary): 200%
EC-X (N+1) (secondary): 20-50%
EC-X (N+2) (secondary): 50%
EC-X (N+1): The optimal and recommended stripe size is 4+1 (20% capacity overhead for data protection). The minimum 4-node cluster configuration has a stripe size of 2+1 (50% capacity overhead for data protection).
EC-X (N+2): The optimal and recommended stripe size is 4+2 (50% capacity overhead for data protection). The minimum 6-node cluster configuration also has a stripe size of 4+2.
Because Nutanix Erasure Coding (EC-X) is a secondary feature that is only used for write-cold data, the overall capacity overhead for data protection of an EC-X enabled Storage Container (=vSphere Datastore) is always a combination of RF2 (2N) + EC-X (N+1) or RF3 (3N) + EC-X (N+2).
From AOS 5.18 onwards Nutanix Erasure Coding X (EC-X) is enabled for Object Storage containers.
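The arithmetic behind these figures can be summarized as follows. The sketch is illustrative only: the helper names are assumptions, and vendors may express or round the erasure coding ratio slightly differently from the percentages quoted in this matrix.

```python
def mirror_overhead(copies):
    # RF2 / 2-way mirror keeps 2 copies -> 100% extra; RF3 / 3-way -> 200% extra
    return (copies - 1) * 100

def ec_usable_fraction(data_blocks, parity_blocks):
    # fraction of an N+K erasure coding stripe that holds data (the rest is parity)
    return data_blocks / (data_blocks + parity_blocks)

print(mirror_overhead(2), mirror_overhead(3))   # 100 200
print(round(ec_usable_fraction(4, 1) * 100))    # 80  -> 4+1 stripe
print(round(ec_usable_fraction(2, 1) * 100))    # 67  -> 2+1 stripe (minimum 4-node cluster)
print(round(ec_usable_fraction(4, 2) * 100))    # 67  -> 4+2 stripe
```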
|
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
RAID5: The stripe size used by vSAN for RAID5 is 3+1 (33% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID5 is 4 nodes.
RAID6: The stripe size used by vSAN for RAID6 is 4+2 (50% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID6 is 6 nodes.
RAID5/6 can only be leveraged in vSAN All-flash configurations because of I/O amplification.
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
Disk scrubbing (software)
While writing data, checksums are created and stored. When the data is read again, a new checksum is created and compared to the initial checksum. If incorrect, a checksum is created from another copy of the data. After successful comparison this data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Disk Scrubbing is a background process that is used to perform checksum comparisons of all data stored within the solution. This way stale data is also verified for corruption.
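The read-integrity flow described above can be summarized with a generic Python sketch: a checksum is stored at write time, verified at read time, and a corrupted copy is repaired from an intact replica. This is an illustration of the general technique, not the vendor implementation; the hash choice and data structures are assumptions.

```python
# Generic sketch of checksum-verified reads with repair from a replica.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Replica:
    def __init__(self, data: bytes):
        self.data = data
        self.stored_checksum = checksum(data)  # created while writing

def read_with_integrity_check(replicas: list) -> bytes:
    # Verify the primary copy; on mismatch, fall back to another intact copy
    # and use it to repair the corrupted replica.
    primary, *others = replicas
    if checksum(primary.data) == primary.stored_checksum:
        return primary.data
    for other in others:
        if checksum(other.data) == other.stored_checksum:
            primary.data = other.data                  # repair corrupted copy
            primary.stored_checksum = other.stored_checksum
            return other.data
    raise IOError("no intact copy available")

# Example: silently corrupt the first copy, then read.
copies = [Replica(b"block-42"), Replica(b"block-42")]
copies[0].data = b"block-4X"   # simulated silent corruption
assert read_with_integrity_check(copies) == b"block-42"
assert checksum(copies[0].data) == copies[0].stored_checksum  # repaired
```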
|
Read integrity checks
Disk scrubbing (software)
End-to-end checksum provides automatic detection and resolution of silent disk errors. Creation of checksums is enabled by default, but can be disabled through policy on a per VM (or virtual disk) basis if desired. In case of checksum verification failures data is fetched from another copy.
The disk scrubbing process runs in the background.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
Built-in (native)
Nutanix calls its native snapshot/backup feature Time Stream.
|
Built-in (native)
VMware vSAN uses the 'vSANSparse' snapshot format that leverages VirstoFS technology as well as in-memory metadata cache for lookups. vSANSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
There is also the capability to pair the snapshot feature along with asynchronous replication which provides you with the ability to have a third site long distance remote copy in place with its own retention time.
|
Local + Remote
|
Local
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
GUI: 1-15 minutes (nearsync replication); 1 hour (async replication)
NearSync and Continuous (Metro Availability) remote replication are only available in the Ultimate edition.
Nutanix Files 3.6 introduces support for NearSync. NearSync can be configured to take snapshots of a file server in 1-minute intervals.
AOS 5.9 introduced one-time snapshots of protection domains that have a NearSync schedule.
|
GUI: 1 hour
vSAN snapshots are invoked using the existing snapshot options in the VMware vSphere GUI.
To create a snapshot schedule using the vCenter (Web) Client: click on a VM, then inside the Monitoring tab select Tasks & Events, Scheduled Tasks, 'Take Snapshots…'.
A single snapshot schedule allows a minimum frequency of 1 hour. Manual snapshots can be taken at any time.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM
|
Per VM
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
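A minimal sketch of how such a scripted call could be wired up is shown below. The cmdlet name Add-DcsRollbackMarker and its parameters are hypothetical placeholders used for illustration only; the actual cmdlet and the quiesce/flush steps are application- and environment-specific and should be taken from the SANsymphony PowerShell cmdlet reference.

```python
# Sketch only: quiesce an application, then ask PowerShell to create a CDP
# rollback marker. "Add-DcsRollbackMarker" is a placeholder cmdlet name used
# for illustration; consult the SANsymphony cmdlet reference for the real one.
import subprocess

def quiesce_application() -> None:
    # Application-specific: e.g. freeze database writes and flush caches.
    pass

def unquiesce_application() -> None:
    pass

def create_rollback_marker(virtual_disk: str, label: str) -> None:
    quiesce_application()
    try:
        subprocess.run(
            ["powershell.exe", "-Command",
             f"Add-DcsRollbackMarker -VirtualDisk '{virtual_disk}' -Name '{label}'"],
            check=True,
        )
    finally:
        unquiesce_application()

# Example (requires the SANsymphony PowerShell module on the host):
# create_rollback_marker("SQL-Data-vDisk01", "pre-maintenance")
```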
|
Built-in (native)
Nutanix calls its native snapshot feature Time Stream.
Nutanix calls its cloud backup feature Cloud-Connect.
By combining Nutanix native snapshot feature with its native remote replication mechanism, backup copies can be created on remote Nutanix clusters or within the public cloud (AWS/Azure).
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
Apart from the native features, Nutanix ECP can be used in conjunction with external data protection solutions like VMware's free-of-charge vSphere Data Protection (VDP) backup software, as well as any hypervisor-compatible 3rd party backup application. VDP is part of the vSphere license and requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between Nutanix ECP and VMware VDP.
|
External (vSAN Certified)
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and was expected to go live in the first half of 2019. vSAN 7.0 also did not introduce native data protection.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
To local single-node
To local and remote clusters
To remote cloud object stores (Amazon S3, Microsoft Azure)
Nutanix calls its native snapshot feature Time Stream.
Nutanix calls its cloud backup feature Cloud-Connect.
Nutanix provides a single-node cluster configuration for on-premises off-production-cluster backup purposes by leveraging native snapshots. The single-node supports compression and deduplication, but cannot be used as a generic backup target. It uses Replication Factor 2 (RF2) to protect data against magnetic disk failures.
Nutanix also provides a single-node cluster configuration in the AWS and Azure public clouds for off-premises backup purposes. Data is moved to the cloud in an already deduplicated format. Optionally compression can be enabled on the single-node in the cloud.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point-in-time.
|
NearSync to remote clusters: 1-15 minutes*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
Nutanix calls its cloud backup feature Cloud-Connect.
*The retention of NearSync LightWeight snapshots (LWS) is relatively low (15 minutes), so these snapshots are of limited use in a backup scenario where retentions are usually a lot higher (days/weeks/months).
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Nutanix provides the option to enable Microsoft VSS integration when configuring a backup policy. This ensures application-consistent backups are created for MS Exchange and MS SQL database environments.
AOS 5.9 introduced Application-Consistent Snapshot Support for NearSync DR.
AOS 5.11 introduces support for Nutanix Guest Tools (NGT) in VMware ESXi environments next to native AHV environments. This allows for the use of the native Nutanix Volume Shadow Copy Service (VSS) instead of Microsoft's VSS inside Windows Guest VMs. The Nutanix VSS hardware provider enables integration with native Nutanix data protection. The provider allows for application-consistent, VSS-based snapshots when using Nutanix protection domains. The Nutanix Guest Tools CLI (ngtcli) can also be used to execute the Nutanix Self-Service Restore CLI.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire VM or Single File (local snapshots)
Self-service single-file restore is available for VMware ESXi and AHV (Acropolis Hypervisor based on KVM) environments. It is limited to NTFS basic disks in Windows VMs and requires the installation of the Nutanix Guest Tools (NGT) inside the protected VMs. The Self-Service Restore functionality has been enhanced with an in-guest user interface that allows application administrators to manage their own snapshots and perform single file restores. Prism can be used to manage and configure NGT.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire VM: GUI
Single File: GUI, nCLI
The VM Administrator can initiate a restore from within the Guest VM using the Self-Service Restore (SSR) GUI or through nCLI commands. At this time only Async-DR workflow is supported for the self-service restore feature.
AOS 5.18 introduces self-service restore (=file-level restore) capabilities to Nutanix Leap.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (native)
DataCore SANsymphony's remote replication function, Asynchronous Replication, is called upon when secondary copies will be housed beyond the reach of Synchronous Mirroring, as in distant Disaster Recovery (DR) sites. It relies on a basic IP connection between locations and works in both directions. That is, each site can act as the disaster recovery facility for the other. The software operates near-synchronously, meaning that it does not hold up the application waiting on confirmation from the remote end that the update has been stored remotely.
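The near-synchronous behaviour described above can be sketched generically: the write is acknowledged as soon as it is committed locally, while shipping to the remote site happens in the background. The Python sketch below is an illustration of that pattern under assumed names, not DataCore's implementation.

```python
# Generic illustration of near-synchronous (asynchronous) replication:
# acknowledge the write after the local commit, ship to the remote site
# in the background. Not vendor code.
import queue
import threading

replication_queue = queue.Queue()

def send_to_remote_site(block: bytes) -> None:
    pass  # placeholder for the actual network transfer (may lag behind)

def commit_locally(block: bytes) -> None:
    pass  # placeholder for the local write path / synchronous mirror

def remote_shipper():
    # Background worker draining the queue towards the DR site.
    while True:
        block = replication_queue.get()
        send_to_remote_site(block)
        replication_queue.task_done()

def write(block: bytes) -> str:
    commit_locally(block)          # local persistence
    replication_queue.put(block)   # queued for the remote site
    return "ACK"                   # the application is not held up

threading.Thread(target=remote_shipper, daemon=True).start()
print(write(b"update-1"))          # returns immediately with "ACK"
```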
|
Built-in (native)
Nutanix provides native DR and replication capabilities.
|
Built-in (Stretched Clusters only)
External
VMware vSAN does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore in non-stretched setups it relies on external remote replication mechanisms like VMware's free-of-charge vSphere Replication (VR) or any vSphere compatible 3rd party remote replication application (eg. Zerto VR).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between VMware vSAN and VMware vSphere VR.
As of vSAN 7.0 vSphere Replication objects are visible in the vSAN capacity view. Objects are recognized as vSphere replica type, and space usage is accounted for under the Replication category.
|
|
Remote Replication Scope
Details
|
To remote sites
To MS Azure Cloud
On-premises deployments of DataCore SANsymphony can use Microsoft Azure cloud as an added replication location to safeguard highly available systems. For example, on-premises stretched clusters can replicate a third copy of the data to MS Azure to protect against data loss in the event of a major regional disaster. Critical data is continuously replicated asynchronously within the hybrid cloud configuration.
To allow quick and easy deployment a ready-to-go DataCore Cloud Replication instance can be acquired through the Azure Marketplace.
MS Azure can serve only as a data repository. This means that VMs cannot be restored and run in an Azure environment in case of a disaster recovery scenario.
|
To remote sites
To AWS and MS Azure Cloud
To Xi Cloud (US and UK only)
AWS and MS Azure can only serve as data repositories. This means that VMs cannot be restored and run in an AWS/Azure environment in case of a disaster recovery scenario.
Xi Cloud: The Nutanix DRaaS offering is called 'Xi Leap'. All protected Nutanix VMs can be failed over and run in Xi Cloud and then failed back once the on-prem resources are restored. Xi Leap is currently available in three regions worldwide: US-EAST (Northern Virginia), US-WEST (San Francisco Bay Area) and UK (London). Each region comprises multiple fault-tolerant zones known as Availability Zones.
Both Nutanix AHV and VMware ESXi are supported for Xi Leap.
|
VR: To remote sites, To VMware clouds
vSAN allows for replication of VMs to a different vSAN cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
|
|
Remote Replication Cloud Function
Details
|
Data repository
All public clouds can only serve as data repository when hosting a DataCore instance. This means that VMs cannot be restored and run in the public cloud environment in case of a disaster recovery scenario.
In the Microsoft Azure Marketplace there is a pre-installed DataCore instance (BYOL) available named DataCore Cloud Replication.
BYOL = Bring Your Own License
|
Data repository (AWS/Azure)
DR-site (Xi Cloud)
AWS and MS Azure can only serve as data repositories. This means that VMs cannot be restored and run in an AWS/Azure environment in case of a disaster recovery scenario.
|
VR: DR-site (VMware Clouds)
Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
|
Remote Replication Topologies
Details
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Multiple Site DR (1-to-many, many-to-1) is only available in the Ultimate edition.
AOS 5.17 introduces multi-site disaster recovery. Multi-Site DR combines Nutanix Metro Availability, NearSync and Asynchronous replication with a DR orchestration framework. This enables recovery from the simultaneous failure of two or more datacenters.
|
VR: Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
|
Remote Replication Frequency
Details
|
Continuous (near-synchronous)
SANsymphony Asynchronous Replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes). End-users can inject custom consistency checkpoints based on CDP technology which has no minimum time slot/frequency.
|
Synchronous to remote cluster: continuous
NearSync to remote clusters: 20 seconds*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
Xi Cloud: 1 hour
AOS 5.5 introduced NearSync replication that leverages light-weight snapshots (LWS). The NearSync feature provides the ability to protect data with an RPO as low as 1 minute, minimizing data loss in case of a disaster. NearSync allows for a short time to recover, thus providing a low RTO. NearSync places no restrictions on latency or distance and works with all supported hypervisors. NearSync is a best-effort mechanism: when, for example, the network cannot sustain the low RPO, replication temporarily transitions out of near-sync. NearSync requires all changes to be handled in SSD, so a percentage of SSD space is reserved for NearSync when it is enabled.
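The best-effort behaviour described above can be illustrated with a small, generic sketch in Python: if the observed replication lag exceeds the configured RPO, the schedule temporarily falls back to a coarser interval and returns to NearSync once the lag recovers. The thresholds, names and transition rule below are illustrative assumptions, not Nutanix internals.

```python
# Illustrative only: fall back from NearSync to an hourly schedule when the
# observed replication lag cannot sustain the configured RPO, and return to
# NearSync once the lag recovers. Thresholds are made up for the example.
NEARSYNC_RPO_S = 20          # target RPO in seconds
ASYNC_INTERVAL_S = 3600      # fallback schedule

def choose_schedule(observed_lag_s: float, current: str) -> str:
    if current == "nearsync" and observed_lag_s > NEARSYNC_RPO_S:
        return "async"       # transition out of near-sync (best effort)
    if current == "async" and observed_lag_s <= NEARSYNC_RPO_S:
        return "nearsync"    # the network can sustain the low RPO again
    return current

state = "nearsync"
for lag in (5, 12, 90, 45, 10):   # simulated lag samples in seconds
    state = choose_schedule(lag, state)
    print(f"lag={lag:>3}s -> schedule={state}")
```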
In AOS 5.9+ Asynchronous and NearSync schedules can coexist in a protection domain.
AOS 5.17 enhanced NearSync RPO from 1 minute to 20 seconds. AOS 5.17 also introduced support for NearSync Replication (1-15 minutes RPO) with Xi Leap. Protection Policies can now be configured to use a NearSync replication schedule between two on-prem AHV/ESXi clusters or an on-prem AHV cluster and Xi Cloud Services.
AOS 5.17 introduced Synchronous replication (0 RPO) between two on-prem AHV clusters when using Xi Leap, Nutanix own built-in disaster recovery service offering. This includes replication of VM data, metadata, and associated policies. This means all VM attributes and associated security and orchestration policies are preserved in case of a failover. At this time only an unplanned failover can be performed when leveraging Synchronous replication between two on-prem AHV clusters when using Xi Leap. Before AOS 5.17 AHV did not provide any support for Synchronous replication.
NearSync and Continuous (Metro Availability) remote replication are only available in the Ultimate edition.
Nutanix Files supports asynchronous replication with a minimum interval of 1 hour. Nutanix Files 3.6 introduced support for NearSync.
Nutanix Files 3.6.1 introduces Async disaster recovery (DR) with a 1-hour RPO on physical nodes up to 80 TB.
|
VR: 5 minutes (Asynchronous)
vSAN: Continuous (Stretched Cluster)
The 'Stretched Cluster' feature is only available in the Enterprise edition.
|
|
Remote Replication Granularity
Details
|
VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
VM
iSCSI LUN
VMs and Volume Groups can be replicated by placing them in a Protection Domain.
iSCSI LUNs can be part of a Volume Group.
VMs can be placed in a Consistency Group.
|
VR: VM
|
|
Consistency Groups
Details
|
Yes
SANsymphony provides the option to use Virtual Disk Grouping to enable end-users to restore multiple Virtual Disks to the exact same point-in-time.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
Yes
VMs can be configured as part of a Consistency Group in order to guarantee that all VMs in the group reflect exactly the same point-in-time, thus guaranteeing crash-consistency across VMs.
Application consistency can only be achieved for a single VM, because only 1 VM can be part of a consistency group when selecting application consistent snapshots.
Nutanix supports a maximum of 20 entities (VMs, volume groups) in a single consistency group.
|
VR: No
Protection is on a per-VM basis only.
|
|
|
VMware SRM (certified)
DataCore provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). DataCore SRA 2.0 (SANsymphony 10.0 FC/iSCSI) shows official support for SRM 6.5 only. It does not support SRM 8.2 or 8.1.
There is no integration with Microsoft Azure Site Recovery (ASR). However, SANsymphony can be used with the control and automation options provided by Microsoft System Center (e.g. Operations Manager combined with Virtual Machine Manager and Orchestrator) to build a DR orchestration solution.
|
VMware SRM (certified)
Xi Leap (native; ESXi/AHV; US/UK/EU/JP)
NEW
VMware SRM: Nutanix provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). Nutanix SRA 2.5 (AOS 5.10 and 5.11) shows official support for SRM versions 8.2, 8.1 and 6.5.
Nutanix AOS 5.11 introduced support for NearSync replication. A NearSync schedule can now be created on a protection domain for SRA replication. The NearSync schedule can coexist with an asynchronous schedule.
Xi Leap: In November 2018 Nutanix launched Xi Leap, its own built-in disaster recovery service (DRaaS) offering. Xi Leap allows end-user organisations to protect applications and data running in Nutanix on-premises environments by replicating them to Xi Cloud through protection policies. From the Prism management console, admins select virtual machines for protection and drag and drop them into the cloud environment. Disaster Recovery orchestration is provided by recovery plans, in which the boot sequencing of virtual machines can be controlled, re-IP can be configured and custom scripts can be included. The goal is to provide one-click failover and failback functionality. Xi Leap allows end-user organisations to verify and prove their DR readiness by offering non-disruptive testing and cleanup.
Network connectivity and common management between on-premises and cloud environments are preserved with Xi Leap, allowing end-user organisations to manage the source and target sites as a single environment.
Xi Leap supports Single Sign On (SSO) by integrating with Active Directory Federation Services (ADFS), as well as data at-rest and data in-flight encryption (AES-256).
Xi Leap is currently bound to the following maximums:
- 200 VMs per recovery plan
- 20 categories per recovery plan
- 20 stages in a recovery plan
- 15 categories per stage
- 5 recovery plans executed in parallel
Xi Leap is currently available in 6 regions worldwide: US-EAST (Northern Virginia), US-WEST (San Francisco Bay Area), UK (London), EU-Germany, EU-Italy and AP-Japan. Each region comprises multiple fault-tolerant zones known as Availability Zones.
Xi Leap is supported for AHV and ESXi on-premises hypervisors (AOS 5.11 required).
Xi Leap subscription plans need to be paid for separately.
AOS 5.17 introduced more flexibility and granularity in Nutanix Recovery Plans, including in-guest custom script execution and IP address configuration during the recovery process. The latter enables automated disaster recovery without the need for stretched networks.
AOS 5.18 introduced self-service restore (=file-level restore) capabilities for Xi Leap. AOS 5.18 also introduced Cross-Hypervisor Disaster Recovery (CHDR) Support for NearSync Replication in Leap. This means that CHDR enables recovering VMs from AHV clusters to ESXi-based Nutanix clusters or VMs from ESXi-based Nutanix clusters to AHV clusters.
AOS 5.19 introduces multi-site replication capabilities for Xi Leap. Protection policies can now be created to replicate the recovery points to one or more recovery availability zones. Recovery points can be replicated to at most 2 different AHV or ESXi clusters at the same or different availability zones. To maintain the efficiency of replication, only 1 recovery availability zone can be configured for NearSync replication schedules (1–15 minutes RPO) and Synchronous (0 RPO) replication schedules.
AOS 5.19 also introduces cross-cluster live migration capabilities. This allows live migration of guest VMs protected with a Synchronous replication schedule. Live migration offers zero downtime for protected guest VMs during a planned failover event to the recovery availability zone.
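As an illustration of the recovery plan maximums listed earlier in this section, the following Python sketch validates a hypothetical plan definition against those limits before submission. The plan data structure and helper function are assumptions made for the example; only the numeric limits come from the text above.

```python
# Hypothetical helper: check a recovery-plan definition against the Xi Leap
# maximums quoted above (200 VMs, 20 categories, 20 stages, 15 categories
# per stage). The plan structure itself is made up for this example.
LIMITS = {"vms": 200, "categories": 20, "stages": 20, "categories_per_stage": 15}

def validate_recovery_plan(plan: dict) -> list:
    problems = []
    if len(plan["vms"]) > LIMITS["vms"]:
        problems.append("too many VMs in recovery plan")
    if len(plan["categories"]) > LIMITS["categories"]:
        problems.append("too many categories in recovery plan")
    if len(plan["stages"]) > LIMITS["stages"]:
        problems.append("too many stages in recovery plan")
    for i, stage in enumerate(plan["stages"], start=1):
        if len(stage["categories"]) > LIMITS["categories_per_stage"]:
            problems.append(f"stage {i}: too many categories")
    return problems

plan = {
    "vms": [f"vm-{i}" for i in range(150)],
    "categories": ["tier-1", "tier-2"],
    "stages": [{"categories": ["tier-1"]}, {"categories": ["tier-2"]}],
}
print(validate_recovery_plan(plan) or "plan within Xi Leap limits")
```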
|
VMware SRM (certified)
VMware Interoperability Matrix shows official support for SRM 8.3.
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
DataCore SANsymphony is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/kb/2149740.
|
vSphere: Yes
Hyper-V: Yes
AHV: No
Nutanix calls this feature Metro Availability. The solution is bi-directionally active/passive in nature. This means that active containers can exist on both sites, each with a passive mirror at the other site.
Metro Availability and synchronous replication are supported across different hardware vendors (NX, Dell, or Lenovo) from AOS 5.1 onwards. Note that mixing nodes from different vendors in the same cluster is not supported (for example: NX and non-NX, Dell and non-Dell, Lenovo and non-Lenovo, Cisco UCS and non-Cisco UCS, and so on). Asynchronous replication across different hardware vendors continues to be supported.
Data is compressed during data transfers.
Although Nutanix has been claiming VMware vSphere Metro Storage Cluster (vMSC) support since NOS 4.1, it is not officially listed as such in the online VMware Storage Compatibility Guide. According to Nutanix no certification program is open for vMSC, so no new vendors are allowed to join.
Stretched Cluster is only available in the Ultimate edition and can be purchased as an add-on license for the Pro edition.
|
VMware vSphere: Yes (certified)
There is read-locality in place preventing sub-optimal cross-site data transfers.
vSAN 7.0 introduces redirection of all VM I/O from a capacity-strained site to the other site, until the capacity is freed up. This feature improves the uptime of VMs.
|
|
|
2+sites = two or more active sites, 0/1 or more tie-breakers
Theoretically up to 64 sites are supported.
SANsymphony does not require a quorum or tie-breaker in stretched cluster configurations, but one can be used as an optional component. The Virtual Disk Witness can provide a tie-breaker role if, for instance, redundant inter-site paths are not implemented. The tie-breaker node (server or device) must be a node other than the two nodes presenting a virtual disk. Access to the Virtual Disk Witness determines storage node behavior.
There are 3 ways to configure the stretched cluster without any tie-breakers:
1. Default: in a split-brain scenario both sides stay active, allowing upper infrastructure layers (OS/database/application) to make a decision (eg. clustering principles). In any case SANsymphony prevents a merge when there is a risk to data integrity, and the end-user has to decide how to proceed next (i.e. which side holds the authoritative data).
2. Select one side to go inaccessible
3. Select both sides to go inaccessible.
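The tie-breaker role of the Virtual Disk Witness described above can be illustrated with a simplified arbitration sketch in Python: when the inter-site link fails, only a site that can still reach the witness keeps the virtual disk accessible. This is a generic, simplified model, not DataCore's actual logic.

```python
# Simplified split-brain arbitration sketch (not DataCore logic): when two
# sites lose contact with each other, only the site that can still reach the
# witness keeps the virtual disk accessible.
def arbitrate(partitioned: bool, a_sees_witness: bool, b_sees_witness: bool):
    if not partitioned:
        return {"A": "active", "B": "active"}          # normal operation
    if a_sees_witness and b_sees_witness:
        return {"A": "active", "B": "inaccessible"}    # witness grants one side
    return {"A": "active" if a_sees_witness else "inaccessible",
            "B": "active" if b_sees_witness else "inaccessible"}

print(arbitrate(False, True, False))    # partition: site A wins the tie-break
print(arbitrate(False, False, False))   # witness unreachable: both sides stop
```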
|
vSphere: 3-sites = two active sites + tie-breaker in 3rd site
Hyper-V: 3-sites = two active sites + tie-breaker in 3rd site
The use of the Metro Availability Witness automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM on a third-party server or a Nutanix cluster within a separate failure domain.
The Metro Availability Witness is available for VMware vSphere environments and Microsoft Hyper-V 2016 environments (AOS 5.9 and later).
|
3-sites: two active sites + tie-breaker in 3rd site
NEW
The use of the Stretched Cluster Witness Appliance automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM within a third site.
vSAN 6.7 introduced the option to configure a dedicated VMkernel NIC for witness traffic. This enhances data security because witness traffic is isolated from vSAN data traffic.
vSAN 7.0 U1 introduces the vSAN Shared Witness. This feature allows end-user organizations to leverage a single Witness Appliance for up to 64 stretched clusters. This is especially useful in scenarios where many edge locations are involved. The size of the Witness Appliance determines the maximum number of clusters and components that can be managed.
|
|
|
<=5ms RTT (targeted, not required)
RTT = Round Trip Time
In practice, the user/application with the lowest tolerated write latency defines the acceptable RTT or distance.
|
<=5ms RTT / <400 KMs
|
<=5ms RTT
|
|
|
<=32 hosts at each active site (per cluster)
The maximum is per cluster. The SANsymphony solution can consist of multiple stretched clusters with a maximum of 64 nodes each.
|
No set max. # Nodes; Mixing hardware models allowed
Nutanix has not set a hard node limit for stretched cluster configurations. This means stretched clustering can be applied regardless of the number of nodes. It is currently unknown what the largest running implementation of Nutanix Stretched Clustering is in a live customer production environment.
|
<=15 nodes at each active site
For Dell EMC VxRail the minimum stretched cluster configuration is 3+3+1, meaning 3 nodes on the first site, 3 nodes on the second site and 1 tie-breaker VM on a third site. The maximum stretched cluster configuration is 15+15+1, meaning 15 nodes on the first site, 15 nodes on the second site and 1 tie-breaker VM on a third site.
|
|
SC Data Redundancy
Details
|
Replicas: 1N-2N at each active site
DataCore SANsymphony provides enhanced stretched cluster availability by offering local fault protection with In Pool Mirroring. With In Pool Mirroring you can choose to mirror the data inside the local Disk Pool as well as mirror the data across sites to a remote Disk Pool. In the remote Disk Pool data is then also mirrored. All mirroring happens synchronously.
1N-2N: With SANsymphony Stretched Clustering, there can be either 1 instance of the data at each site (no In Pool Mirroring) or 2 instances of the data at each site (In Pool RAID-1 Mirroring).
|
Replicas: 1N at each active site
Erasure Coding (optional): Nutanix EC-X at each active site
Nutanix Stretched Clustering works with Replication Factor 2 (RF2), meaning that there is only one instance of the data (1N) available at each of the active sites.
When using Nutanix EC-X, data is protected across cluster nodes within each active site.
|
Replicas: 0-3 Replicas (1N-4N) at each active site
Erasure Coding: RAID5-6 at each active site
VMware vSAN 6.6 and up provide enhanced stretched cluster availability with Local Fault Protection. You can provide local fault protection for virtual machine objects within a single site in a stretched cluster. You can define a Primary level of failures to tolerate for the cluster, and a Secondary level of failures to tolerate for objects within a single site. When one site is unavailable, vSAN maintains availability with local redundancy in the available site.
In the case of stretched clustering, selecting 0 replicas means that there is only one instance of the data available at each of the active sites.
Local Fault Protection is only available in the Enterprise edition of vSAN.
|
|
|
Data Services
|
|
|
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
Software (integration)
NEW
SANsymphony provides integrated and individually selectable inline deduplication and compression. In addition, SANsymphony is able to leverage post-processing deduplication and compression options available in Windows 2016/2019 as an alternative approach.
|
Software
|
All-Flash: Software
Hybrid: N/A
Deduplication and compression are only available in the VxRail All-Flash ('F') appliances.
|
|
Dedup/Compr. Function
Details
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
Efficiency (full) and Performance (limited)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most storage solutions place emphasis on efficiency.
Nutanix ECP provides two deduplication methods:
1. Inline Performance Deduplication: performs near-line deduplication in order to provide for space savings on the performance tier (RAM/SSD), thus allowing more hot data to be stored here.
2. MapReduce Deduplication: performs post-process deduplication in order to provide for space savings on the capacity tier (HDD), thus allowing for more cold data to be stored here.
Nutanix ECP provides two compression methods:
1. Inline Compression: performs immediate compression on random writes in order to provide space savings on the performance tier (SSD), thus allowing for more hot data to be stored here.
2. MapReduce Compression: performs post-process deep compression in order to provide for additional space savings on the capacity tier (SSD/HDD), thus allowing for more cold data to be stored here.
|
Efficiency (Space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
|
Dedup/Compr. Process
Details
|
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
DataCore SANsymphony 10 PSP12 and above leverage both inline deduplication and compression, as well as post-process deduplication and compression techniques.
With inline deduplication incoming writes first hit the memory cache of the primary host and are replicated to the cache of a secondary host in an un-deduplicated state. After the blocks have been written to both memory caches, the primary host acknowledges the writes back to the originator. Each host then destages the written blocks to the persistent storage layer. During destaging, written blocks are deduplicated and/or compressed.
Windows Server 2019 deduplication is performed outside of IO path (post-processing) and is multi-threaded to speed up processing and keep performance impact minimal.
|
Perf. Tier: Inline (dedup post-ack / compr pre-ack)
Cap. Tier: Post-process
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
From AOS 5.0 and onwards compression is performed inline (pre-ack) for random writes, so before they are written to SSD. From AOS 5.1 onwards post-process compression is enabled by default. From AOS 5.18 onwards inline compression is enabled by default for new storage containers.
|
All-Flash: Inline (post-ack)
Hybrid: N/A
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
|
Dedup/Compr. Type
Details
|
Optional
NEW
By default, deduplication and compression are turned off. For both inline and post-process, deduplication and compression can be enabled.
For inline deduplication and compression the feature can be turned on per node. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization. Either deduplication or compression or both can be selected per individual vDisk. Pools can host both capacity optimized and non-capacity optimized vDisks at the same time. The optional capacity optimization settings can be added/changed/removed during operation for each vDisk.
For post-processing the feature can be enabled per pool. All vDisks in that pool would be deduplicated and compressed. Each pool is an independent deduplication domain. This means only data in the pool is capacity optimized, but not across pools. Additionally, for post-processing capacity optimization can be scheduled so admins can decide when deduplication should run.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
Dedup Inline: Optional
Dedup Post-Process: Optional
Compr. Inline: Optional
Compr. Post-Process: Optional
Inline compression is turned on by default, post-process compression is turned on by default, and deduplication is turned off by default. Deduplication and compression can be enabled/disabled separately or together. This provides maximum flexibility.
|
All-Flash: Optional
Hybrid: N/A
NEW
Compression occurs after deduplication and just before the data is actually written to the persistent data layer.
In vSAN 7.0 U1 and onwards there are three settings to choose from: 'None', 'Compression Only' or 'Deduplication'.
When choosing 'Compression only' deduplication is effectively disabled. This optimizes storage performance, resource usage as well as availability. When using 'Compression only' a single disk failing no longer impacts the entire disk group.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
|
Dedup Inline: memory and flash layers
Dedup Post-process: persistent data layer (adaptive)
Compr. Inline: flash and persistent data layers
Compr. Post-process: persistent data layer (adaptive)
Nutanix ADSF allows data reduction technologies to be applied across all tiers of storage, including the memory, performance and capacity tiers as well as All-Flash.
Adaptive: With post-process deduplication, data is intelligently qualified first. Data candidates with low or no matches are not deduplicated. This avoids metadata bloat due to non-dedupable candidates and unnecessary use of CPU and memory resources.
|
Persistent data layer
Deduplication and compression are not used for optimizing read/write cache.
|
|
Dedup/Compr. Radius
Details
|
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
NEW
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
For inline deduplication and compression raw physical disks are added to a capacity optimization pool. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization.
The post-processing capability provided through Windows Server 2016/2019 is highly scalable and can be used with volumes up to 64 TB and files up to 1 TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
Storage Container
Both Nutanix reduction techniques (deduplication and compression) can be applied on individual storage containers.
A storage container is a logical part of the Storage Pool and contains a group of VM or files (vDisks). Storage containers typically have a 1-to-1 mapping with an NFS datastore (vSphere) or SMB share (Hyper-V).
|
Disk Group
Deduplication and compression is a cluster-wide setting and is performed within each disk group. Redundant copies of a block within the same disk group are reduced to one copy. However, redundant blocks across multiple disk groups are not deduplicated.
|
|
Dedup/Compr. Granularity
Details
|
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
NEW
With inline deduplication and compression, the data is organized in 128 KB segments. Depending on the optimization setting, a write into such a segment first gets compressed (when compression is selected) and then a hash is generated. If the hash is unique, the 128 KB segment is written back and the hash is added to the deduplication hash-table. If the hash is not unique, the segment is referenced in the deduplication hash table and discarded. The smallest chunk in the segment can be 4 KB.
For post-processing the system leverages deduplication in Windows Server 2016/2019: files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
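The inline flow described above (compress, hash, then either store the segment or merely reference it) can be illustrated with a generic content-addressed store in Python. The segment size, hash algorithm and compression codec below are illustrative assumptions, not the actual SANsymphony implementation.

```python
# Generic illustration of compress-then-deduplicate on write: each segment is
# compressed, hashed, and only stored if the hash has not been seen before.
# Not vendor code; segment size, hash and codec are illustrative.
import hashlib
import zlib

SEGMENT_SIZE = 128 * 1024
dedup_table = {}                        # hash -> stored (compressed) segment

def write_segment(segment: bytes) -> str:
    compressed = zlib.compress(segment)
    digest = hashlib.sha256(compressed).hexdigest()
    if digest not in dedup_table:       # unique: store it and record the hash
        dedup_table[digest] = compressed
    return digest                       # duplicate: only a reference is kept

def read_segment(digest: str) -> bytes:
    return zlib.decompress(dedup_table[digest])

a = write_segment(b"A" * SEGMENT_SIZE)
b = write_segment(b"A" * SEGMENT_SIZE)  # identical data: deduplicated
assert a == b and len(dedup_table) == 1
assert read_segment(a) == b"A" * SEGMENT_SIZE
```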
|
16 KB fixed block size
Nutanix ECP Elastic Dedupe Engine fingerprints data during ingest at a 16K-block granularity using a SHA-1 hash. Intel acceleration is leveraged for the SHA-1 computation, which accounts for very minimal CPU overhead. Fingerprinting is only performed on data ingest and is then stored persistently as part of the written block's metadata. Fingerprinting during data ingest is performed on data with an I/O size of 64K or greater. In cases where fingerprinting is not done during ingest (e.g. smaller I/O sizes), fingerprinting can be done as a background process.
NOTE: Initially a 4K granularity was used for fingerprinting, however Nutanix internal testing revealed that 16K granularity offers the best blend of deduplication with reduced metadata overhead. Deduplicated data is pulled into the unified cache at a 4K granularity.
|
4 KB fixed block size
vSAN's deduplication algorithm utilizes a 4K fixed block size.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Microsoft provides the Deduplication Evaluation Tool (DDPEVAL) to assess the data in a particular volume and predict the dedup ratio.
|
N/A
|
N/A
Enabling deduplication and compression can reduce the amount of storage consumed by as much as 7x (7:1 ratio).
|
|
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. It depends on the specific use case and end-user environment if this makes sense. When end-users want to isolate new workloads and corresponding data on new nodes, data rebalancing is not used.
|
Full
Data is automatically redistributed evenly across all nodes in the cluster when a node is either added or removed.
|
Full
Data can be redistributed evenly across all nodes in the cluster when a node is either added or removed.
For VMware vSAN data redistribution happens in two ways:
1. Automated: when physical disks are between 30% and 80% full and a node is added to the vSAN cluster, a health alert is generated that allows the end-user to execute an automated data rebalancing run. For this VMware uses the term 'proactive'.
2. Automatic: when physical disks are more than 80% full, vSAN executes data rebalancing fully automatically, without requiring any user intervention. For this VMware uses the term 'reactive'.
As data is written, all nodes in the cluster service RF copies even when no VMs are running on the node which ensures data is being distributed evenly across all nodes in the cluster.
VMware vSAN 6.7 U3 includes proactive rebalancing enhancements. All rebalancing activities can be automated with cluster-wide configuration and threshold settings. Prior to this release, proactive rebalancing was manually initiated after being alerted by vSAN health checks.
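The two rebalancing triggers described above can be summarized in a small Python sketch; the 30%/80% thresholds come from the text, while the function and its return values are illustrative assumptions rather than VMware code.

```python
# Illustration of the two rebalancing triggers described above: a health alert
# ("proactive") when disks sit between 30% and 80% full after a node is added,
# and fully automatic ("reactive") rebalancing above 80%. Not VMware code.
def rebalance_action(disk_fullness: float, node_added: bool) -> str:
    if disk_fullness > 0.80:
        return "reactive: rebalance automatically"
    if node_added and 0.30 <= disk_fullness <= 0.80:
        return "proactive: raise health alert, user may start rebalancing"
    return "no rebalancing needed"

for fullness in (0.25, 0.55, 0.85):
    print(f"{fullness:.0%} full -> {rebalance_action(fullness, node_added=True)}")
```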
|
|
|
Yes
DataCore SANsymphony's Auto-Tiering is a real-time intelligent mechanism that continuously positions data on the appropriate class of storage based on how frequently the data is accessed. Auto-Tiering leverages any combination of Flash and traditional disk technologies, whether internal or array-based, with up to 15 different storage tiers that can be defined.
As more advanced storage technologies become available, existing tiers can be modified as necessary and additional tiers can be added to further diversify the tiering architecture.
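A generic sketch of frequency-based tier placement of this kind is shown below; the tier names, access counters and placement heuristic are invented for illustration and are not DataCore's actual algorithm.

```python
# Generic access-frequency tiering sketch (not DataCore's heuristics): blocks
# with the highest recent access counts are placed on the fastest tier.
TIERS = ["tier1-nvme", "tier2-ssd", "tier3-hdd"]   # up to 15 tiers can be defined

def place_blocks(access_counts: dict) -> dict:
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    per_tier = max(1, len(ranked) // len(TIERS))
    placement = {}
    for i, block in enumerate(ranked):
        placement[block] = TIERS[min(i // per_tier, len(TIERS) - 1)]
    return placement

counts = {"blk-a": 950, "blk-b": 40, "blk-c": 3, "blk-d": 620, "blk-e": 7, "blk-f": 1}
for block, tier in place_blocks(counts).items():
    print(block, "->", tier)
```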
|
N/A
The Nutanix storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
N/A
The VMware vSAN storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
|
|
|
Performance |
|
|
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
DataCore SANsymphony iSCSI and FC are fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero
Note: DataCore SANsymphony does not support Thick LUNs.
DataCore SANsymphony is also fully qualified for Microsoft Hyper-V 2012 R2 and 2016/2019 ODX and UNMAP/TRIM.
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
|
vSphere: VMware VAAI-NAS (full), RDMA
Hyper-V: SMB3 ODX; UNMAP/TRIM
AHV: Integrated
Nutanix is fully qualified for all VMware vSphere VAAI-NAS capabilities that include: Native SS for LC, Space Reserve, File Cloning and Extended Stats.
Nutanix is also fully qualified for MS Hyper-V 2012R2 / 2016 / 2019 ODX and TRIM.
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
RDMA is a network protocol that enables offloading storage processes from the server CPU. RDMA is strongly recommended when implementing the solution. It enables reading the host's memory directly, thus bypassing the OS. The result is a reduction in CPU usage, a decrease in network latency and an increase in throughput.
VAAI = Vmware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
RDMA = Remote Direct Memory Access
|
vSphere: Integrated
VMware vSAN is an integral part of the VMware vSphere platform and as such is not a separate storage platform.
VMware vSAN 6.7 adds TRIM/UNMAP support: space that is no longer used can be automatically reclaimed, reducing the overall capacity needed for running workloads.
|
|
|
IOPs and/or MBps Limits
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical clients/hosts.
2. Ability to set guarantees to ensure service levels for mission-critical clients/hosts.
SANsymphony currently supports only the first method. Although SANsymphony does not provide support for the second method, the platform does offer some options for optimizing performance for selected workloads.
For streaming applications which burst data, it’s best to regulate the data transfer rate (MBps) to minimize their impact. For transaction-oriented applications (OLTP), limiting the IOPs makes most sense. Both parameters may be used simultaneously.
DataCore SANsymphony ensures that high-priority workloads competing for access to storage can meet their service level agreements (SLAs) with predictable I/O performance. QoS Controls regulate the resources consumed by workloads of lower priority. Without QoS Controls, I/O traffic generated by less important applications could monopolize I/O ports and bandwidth, adversely affecting the response and throughput experienced by more critical applications. To minimize contention in multi-tenant environments, the data transfer rate (MBps) and IOPs for less important applications are capped to limits set by the system administrator. QoS Controls enable IT organizations to efficiently manage their shared storage infrastructure using a private cloud model.
More information can be found here: https://docs.datacore.com/SSV-WebHelp/quality_of_service.htm
In order to achieve consistent performance for a workload, a separate Pool can be created where selected vDisks are placed. Alternatively 'Performance Classes' can be assigned to differentiate between data placement of multiple workloads.
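The IOPs and MBps limits described above can be illustrated with a small, vendor-neutral sketch. The Python example below models a simple token-bucket throttle applied to a non-critical workload; it is purely conceptual, the class and parameter names are invented for illustration, and it does not reflect SANsymphony internals.
    import time

    class IoRateLimiter:
        """Conceptual token-bucket limiter that caps IOPs and MBps for one workload."""
        def __init__(self, max_iops, max_mbps):
            self.max_iops = max_iops
            self.max_bytes_per_s = max_mbps * 1024 * 1024
            self.io_tokens = float(max_iops)
            self.byte_tokens = float(self.max_bytes_per_s)
            self.last_refill = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            elapsed = now - self.last_refill
            self.io_tokens = min(self.max_iops, self.io_tokens + elapsed * self.max_iops)
            self.byte_tokens = min(self.max_bytes_per_s,
                                   self.byte_tokens + elapsed * self.max_bytes_per_s)
            self.last_refill = now

        def submit(self, io_size_bytes):
            """Block until both the IOPs budget and the MBps budget allow this I/O."""
            while True:
                self._refill()
                if self.io_tokens >= 1 and self.byte_tokens >= io_size_bytes:
                    self.io_tokens -= 1
                    self.byte_tokens -= io_size_bytes
                    return
                time.sleep(0.001)

    # Example: cap a non-critical workload at 500 IOPs and 10 MBps.
    limiter = IoRateLimiter(max_iops=500, max_mbps=10)
    limiter.submit(io_size_bytes=64 * 1024)  # one 64KB write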
|
IOPs Limits (maximums)
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. In general there are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Nutanix ECP Storage QoS provides administrators with granular control to manage the performance of VMs and ensure that the system delivers consistent performance for all workloads. Administrators can limit the IOPS for individual VMs. IOPS is the number of requests the storage layer can serve in a second. Throttling IOPS on VMs is used to prevent noisy VMs from over-utilizing the storage system resources. Nutanix ECP Storage QoS is available in the Ultimate and Pro editions.
Nutanix AOS 5.0 enhanced performance reliability by introducing separate internal Read and Write I/O Queues so write-intensive workloads (or write bursts) will not starve out read operations, and vice-versa. This behavior is non-tunable.
|
IOPs Limits (maximums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
The vSAN software inside VxRail currently supports only the first method and focuses on IOPs. 'MBps Limits' cannot be set. It is also not possible to guarantee a certain amount of IOPs for any given VM.
|
|
|
Virtual Disk Groups and/or Host Groups
SANsymphony QoS parameters can be set for individual hosts or groups of hosts as well as for groups of Virtual Disks for fine grained control.
In a VMware VVols (=Virtual Volumes) environment a vDisk corresponds 1-to-1 to a virtual disk (.vmdk). Thus virtual disks can be placed in a Disk Group and a QoS Limit can then be assigned to it. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and QoS Limits can be applied on the virtual disk level. The 1-to-1 alignment is realized by installing the DataCore Storage Management Provider in SCVMM.
|
Per VM
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. In general there are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Nutanix ECP Storage QoS provides administrators with granular control to manage the performance of VMs and ensure that the system delivers consistent performance for all workloads. Administrators can limit the IOPS for individual VMs. IOPS is the number of requests the storage layer can serve in a second. Throttling IOPS on VMs is used to prevent noisy VMs from over-utilizing the storage system resources. Nutanix ECP Storage QoS is available in the Ultimate and Pro editions.
Nutanix AOS 5.0 enhanced performance reliability by introducing separate internal Read and Write I/O Queues so write-intensive workloads (or write bursts) will not starve out read operations, and vice-versa. This behavior is non-tunable.
|
Per VM/Virtual Disk
Quality of Service (QoS) for vSAN is normalized to a 32KB block size, and treats reads the same as writes.
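Because vSAN normalizes I/O to 32KB blocks, a single large I/O counts as multiple normalized I/Os against a configured limit. The minimal Python sketch below illustrates that accounting under the assumption of simple ceiling-based rounding; it is an illustration only, not VMware code.
    import math

    NORMALIZED_BLOCK = 32 * 1024  # vSAN QoS normalizes I/O to 32KB units

    def normalized_io_count(io_size_bytes):
        """Number of normalized I/Os one request counts as (assumed ceiling rounding)."""
        return max(1, math.ceil(io_size_bytes / NORMALIZED_BLOCK))

    print(normalized_io_count(4 * 1024))   # a 4KB write counts as 1 normalized I/O
    print(normalized_io_count(64 * 1024))  # a 64KB read counts as 2 normalized I/Os
Under this accounting, an IOPs limit of 1,000 would allow roughly 500 actual 64KB I/Os per second.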
|
|
|
Per VM/Virtual Disk/Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
In SANsymphony 'Flash Pinning' can be achieved using one of the following methods:
Method #1: Create a flash-only pool and migrate the individual vDisks that require flash pinning to the flash-only pool. When using a VVOL configuration in a VMware environment, each vDisk represents a virtual disk (.vmdk). This method guarantees all application data will be stored in flash.
Method #2: Create auto-tiering pools with at least 1 flash tier. Assign the Performance Class “Critical” to the vDisks that require flash pinning and place them in the auto-tiering pool. This will effectively and intelligently put as much of the data that resides in the vDisk in the flash tier as long as the flash tier has enough space available. Therefore this method is on a best-effort basis and dependent on correct sizing of the flash tier(s).
Methods #1 and #2 can be used side-by-side in the same DataCore environment.
|
VM Flash Mode: Per VM/Virtual Disk/iSCSI LUN
Nutanix 'VM Flash Mode' can be used to assign specific data in hybrid configurations to exist solely on flash storage and as such never have that data destaged to the magnetic layer. By pinning data to the flash storage layer low latencies can be better guaranteed regardless of what part of the data is being read (hot or cold) and at what time the read takes place.
Flash Mode can be enabled for individual VMs as well as baremetal iSCSI LUNs.
Nutanix 'VM Flash Mode' is only available in the Ultimate edition.
|
Cache Read Reservation: Per VM/Virtual Disk
With VxRail the Cache Read Reservation policy for a particular VM can be set to 100% to allow all data to also exist entirely on the flash layer. The difference with Nutanix 'VM Flash Mode' is that with 'VM Flash Mode' persistent data of the VM resides on flash and is never destaged to spinning disks. In contrast, with VxRail's Cache Read Reservation data exists twice: one instance on persistent magnetic disk storage and one instance within the SSD read cache.
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
SANsymphony 10.0 PSP9 introduced native encryption when running on Windows Server 2016/2019.
|
Built-in (native)
|
Built-in (native)
|
|
Data Encryption Options
Details
|
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: In SANsymphony deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server and storage solutions.
Software: SANsymphony provides software-based data-at-rest encryption that is XTS-AES 256bit compliant.
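For reference, the XTS-AES 256bit cipher mode that SANsymphony claims compliance with can be exercised generically with the widely used Python 'cryptography' package. The sketch below only illustrates the cipher mode itself; it is not DataCore code, and the key, tweak and sector contents are placeholders.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # AES-256 in XTS mode uses two 256-bit keys concatenated (64 bytes in total)
    # plus a 16-byte tweak that typically encodes the sector/block number.
    key = os.urandom(64)
    tweak = (42).to_bytes(16, "little")  # e.g. logical sector number 42

    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))

    sector = b"example sector payload".ljust(512, b"\0")  # one 512-byte sector
    encryptor = cipher.encryptor()
    ciphertext = encryptor.update(sector) + encryptor.finalize()
    decryptor = cipher.decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == sector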
|
Hardware: Self-encrypting drives (SEDs)
Software: AOS encryption; Vormetric VTE (validated), Gemalto (verified)
Hardware:
Data-at-rest encryption is compatible with all hypervisor platforms: VMware vSphere, Microsoft Hyper-V, AHV. Hardware encryption is only available with the Ultimate edition.
AOS 5.8 introduced a Dual Encryption mechanism that protects the data on the clusters using both SEDs and AOS Software based encryption. Dual Encryption configuration requires an external key manager to store the keys.
Software:
AOS 5.5 introduced built-in software-based data-at-rest AES-256 encryption. Because it is 100% software, the built-in encryption works with standard drives, so does not require SED hardware. AOS 5.8 introduces the Cluster Native Key Management Server (KMS) which can manage the encryption keys on the cluster locally, without the need of an external KMS. Nutanix Acropolis Data Encryption (ADE) supports VMware vSphere, Microsoft Hyper-V and AHV hypervisors. Nutanix Acropolis Data Encryption (ADE) is only available in the Ultimate edition.
AOS 5.8 introduced a Dual Encryption mechanism that protects the data on the clusters using both SEDs and AOS Software based encryption. Dual Encryption configuration requires an external key manager to store the keys.
AOS 5.9 introduced Background Encryption. Software encryption can be enabled on clusters or containers having existing data. Switching between native key management server (KMS) and external KMS is supported.
Vormetric Transparent Encryption (VTE) and Vormetric Key Management (VKM) have been validated as Nutanix Ready for Networking and Security. Nutanix has also been verified for use with Gemalto SafeNet KeySecure. SafeNet KeySecure manages the encryption keys to Nutanix SEDs.
|
Hardware: N/A
Software: vSAN data encryption; HyTrust DataControl (validated)
NEW
Hardware: vSAN no longer supports self-encrypting drives (SEDs).
Software: vSAN supports native data-at-rest encryption of the vSAN datastore. When encryption is enabled, vSAN performs a rolling reformat of every disk group in the cluster. vSAN encryption requires a trusted connection between vCenter Server and a key management server (KMS). The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. In contrast, vSAN native data-in-transit encryption does not require a KMS server. vSAN native data-at-rest and data-in-transit encryption are only available in the Enterprise edition.
vSAN encryption has been validated for the Federal Information Processing Standard (FIPS) 140-2 Level 1.
VMware has also validated the interoperability of HyTrust DataControl software encryption with its vSAN platform.
|
|
Data Encryption Scope
Details
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: SANsymphony provides encryption for data-at-rest; it does not provide encryption for data-in-transit. Encryption can be enabled per individual virtual disk.
|
Hardware: Data-at-rest
Software AOS: Data-at-rest
Software VTE/Gemalto: Data-at-rest + Data-in-transit
Hardware: Nutanix ECP Self-encrypting drives (SEDs) provide encryption for data-at-rest.
Software: Nutanix AOS encryption provides enhanced security for data on a drive, but it does not secure data in transit. In contrast, Vormetric VTE and Gemalto encryption solutions provide both encryption for data-at-rest and encryption for data-in-transit.
|
Hardware: N/A
Software vSAN: Data-at-rest + Data-in-transit
Software Hytrust: Data-at-rest + Data-in-transit
NEW
Hardware: N/A
Software: VMware vSAN 7.0 U1 encryption provides enhanced security for data on a drive as well as data in transit. Both are optional and can be enabled separately. HyTrust DataControl encryption also provides encryption for data-at-rest and data-in-transit.
|
|
Data Encryption Compliance
Details
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (AOS, VTE)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (vSAN); FIPS 140-2 Level 1 (HyTrust)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: No
Software: No
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
|
Hardware: No
Software AOS: No
Software VTE/Gemalto: Yes
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software AOS: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software Vormetric/Gemalto: Because Vormetric and Gemalto are end-to-end solutions, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
Hardware: N/A
Software: No (vSAN); Yes (HyTrust)
Hardware: N/A
Software vSAN: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software Hytrust: Because HyTrust DataControl is an end-to-end solution, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
|
|
|
Test/Dev |
|
|
|
Yes
Support for fast VM cloning via VMware VAAI and Microsoft ODX.
|
Yes
|
No
Dell EMC VxRail does not include fast cloning capabilities.
Cloning operations actually copy all the data to provide a second instance. When cloning a running VM on VxRail, all the VMDKs on the source VM are snapshotted first before cloning them to the destination VM.
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
ESXi to AHV (integrated)
AHV to ESXi (integrated)
Hyper-V to AHV (external)
Nutanix supports in-place hypervisor conversion through the Prism web console that allows converting a cluster from using ESXi hosts to using AHV hosts. Guest VMs are converted to the hypervisor target format, and cluster network configurations are stored and then restored as part of the conversion process.
Nutanix supports offline conversion of Hyper-V VHDX disk files to KVM RAW disk files using qemu.
AOS for AHV on IBM Power Systems does not yet offer hypervisor migration support.
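The offline Hyper-V-to-KVM disk conversion mentioned above comes down to a qemu-img format conversion. A minimal Python wrapper around that step could look like the sketch below; the file names are placeholders and qemu-img must be installed on the conversion host.
    import subprocess

    def convert_vhdx_to_raw(src_vhdx, dst_raw):
        """Offline-convert a Hyper-V VHDX disk image to a KVM-compatible RAW image."""
        subprocess.run(
            ["qemu-img", "convert", "-p", "-f", "vhdx", "-O", "raw", src_vhdx, dst_raw],
            check=True,
        )

    # Placeholder file names for illustration only.
    convert_vhdx_to_raw("guest-disk0.vhdx", "guest-disk0.raw")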
|
Hyper-V to ESXi (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
|
|
|
|
File Services |
|
|
|
Built-in (native)
SANsymphony delivers out-of-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. SANsymphony is capable of simultaneously handling highly-available block and file level services.
Raw storage is provisioned from within the SANsymphony GUI to the Microsoft file services layer, similar to provisioning Storage Spaces Volumes to the file services layer. This means any file services configuration is performed from within the respective Windows service consoles e.g. quotas.
More information can be found under: https://www.datacore.com/products/features/high-availability-nas-cluster-file-sharing.aspx
|
Built-in (native)
NEW
Nutanix's native file-serving feature is called Nutanix Files (previously Acropolis File Services or AFS). Today Nutanix Files is a 3rd generation solution. The current release is 3.7.1.
With Nutanix Files, Nutanix offers a software-defined scale-out file serving solution for Windows environments with a single namespace. The solution is highly available, scalable, and supports Active Directory and LDAP integration, Windows previous versions, user and share quotas, as well as Access Based Enumeration (ABS). Nutanix Files can be used to host home directories, department shares and user profiles.
Nutanix Files 3.7.1 introduces limited Local User Support for SMB shares. Local users can be added to a file server using the command line or the Microsoft Management Console (MMC) Local Users and Groups snap-in. Nutanix Files 3.7.1 only supports native-SMB limited local users on SMB shares. Local groups are not supported.
Nutanix Files 3.7.1 introduces Continuous Availability of SMB shares, preventing client disconnection from a current session during a loss of service. Nutanix Files uses persistent file handles to facilitate continuously available (CA) shares. Persistent file handles improve the SMB caching mechanism of multi-user writes to facilitate continuous availability.
Nutanix Files 3.7 supports Nutanix nodes with up to 240TB of HDD storage. The volume groups that underpin the standard and distributed shares of Nutanix Files can now scale up to double the size, 280TB.
Nutanix Files 3.7 introduced greater flexibility by allowing creation of a customized namespace. Namespace customization comes from the new ability to mount shares as subdirectories within existing shares.
Nutanix Files 3.7 introduced the ability to change the block size based on either a more random or more sequential I/O profile. For random workloads a maximum block size of 16KB can be specified, and for sequential workloads a block size up to 1MB can be specified. Nutanix internal testing has shown sequential workload performance improvements up to 25 percent, and random workload benefits up to 45 percent.
Nutanix Files 3.7 introduced non-GA support for SMB 3.0 transparent failover, also called continuous availability (CA) shares. CA shares allow for durable and persistent file handles to minimize the impact of any storage disruption during failure or upgrade events. CA shares are intended for applications like Citrix App Layering and FSLogix that demand non-disruptive operations. SMB 3.0 transparent failover is a Technical Preview feature with the Nutanix Files 3.7 release.
Nutanix Files 3.6 introduced Windows Server 2019 support, File Blocking, SMB Message encryption, Durable SMB File Handles, Multi-byte Support for Share Root (NFS), AOS NearSync Disaster Recovery, Scale-up recommendation (vCPU and RAM).
Nutanix Files 3.6 supports a maximum size of 140TB for a general file share and a maximum size of 200TB per FSVM for a home share. Nutanix Files supports a maximum of 4,000 connections per FSVM (12 vCPU; 64-90GB RAM).
Nutanix Files 3.6 supports SMB 2.0/2.1/3.0 as well as NFS v3/v4 protocols to provide file shares/exports for Windows, Apple Mac, Linux and UNIX clients. Nutanix Files can be leveraged in VMware vSphere and AHV environments only, so there is no support for Hyper-V. AOS for AHV on IBM Power Systems also does not offer Nutanix Files support.
Nutanix Files 3.6 provides SMB 3.0 basic protocol support, so without specific SMB 3.0 features. Nutanix Files supports both NFSv3 and NFS v4, however Nutanix Files does not support the UDP protocol or Kerberos for NFS v3.
NFS v4 exports can be either distributed or non-distributed. A distributed export ('sharded') means the data is spread across all Nutanix file server VMs (FSVMs) to help improve performance and resiliency. A non-distributed export ('non-sharded') means all data is contained in a single FSVM.
Nutanix Files 3.6 supports the following AOS capabilities: Erasure Coding, Software Encryption. Nutanix Files also supports self-encrypting drives (SEDs). Deduplication is not recommended.
AFS 3.0 introduced an API that allows third-party developers to implement backup server change file tracking. This means that the API allows the backup application to record and collect information about any changes to the files in each snapshot sent to the backup server, thus providing a log of all file changes across snapshots.
AFS 3.0 introduced an API that allows third-party developers to implement file activity monitoring in their applications. This means that the API allows an application to collect information about every action on each file in a file server and thus supports audit logging (eg. to a syslog server) and Global Name Space.
Nutanix Files is an add-on and thus requires a separate capacity-based license for all editions.
|
Built-in (native)
External (vSAN Certified)
NEW
vSAN 7.0 U1 has integrated file services. vSAN File Services leverages scale-out architecture by deploying an Agent/Appliance VM (OVF templates) on individual ESXi hosts. Within each Agent/Appliance VM a container, or “protocol stack”, is running. The 'protocol stack' creates a file system that is spread across the VMware vSAN Virtual Distributed File System (VDFS), and exposes the file system as an NFS file share. The file shares support NFSv3, NFSv4.1, SMBv2.1 and SMBv3 by default. A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects. The minimum number of containers that need to be deployed is 3, the maximum 32 in any given cluster. vSAN 7.0 File Services are deployed through the vSAN File Service wizard.
vSAN File Services currently has the following restrictions:
- not supported on 2-node clusters,
- not supported on stretched clusters,
- not supported in combination with vLCM (vSphere Lifecycle Manager),
- it is not supported to mount the NFS share from your ESXi host,
- no integration with vSAN Fault Domains.
The alternative to vSAN File Services is to provide file services through Windows guest VMs (SMB) and/or Linux guest VMs (NFS) on top of vSAN. These file services can be made highly available by using clustering techniques.
Another alternative is to use virtual storage appliances from a third-party to host file services on top of vSAN. The following 3rd party File Services partner products are certified with vSAN 6.7:
- Cohesity DataPlatform 6.1
- Dell EMC Unity VSA 4.4
- NetApp ONTAP Select vNAS 9.5
- Nexenta NexentaStor VSA 5.1.2 and 5.2.0VM
- Panzura Freedom Filer VSA 7.1.9.3
However, none of the mentioned platforms have been certified for vSAN 7.0 or 7.0U1 (yet).
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
Windows clients
Apple Mac clients
Linux clients
NEW
Nutanix Files 3.7.1 supports the following client platforms and versions for SMB:
- Windows 7/8/8.1/10
- Windows Server 2008/2008R2/2012/2012R2/2016/2019
- Apple MacOS 10.12/10.13/10.14/10.15
Nutanix Files 3.7.1 supports the following client platforms and versions for NFSv3:
- Linux CentOS/RHEL 6.x/7.x/8.x
- Linux Ubuntu 16.04/18.04.3/19.10
- Apple MacOS 10.12.6 (may result in I/O disruption)
- Windows 10
- Windows Server 2008/2008R2/2012/2012R2/2016
Nutanix Files 3.7.1 supports the following client platforms and versions for NFSv4.1:
- Linux CentOS/RHEL 6.5 and later/7.x/8.x
- Linux Ubuntu 16.04/18.04.3/19.10
|
Windows clients
Linux clients
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
|
SMB
NFS
NEW
Nutanix Files 3.7.1 supports SMB v2.0, SMB v2.1 and SMB v3.0 (basic protocol support, without specific SMB 3.0 features). Nutanix Files 3.7.1 does not support mounting SMB shares on Linux clients. Use multi-protocol shares instead.
Nutanix Files 3.7.1 supports NFSv3 and NFSv4.1. Nutanix Files 3.7.1 does not support the UDP protocol or Kerberos for NFSv3.
|
SMB
NFS
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
|
Fileserver Quotas
Details
|
Share Quotas, User Quotas
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
|
Share Quotas, User Quotas
Nutanix Files, previously Acropolis File Services (AFS), offers support for share and user quotas.
Nutanix Files can be used in ESXi and AHV environments only (so no support for Hyper-V).
Nutanix Files requires a separate license for all editions.
|
Share Quotas
vSAN 7.0 File Services supports share quotas through the following settings:
- Share warning threshold: When the share reaches this threshold, a warning message is displayed.
- Share hard quota: When the share reaches this threshold, new block allocation is denied.
|
|
Fileserver Analytics
Details
|
Partial
Because SANsymphony leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
Yes
The GA release of Nutanix File Analytics (2.0) was introduced in August 2019 and is supported in conjunction with Nutanix Files 3.5.2 and above.
Nutanix File Analytics 2.0 includes the following capabilities:
- Data on Data: capacity trends, data age, file distribution and activity data; the time period can be changed when viewing the data in the dashboard.
- Audit Trails: search option for auditing a specific user, file, or directory; activity data can be compiled into dynamic graphs or can be exported; data can also be filtered by operation type and date range.
- Anomaly Detection: creation of custom anomaly policies and anomaly alerts, monitoring anomaly trends for top users/folders and operation type.
|
Partial
vSAN 7.0 File Services provide some analytics capabilities:
- Amount of capacity consumed by vSAN File Services file shares,
- Skyline health monitoring with regard to infrastructure, file server and shares.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
S3-compatible
NEW
Nutanix Objects was introduced with the release of AOS 5.11. Nutanix Objects is compatible with Amazon’s Simple Storage Service API (S3 API) to simplify integration with applications. Nutanix Buckets presents a single namespace in the object storage instance and supports the ability to create different object policies as required for different application scenarios. Any component can be scaled out independently to match the workload demands. The architecture is designed with scalability and ease of upgrade in mind. Today Nutanix Objects is a 3rd generation solution. The current release is 3.1.
Nutanix Objects can be deployed on an existing cluster alongside VMs, files, and blocks, or standalone. Objects natively inherits Nutanix DSF (Distributed Storage Fabric) capabilities like erasure coding, compression and deduplication.
Nutanix Objects can be leveraged for the following use cases:
- Long Term Retention & Backup (simple, scalable and cost-effective active archive solution).
- Data Preservation & Compliancy (data in a non-rewritable and non-erasable format per SEC Rule 17a-4 in scalable compliant archive).
Nutanix Objects 3.0 introduces ESXi support, meaning that Object Stores can be deployed on ESXi clusters. It also introduces Object Replication, meaning that objects in a source bucket can be replicated to a destination bucket in a different Object Store Instance. Furthermore, Objects 3.0 adds support for Object Lock API Operations, meaning that object lock policies can be applied on individual objects within a bucket using the S3 supported APIs.
Nutanix Objects 2.2 introduced support for the PKCS#8 standard as well as a notification feature for Object Store. It also enhanced the quota policy for users.
Nutanix Objects 2.1 introduced support for scaling out, quota policies for users, enhanced API access key management, listing of shared Buckets. It also added Static Website policy and CORS policy for Buckets.
Nutanix Objects 1.1 introduced support for Object Store deployments in sites without internet access.
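Because Nutanix Objects exposes an S3-compatible API, standard S3 tooling such as the AWS SDK for Python (boto3) can be pointed at the object store endpoint. The sketch below is a generic S3-compatibility illustration; the endpoint URL, credentials and bucket/key names are placeholders rather than Nutanix defaults.
    import boto3

    # Placeholder endpoint and credentials; substitute the values of the local object store.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.example.internal",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="backups")
    s3.put_object(Bucket="backups", Key="db/daily-dump.bin", Body=b"example payload")

    for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
        print(obj["Key"], obj["Size"])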
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
Versioning
Nutanix Objects provides object versioning, creating copies of objects so data is automatically protected from accidental overwriting or deleting.
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
WORM
Nutanix Objects provides WORM (Write Once Read Many) capabilities in order to meet technical regulatory requirements, and can be combined with software- or hardware-based data-at-rest encryption up to a level of FIPS 140-2 compliance to enhance security posture. WORM policies can be enabled on the bucket level.
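As the WORM functionality is exposed through the standard S3 Object Lock APIs (see the Objects 3.0 note above), a retention period could be applied with generic S3 tooling along the lines of the sketch below. The endpoint, credentials, bucket, key and date are placeholders, and the exact parameters supported should be verified against the Nutanix Objects documentation.
    from datetime import datetime, timezone
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.example.internal",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Apply a retention period to a single object via the S3 Object Lock API.
    s3.put_object_retention(
        Bucket="compliance-archive",
        Key="records/2021/filing.pdf",
        Retention={
            "Mode": "COMPLIANCE",
            "RetainUntilDate": datetime(2028, 1, 1, tzinfo=timezone.utc),
        },
    )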
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
|
|
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
SANsymphony's graphical user interface (GUI) is highly configurable to accommodate individual preferences and includes guided wizards and workflows to simplify administration. All actions available from the GUI may also be scripted with PowerShell Commandlets to orchestrate workflows with other tools and applications.
|
Centralized
Nutanix integrates all features and functions available in the ECP platform into a single management framework (Prism). Because of this, the management experience is identical across all hypervisor platforms.
|
Centralized
Management of the vSAN software, capacity monitoring, performance monitoring and efficiency reporting can be performed through the vSphere Web Client interface.
Other functionality such as backups and snapshots are also managed from the vSphere Web Client Interface.
|
|
|
Single-site and Multi-site
|
Prism: Single-site
Prism Central: Multi-site
Single cluster management is performed through the Nutanix Prism interface. Centralized management of multicluster environments is performed through Nutanix Prism Central (up to 12,500 VMs across all clusters). In addition, Prism Pro features are accessed through the Prism Central interface.
AOS 5.8 introduces support for SAML user authentication for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between parties, in particular between an identity provider such as ADFS or OKTA and a service provider, which in this case is Prism Central. Limitations in this version:
- Only one identity provider can be configured.
- The role mapping is restricted to individual users; groups are not supported.
- Session timeouts are based on Prism Central only; the identity provider is not queried for session expiry.
|
Single-site and Multi-site
Centralized management of multicluster environments can be performed through the vSphere Web Client by using Enhanced Linked Mode.
Enhanced Linked Mode links multiple vCenter Server systems by using one or more Platform Services Controllers. Enhanced Linked Mode enables you to log in to all linked vCenter Server systems simultaneously with a single user name and password. You can view and search across all linked vCenter Server systems. Enhanced Linked Mode replicates roles, permissions, licenses, and other key data across systems. Enhanced Linked Mode requires the vCenter Server Standard licensing level, and is not supported with vCenter Server Foundation or vCenter Server Essentials.
|
|
GUI Perf. Monitoring
Details
|
Advanced
SANsymphony has visibility into the performance of all connected devices including front-end channels, back-end channels, cache, physical disks, and virtual disks. Metrics include Read/write IOPs, Read/write MBps and Read/Write Latency at all levels. These metrics can be exported to the Windows Performance Monitoring (Perfmon) utility where other server parameters are being tracked.
The frequency at which performance metrics can be captured and reported on is configurable: real-time down to 1-second intervals and long-term recording at 2-minute granularity.
When a trend analysis is required, an end-user can simply enable a recording session to capture metrics over a longer period of time.
|
Advanced
Nutanix provides extended performance metrics on multiple levels of the infrastructure, ranging from cluster-level to VM-level.
For each individual VM Nutanix has added an additional tab in Prism where Average IO Latency, Block Size Distribution and Random vs Sequential for Reads and Writes, as well as the Read Source (DRAM/SSD/HDD) for Reads are displayed graphically.
Nutanix has also added end-to-end network performance visualization, but currently this is only available when using AHV (so no vSphere or Hyper-V support). Network Visualization is added as a separate Network page in Prism Element and Prism Central. Network Visualization extends from the VM to the virtual NICs, to physical NICs to physical switch ports.
AOS 5.5 provides enhancements to Prism Central analytics. Behavioral analytics functionality has been added to the predictive analytics functionality to provide anomaly based alerts and alarms. This is different from the existing mechanism where thresholds are used. Thresholds are typically user defined and based on tacit knowledge. Anomaly detection is valuable as it indicates when KPIs (eg. CPU utilization) have a significant deviation from the norm.
Nutanix's behavioral analytics functionality requires the Prism Pro edition, which is a separate add-on subscription.
|
Advanced
NEW
Performance information can be viewed on the cluster level, the Host level and the VM level. Per VM there is also a view on backend performance. Performance graphs focus on IOPS, MB/s and Latency of Reads and Writes. Statistics for networking, resynchronization, and iSCSI are also included.
End-users can select saved time ranges in performance views. vSAN saves each selected time range when end-users run a performance query.
There is also a VMware vRealize Operations (vROps) Management Pack for vSAN that provides additional options for monitoring, managing and troubleshooting vSAN.
The vSphere 6.7 Client includes an embedded vRealize Operations (vROps) plugin that provides basic vSAN and vSphere operational dashboards. The vROps plugin does not require any additional vROps licensing. vRealize Operations within vCenter is only available in the Enterprise and Advanced editions of vSAN.
vSAN Observer is deprecated as of vSAN 6.6 but still included. In its place, vSAN Support Analytics is provided to deliver more enhanced support capabilities, including performance diagnostics. Performance diagnostics analyzes previously executed benchmark tests. It detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance Diagnostics requires participation in the Customer Experience Improvement Program (CEIP).
vSAN 6.7 U3 introduced a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere.
vSAN 7.0 introduces vSAN Memory metric through the performance service and the API for measuring vSAN memory usage.
vSAN 7.0 U1 introduces vSAN IO Insight for investigating the storage performance of individual VMs. vSAN IO Insight generates the following performance statistics which can be viewed from within the vCenter console:
- IOPS (read/write/total)
- Throughput (read/write/total)
- Sequential & Random Throughput (sequential/random/total)
- Sequential & Random IO Ratio (sequential read IO/sequential write IO/sequential IO/random read IO/random write IO/random IO)
- 4K Aligned & Unaligned IO Ratio (4K aligned read IO/4K aligned write IO/4K aligned IO/4K unaligned read IO/4K unaligned write IO/4K unaligned IO)
- Read & Write IO Ratio (read IO/write IO)
- IO Size Distribution (read/write)
- IO Latency Distribution (read/write)
|
|
|
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
DataCore offers deep integration with VMware vSphere and Microsoft Hyper-V, as well as their respective systems management tools, vCenter and System Center.
SCVMM = Microsoft System Center Virtual Machine Manager
|
VMware: Prism (subset)
Microsoft: SCCM (SCOM and SCVMM)
AHV: Prism
Nutanix Files: Prism
Xi Frame: Prism Central
Prism is the name of Nutanix's central management platform, used for all hypervisors as well as file services. Prism is an integral part of the AOS platform and as such is resilient to failures.
Prism offers a subset of the most frequently used VMware management operations (VM Create, VM Update, VM Delete, VM Power On/Off Operations, Launch Console, Clone). Prism is not meant as a replacement alternative for the VMware vCenter console, as it covers only a subset.
Nutanix Acropolis Hypervisor (AHV) is based on KVM and offers a full-service managing environment on top of this.
Prism Central is a hard technical requirement for Xi Frame.
|
VMware vSphere Web Client (integrated)
VxRail 4.7.300 adds full native vCenter plug-in support for all core day to day operations including physical views and Life Cycle Management (LCM).
VxRail 4.7.100 and up provide a VxRail Manager Plugin for VMware vCenter. The plugin replaces the VxRail Manager web interface. Also full VxRail event details are presented in vCenter Event Viewer.
vSAN 6.7 provides support for the HTML5-based vSphere Client that ships with vCenter Server. vSAN Configuration Assist and vSAN Updates are available only in the vSphere Web Client.
vSAN 6.6 and up provide integration with the vCenter Server Appliance. End-users can create a vSAN cluster as they deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables end-users to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster.
vSAN 6.6 and up also support host-based vSAN monitoring. This means end-users can monitor vSAN health and basic configuration through the ESXi host client. This also allows end-users to correct configuration issues at the host level.
|
|
|
|
Programmability |
|
|
|
Full
Using DataCore's native management console, Virtual Disk Templates can be leveraged to populate storage policies. Available configuration items: Storage profile, Virtual disk size, Sector size, Reserved space, Write-through enabled/disabled, Storage sources, Preferred snapshot pool, Accelerator enabled/disabled, CDP enabled/disabled.
Virtual Disk Templates integrate with System Center Virtual Machine Manager (SCVMM), VMware Virtual Volumes (VVol) and OpenStack. Virtual Disk Templates are also fully supported by the REST-API allowing any third-party integration.
Using Virtual Volumes (VVols) defined through DataCore’s VASA provider, VMware administrators can self-provision datastores for virtual machines (VMs) directly from their familiar hypervisor interface. This is possible even for devices in the DataCore pool that don’t natively support VVols and never will, as SANsymphony can be used as a storage-virtualization layer for these devices/solutions. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
Using Classifications and StoragePools defined through DataCore’s Storage Management Provider, Hyper-V administrators can self-provision virtual disks and pass-through LUNS for virtual machines (VMs) directly from their familiar SCVMM interface.
|
Partial (Protection)
Multiple VMs can be grouped together in a Nutanix protection domain enabling them to be operated upon as a single entity with the same RPO. This is useful when trying to protect complex applications such as Microsoft SQL Server-based applications or Microsoft Exchange. The main advantage of using a protection domain approach of grouping VMs versus the traditional SAN approach of consolidating different VMs on to a single LUN is VM portability.
|
Full
Storage Policy-Based Management (SPBM) is a feature of vSAN that allows administrators to create storage profiles so that virtual machines (VMs) don't need to be individually provisioned/deployed and so that management can be automated. The creation of storage policies is fully integrated in the vSphere GUI.
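Conceptually, SPBM replaces per-VM storage configuration with named policies that are referenced at provisioning time. The Python sketch below is a vendor-neutral illustration of that idea; the policy attribute names are invented for illustration and do not correspond to the actual vSAN rule set.
    # Conceptual illustration of policy-based provisioning (not VMware code).
    POLICIES = {
        "gold":   {"failures_to_tolerate": 2, "stripe_width": 2, "checksum": True},
        "silver": {"failures_to_tolerate": 1, "stripe_width": 1, "checksum": True},
    }

    def provision_vm(name, policy_name):
        """Instead of configuring storage per VM, the VM simply references a policy."""
        policy = POLICIES[policy_name]
        print(f"Provisioning {name} with policy '{policy_name}': {policy}")

    provision_vm("db01", "gold")
    provision_vm("web01", "silver")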
|
|
|
REST-APIs
PowerShell
The SANsymphony REST-APIs library includes more than 200 new representational state transfer (REST) operations, so automation can be leveraged more extensively. RESTful interfaces are used by products such as Lenovo XClarity, Cisco Embedded Resource Manager and Dell OpenManage to manage infrastructure in the enterprise.
SANsymphony provides its own Powershell cmdlets.
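As an illustration of driving such a REST interface from an automation tool, a Python 'requests' call might look like the sketch below. The base URL, resource paths, field names and credentials are hypothetical placeholders and do not reflect the actual SANsymphony REST schema; the REST-API documentation should be consulted for the real resource paths.
    import requests

    BASE_URL = "https://sansymphony.example.internal/rest"  # hypothetical base URL
    AUTH = ("admin", "password")                            # placeholder credentials

    # Hypothetical calls: list virtual disks, then request a snapshot of the first one.
    vdisks = requests.get(f"{BASE_URL}/virtualdisks", auth=AUTH, verify=False).json()
    first_id = vdisks[0]["Id"]
    requests.post(f"{BASE_URL}/virtualdisks/{first_id}/snapshots", auth=AUTH, verify=False)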
|
REST-APIs
PowerShell
nCLI
REST-APIs, PowerShell CMDlets and Nutanix CLI (nCLI) commands can be used for automating storage related operational tasks.
|
REST-APIs
Ruby vSphere Console (RVC)
PowerCLI
Dell EMC VxRail 4.7.100 and up include RESTful API enhancements.
Dell EMC VxRail 4.7.300 and up provide full VxRail manager functionality using RESTful API.
|
|
|
OpenStack
OpenStack: The SANsymphony storage solution includes a Cinder driver, which interfaces between SANsymphony and OpenStack, and presents volumes to OpenStack as block devices which are available for block storage.
DataCore SANsymphony programmability in VMware vRealize Automation and Microsoft System Center can be achieved by leveraging PowerShell and the SANsymphony-specific cmdlets.
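Once the Cinder driver presents SANsymphony-backed storage as a Cinder backend, volumes can be requested through the regular OpenStack APIs, for example with the openstacksdk as sketched below. The cloud name and volume type are placeholders that depend on the local OpenStack and Cinder backend configuration.
    import openstack

    # 'mycloud' refers to an entry in clouds.yaml; 'datacore' is a placeholder volume type
    # that an operator would have mapped to the SANsymphony-backed Cinder backend.
    conn = openstack.connect(cloud="mycloud")

    volume = conn.block_storage.create_volume(
        name="app-data",
        size=50,                 # size in GiB
        volume_type="datacore",
    )
    print(volume.id, volume.status)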
|
OpenStack
VMware vRealize Automation (vRA)
Nutanix Calm
Nutanix offers drivers for OpenStack that cover Acropolis compute, image, volume and network services.
In June 2017 Nutanix Calm v1.0 was released as a follow-up to the acquisition of Calm.io. Nutanix Calm adds native application orchestration and lifecycle management to Nutanix ECP. Calm decouples application management from the underlying infrastructure, thus enabling applications to be easily deployed into private or public cloud environments. The addition of advanced application management to the Nutanix platform turns common tasks into repeatable automations accessible to all IT teams, without giving up control across the infrastructure stack.
Nutanix Calm blueprints can be published directly for end user consumption through the Nutanix Marketplace, giving application owners and developers the ability to request IT services that can then be instantly provisioned.
Nutanix Calm provides role-based governance that limits user operations based on permissions. All activities and changes are centrally logged for end-to-end traceability and debugging, aiding security teams with key compliance initiatives.
Nutanix Calm's latest version, 2.7.0, was released in August 2019 and is compatible with AOS 5.10/5.11 and Prism Central 5.10.6. Nutanix Calm 2.7.0 supports Nutanix AHV and VMware vSphere hypervisors. Nutanix Calm does not support Hyper-V.
Nutanix Calm requires a separate license.
|
OpenStack
VMware vRealize Automation (vRA)
OpenStack integration is achieved through VMware Integrated OpenStack v2.0
vRealize Automation 7 integration is achieved through vRA 7.0.
|
|
|
Full
The DataCore SANsymphony GUI offers delegated administration to secondary users through fine-grained Role-based Access Control (RBAC). The administrator is able to define Virtual Disk ownership as well as privileges associated with that particular ownership. Owners must have Virtual Disk privileges in an assigned role in order to perform operations on the virtual disk. Access can be very refined. For example, one owner may have the privilege to create a snapshot of a virtual disk, but not have the ability to serve or unserve the same virtual disk. Privilege sets define the operations that can be performed. For instance, in order for an owner to perform snapshot, rollback, or replication operations, they would require those privilege sets in an assigned role.
|
AHV only
The Prism Self-Service Portal (SSP) is an AHV-only feature (so not supported for vSphere and Hyper-V). SSP is integrated into Prism Central and enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity. SSP offers hands-off administration with fine-grained permissions and resource quotas through role-based access controls (RBAC). The full range of self-service portal features is managed from the Prism Central web console user interface. Prism SSP can connect to Active Directory in order to enable Single Sign-On (SSO) as well as to provide users and groups when configuring RBAC.
AOS 5.1 first introduced general availability (GA) support for hot plugging virtual memory and vCPU on VMs that run on top of the AHV hypervisor from within the Self-Service Portal. This means that the memory allocation and the number of CPUs on VMs can be increased while the VMs remain powered on. However, the number of cores per CPU socket cannot be increased while the VMs are powered on.
AOS 5.5 and above allow the full range of self-service portal features to be managed from the Prism Central web console user interface.
Self-service portal features include:
- VM Management: creating/updating*/deleting a VM, performing VM operations.
- Task Management: viewing the status of a task, restarting a failed task.
Updating a VM: the VM name, number of assigned vCPUs, or memory size of the VM cannot be changed, however disks and NICs can be added or deleted.
VM operations: launch console, power on/off, manage categories, (un)quarantine VM, add to catalog.
Prism Self-Service Portal (SSP) is included in both the Prism Starter and Prism Pro edition. Using Prism Central is a hard requirement. Prism Element is no longer supported as of AOS 5.5.
|
N/A (not part of VxRail license bundle)
VMware vSAN does not provide any end-user self service capabilities of its own.
A self service portal enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license.
|
|
|
|
Maintenance |
|
|
|
Unified
All storage related features and functionality are built into the DataCore SANsymphony platform. The consolidation means that only one product needs to be installed and upgraded, and that minimal dependencies exist with other software.
Integrations with 3rd party systems (e.g. OpenStack, vSphere, System Center) are delivered separately but are free-of-charge.
|
Unified
A few minor components aside (eg. SRA), all storage related features and functionality are built into the Nutanix ECP platform. This type of consolidation means that only one product needs to be installed/upgraded and minimal dependencies exist with other software.
|
Partially Distributed
For a number of features and functions the vSAN software inside the VxRail appliances relies on other components that need to be installed and upgraded next to the core vSphere platform. Examples are Avamar Virtual Edition (AVE), vSphere Replication (VR) and RecoverPoint for VMs (RPVM). As a result some dependencies exist with other software.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
Each SANsymphony update is packaged in an installation Wizard which contains a fully guided upgrade process. The upgrade process checks all system requirements and performs a system health check before starting the upgrade process and before moving from one node to the next.
The user can also decide to upgrade a SANsymphony cluster manually and follow all steps that are outlined in the Release Notes.
|
Rolling Upgrade (1-by-1)
The upgrade of AOS in a Nutanix cluster can be performed via Prism using a 1-Click approach. Upgrades are non-disruptive in the sense that when a VSC is taken offline to be upgraded, availability is still provided by the other VSCs. Nutanix provides a step-by-step upgrade guide that includes a number of pre- and post checks.
AOS 5.17 introduces support for upgrading AHV hosts through the Nutanix Life Cycle Manager (LCM).
|
Rolling Upgrade (1-by-1)
The Dell EMC VxRail Manager software provides one-click, non-disruptive patches and upgrades for the entire solution stack.
End-user organizations no longer need professional services for stretched cluster upgrades when upgrading from VxRail 4.7.300 to the next release.
vSAN 7.0 native File Services upgrades are also performed on a rolling basis. The file shares remain accessible during the upgrade because file server containers running on virtual machines that are being upgraded fail over to other virtual machines. Some interruptions might be experienced while accessing the file shares during the upgrade.
|
|
FW Upgrade Execution
Details
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and one-by-one.
DataCore provides integrated firmware-control for FC-cards. This means the driver automatically loads the required firmware on demand.
|
1-Click
The 1-Click upgrade for BIOS and Baseboard Management Controller (BMC) firmware feature is available for AHV and ESXi hypervisors. Hardware must run on at least NX G4 (Haswell) platforms.
|
1-Click
The Dell EMC VxRail Manager software provides one-click, non-disruptive patches and upgrades for the entire solution stack.
VxRail 7.0 does not support vSphere Lifecycle Manager (vLCM); vLCM is disabled in VMware vCenter.
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
No
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer unified support for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Yes (Nutanix; Dell; Lenovo; IBM)
Nutanix provides unified support for the entire native solution. This means Nutanix is the single point-of-contact for any storage software (Nutanix AOS) and server hardware (Super Micro) related issues.
When buying Nutanix on Dell server hardware, joint support is offered through Dell ProSupport with software assist from Nutanix.
When buying Nutanix on Lenovo server hardware (Converged HX Series), technical support for the entire solution is delivered by the Lenovo services organization.
When buying Nutanix on IBM Power hardware, technical support for the entire solution is delivered by the IBM services organization.
|
Yes
Dell EMC VxRail provides unified support for the entire native solution. This means Dell EMC is the single point-of-contact for both software and hardware issues. Prerequisite is that the separate VMware vSphere licenses have not been acquired through an OEM vendor.
|
|
Call-Home Function
Details
|
Partial (HW dependent)
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer call-home for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Full
Nutanix's call-home function is called 'Pulse' and is fully integrated into the platform. Enabling the feature is simple and straightforward and requires very few clicks.
Pulse provides diagnostic system data to Nutanix support teams in order to deliver proactive, context-aware support for Nutanix solutions. The information is collected unobtrusively and automatically from a Nutanix cluster, with no impact to system performance. Nutanix support teams get the data required for quick and timely resolution of issues - often before administrators even know there is a problem. Examples include failed disks, faulty network interface cards (NICs) and unusually high utilization of cluster resources that could lead to potential problems.
Nutanix AOS 5.6.1 introduces support for Prism Central 5.6.1 to be used as a proxy for routing support information to Nutanix support.
Pulse is a basic support service and is included with every software edition and support contract.
|
Full
VxRail uses EMC Secure Remote Services (ESRS). ESRS maintains connectivity with the VxRail hardware components around the clock and automatically notifies EMC Support if a problem or potential problem occurs.
VxRail is also supported by Dell EMC Vision. Dell EMC Vision offers Multi-System Management, Health monitoring and Automated log collection for a multitude of products from the Dell EMC family.
|
|
Predictive Analytics
Details
|
Partial
Capacity Management: DataCore SANsymphony Analysis and Reporting supports capacity depletion monitoring. It complements pool space threshold warnings by regularly evaluating the rate of capacity consumption and estimating when space will be depleted. The regularly updated projections provide the opportunity to add more storage to the pool before it runs out, and support more accurate capacity planning with fewer surprises. To help allocate costs, especially in private cloud and hosted cloud services, SANsymphony generates reports quantifying, across several parameters, the storage resources consumed by specific hosts or groups of hosts.
Health Monitoring: A combination of system health checks and access to device S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) alerts help to isolate performance and disk problems before they become serious.
DataCore Insight Services (DIS) offers additional capabilities, including log analytics for predictive failure analysis and actionable insights that also cover the underlying hardware.
DIS also provides predictive capacity trend analysis in order to proactively warn when licensing limitations will be reached within x days and/or when disk pools are about to run out of capacity (a simple run-rate sketch of this kind of estimate follows below).
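As a rough illustration of the kind of run-rate projection described above, the following is a minimal sketch only, assuming a constant daily growth rate; the function name and pool figures are hypothetical and this is not DataCore's actual implementation:

```python
from datetime import date, timedelta
from typing import Optional

def estimate_depletion_date(pool_capacity_gb: float,
                            used_gb: float,
                            daily_growth_gb: float,
                            today: Optional[date] = None) -> Optional[date]:
    """Estimate when a storage pool runs out of space at the current
    consumption rate. Returns None if usage is flat or shrinking."""
    today = today or date.today()
    if daily_growth_gb <= 0:
        return None  # no depletion expected at the current rate
    free_gb = pool_capacity_gb - used_gb
    days_left = free_gb / daily_growth_gb
    return today + timedelta(days=int(days_left))

# Hypothetical pool: 100 TB capacity, 72 TB used, growing ~150 GB per day.
print(estimate_depletion_date(100_000, 72_000, 150))
```

In practice such a projection would be refreshed regularly so the estimate tracks changes in the consumption rate, which is what the regularly updated DIS projections provide.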
|
Full
Nutanix Prism Central includes machine-learning capabilities that analyze resource usage over time and provide tools to monitor resource consumption, identify abnormal behavior, and guide resource planning. Predictive analysis of capacity usage and trends based on workload behavior enables pay-as-you-grow scaling. The integrated advisor functionality offers infrastructure optimization recommendations to improve efficiency and performance.
Nutanix's predictive analytics functionality requires the Prism Pro edition.
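The abnormal-behavior detection described above can be pictured as flagging samples that fall outside an expected band around recent usage. The sketch below is a generic illustration of that idea, not Nutanix's actual algorithm; the window size, threshold and sample data are hypothetical:

```python
import statistics

def find_anomalies(samples, window=12, k=3.0):
    """Flag samples that fall outside a rolling mean +/- k * stdev band
    computed over the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # guard against zero variance
        if abs(samples[i] - mean) > k * stdev:
            anomalies.append((i, samples[i]))
    return anomalies

# Hypothetical hourly CPU utilization samples (%), with one spike at the end.
cpu = [38, 40, 41, 39, 42, 40, 41, 43, 39, 40, 42, 41] * 2 + [92]
print(find_anomalies(cpu))  # flags the final spike
```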
|
Full (not part of VxRail license bundle)
vRealize Operations (vROps) provides native vSAN support with multiple dashboards for proactive alerting, heat maps, device and cluster insights, and streamlined issue resolution. vROps also provides forward trending and forecasting for the vSAN datastore as well as any other datastore (SAN/NAS).
vSAN 6.7 introduced 'vRealize Operations (vROps) within vCenter'. This provides end users with 6 dashboards inside the vCenter console, giving insights but not actions. Three of these dashboards relate to vSAN: Overview, Cluster View and Alerts. One of the widgets inside these dashboards displays 'Time remaining before capacity runs out'. Because this provides only very basic trending information, a full version of the vROps product is still required.
'vRealize Operations (vROps) within vCenter' is included with vSAN Enterprise and vSAN Advanced.
The full version of vRealize Operations (vROps) is licensed as a separate product from VMware vSAN.
In June 2019 an early access edition (technical preview) of VxRail Analytical Consulting Engine (ACE) was released to the public for test and evaluation purposes. VxRail ACE is a cloud service that runs in a Dell EMC secure data lake and provides infrastructure machine learning for insights that end-user organizations can view on a Dell EMC managed web portal. On-premises VxRail clusters send advanced telemetry data to VxRail ACE by leveraging the existing SRS secure transport mechanism in order to provide the cloud platform with raw data input. VxRail ACE is built on Pivotal Cloud Foundry.
VxRail ACE provides:
- global visualization across all VxRail clusters and vCenter appliances;
- simplified health scores at the cluster and node levels;
- advanced capacity and performance metrics charting so problem areas (CPU, memory, disk, networking) can be pinpointed down to the VM level;
- future capacity planning by analyzing the previous 180 days of storage use data and projecting data usage for the next 90 days (see the sketch after this list).
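A simple way to picture the 180-day/90-day projection is a linear trend fitted to the usage history and extrapolated forward. The sketch below illustrates only that idea and is not the VxRail ACE algorithm; the history and figures are hypothetical:

```python
def project_usage(daily_used_gb, horizon_days=90):
    """Fit a least-squares line through historical daily usage and
    extrapolate the next `horizon_days` of usage."""
    n = len(daily_used_gb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_used_gb) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_used_gb))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + d) for d in range(horizon_days)]

# Hypothetical history: 180 days of usage growing ~20 GB/day from 50 TB.
history = [50_000 + 20 * d for d in range(180)]
forecast = project_usage(history)
print(f"projected usage 90 days out: {forecast[-1]:.0f} GB")
```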
VxRail ACE supports VxRail 4.5.215 or later, as well as 4.7.0001 or later. Sending advanced telemetry data to VxRail ACE is optional and can be turned off. The default collection frequency is once every hour.
VxRail ACE is designed for extensibility so that future visibility between VxRail ACE and vRealize Operations Manager is possible.
Because VxRail ACE is not Generally Available (GA) at this time, it is not yet considered in the WhatMatrix evaluation/scoring mechanism.
|
|