|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Broad range of hardware support
- + Strong VMware integration
- + Policy-based management
|
- + Broad range of hardware support
- + Strong Microsoft integration
- + Great simplicity to deploy
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Single hypervisor support
- - Very limited native data protection capabilities
- - Dedup/compr not performance optimized
|
- - Single hypervisor support
- - Limited native data protection
- - Dedup/compr not performance optimized
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of these products; because the core software is the same, the combined offering has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution aimed specifically at file virtualization. The additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
Recently, at the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm and, together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access to any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. DataCore has by far the longest track record in software-defined storage when compared to the other SDS/HCI vendors on WhatMatrix.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: vSAN
Type: Software-only (SDS)
Development Start: Unknown
First Product Release: 2014
VMware was founded in 1998 and began to ship its first Software Defined Storage solution, Virtual SAN, in 2014. Virtual SAN was later rebranded to vSAN. The vSAN solution is fully integrated into the vSphere Hypervisor platform. In 2015 VMware released major updates in the second iteration of the product. Close to the end of 2016 the third iteration was released.
At the end of May 2019 the company had a customer install base of more than 20,000 vSAN customers worldwide. This covers both vSAN and VxRail customers. At the end of May 2019 there were over 30,000 employees working for VMware worldwide.
|
Name: Storage Spaces Direct (S2D)
Type: Software-only (SDS)
Development Start: 2015
First Product Release: Oct 2016
NEW
Microsoft, founded in 1975, released its first Software Defined Storage (SDS) solution, Storage Spaces, as a feature in Windows Server 2012. Storage Spaces was enhanced in the R2 release of Windows Server 2012. In October 2016 Microsoft introduced the all-new Storage Spaces Direct (S2D) as an integral part of the Windows Server 2016 platform. Microsoft S2D aggregates direct-attached storage from separate x86 servers into a highly available shared storage pool.
At the start of October 2019 Microsoft introduced a new S2D version with the release of Windows Server 2019. The new S2D version features both new capabilities as well as improvements in existing capabilities.
Customer install base and number of employees working on S2D are unknown at this time. In March 2018 there were more than 10,000 clusters worldwide running Storage Spaces Direct.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: Jan 2021
SSY 10.0 PSP11: Aug 2020
SSY 10.0 PSP10: Dec 2019
SSY 10.0 PSP9: Jul 2019
SSY 10.0 PSP8: Sep 2018
SSY 10.0 PSP7: Dec 2017
SSY 10.0 PSP6 U5: Aug 2017
.
SSY 10.0: Jun 2014
SSY 9.0: Jul 2012
SSY 8.1: Aug 2011
SSY 8.0: Dec 2010
SSY 7.0: Apr 2009
.
SSY 3.0: 1999
NEW
10th Generation software. DataCore currently has the most experience with SDS/HCI technology when comparing SANsymphony to other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
vSAN 7.0 U1: Oct 2020
vSAN 7.0: Apr 2020
vSAN 6.7 U3: Apr 2019
vSAN 6.7 U1: Oct 2018
vSAN 6.7: May 2018
vSAN 6.6.1: Jul 2017
vSAN 6.6: Apr 2017
vSAN 6.5: Nov 2016
vSAN 6.2: Mar 2016
vSAN 6.1: Aug 2015
vSAN 6.0: Mar 2015
vSAN 5.5: Mar 2014
NEW
7th Generation software. vSAN maturity has increased again as the platform's feature range has been expanded with a set of advanced functionality.
vSAN is also a key element in both the VCE VxRail/VxRack and VMware EVO:SDDC propositions.
|
GA Release Dates:
S2D 2019: Oct 2018
S2D 2016: Oct 2016
NEW
2nd generation software. Because Storage Spaces Direct (S2D) is an integral part of the Windows Server platform, the first GA version of S2D was introduced as part of the GA release of Windows Server 2016 in October 2016. The second GA version of S2D was introduced as a part of the GA release of Windows Server 2019 in October 2018.
Microsoft recommends deploying Storage Spaces Direct on hardware validated by the Windows Server Software Defined (WSSD) program. For Windows Server 2019, the first wave of WSSD offers was launched in mid-January 2019.
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
N/A
vSAN is sold by VMware as a software-only solution. Server hardware must be acquired separately.
VMware maintains an extensive Hardware Compatibility List (HCL) with supported hardware for VMware vSAN implementations.
For guidance on proper hardware configurations, VMware provides a vSAN Hardware Quick Reference Guide.
You can view both the HCL and the Quick Reference Guide by using the following link: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vSAN
To help customers further, VMware also partners with multiple server hardware manufacturers to provide reference configurations in the VMware vSAN Ready Nodes document.
|
Per Node
Storage Spaces Direct (S2D) is sold by Microsoft as a software-only solution. Server hardware must be acquired separately.
You can find the Hardware Compatibility List here: https://www.windowsservercatalog.com/default.aspx
Microsoft recommends deploying Storage Spaces Direct on hardware validated by the Windows Server Software Defined (WSSD) program. For Windows Server 2019, the first wave of WSSD offers was launched in mid-January 2019.
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed per capacity (in 1 TB steps). The more capacity an end-user licenses in a given edition, the lower the price per TB; the Business edition is the exception, with a fixed price per TB.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
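As an illustration of the Business (BZ) edition boundaries described above, the sketch below encodes the 1 TB licensing steps, the 10 TB minimum, the 38 TB maximum and the 2-instance limit as a simple check. It is illustrative only; the function name and structure are not part of any DataCore tooling.

```python
def bz_installation_valid(total_tb: int, instances: int) -> bool:
    """Illustrative check of the Business (BZ) edition limits quoted above:
    capacity licensed in whole 1 TB steps, 10 TB minimum and 38 TB maximum
    per installation, and at most 2 instances per installation."""
    return (
        isinstance(total_tb, int)   # capacity is licensed in 1 TB steps
        and 10 <= total_tb <= 38    # BZ minimum / maximum capacity per installation
        and 1 <= instances <= 2     # BZ instance limit per installation
    )

print(bz_installation_valid(16, 2))   # True
print(bz_installation_valid(40, 2))   # False: exceeds the 38 TB BZ limit
```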
|
Per CPU Socket
Per Desktop (VDI use cases only)
Per Used GB (VCPP only)
Editions:
vSAN Enterprise
vSAN Advanced
vSAN Standard
vSAN for ROBO Enterprise
vSAN for ROBO Advanced
vSAN for ROBO Standard
vSAN (for ROBO) Standard editions offer All-Flash Hardware, iSCSI Target Service, Storage Policy Based Management, Virtual Distributed Switch, Rack Awareness, Software Checksum and QoS - IOPS Limit.
vSAN (for ROBO) Enterprise and vSAN (for ROBO) Advanced editions exclusively offer All-Flash related features RAID-5/6 Erasure Coding and Deduplication+Compression, as well as vRealize Operations within vCenter over vSAN (for ROBO) Standard editions.
vSAN (for ROBO) Enterprise editions exclusively offer Stretched Cluster and Data-at-rest Encryption features over both vSAN (for ROBO) Standard and Advanced editions.
VMware vSAN is priced per CPU socket for VSI workloads, but can also be acquired per desktop for VDI use cases; a 25-VM pack is exclusively available for ROBO use cases.
vSAN for Desktop is priced per named user or per concurrent user (CCU) in a virtual desktop environment and sold in packs of 10 and 100 licenses.
VMware vSAN is priced per Used GB when subscribing to vSAN from a VMware Cloud Provider (VCPP = VMware Cloud Partner Program).
|
Per Node
Windows Server 2019 Datacenter edition is required. The Datacenter edition license covers up to 16 physical cores per server; if your server has more than 16 physical cores, you have to buy a license pack for every two additional cores.
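To make the core-based licensing rule above concrete, here is a minimal sketch of the arithmetic: the base Datacenter license covers 16 physical cores and each additional pack covers 2 cores. The figures are taken from the text above; consult Microsoft's licensing documentation for the authoritative terms.

```python
import math

def additional_core_packs(physical_cores: int, base_cores: int = 16, pack_cores: int = 2) -> int:
    """Illustrative only: how many extra 2-core license packs are needed
    beyond the 16-core base license described above."""
    extra_cores = max(0, physical_cores - base_cores)
    return math.ceil(extra_cores / pack_cores)

# Example: a dual-socket server with 2 x 14 = 28 physical cores
print(additional_core_packs(28))  # -> 6 packs (12 extra cores / 2 cores per pack)
```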
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per CPU Socket
Per Desktop (VDI use cases only)
Subscriptions: Basic, Production, Business, Critical and Mission Critical
For details on the different support subscriptions, please use the following link: https://www.vmware.com/support/services/compare.html
|
Per Node
Microsoft S2D is supported by Microsoft Customer Service & Support (CSS), which is truly global and offers Tier 1/2/3 support and onsite support in every region. Mission Critical support, account management, support contracts, and support management are available, with response time SLAs depending on the level of support purchased. Pay per incident support is also available.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony Software-Defined Storage Services are focused on variable deployment models. The range covers classical storage virtualization, converged and hybrid-converged setups, and hyperconverged deployments, including seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
VMware is stack-oriented, whereas the vSAN platform itself is heavily storage-focused.
With the vSAN/VxRail platforms VMware aims to provide all functionality required in a Private Cloud ecosystem.
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
Microsoft is stack-oriented, whereas the S2D platform itself is heavily storage-focused.
With Storage Spaces Direct (S2D) Microsoft aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 40 GbE
vSAN supports ethernet connectivity using SFP+ or Base-T. VMware recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
10, 25, 40, 100 GbE
Storage Spaces Direct (S2D) supports ethernet connectivity using SFP+ or Base-T. Microsoft requires 10GbE for intra-cluster communication to avoid the network becoming a performance bottleneck.
S2D supports several network bandwidths: 10, 25, 40 and 100 Gb Ethernet. Although it is not mandatory, a network compliant with RDMA (RoCE or iWARP) brings the best performance.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and thus can be turned on/off, in effect each one needs to be taken into consideration when preparing a detailed design.
|
Medium
VMware vSAN was developed with simplicity in mind, both from a design and a deployment perspective. VMware vSAN's uniform platform architecture is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities either natively or by leveraging features already present in the VMware hypervisor, vSphere, on a per-VM basis. However, there are still some key areas where vSAN relies heavily on external products and where there is no tight integration involved (eg. backup/restore). In these cases choices need to be made whether to incorporate first-party or third-party solutions into the overall technical design.
|
Medium
Microsoft Storage Spaces Direct (S2D) is able to meet many different use-cases because of its flexible technical architecture, however this also means there are multiple design choices that need to be made. Today Microsoft S2D leverages the data protection capabilities available in Microsoft's hypervisor platform, Hyper-V, and the Windows Server OS, which keeps the overall design from getting overly complex. Microsoft S2D in Windows Server 2019 also has a core set of native data services that the previous version lacked.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
StorageReview (Aug 2018, Aug 2016)
ESG Lab (Aug 2018, Apr 2016)
Evaluator Group (Oct 2018, Jul 2017, Aug 2016)
StorageReview (Aug 2018)
Title: 'VMware vSAN with Intel Optane Review'
Workloads: MySQL OLTP, MSSQL OLTP, Generic profiles
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL), Vdbench (generic)
Hardware: All-Flash Supermicro’s 2029U-TN24R4T+, 4-node cluster, vSAN 6.7
ESG Lab (Aug 2018)
Title: 'Optimize VMware vSAN with Western Digital NVMe SSDs and Supermicro Servers'.
Workloads: MSSQL OLTP
Benchmark Tools: HammerDB (MSSQL)
Hardware: All-Flash SuperMicro BigTwin, 4-node cluster, vSAN 6.6.1; All-Flash Lenovo X3650M5, 4-node cluster, vSAN 6.1
Evaluator Group (Jul 2017/Oct 2018)
Title: 'IOmark-VM-HC Test Report'
Workloads: Mix (MS Exchange, Olio, Web, Database)
Benchmark Tools: IOmark-VM (all)
Hardware: All-Flash Intel R2208WF, 4-node cluster, vSAN 6.6/6.7
Storage Review (Aug 2016)
Title: 'VMware VSAN 6.2 All-Flash Review'
Workloads: MySQL OLTP, MSSQL OLTP
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL)
Hardware: All-Flash Dell PowerEdge R730xd, 4-node cluster, vSAN 6.2
Evaluator Group (Aug 2016)
Title: 'IOmark-VM-HC Test Report'
Workloads: Mix (MS Exchange, Olio, Web, Database)
Benchmark Tools: IOmark-VM (all)
Hardware: All-Flash Intel S2600WT, 4-node and 6-node cluster, vSAN 6.2
ESG Lab (Apr 2016)
Title: 'Optimize VMware Virtual SAN 6 with SanDisk SSDs'.
Workloads: MSSQL OLTP
Benchmark Tools: HammerDB (MSSQL)
Hardware: Hybrid/All-Flash Lenovo X3650M5, 4-node cluster, vSAN 6.2
|
ESG Lab (Mar 2017)
ESG Lab (Mar 2017)
Title: 'Performance and Cost Efficiency of Intel and Microsoft Hyperconverged Infrastructure'
Workloads: Generic
Benchmark Tools: Diskspd Utility (generic)
Hardware: Hybrid Intel servers, 4-node cluster, S2D 1.0; All-flash Intel servers, 4-node cluster, S2D 1.0; All-NVMe Intel servers, 4-node cluster, S2D 1.0
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16TB) and time (30 days), all of which can be extended upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Free Trial (60-days)
Online Lab
Proof-of-Concept (POC)
vSAN Evaluation is freely downloadable after registering online. Because it is embedded in the hypervisor, the free trial includes vSphere and vCenter Server. vSAN Evaluation can be installed on all commodity hardware platforms that meet the hardware requirements. vSAN Evaluation use is time-restricted (60-days). vSAN Evaluation is not for production environments.
VMware also offers a vSAN hosted hands-on lab that lets you deploy, configure and manage vSAN in a contained environment, after registering online.
|
Free Trial (180-days)
Proof-of-Concept (PoC)
A Storage Spaces Direct (S2D) PoC environment can be deployed either by running it physically or by running it virtually as VMs on top of a hypervisor (Hyper-V or VMware).
Windows Server 2019 Datacenter evaluation can be downloaded freely. It is time-restricted (180-days).
For lab purposes, Microsoft Support can enable S2D in the current Windows Server 2019 build.
Microsoft recommends deploying Storage Spaces Direct on hardware validated by the Windows Server Software Defined (WSSD) program. For Windows Server 2019, the first wave of WSSD offers was launched in mid-January 2019.
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: VMware vSAN is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
VMware vSAN can partially serve in a dual-layer model by providing storage also to other vSphere hosts within the same cluster that do not contribute storage to vSAN themselves or to bare metal hosts. However, this is not a primary use case and also requires the other vSphere hosts to have vSAN enabled (Please view the compute-only scale-out option for more information).
|
Single-Layer
Dual-Layer
You can deploy Storage Spaces Direct (S2D) in two ways:
1. By using the S2D servers for hosting compute as well as storage, thus creating a hyper-converged single-layer configuration. Microsoft uses the term Hyper-converged.
2. By using the S2D servers as storage nodes only, thus creating a traditional dual-layer configuration where compute is hosted on other servers that access the storage through SMB3. Microsoft uses the term Disaggregated.
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
BYOS (fast, some automation)
Pre-installed (very fast, turnkey approach)
There are four methods to deploy vSAN:
1. Build-Your-Own.
2. vSAN Ready Nodes (Pre-installed).
3. vSAN Ready Nodes (non pre-installed).
4. Hardware Appliances aka HCI (Dell VxRail / Lenovo VX Series).
Each method has a different level of effort involved; the method that suits the end-user organization best is based on the technical level of the admin team, the IT architecture and state, as well as the desired level of customization. Where the 'Hardware Appliances' allow for the least amount of customization and the 'Build-Your-Own' for the most, in reality the majority of end-users choose either 'Pre-Installed vSAN Ready Nodes' or 'Hardware Appliances'.
Because of the tight integration with the VMware vSphere platform, vSAN itself is very easy to install and configure (just a few clicks in the vSphere GUI).
vSAN 6.7 U1 introduced a Quickstart wizard in the vSphere Client. The Quickstart workflow guides the end user through the deployment process for vSAN and non-vSAN clusters, and covers every aspect of the initial configuration, such as host, network, and vSphere settings. Quickstart also plays a part in the ongoing expansion of a vSAN cluster by allowing an end user to add additional hosts to the cluster.
BYOS = Bring-Your-Own-Server-Hardware
|
BYOS (some automation)
NEW
Because of the tight integration with the Microsoft Server platform, Storage Spaces Direct (S2D) is very easy to install and configure. It is also possible to streamline the software deployment of an entire S2D cluster by automating all the steps using PowerShell cmdlets.
With WSSD Certified Ready-Node solutions from well-known server hardware vendors you get a pre-defined configuration for the entire solution. However, this setup is not always optimal, especially with regard to the network parts.
The first wave of WSSD solutions for S2D in Windows Server 2019 was launched in mid-January 2019.
BYOS = Bring-Your-Own-Server-Hardware
WSSD = Windows Server Software-defined
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM as it introduces virtualization layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Kernel Integrated
vSAN is embedded into the VMware hypervisor. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Kernel Integrated
Storage Spaces Direct (S2D) is embedded into the Windows 2019 Server operating system. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
To be more precise, S2D is a feature that is fully integrated into Windows Failover Clustering. When you create a Windows cluster, the S2D feature is disabled by default so you will have to enable it.
Microsoft has also developed the Software Storage Bus so each direct attached disk in each server node can be accessed by others server nodes.
Both Kernel Integrated and Virtual Controller are distributed architectures, having one active component per virtualization host that work together as a group. Both architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 7.0 U1
NEW
VMware vSAN is an integral part of the VMware vSphere platform; As such it cannot be used with any other hypervisor platform.
vSAN supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
|
Microsoft Hyper-V 2012R2 (Dual Layer)
Microsoft Hyper-V 2016/2019
Storage Spaces Direct (S2D) is an integral part of the Microsoft Windows Server 2019 platform. As such it cannot be used with any other hypervisor platform than Hyper-V.
S2D supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
Both S2D deployment models (single-layer, dual layer) can be used in conjunction with Hyper-V. When setup in a dual-layer configuration, the storage nodes with S2D act as scale-out file servers that present storage to the Hyper-V compute nodes.
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
vSAN (incl. WSFC)
VMware uses a proprietary protocol for vSAN.
vSAN 6.1 and upwards support the use of Microsoft Failover Clustering (MSFC). This includes MS Exchange DAG and SQL Always-On clusters when a file share witness quorum is used. The use of a failover clustering instance (FCI) is not supported.
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 introduced native support for SCSI-3 Persistent Reservations (PR), which enables Windows Server Failover Clusters (WSFC) to be directly deployed on native vSAN VMDKs. This capability enables migrations from legacy deployments on physical RDMs or external storage protocols to VMware vSAN.
|
SMB3
Storage Spaces Direct (S2D) is exposed to the Windows OS through the Cluster Shared Volume File System (CSVFS). In a dual-layer deployment model S2D volumes are presented to compute nodes through the SMB3 protocol.
Storage nodes communicate with one another using the SMB3 protocol, including SMB Direct and SMB Multichannel.
Hyper-V is capable of supporting virtual guest clusters by leveraging 'VHD Set' files, a feature that was introduced in Hyper-V 2016.
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Many
vSAN iSCSI Service enables hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore by providing highly available block storage as iSCSI LUNs. The physical workloads can be stand-alone servers, Windows Failover Clusters (including MSSQL) or Oracle RAC.
vSAN iSCSI Service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mapping (RDMs).
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
|
Microsoft Windows Server (Limited)
Storage Spaces Direct (S2D) is supported for use in MS SQL Server solutions.
When deploying S2D with Scale-Out File Server on top, file services are supported. This means you can store data such as Hyper-V VMs, RDS profiles (UPD), or more generic data such as Word and PowerPoint files, even though this is not recommended by Microsoft. SMB shares are supported, but NFS shares and iSCSI targets are not.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
vSAN iSCSI block storage acts as one or more targets for Windows or Linux operating systems running on a bare metal (physical) server.
vSAN iSCSI Service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mapping (RDMs).
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 enhances the vSAN iSCSI service by allowing dynamic resizing of iSCSI LUNs without disruption.
|
SMB3
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
Built-in (Hypervisor-based, vSAN supported)
VMware vSphere Docker Volume Service (vDVS) technology enables running stateful containers backed by storage technology of choice in a vSphere environment.
vDVS comprises a Docker plugin and a vSphere Installation Bundle (VIB), which together bridge the Docker and vSphere ecosystems.
vDVS abstracts underlying enterprise-class storage and makes it available as Docker volumes to a cluster of hosts running in a vSphere environment. vDVS can be used with enterprise-class storage technologies such as vSAN, VMFS, NFS and VVol.
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS). When Cloud Native Storage is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
|
Built-in (native)
Microsoft provides enhancements in its own OS software for enabling S2D container support.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
Windows (Native)
Linux (Docker in Linux VM or Windows Server 2016 VM)
NEW
Windows Server 2019 offers native support for Windows Server 2016/2019 containers at this time.
Currently Docker EE does not yet officially support Windows Server 2019 (Build 1809). For updates please check https://docs.docker.com/ee/supported-platforms/
Leveraging Docker inside a Linux VM or Windows Server 2016 VM is supported (nested virtualization).
Docker containers running on Windows Server 2019 are based on Windows Server Core or Nano Server. The base images are now hosted on Microsoft's container registry, MCR.
Windows Server 2019 introduces support for running both Windows and Linux containers on the same container host, using the same Docker daemon. This enables end-user organizations to have a heterogenous container host environment while providing flexibility to application developers.
Docker EE = Docker Enterprise Edition
Native = Windows Containers
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables SANsymphony data volumes to persist beyond the lifetime of both a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
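To illustrate how a container consumes SANsymphony-backed storage through the Docker Volume plugin described above, the sketch below uses the Docker SDK for Python. The driver name and the size option are placeholders, not confirmed values; the exact driver name and supported options are documented on the plugin's Docker Hub page linked above.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Create a persistent volume through the DataCore volume plugin.
# Driver name and options below are assumptions for illustration only;
# check the plugin's Docker Hub page for the actual values.
volume = client.volumes.create(
    name="app-data",
    driver="datacore-sds-volume-plugin",  # assumed driver name
    driver_opts={"size": "50GB"},         # hypothetical option
)

# Mount the volume in a container; the data persists beyond the container's lifetime.
client.containers.run(
    "alpine:latest",
    command="sh -c 'echo hello > /data/hello.txt'",
    volumes={volume.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```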
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
OS-integrated software + CSV/SMB (Native)
VHDX (Docker in Linux VM or Windows Server 2016 VM)
NEW
Native: With Windows Server 2019 (plus latest updates) there is support for mapping container data volumes on top of Cluster Shared Volumes (CSV) backed by S2D shared volumes. This allows a container to access its persistent data regardless of the physical cluster node where the container resides.
With Scale-Out File Server, created on top of an S2D cluster, the same CSV data folder can be made accessible via a Server Message Block (SMB) share. This remote SMB share can then be mapped locally on a container host, using the new SMB Global Mapping PowerShell cmdlet.
Docker: When leveraging Docker inside a Linux or Windows Server 2016 VM, one or more virtual disks (VHDX) are configured and attached to the VM. Inside the VM these virtual disks are mapped to the Linux/Windows container.
Native = Windows Containers
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
Virtualized container hosts on Microsoft Hyper-V hypervisor (Docker in Linux VM, Native in Windows VM)
Bare Metal container hosts (Native)
NEW
Both Windows Server 2019 with the Hyper-V role installed and bare metal Windows Server 2019 are supported for hosting Windows containers.
Windows Server 2019 is not yet officially supported by Docker and Kubernetes.
Leveraging Docker inside a Linux VM to run Linux containers is also a supported configuration.
Native = Windows Containers
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03+ or higher can be used.
|
Linux
Windows 10 or Windows Server 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
Windows Server 2019 (Native)
Windows Server 2016 (Docker)
Linux (Docker)
NEW
Windows Server 2019 supports native Windows Server 2016 containers with Hyper-V Isolation and Windows Server 2019 containers with and without Hyper-V Isolation. Windows 10 containers are not (yet) supported for running on Windows Server 2019, with or without Hyper-V Isolation.
Windows Server 2019 is not yet officially supported by Docker and Kubernetes.
Native = Windows Containers
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS).
When Cloud Native Storage (CNS) is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS, vVols) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
VCP = vSphere Cloud Provider
CSI = Container Storage Interface
|
Kubernetes v1.14+
NEW
Windows Server 2019 is officially supported by Kubernetes since version 1.14. The current version is Kubernetes 1.17.
Windows Server 2019 is the only Windows operating system supported, enabling Kubernetes Node on Windows (including kubelet, container runtime, and kube-proxy). Windows Server 2019 is only supported as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes. There are no plans to have a Windows-only Kubernetes cluster.
Kubernetes currently only supports Windows containers with process isolation. Support for Windows containers with Hyper-V isolation is planned for a future release.
Docker EE-basic 18.09 is required on Windows Server 2019 / 1809 nodes for Kubernetes.
v1.17: Windows worker nodes in a Kubernetes cluster now support Windows Server version 1903 in addition to the existing support for Windows Server 2019.
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The Kubernetes CSI plugin provides several plugins for integrating storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
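The matrix notes that YAML files can be used to configure Kubernetes for SANsymphony; the same request can also be made programmatically. The sketch below uses the official Kubernetes Python client to create a PersistentVolumeClaim against a StorageClass assumed to be backed by the DataCore CSI plugin. The StorageClass name is a placeholder and must match whatever class the cluster administrator has defined for the SANsymphony CSI driver.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

# Request a persistent volume from a StorageClass assumed to be served by the
# DataCore CSI plugin. The class name below is a placeholder.
pvc = client.V1PersistentVolumeClaim(
    api_version="v1",
    kind="PersistentVolumeClaim",
    metadata=client.V1ObjectMeta(name="sansymphony-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="datacore-sansymphony",  # assumed StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

A pod that references this claim in its volume spec then receives a SANsymphony-backed block volume presented over iSCSI or Fibre Channel, as described above.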
|
Kubernetes Volume Plugin
The VMware vSphere Container Storage Interface (CSI) Volume Driver for Kubernetes leverages vSAN block storage and vSAN file shares to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree VMware vSphere CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using vSAN block storage and vSAN native file shares dynamically provisioned by VMware vSAN File Services.
The VMware vSphere CSI Volume Driver requires Kubernetes v1.14 or later and VMware vSAN 6.7 U3 or later. vSAN File Services requires VMware vSAN/vSphere 7.0.
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, vVol, VMFS or NFS datastores.
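As a sketch of how the vSphere CSI driver is typically consumed from Kubernetes, the example below creates a StorageClass whose provisioner is the vSphere CSI driver and passes a vSAN storage policy as a parameter. The provisioner name and parameter key reflect common vSphere CSI usage but should be verified against the VMware documentation for the driver version in use; the policy name itself is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()

# StorageClass handled by the vSphere CSI driver; PVCs that reference it are
# dynamically provisioned as vSAN-backed volumes governed by the named policy.
sc = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="vsan-default"),
    provisioner="csi.vsphere.vmware.com",  # vSphere CSI provisioner name (verify for your version)
    parameters={"storagepolicyname": "vSAN Default Storage Policy"},  # placeholder policy name
    reclaim_policy="Delete",
)

client.StorageV1Api().create_storage_class(body=sc)
```

A PersistentVolumeClaim that names this class in storageClassName is then provisioned dynamically, following the standard Kubernetes primitives listed above.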
|
Kubernetes FlexVolume Plugin
NEW
FlexVolume is an out-of-tree plugin interface that has existed in Kubernetes since version 1.2 (before CSI). CSI Plugins are still in an alpha feature state.
Windows has a layered filesystem driver to mount container layers and create a copy filesystem based on NTFS. All file paths in the container are resolved only within the context of that container.
The following storage functionality is not supported on Windows nodes:
- Volume subpath mounts. Only the entire volume can be mounted in a Windows container.
- Subpath volume mounting for Secrets
- Host mount projection
- DefaultMode (due to UID/GID dependency)
- Read-only root filesystem. Mapped volumes still support readOnly
- Block device mapping
- Memory as the storage medium
- File system features like uid/gid, per-user Linux filesystem permissions
- NFS based storage/volume support
- Expanding the mounted volume (resizefs)
CSI = Container Storage Interface
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
VMware has vSAN related Reference Architecture whitepapers available for both VMware Horizon and Citrix VDI platforms.
|
Microsoft RDS on Hyper-V
Citrix Virtual Apps and Desktops 7 1808
Workspot VDI on Hyper-V
NEW
Citrix Virtual Apps and Desktops 7 1808 provides official support for Windows Server 2019.
Microsoft Windows Server 2019 with Remote Desktop Services (RDS) installed and Hyper-V can be used to host multiple virtual desktops. Storage Spaces Direct (S2D) supports all VDI scenarios related to RDS.
Remote Desktop Services (RDS) is a proprietary Microsoft technology that allows users to connect remotely to a network with a graphical user interface. While the RDS client is installed on the user system, the RDS server software is installed on the server, and a remote connection is established with one or more terminal servers. While users in the RDS network connect to the server using a VM, this VM is shared with other users and operates on the same server OS for all users. A Microsoft RDS farm can provide a desktop session, an application session and a virtual desktop session located in a virtual machine.
Virtual desktop infrastructure (VDI) involves running user desktops inside virtual machines that are hosted on datacenter servers. In a VDI environment, each user is allotted a dedicated VM that runs a separate operating system. This provides an isolated environment for each individual user. A connection broker is used to manage the VMs.
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI related paper that includes a Login VSI benchmark dates back to December 2010. In that paper a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 200 virtual desktops/node
Citrix: up to 90 virtual desktops/node
VMware Horizon 7.0: Load bearing number is based on Login VSI tests performed on all-flash rack servers using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.9: Load bearing number is based on Login VSI tests performed on hybrid VxRail 160 appliances using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please read the corresponding whitepapers.
|
N/A
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Many
The following server hardware vendors provide vSAN Ready Nodes: Cisco, DELL, Fujitsu, Hitachi, HPE, Huawei, Lenovo, Supermicro.
|
Many
Microsoft Windows Server 2019 supports many well-known server hardware vendors such as Dell, Lenovo and HPE.
Please review the Hardware Compatibility List (HCL) for more information about the supported hardware: https://www.windowsservercatalog.com/default.aspx
Microsoft recommends deploying Microsoft HCI on WSSD hardware: https://www.microsoft.com/en-us/cloud-platform/software-defined-datacenter
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Many
The following server hardware vendors provide vSAN Ready Nodes: Cisco, DELL, Fujitsu, Hitachi, HPE, Huawei, Lenovo, Supermicro.
|
Many
Microsoft Windows Server 2019 supports many well-known server hardware vendors such as Dell, Lenovo and HPE.
Please review the Hardware Compatibility List (HCL) for more information about the supported hardware.
Pre-validated solutions are available through the Windows Server Software Defined program: https://www.microsoft.com/en-us/cloud-platform/software-defined-datacenter
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Supermicro also offers 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
Because vSAN has an extensive HCL, customers can opt for multiple server densities.
Note: vSAN Ready Nodes are mostly based on standard 2U rack server configurations.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
Because Storage Spaces Direct (S2D) is mostly hardware agnostic, customers can opt for multiple server densities.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specs are roughly comparable, there is no reason to insist on one hardware vendor over another. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
Yes
Mixing is allowed, although this is not advised within a single vSAN cluster for a consistent performance experience.
|
Yes
Storage Spaces Direct (S2D) accepts that different server hardware can be added to the cluster.
If the capacities of the storage devices are not the same, the capacity used on each device will be the same as the smallest available.
When deploying S2D in a single-layer model, be careful about live migrating VMs between nodes with dissimilar CPUs (eg. mixing AMD servers with Intel servers within the same cluster).
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
VMware vSphere supports 2nd generation Intel Xeon Scalable (Cascade Lake) processors in version 6.7U1 and upwards.
|
Flexible
NEW
There is a wide range of Intel Xeon E5, 1st Intel Xeon Scalable (Skylake) and 2nd generation Intel Xeon Scalable processors (Cascade Lake) to choose from (2 processor sockets per node).
|
|
|
Flexible
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
|
Flexible
Up to 24TB of memory can be installed in each server node (equal to the Windows Server 2016 Datacenter limit).
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: number of disks + capacity
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
vSAN 7.0 provides support for 32TB physical capacity drives. This extends the logical capacity up to 1PB when deduplication and compression is enabled.
|
Flexible
Storage devices totaling up to 1PB can be part of the same pool. These devices can be NVMe, SSD (SAS or SATA) and/or HDD (SAS or SATA).
The storage devices can be mixed for caching and tiering purposes.
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
|
Flexible
Any network adapters can be installed as long as they are part of the hardware listed in the Windows Server Catalog. For storage purposes, Remote-Direct Memory Access (RDMA) capable adapters are highly recommended (iWARP or RoCE).
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
VMware vSphere 6.5U1 and up officially support several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
Microsoft Windows Server 2019 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
CPU
Memory
Storage
GPU
VMware allows for the on-the-fly (non-disruptive) adding/removal of individual disks in existing disk groups.
There is a maximum of 5 disk groups (flash cache device + capacity devices) on an individual ESXi host participating in a vSAN cluster. In a hybrid configuration each disk group consists of 1 flash device + a maximum of 7 capacity devices. This totals 40 devices per ESXi host, although an average rack server has room for only up to 24 devices.
|
CPU
Memory
Storage
GPU
Storage: Any node can scale up to a maximum of 400TB of raw storage capacity. There is no set maximum for the number of devices per node; however, the maximum number of storage devices for a cluster is 416.
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only (vSAN VMKernel)
Storage+Compute: Existing vSAN clusters can be expanded by adding additional vSAN nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: When the vSAN VMkernel is installed and enabled on a host that is not contributing storage but resides in the same cluster as contributing hosts, vSAN datastores can be presented to these hypervisor hosts as well. This is also beneficial to storage migrations as it allows for online storage vMotions between vSAN storage and non-vSAN storage platforms. The use of the vSAN VMkernel requires a vSAN license for this host.
Storage-only: N/A; A vSAN node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
Compute+storage
Compute-only/Storage-only
Compute+storage: Existing single-layer Storage Spaces Direct clusters can be expanded by adding additional S2D nodes, which adds additional compute and storage resources to the shared pool.
Compute-only/Storage-only: When Storage Spaces Direct (S2D) is using the dual-layer deployment model, the storage nodes act as scale-out file servers and can be shared to any compute node through SMB3. Therefore the compute layer and the storage layer can scale out independently.
The storage layer cannot be shared when Storage Spaces Direct (S2D) is using the single layer deployment model.
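A minimal sketch of the dual-layer model described above, assuming an existing S2D cluster and placeholder names (role, path, accounts): the storage cluster exposes a Scale-Out File Server role and shares a CSV-backed folder to the Hyper-V compute layer over SMB3.
# Add the Scale-Out File Server (SOFS) role to the S2D cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
# Share a folder on a Cluster Shared Volume so external Hyper-V hosts can store VMs on it.
New-SmbShare -Name "VMs01" -Path "C:\ClusterStorage\Volume01\VMs" `
    -FullAccess "CONTOSO\Hyper-V-Hosts", "CONTOSO\Hyper-V-Admins"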
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
2-64 nodes in 1-node increments
The hypervisor cluster scale-out limits still apply: 64 hosts for VMware vSphere.
|
2-16 nodes in 1-node increments; >1,000 nodes in a federation (cluster set)
NEW
The maximum S2D cluster consists of 16 server nodes. The data is automatically rebalanced across the cluster when a server is either added to or removed from the cluster.
The maximum raw capacity per S2D cluster is 4PB. The maximum number of drives per S2D cluster is 416.
Microsoft Windows Server 2019 introduces Cluster Sets, which enable VMs to be moved across several clusters that are federated in a 'cluster set'.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
If the SANsymphony servers are fully operational but cannot see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically, and data corruption is prevented by freezing the vDisk and waiting for user input to resolve the conflict. Possible conflict resolutions are to declare one side of the mirror the new active data set and discard all tracked changes on the other side, or to split the mirror and manually merge the two data sets into a third vDisk.
|
2 Node minimum
The use of the witness virtual appliance eliminates the requirement of a third physical node in vSAN ROBO deployments. vSAN for ROBO edition licensing is best suited for this type of deployment.
|
2 Node minimum
Storage Spaces Direct (S2D) supports a minimum of 2 server nodes. S2D in Windows Server 2019 introduces support for two simultaneous failures with the new 'nested resiliency' feature.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Object Storage File System (OSFS)
|
SSB Block Pool
Enabling the S2D feature inside the Windows Failover Cluster settings automatically enables the Storage Bus Layer (SBL) as well. SBL provides a virtual initiator and target to each node. SBL uses Block over SMB3 and SMB Direct as the transport for communication between the servers in the cluster. SBL allows each node to see all disks of all the nodes within the same cluster. The disks are then aggregated within a shared Storage Pool.
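A minimal sketch of this enablement step, assuming an existing failover cluster (cluster and pool names are placeholders); enabling S2D claims the eligible local disks of all nodes and aggregates them into a single shared Storage Pool:
# Enable Storage Spaces Direct on the cluster (this also activates the Storage Bus Layer).
Enable-ClusterStorageSpacesDirect -CimSession "S2D-Cluster01"
# Inspect the automatically created pool and the physical disks it claimed.
Get-StoragePool -FriendlyName "S2D*" | Get-PhysicalDisk |
    Format-Table FriendlyName, MediaType, Size, HealthStatus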
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. If data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but such tasks can be automated and integrated with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example. Apart from the read cache, vSAN only uses data locality in stretched clusters to avoid high latency.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
None
Because Storage Spaces Direct (S2D) uses both SBL and RDMA, active data can be located on other storage nodes without negatively impacting performance. As such, RDMA is highly recommended when using S2D in conjunction with Hyper-V VM workloads.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
NEW
The software takes ownership of the unformatted physical disks available inside the host.
VMware vSAN 7.0 U1 introduces the HCI Mesh concept. With VMware HCI Mesh a vSAN cluster can leverage the storage of remote vSAN clusters for hosting VMs without sacrificing important features such as HA and DR. Up to 5 remote vSAN datastores can be mounted by a single vSAN cluster. HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols. It is fully software-based and does not require any specialized hardware.
|
Direct-Attached (Raw)
Direct-attached: The software takes ownership of the unformatted physical disks available in each host.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D Xpoint and/or Flash and/or Magnetic)
NEW
|
Hybrid (Flash+Magnetic)
All-Flash
VMware Enterprise edition offers All-Flash related features Erasure Coding and Data Reduction (Deduplication+Compression).
Hybrid hosts cannot be mixed with All-Flash hosts in the same vSAN cluster.
vSAN 7.0 provides support for 32TB physical capacity drives. This extends the logical capacity up to 1PB when deduplication and compression is enabled.
|
Hybrid (Flash+Magnetic)
All-Flash
(Persistent Memory)
NEW
Storage Spaces Direct (S2D) can be deployed using different compositions:
- Hybrid (Flash + HDD)
- All-Flash (SSD-only / Performance SSD + Capacity SSD / NVMe + SSD)
- Multi-Resilient Volume (Performance SSD + Capacity SSD + HDD / NVMe + Capacity SSD + HDD)
In an all-flash solution, NVMe can be used as a high-performance caching layer whilst large SATA SSDs (e.g. 2-4TB) can be used as the persistent storage layer.
Microsoft Windows Server 2019 supports Intel Optane DC Persistent Memory. However, the Intel hardware has only just entered the beta stage and as such is not generally available (GA) yet.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SD, USB, DOM, HDD or SSD
|
SSD/HDD
Persistent Memory
NEW
When deploying Windows Server 2019 in a standard configuration (Core or with GUI) the OS has to be installed on physical disks (SSD or HDD).
|
|
|
|
Memory |
|
|
|
DRAM
|
DRAM
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
DRAM
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in, first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
|
Read Cache
|
Read Cache
Metadata structures
Storage Spaces Direct (S2D) leverages the CSV Cache as read cache for storage volumes.
When there are cache devices, for each 1TB of cache, 4GB of memory is used for metadata structures.
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Non-configurable
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
S2D Cache (Hybrid): non-configurable
CSV Cache: configurable
Storage Spaces Direct (S2D) uses 4GB of DRAM per 1TB of configured cache space on each node. This cache is useful only in hybrid storage configurations (SSD + HDD, NVMe + SSD or NVMe + HDD). If you implement an all-flash solution (only SSD or only NVMe) the cache is not enabled.
With regard to Cluster Shared Volumes (CSVs), a Block Cache can be configured. Microsoft recommends configuring 2GB of CSV Block Cache or more.
When S2D is implemented using the dual-layer (disaggregated) model, almost all of the memory on the storage hosts can be used for caching.
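As a hedged sketch of the CSV Block Cache recommendation above (the cluster name is a placeholder), the cache is a cluster-wide property expressed in MB and can be set with the FailoverClusters module:
# Set the CSV Block Cache to 2GB (value is in MB) and read it back to confirm.
(Get-Cluster -Name "S2D-Cluster01").BlockCacheSize = 2048
(Get-Cluster -Name "S2D-Cluster01").BlockCacheSize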
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, PCIe, UltraDIMM, NVMe
VMware vSAN 6.6 offers support for Intel Optane (3D XPoint technology) NVMe SSDs.
|
SSD, NVMe, (Persistent memory)
NEW
Storage Spaces Direct (S2D) supports both NVMe devices and SSD drives (SATA and SAS).
Supported flash media: NAND, 3D-NAND/V-NAND, 3D X-Point etc.
Supported Persistent Memory: NVDIMM-N.
NVDIMM-N is a DRAM/Flash hybrid memory module using flash to save the DRAM contents upon power failure.
The Windows Server Catalog does not list any NVDIMMs - including Intel Optane - that are either certified or compatible with Windows Server 2019. Also Azure Stack HCI Catalog does list Persistent Memory as a separate feature, but none of the hardware configurations so far leverage such hardware.
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
In all vSAN configurations 1 separate SSD per disk group is used for caching purposes. The other disks in the disk group are used for persistent storage of data.
For all-flash configurations, the flash device(s) in the cache tier are used for write caching only (no read cache) as read performance from the capacity flash devices is more than sufficient.
Two different grades of flash devices are commonly used in an all-flash vSAN configuration: Lower capacity, higher endurance devices for the cache layer and more cost effective, higher capacity, lower endurance devices for the capacity layer. Writes are performed at the cache layer and then de-staged to the capacity layer, only as needed.
|
Read/Write Cache (hybrid)
Write Cache (all-flash)
Persistent storage (all-flash)
Hybrid solutions (based on Flash + HDD):
- Flash is the Read/Write cache;
- HDD is the persistent storage layer.
All-flash solutions based on NVMe + SSD:
- NVMe is Write Cache only;
- SSD is the persistent storage layer.
When using only SSD or only NVMe in an all-flash solution, caching can be disabled.
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
Hybrid: 1-5 Flash devices per node (1 per disk group)
All-Flash: 40 Flash devices per node (8 per disk group,1 for cache and 7 for capacity)
VMware vSAN 7.0 provides support for large flash devices (up to 32TB).
|
Up to 4PB per cluster
NEW
With S2D in Windows Server 2019 you can install storage devices up to 4PB per cluster. The maximum raw capacity per node is 400TB.
The maximum number of storage devices in a cluster is 416. There is no set limit to the number of storage devices per node.
In hybrid configurations at least two flash devices (NVMe, SATA or SAS) per node are a hard requirement.
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
Hybrid: SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
VMware vSAN supports the use of 512e drives. 512e magnetic hard disk drives (HDDs) use a physical sector size of 4096 bytes, but the logical sector size emulates a sector size of 512 bytes. Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.
VMware vSAN 6.7 introduced support for 4K native (4Kn) mode.
VMware vSAN 7.0 provides support for 32TB physical capacity drives.
|
Hybrid: SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
Magnetic disks are used for storing persistent data. There is a choice to use either SAS or SATA. Microsoft has no limitation regarding magnetic storage.
|
|
|
Persistent Storage
|
Persistent Storage
|
Persistent Storage
HDD is primarily meant as a high-capacity storage tier.
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
1-35 SAS/SATA HDDs per host/node
|
Up to 4PB per cluster
NEW
With S2D in Windows Server 2019 you can install storage devices up to 4PB per cluster. The maximum raw capacity per node is 400TB.
The maximum number of storage devices in a cluster is 416. There is no set limit to the number of storage devices per node.
In hybrid configurations at least two flash devices (NVMe, SATA or SAS) per node are a hard requirement.
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in the DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, the applications experience much faster write behavior.
Per default, the limit of dirty-write-data allowed per Virtual Disk is 128MB. This limit could be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is directly written to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
|
Flash Layer (SSD;PCIe;NVMe)
|
Flash Layer (PMEM/NVMe/SSD)
NEW
The flash devices are used for read and write caching. When enabling Storage Spaces Direct (S2D), the SSDs are bound to the HDDs in round-robin fashion. If an SSD fails, the HDDs are bound to another SSD within the same node. The same applies to the combinations NVMe/SSD, PMEM/SSD and PMEM/NVMe.
The cache information stored in a flash device is replicated across the nodes within the storage cluster to be able to sustain the loss of a flash device or an entire node. If a failed node comes up again, the cache information is automatically recovered from the other nodes in the cluster.
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-Flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduced the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
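As a hedged PowerCLI sketch of how the NumberOfFailuresToTolerate setting described above is expressed through Storage Policy-Based Management (server and policy names are placeholders; available capability names can be listed with Get-SpbmCapability):
Import-Module VMware.VimAutomation.Storage
Connect-VIServer -Server "vcenter01.lab.local"
# Build a rule that tolerates two failures (i.e. three replicas) and wrap it in a policy.
$ftt = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 2
$ruleSet = New-SpbmRuleSet -AllOfRules $ftt
New-SpbmStoragePolicy -Name "FTT2-Mirror" -Description "Tolerates 2 host/disk failures" -AnyOfRuleSets $ruleSet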
|
2-way and 3-way Mirroring (RAID-1) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nested Resiliency (4N; RAID 5+1) (primary; 2-node only)
NEW
On volume creation the resiliency choices are:
- 2 or 3 way mirroring (= 2 or 3 replicas)
- single or dual parity (= erasure coding)
- Mirror-Accelerated Parity (= mirroring + erasure coding)
- Nested Resiliency (2-node only)
Replicas:
When choosing mirroring, each replica is placed on a separate physical node within the storage cluster. This means that 2-way mirroring requires a minimum of 2 nodes and 3-way mirroring requires a minimum of 3 nodes. 2-way mirroring most closely resembles RAID-1.
Mirroring provides the fastest possible reads and writes, with the least complexity, meaning the least latency and compute overhead.
Erasure Coding:
Single parity keeps only one bitwise parity symbol, which provides protection against one failure at the same time. It most closely resembles RAID-5.
Dual parity implements Reed-Solomon error-correcting codes to keep two bitwise parity symbols, thereby providing protection against up to two failures at the same time. It most closely resembles RAID-6.
Parity encoding provides better storage efficiency than mirroring without compromising fault tolerance.
Mixed Resiliency:
A volume can be part mirror and part parity. Based on the read/write activity, the new Resilient File System (ReFS) intelligently moves data between the two resiliency types in real-time to keep the most active data in the mirror part and the least active data in the parity part.
Mixed resiliency can be considered when most of the data on the volume is 'cold' data, but some sustained write activity for some data is still expected.
Nested Resiliency (2-node only):
This resiliency option supports two simultaneous failures. When using a nested two-way mirror, the data is copied 3 times across the cluster with 2 data instances per node as a result (equal to a 4-copy mirror). It can also be used in a multi-tier setup with one tier using two-way mirroring and the other tier using RAID5 parity.
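A minimal sketch of how these resiliency choices surface at volume creation time (pool, volume names, sizes and tier names are placeholders; the default storage tier names created when S2D is enabled depend on the drive mix):
# Three-way mirror volume (default mirror setting with 3 or more fault domains).
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mirror01" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB
# Dual-parity (erasure coded) volume.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Parity01" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -Size 4TB
# Mirror-accelerated parity: one volume spanning a mirror tier and a parity tier.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mixed01" -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 200GB, 1800GB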
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-Flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduced the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
1-2 Replicas (2N-3N)
Erasure Coding
Nested Resiliency (4N; RAID5+1) (primary; 2-node only)
NEW
Windows Server 2016 introduced a new feature called 'Fault Domain Awareness'. With Fault Domain Awareness the physical placement of devices on the node-, chassis- and rack level, can be properly defined. In the configuration a node can be assigned to a chassis and the chassis can be assigned to a rack. When Fault Domain Awareness is configured, S2D will spread the data intelligently across individual Fault Domains.
Effectively, when a node fails the data is already located on one or two other cluster nodes depending on the chosen protection level.
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Failure Domains
Block failure protection can be achieved by assigning nodes in the same appliance to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
Fault Domain Awareness
Windows Server 2016 introduced a new feature called 'Fault Domain Awareness'. With Fault Domain Awareness the physical placement of devices on the node-, chassis- and rack level, can be properly defined. In the configuration a node can be assigned to a chassis and the chassis can be assigned to a rack. When Fault Domain Awareness is configured, S2D will spread the data intelligently across individual Fault Domains.
Effectively, when a node fails the data is already located on one or two other cluster nodes depending on the chosen protection level.
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Failure Domains
Rack failure protection can be achieved by assigning nodes within the same rack to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
Fault Domain Awareness
Windows Server 2016 introduced a new feature called 'Fault Domain Awareness'. With Fault Domain Awareness the physical placement of devices on the node-, chassis- and rack level can be properly defined. In the configuration a node can be assigned to a chassis and the chassis can be assigned to a rack. When Fault Domain Awareness is configured, S2D will spread the data intelligently across individual Fault Domains.
Effectively, when a rack fails the data is already located on one or two cluster nodes located in other racks, depending on the chosen protection level and physical deployment.
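A hedged sketch of defining this topology with the FailoverClusters cmdlets (rack, chassis and node names are placeholders; the same hierarchy can also be supplied as XML):
# Describe the physical placement: rack -> chassis -> node.
New-ClusterFaultDomain -Type Rack -Name "Rack01"
New-ClusterFaultDomain -Type Chassis -Name "Chassis01"
Set-ClusterFaultDomain -Name "Chassis01" -Parent "Rack01"
Set-ClusterFaultDomain -Name "Node01" -Parent "Chassis01"
# Review the resulting fault domain hierarchy.
Get-ClusterFaultDomain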
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Host Pinning (1N): Dependent on # of VMs
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
RAID5: The stripe size used by vSAN for RAID5 is 3+1 (33% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID5 is 4 nodes.
RAID6: The stripe size used by vSAN for RAID6 is 4+2 (50% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID6 is 6 nodes.
RAID5/6 can only be leveraged in vSAN All-flash configurations because of I/O amplification.
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
EC (N+2) (secondary): 50%-80%
Nested Resiliency (4N) (primary): 300%
Nested Resiliency (RAID5+1) (primary): 150%
NEW
Erasure Coding: Microsoft discourages using single parity because it can only safely tolerate one hardware failure at a time. If one server is being rebooted when suddenly another drive or server fails, there will be downtime. If there are only 3 S2D servers, Microsoft recommends using three-way mirroring.
The EC configuration depends on the storage setup (hybrid vs. all-flash) as well as the number of nodes in the S2D cluster.
EC in Hybrid configurations (SSD+HDD):
4-6 Nodes use RS 2+2
7-11 Nodes use RS 4+2
12-16 Nodes use LRC 8+2,1
EC in All-Flash configurations (All SSD):
4-6 Nodes use RS 2+2
7-8 Nodes use RS 4+2
9-15 Nodes use RS 6+2
16 Nodes uses LRC 12+2,1
EC = Erasure Coding
RS = Reed-Solomon
LRC = Local Reconstruction Codes
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
Disk scrubbing (software)
End-to-end checksum provides automatic detection and resolution of silent disk errors. Creation of checksums is enabled by default, but can be disabled through policy on a per VM (or virtual disk) basis if desired. In case of checksum verification failures data is fetched from another copy.
The disk scrubbing process runs in the background.
|
Read integrity checks
Proactive file integrity scrubber (requires ReFS integrity streams; optional)
Automatic in-line corruption correction (requires ReFS integrity streams; optional)
NEW
During writing of the data, checksums are created and stored. When read again, a new checksum is created and compared to the initial checksum. If incorrect, a checksum is created from another replica of the same data. After successful comparison this replica is used to repair the corrupted replica in order to stay compliant with the configured protection level.
ReFS uses a background scrubber, which enables ReFS to validate infrequently accessed data. This scrubber periodically scans the volume, identifies latent corruptions, and proactively triggers a repair of any corrupt data. The data integrity scrubber can only validate file data for files where integrity streams is enabled. By default, the scrubber runs every four weeks, though this interval can be configured within Task Scheduler.
Integrity streams is an optional feature in ReFS that validates and maintains data integrity using checksums. Integrity streams can be enabled for individual files, directories, or the entire volume, and integrity stream settings can be toggled at any time. Additionally, integrity stream settings for files and directories are inherited from their parent directories. Once integrity streams is enabled, ReFS will create and maintain a checksum for the specified file(s) in that file's metadata.
Though integrity streams provides greater data integrity for the system, it also incurs a performance cost.
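A minimal sketch of toggling and inspecting integrity streams (paths are placeholders); new files inherit the setting of their parent directory:
# Enable integrity streams on a directory so files created in it are checksummed.
Set-FileIntegrity -FileName "C:\ClusterStorage\Volume01\VMs" -Enable $true
# Check whether checksumming and enforcement are active for a specific file.
Get-FileIntegrity -FileName "C:\ClusterStorage\Volume01\VMs\vm01.vhdx"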
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
Built-in (native)
VMware vSAN uses the 'vSANSparse' snapshot format that leverages VirstoFS technology as well as in-memory metadata cache for lookups. vSANSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
|
N/A
Storage Spaces Direct (S2D) does not provide any snapshot capabilities of its own.
No specific integration exists between S2D and Microsoft VSS and/or Hyper-V Checkpoints.
However, when using ReFSv2 volumes (instead of NTFS volumes) in S2D configurations, ReFSv2 allows Hyper-V checkpointing to be both fast and very efficient.
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
There is also the capability to pair the snapshot feature along with asynchronous replication which provides you with the ability to have a third site long distance remote copy in place with its own retention time.
|
Local
|
N/A
Storage Spaces Direct (S2D) does not provide any snapshot capabilities of its own.
No specific integration exists between S2D and Microsoft VSS and/or Hyper-V Checkpoints.
However, when using ReFSv2 volumes (instead of NTFS volumes) in S2D configurations, ReFSv2 allows Hyper-V checkpointing to be both fast and very efficient.
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
GUI: 1 hour
vSAN snapshots are invoked using the existing snapshot options in the VMware vSphere GUI.
To create a snapshot schedule using the vCenter (Web) Client: Click on a VM, then inside the Monitoring tab select Tasks & Events, Scheduled Tasks, 'Take Snapshots…'.
A single snapshot schedule allows a minimum frequency of 1 hour. Manual snapshots can be taken at any time.
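As a hedged PowerCLI alternative to the GUI schedule (VM and snapshot names are placeholders), manual snapshots can also be scripted and scheduled at shorter intervals:
Connect-VIServer -Server "vcenter01.lab.local"
# Take a quiesced snapshot of a single VM.
New-Snapshot -VM "app-vm01" -Name "pre-patch" -Description "Before monthly patching" -Quiesce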
|
N/A
Storage Spaces Direct (S2D) does not provide any snapshot capabilities of its own.
No specific integration exists between S2D and Microsoft VSS and/or Hyper-V Checkpoints.
However, when using ReFSv2 volumes (instead of NTFS volumes) in S2D configurations, ReFSv2 allows Hyper-V checkpointing to be both fast and very efficient.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM
|
N/A
Storage Spaces Direct (S2D) does not provide any snapshot capabilities of its own.
No specific integration exists between S2D and Microsoft VSS and/or Hyper-V Checkpoints.
However, when using ReFSv2 volumes (instead of NTFS volumes) in S2D configurations, ReFSv2 allows Hyper-V checkpointing to be both fast and very efficient.
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
|
External (vSAN Certified)
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
External
Storage Spaces Direct (S2D) does not provide any backup capabilities of its own.
Therefore it relies on existing data protection solutions like Microsoft's free-of-charge Windows Server Backup (WSB) software, Microsoft Data Protection Manager (DPM) which is part of the System Center suite, or any Hyper-V compatible 3rd party backup application like Veeam, CommVault or NetBackup. Windows Server Backup is part of the Windows Server license.
No specific integration exists between Storage Spaces Direct (S2D) and Microsoft WSB.
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
WSB:
Locally
To remote sites
NEW
Windows Server Backup (WSB) can store backups locally or send them to a remote location.
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
WSB GUI: 30 minutes
Task Scheduler: 1 minute
The Windows Server Backup (WSB) GUI allows backups to run once a day or at multiple times a day that you select. Selectable times are in 30-minute increments.
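A hedged sketch of the Task Scheduler route (backup target, VM name and interval are placeholders), which allows shorter intervals than the WSB GUI by invoking wbadmin on a repeating trigger:
# Define a wbadmin backup of a single Hyper-V VM to volume E:.
$action = New-ScheduledTaskAction -Execute "wbadmin.exe" `
    -Argument 'start backup -backupTarget:E: -hyperv:"app-vm01" -quiet'
# Repeat the task every 15 minutes.
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 15)
Register-ScheduledTask -TaskName "WSB-HyperV-15min" -Action $action -Trigger $trigger `
    -User "SYSTEM" -RunLevel Highest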
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
WSB: File System Consistent (Windows)
WSB: Application Consistent (MS Apps on Windows)
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
WSB: Entire VM
Windows Server Backup (WSB) is capable of protecting and restoring a Hyper-V environment at the VM level.
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
WSB: Entire VM (GUI)
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (native)
DataCore SANsymphony's remote replication function, Asynchronous Replication, is called upon when secondary copies are to be housed beyond the reach of Synchronous Mirroring, as in distant Disaster Recovery (DR) sites. It relies on a basic IP connection between locations and works in both directions. That is, each site can act as the disaster recovery facility for the other. The software operates near-synchronously, meaning that it does not hold up the application waiting on confirmation from the remote end that the update has been stored remotely.
|
Built-in (Stretched Clusters only)
External
VMware vSAN does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore in non-stretched setups it relies on external remote replication mechanisms like VMwares free-of-charge vSphere Replication (VR) or any vSphere compatible 3rd party remote replication application (eg. Zerto VR).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between VMware vSAN and VMware vSphere VR.
As of vSAN 7.0 vSphere Replication objects are visible in the vSAN capacity view. Objects are recognized as vSphere replica type, and space usage is accounted for under the Replication category.
|
Built-in (native)
Storage Replica (SR): Windows Server 2016 introduced a new feature called 'Storage Replica'. This feature enables block-level replication of an active source volume to a passive destination volume located on another physical server. Source and destination volumes can reside within the same cluster or within separate clusters.
Because Storage Replica operates on the Operating System (OS) layer, it is storage-agnostic. This means that on one site you can have Hyper-V 2019 on SAN, whereas on the other site you can have Hyper-V 2019 on S2D.
Hyper-V Replica (HR): Hyper-V Replica is an integral part of the Hyper-V role. This feature enables block-level log-based replication of an active source VM to a passive destination VM located on another Hyper-V server or to Microsoft Azure (requires Azure Site Recovery, which is a paid external service, i.e. not part of Windows Server 2019).
Because Hyper-V Replica operates on the hypervisor layer, it is storage agnostic. This means that on one site you can have Hyper-V 2019 on SAN, whereas on the other site you can have Hyper-V 2019 on S2D.
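As a rough illustration of how such a Storage Replica partnership is typically set up, the hedged PowerShell sketch below validates the proposed topology and then creates an asynchronous 1-to-1 partnership. Server names, replication group names, drive letters and paths are placeholders, not taken from this comparison.
# Optional: validate the proposed topology for 30 minutes and write a report (placeholder names/paths)
Test-SRTopology -SourceComputerName "HV01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "HV02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -DurationInMinutes 30 -ResultPath "C:\Temp"
# Create an asynchronous block-level replication partnership between the two volumes
New-SRPartnership -SourceComputerName "HV01" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "HV02" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Asynchronous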
|
|
Remote Replication Scope
Details
|
To remote sites
To MS Azure Cloud
On-premises deployments of DataCore SANsymphony can use Microsoft Azure cloud as an added replication location to safeguard highly available systems. For example, on-premises stretched clusters can replicate a third copy of the data to MS Azure to protect against data loss in the event of a major regional disaster. Critical data is continuously replicated asynchronously within the hybrid cloud configuration.
To allow quick and easy deployment a ready-to-go DataCore Cloud Replication instance can be acquired through the Azure Marketplace.
MS Azure can serve only as a data repository. This means that VMs cannot be restored and run in an Azure environment in case of a disaster recovery scenario.
|
VR: To remote sites, To VMware clouds
vSAN allows for replication of VMs to a different vSAN cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
|
SR: To remote sites, to public clouds
HR: To remote sites, to Microsoft Azure (not part of Windows Server 2019)
Storage Replica (SR): Windows Server 2016 introduced a new feature called 'Storage Replica'. This feature enables block-level replication of an active source volume to a passive destination volume located on another physical server. Source and destination volumes can reside within the same cluster or within separate clusters.
Because Storage Replica operates on the Operating System (OS) layer, it is both location-agnostic and storage-agnostic:
- Location agnostic: this means that volume replication can go to any location where a Windows Server is running, be it another datacenter or IaaS (eg. VM on Azure, AWS, Google Cloud, IBM Cloud, etc).
- Storage agnostic: this means that on one site you can have Hyper-V 2019 on SAN, whereas on the other site you can have Hyper-V 2019 on S2D.
Hyper-V Replica (HR): Hyper-V Replica is an integral part of the Hyper-V role. This feature enables block-level log-based replication of an active source VM to a passive destination VM located on another Hyper-V server or to Microsoft Azure (requires Azure Site Recovery, which is a paid external service, i.e. not part of Windows Server 2019).
Because Hyper-V Replica operates on the hypervisor layer, it is storage agnostic. This means that on one site you can have Hyper-V 2019 on SAN, whereas on the other site you can have Hyper-V 2019 on S2D.
|
|
Remote Replication Cloud Function
Details
|
Data repository
All public clouds can only serve as data repository when hosting a DataCore instance. This means that VMs cannot be restored and run in the public cloud environment in case of a disaster recovery scenario.
In the Microsoft Azure Marketplace there is a pre-installed DataCore instance (BYOL) available named DataCore Cloud Replication.
BYOL = Bring Your Own License
|
VR: DR-site (VMware Clouds)
Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
SR: Data repository (Azure)
HR: DR-site (Azure)
|
|
Remote Replication Topologies
Details
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
VR: Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
SR: Single site
HR: Single-site and chained
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Storage Replica (SR): At this time Storage Replica only supports 1-to-1 replication. Between two sites remote replication can be set up bidirectionally, meaning that volume A in site A could be replicated to site B whereas volume B in site B could be replicated to site A at the same time.
Hyper-V Replica (HR): Besides 1-to-1 replications Hyper-V Replica allows for extended (chained) replication. A VM can be replicated from a primary host to a secondary host, and then be replicated from the secondary host to a third host. Please note that it is not possible to replicate from the primary host directly to the second and the third (1-to-many).
|
|
Remote Replication Frequency
Details
|
Continuous (near-synchronous)
SANsymphony Asynchronous Replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes). End-users can inject custom consistency checkpoints based on CDP technology which has no minimum time slot/frequency.
|
VR: 5 minutes (Asynchronous)
vSAN: Continuous (Stretched Cluster)
The 'Stretched Cluster' feature is only available in the Enterprise edition.
|
SR: seconds (Near-sync), continuous (Synchronous)
HR: 30 seconds (Asynchronous)
Storage Replica (SR): If the latency between volumes < 5ms, Storage Replica supports synchronous replication (no data loss).
Storage Replica supports asynchronous replication for longer ranges and higher latency networks. Storage Replica asynchronous replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes).
Hyper-V Replica (HR): With Hyper-V Replica replication frequency can be set to 30 seconds, 5 minutes, or 15 minutes on a per-VM basis.
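To illustrate the per-VM frequency setting, a minimal PowerShell sketch is shown below that enables Hyper-V Replica for one VM with a 30-second replication interval; host names, the VM name and the storage path are placeholders.
# On the replica server: accept inbound replication over Kerberos/HTTP (placeholder storage path)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
# On the primary server: enable replication for the VM with a 30-second frequency and start the initial copy
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "HV02" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 30
Start-VMInitialReplication -VMName "SQL01"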
|
|
Remote Replication Granularity
Details
|
VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
VR: VM
|
SR: Volume
HR: VM
Storage Replica (SR): Storage Replica replicates an entire source volume to a destination volume. You cannot specify a particular data set located inside a source volume when configuring a replication plan.
Hyper-V Replica (HR): Hyper-V Replica operates on the VM level.
|
|
Consistency Groups
Details
|
Yes
SANsymphony provides the option to use Virtual Disk Grouping to enable end-users to restore multiple Virtual Disks to the exact same point-in-time.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
VR: No
Protection is on a per-VM basis only.
|
No
|
|
|
VMware SRM (certified)
DataCore provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). DataCore SRA 2.0 (SANsymphony 10.0 FC/iSCSI) shows official support for SRM 6.5 only. It does not support SRM 8.2 or 8.1.
There is no integration with Microsoft Azure Site Recovery (ASR). However, SANsymphony can be used with the control and automation options provided by Microsoft System Center (e.g. Operations Manager combined with Virtual Machine Manager and Orchestrator) to build a DR orchestration solution.
|
VMware SRM (certified)
VMware Interoperability Matrix shows official support for SRM 8.3.
|
Azure Site Recovery
Azure Site Recovery (ASR) can be leveraged for DR orchestration of physical and virtual workloads.
ASR supports on-premises to on-premises scenarios as well as on-premises to public cloud scenarios.
ASR is licensed separately from Windows Server 2019 Datacenter edition.
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
DataCore SANsymphony is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/kb/2149740.
|
VMware vSphere: Yes (certified)
There is read-locality in place preventing sub-optimal cross-site data transfers.
vSAN 7.0 introduces redirection for all VM I/O from a capacity-strained site to the other site, until the capacity is freed up. This feature improves the uptime of VMs.
The Stretched Cluster feature is only available in the Enterprise edition of vSAN.
|
N/A
At this time Microsoft does not support S2D clusters that are stretched across data centers.
|
|
|
2+sites = two or more active sites, 0/1 or more tie-breakers
Theoretically up to 64 sites are supported.
SANsymphony does not require a quorum or tie-breaker in stretched cluster configurations, but one can be used as an optional component. The Virtual Disk Witness can provide a tie-breaker role if, for instance, redundant inter-site paths are not implemented. The tie-breaker node (server or device) must be other than the two nodes presenting a virtual disk. Access to the Virtual Disk Witness determines storage node behavior.
There are 3 ways to configure the stretched cluster without any tie-breakers:
1. Default: in a split-brain scenario both sides stay active, allowing upper infrastructure layers (OS/database/application) to make a decision (eg. clustering principles). In any case SANsymphony prevents a merge when there is a risk to data integrity, and the end-user has to make the choice on how to proceed next (which side holds the valid data).
2. Select one side to go inaccessible
3. Select both sides to go inaccessible.
|
3-sites: two active sites + tie-breaker in 3rd site
NEW
The use of the Stretched Cluster Witness Appliance automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM within a third site.
vSAN 6.7 introduced the option to configure a dedicated VMkernel NIC for witness traffic. This enhances data security because witness traffic is isolated from vSAN data traffic.
vSAN 7.0 U1 introduces the vSAN Shared Witness. This feature allows end-user organizations to leverage a single Witness Appliance for up to 64 stretched clusters. This is especially useful in scenarios where many edge locations are involved. The size of the Witness Appliance determines the maximum number of clusters and components that can be managed.
|
N/A
At this time Microsoft does not support S2D clusters that are stretched across data centers.
|
|
|
<=5ms RTT (targeted, not required)
RTT = Round Trip Time
In practice, the user/app with the least tolerated write latency defines the acceptable RTT or distance.
|
<=5ms RTT
|
N/A
At this time Microsoft does not support S2D clusters that are stretched across data centers.
|
|
|
<=32 hosts at each active site (per cluster)
The maximum is per cluster. The SANsymphony solution can consist of multiple stretched clusters with a maximum of 64 nodes each.
|
<=15 hosts at each active site
|
N/A
At this time Microsoft does not support S2D clusters that are stretched across data centers.
|
|
SC Data Redundancy
Details
|
Replicas: 1N-2N at each active site
DataCore SANsymphony provides enhanced stretched cluster availability by offering local fault protection with In Pool Mirroring. With In Pool Mirroring you can choose to mirror the data inside the local Disk Pool as well as mirror the data across sites to a remote Disk Pool. In the remote Disk Pool data is then also mirrored. All mirroring happens synchronously.
1N-2N: With SANsymphony Stretched Clustering, there can be either 1 instance of the data at each site (no In Pool Mirroring) or 2 instances of the data at each site (In Pool RAID-1 Mirroring).
|
Replicas: 0-3 Replicas (1N-4N) at each active site
Erasure Coding: RAID5-6 at each active site
VMware vSAN 6.6 introduced enhanced stretched cluster availability with Local Fault Protection. You can provide local fault protection for virtual machine objects within a single site in a stretched cluster. You can define a Primary level of failures to tolerate for the cluster, and a Secondary level of failures to tolerate for objects within a single site. When one site is unavailable, vSAN maintains availability with local redundancy in the available site.
In the case of stretched clustering, selecting 0 replicas means that there is only one instance of the data available at each of the active sites.
Local Fault Protection is only available in the Enterprise edition of vSAN.
|
N/A
At this time Microsoft does not support S2D clusters that are stretched across data centers.
|
|
|
Data Services
|
|
|
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
Software (integration)
NEW
SANsymphony provides integrated and individually selectable inline deduplication and compression. In addition, SANsymphony is able to leverage post-processing deduplication and compression options available in Windows 2016/2019 as an alternative approach.
|
All-Flash: Software
Hybrid: N/A
Deduplication and compression are only available in Enterprise and Advanced editions of vSAN.
Deduplication and compression are not available for vSAN hybrid configurations.
|
Software
NEW
Windows Server 2019 introduces support for data deduplication on Resilient File System (ReFS) volumes.
|
|
Dedup/Compr. Function
Details
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
Efficiency (Space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
Efficiency (space savings)
NEW
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most storage solutions place emphasis on efficiency.
|
|
Dedup/Compr. Process
Details
|
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
DataCore SANsymphony 10 PSP12 and above leverage both inline deduplication and compression, as well as post-process deduplication and compression techniques.
With inline deduplication incoming writes first hit the memory cache of the primary host and are replicated to the cache of a secondary host in an un-deduplicated state. After the blocks have been written to both memory caches, the primary host acknowledges the writes back to the originator. Each host then destages the written blocks to the persistent storage layer. During destaging, written blocks are deduplicated and/or compressed.
Windows Server 2019 deduplication is performed outside of the IO path (post-processing) and is multi-threaded to speed up processing and keep the performance impact minimal.
|
All-Flash: Inline (post-ack)
Hybrid: N/A
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
Post-Process
NEW
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
Windows Server 2019 deduplication is performed outside of the IO path (post-processing) and is multi-threaded to speed up processing and keep the performance impact minimal.
|
|
Dedup/Compr. Type
Details
|
Optional
NEW
By default, deduplication and compression are turned off. For both inline and post-process, deduplication and compression can be enabled.
For inline deduplication and compression the feature can be turned on per node. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization. Either deduplication or compression or both can be selected per individual vDisk. Pools can host both capacity optimized and non-capacity optimized vDisks at the same time. The optional capacity optimization settings can be added/changed/removed during operation for each vDisk.
For post-processing the feature can be enabled per pool. All vDisks in that pool would be deduplicated and compressed. Each pool is an independent deduplication domain. This means only data in the pool is capacity optimized, but not across pools. Additionally, for post-processing capacity optimization can be scheduled so admins can decide when deduplication should run.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
All-Flash: Optional
Hybrid: N/A
NEW
Compression occurs after deduplication and just before the data is actually written to the persistent data layer.
In vSAN 7.0 U1 and onwards there are three settings to choose from: 'None', 'Compression Only' or 'Deduplication'.
When choosing 'Compression only', deduplication is effectively disabled. This optimizes storage performance and resource usage as well as availability. When using 'Compression only', a single failing disk no longer impacts the entire disk group.
|
Optional
NEW
By default deduplication and compression are turned off. Deduplication and compression can be enabled for selected volumes.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
|
Persistent data layer
Deduplication and compression are not used for optimizing the read/write cache.
|
Persistent data layer
NEW
Windows Server 2019 Deduplication only happens in the persistent data layer and not in the cache. The cache is not accessible from the file system and so deduplication cannot be applied to it.
|
|
Dedup/Compr. Radius
Details
|
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
NEW
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
For inline deduplication and compression raw physical disks are added to a capacity optimization pool. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization.
The post-processing capability provided through Windows Server 2016/2019 is highly scalable and can be used with volumes up to 64 TB and files up to 1 TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
Disk Group
Deduplication and compression are a cluster-wide setting and are performed within each disk group. Redundant copies of a block within the same disk group are reduced to one copy. However, redundant blocks across multiple disk groups are not deduplicated.
|
Volume
NEW
Windows Server 2019 deduplication is highly scalable and can be used with volumes up to 64TB and files up to 4TB in size. Data deduplication identifies repeated patterns across files on that volume.
In Windows Server 2019 Datacenter there is a maximum of 64 volumes per S2D cluster.
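As a minimal sketch of how volume-level deduplication is typically enabled on Windows Server 2019 (the drive letter and usage type below are placeholders):
# Install the deduplication feature, enable it on a volume and trigger/inspect an optimization job
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"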
|
|
Dedup/Compr. Granularity
Details
|
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
NEW
With inline deduplication and compression, the data is organized in 128 KB segments. Depending on the optimization setting, a write into such a segment first gets compressed (when compression is selected) and then a hash is generated. If the hash is unique, the 128 KB segment is written back and the hash is added to the deduplication hash-table. If the hash is not unique, the segment is referenced in the deduplication hash table and discarded. The smallest chunk in the segment can be 4 KB.
For post-processing the system leverages the deduplication capability in Windows Server 2016/2019: files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
|
4 KB fixed block size
vSAN's deduplication algorithm utilizes a 4K fixed block size.
|
32-128 KB variable block size
NEW
By leveraging deduplication in Windows Server 2019, files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Microsoft provides the Deduplication Evaluation Tool (DDPEVAL) to assess the data in a particular volume and predict the dedup ratio.
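As an illustration, DDPEVAL can be pointed at a volume or folder to estimate the achievable savings before enabling deduplication; the path below is a placeholder.
# Estimate deduplication savings for a volume (the tool ships with the Windows deduplication feature)
C:\Windows\System32\DDPEval.exe E:\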
|
N/A
Enabling deduplication and compression can reduce the amount of storage consumed by as much as 7x (7:1 ratio).
|
N/A
|
|
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. It depends on the specific use case and end-user environment if this makes sense. When end-users want to isolate new workloads and corresponding data on new nodes, data rebalancing is not used.
|
Full
Data can be redistributed evenly across all nodes in the cluster when a node is either added or removed.
For VMware vSAN data redistribution happens in two ways:
1. Automated: when physical disks are between 30% and 80% full and a node is added to the vSAN cluster, a health alert is generated that allows the end-user to execute an automated data rebalancing run. For this VMware uses the term 'proactive'.
2. Automatic: when physical disks are more than 80% full, vSAN executes a data rebalancing run fully automatically, without requiring any user intervention. For this VMware uses the term 'reactive'.
As data is written, all nodes in the cluster service RF copies even when no VMs are running on the node which ensures data is being distributed evenly across all nodes in the cluster.
VMware vSAN 6.7 U3 included proactive rebalancing enhancements. All rebalancing activities can be automated with cluster-wide configuration and threshold settings. Prior to this release, proactive rebalancing was manually initiated after being alerted by vSAN health checks.
|
Full
Storage Spaces Direct (S2D) automatically rebalances data across nodes when a node is either added or removed. There is no user-intervention required for these redistribution activities.
You can also execute a rebalance operation manually with the PowerShell 'Optimize-StoragePool' cmdlet, as shown in the sketch below.
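A minimal sketch of such a manual rebalance run, assuming the default S2D pool naming:
# Rebalance the S2D storage pool across all nodes and watch the resulting storage job
Get-StoragePool -FriendlyName "S2D*" | Optimize-StoragePool
Get-StorageJob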
|
|
|
Yes
DataCore SANsymphony's Auto-Tiering is a real-time intelligent mechanism that continuously positions data on the appropriate class of storage based on how frequently the data is accessed. Auto-Tiering leverages any combination of Flash and traditional disk technologies, whether internal or array based, with up to 15 different storage tiers that can be defined.
As more advanced storage technologies become available, existing tiers can be modified as necessary and additional tiers can be added to further diversify the tiering architecture.
|
N/A
The VMware vSAN storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
Yes
Microsoft S2D is able to leverage data tiering in a configuration where 3 distinct storage types are being used (NVMe + SSD + HDD). In this configuration the fastest storage devices, NVMe, become part of the caching tier, whilst SSD devices and HDD devices automatically become part of the persistent storage tier. Within the persistent storage tier, the SSD devices are part of the performance sub-tier and the HDD devices are part of the capacity sub-tier. The performance sub-tier is optimized for I/O (hot data) while the capacity sub-tier is optimized for Storage Efficiency (cold data).
|
|
|
|
Performance |
|
|
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
DataCore SANsymphony iSCSI and FC are fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero
Note: DataCore SANsymphony does not support Thick LUNs.
DataCore SANsymphony is also fully qualified for Microsoft Hyper-V 2012 R2 and 2016/2019 ODX and UNMAP/TRIM.
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
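As an illustration of the UNMAP/TRIM hand-off described above, a Windows host can proactively re-send TRIM for all unused blocks on a thin-provisioned volume; the drive letter is a placeholder.
# Ask Windows to communicate all inactive block IDs (TRIM/UNMAP) for volume D: to the underlying storage
Optimize-Volume -DriveLetter D -ReTrim -Verbose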
|
vSphere: Integrated
VMware vSAN is an integral part of the VMware vSphere platform and as such is not a separate storage platform.
VMware vSAN 6.7 adds TRIM/UNMAP support: space that is no longer used can be automatically reclaimed, reducing the overall capacity needed for running workloads.
|
RDMA
ReFSv2
Two mechanisms enable offloading storage processes from the server CPU:
- RDMA (network protocol);
- ReFSv2 (accelerated VHDX operations).
RDMA (Remote Direct Memory Access) is strongly recommended when implementing an S2D solution. It enables reading the host's memory directly, thus bypassing the OS. The result is a reduction of CPU usage, a decrease in network latency and an increase in throughput.
ReFSv2 in Windows Server 2019 allows for accelerated VHDX operations. ReFSv2 works with metadata to maintain integrity. ReFSv2 also works with metadata when creating or extending virtual disks. Due to accelerated VHDX operations, ReFSv2 writes metadata instead of writing zeros as new blocks on disk. This results in an accelerated creation of a fixed VHDX and accelerated merging of checkpoints during data protection maintenance.
|
|
|
IOPs and/or MBps Limits
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical clients/hosts.
2. Ability to set guarantees to ensure service levels for mission-critical clients/hosts.
SANsymphony currently supports only the first method. Although it does not provide support for the second method, the platform does offer some options for optimizing performance for selected workloads.
For streaming applications which burst data, it’s best to regulate the data transfer rate (MBps) to minimize their impact. For transaction-oriented applications (OLTP), limiting the IOPs makes most sense. Both parameters may be used simultaneously.
DataCore SANsymphony ensures that high-priority workloads competing for access to storage can meet their service level agreements (SLAs) with predictable I/O performance. QoS Controls regulate the resources consumed by workloads of lower priority. Without QoS Controls, I/O traffic generated by less important applications could monopolize I/O ports and bandwidth, adversely affecting the response and throughput experienced by more critical applications. To minimize contention in multi-tenant environments, the data transfer rate (MBps) and IOPs for less important applications are capped to limits set by the system administrator. QoS Controls enable IT organizations to efficiently manage their shared storage infrastructure using a private cloud model.
More information can be found here: https://docs.datacore.com/SSV-WebHelp/quality_of_service.htm
In order to achieve consistent performance for a workload, a separate Pool can be created where selected vDisks are placed. Alternatively 'Performance Classes' can be assigned to differentiate between data placement of multiple workloads.
|
IOPs Limits (maximums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
vSAN currently supports only the first method and focuses on IOPs. 'MBps Limits' cannot be set. It is also not possible to guarantee a certain amount of IOPs for any given VM.
|
IOPs/MBps Limits (maximums)
IOPs Guarantees (minimums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Windows 2019 Failover Clustering includes Storage QoS for use in scenarios where S2D is used in conjunction with Hyper-V. Storage QoS supports both methods (maximums as well as minimums) and mostly focuses on IOPs. A Storage QoS policy can be tied to an individual virtual disk.
Two kind of QoS policies can be used:
1. Aggregated policies apply the maximums and minimums to the combined set of VHD/VHDX files and virtual machines to which they are assigned.
2. Dedicated policies apply the minimum and maximum values to each VHD/VHDX separately.
A single policy can be tied to one or more virtual disks.
Storage QoS can only be used when all servers (storage clients and storage servers) are running Windows Server 2019.
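To make the two policy types concrete, a hedged PowerShell sketch is shown below that creates a Dedicated policy and attaches it to all virtual disks of one VM; the policy name, IOPS limits and VM name are placeholders.
# Create a Dedicated policy: every assigned VHD/VHDX gets its own 100-500 IOPS envelope
$policy = New-StorageQosPolicy -Name "Silver" -PolicyType Dedicated -MinimumIops 100 -MaximumIops 500
# Attach the policy to each virtual disk of the VM
Get-VM -Name "SQL01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
# List the current flows and their measured IOPS per initiator
Get-StorageQosFlow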
|
|
|
Virtual Disk Groups and/or Host Groups
SANsymphony QoS parameters can be set for individual hosts or groups of hosts as well as for groups of Virtual Disks for fine grained control.
In a VMware VVols (=Virtual Volumes) environment a vDisk corresponds 1-to-1 to a virtual disk (.vmdk). Thus virtual disks can be placed in a Disk Group and a QoS Limit can then be assigned to it. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and QoS Limits can be applied on the virtual disk level. The 1-to-1 alignment is realized by installing the DataCore Storage Management Provider in SCVMM.
|
Per VM/Virtual Disk
Quality of Service (QoS) for vSAN is normalized to a 32KB block size, and treats reads the same as writes.
|
Per Virtual Disk
Quality of service (QoS) for S2D is normalized to an 8KB block size, and treats reads the same as writes. Normalization is configurable and can be set between 8KB and 4GB.
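A short sketch of how the normalization size can be inspected and changed (the 32KB value is just an example):
# Show the current IOPS normalization size (8KB by default) and raise it to 32KB
Get-StorageQosPolicyStore
Set-StorageQosPolicyStore -IOPSNormalizationSize 32KB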
|
|
|
Per VM/Virtual Disk/Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
In SANsymphony 'Flash Pinning' can be achieved using one of the following methods:
Method #1: Create a flash-only pool and migrate the individual vDisks that require flash pinning to the flash-only pool. When using a VVOL configuration in a VMware environment, each vDisk represents a virtual disk (.vmdk). This method guarantees all application data will be stored in flash.
Method #2: Create auto-tiering pools with at least 1 flash tier. Assign the Performance Class 'Critical' to the vDisks that require flash pinning and place them in the auto-tiering pool. This will effectively and intelligently put as much of the data that resides in the vDisk in the flash tier as long as the flash tier has enough space available. Therefore this method is on a best-effort basis and dependent on correct sizing of the flash tier(s).
Methods #1 and #2 can be used side-by-side in the same DataCore environment.
|
Cache Read Reservation: Per VM/Virtual Disk
With vSAN the Cache Read Reservation policy for a particular VM can be set to 100% to allow all data to also exist entirely on the flash layer. The difference with Nutanix 'VM Flash Mode' is that with 'VM Flash Mode' persistent data of the VM resides on Flash and is never destaged to spinning disks. In contrast, with vSAN's Cache Read Reservation data exists twice: one instance on persistent magnetic disk storage and one instance within the SSD read cache.
|
Not relevant (Cache architecture)
When deploying an S2D solution, a ratio can be configured between the number of cache devices and the number of capacity devices (1:2, 1:3, 1:4, 1:5, etc). This enables bonding a specific number of capacity devices to a cache device to ensure performance for the working set (=the data that is actively being used).
When designing an S2D cluster it must be ensured that the capacity of the cache is at least 10% of raw data storage. This ensures that there is enough cache capacity to avoid read misses.
Because cache data is replicated across nodes, the cache contents are not lost even if a cache device fails. Microsoft leverages RDMA to improve throughput between nodes. In case a cache device does fail, the related capacity devices are bound to another cache device in the same host. This is why Microsoft recommends at least two cache devices per node.
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
SANsymphony 10.0 PSP9 introduced native encryption when running on Windows Server 2016/2019.
|
Built-in (native)
|
Built-in (native)
|
|
Data Encryption Options
Details
|
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: In SANsymphony deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
Software: SANsymphony provides software-based data-at-rest encryption that is XTS-AES 256bit compliant.
|
Hardware: N/A
Software: vSAN data encryption; HyTrust DataControl (validated)
NEW
Hardware: vSAN no longer supports self-encrypting drives (SEDs).
Software: vSAN supports native data-at-rest encryption of the vSAN datastore. When encryption is enabled, vSAN performs a rolling reformat of every disk group in the cluster. vSAN encryption requires a trusted connection between vCenter Server and a key management server (KMS). The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. In contrast, vSAN native data-in-transit encryption does not require a KMS server. vSAN native data-at-rest and data-in-transit encryption are only available in the Enterprise edition.
vSAN encryption has been validated for the Federal Information Processing Standard (FIPS) 140-2 Level 1.
VMware has also validated the interoperability of HyTrust DataControl software encryption with its vSAN platform.
|
Hardware: N/A
Software: Microsoft BitLocker Drive Encryption; SMB encryption
Hardware: N/A
Software: Microsoft BitLocker provides software encryption on standalone and cluster-based NTFS or ReFS(v2) volumes. Support for encrypting Cluster Shared Volumes (CSV) was added in Windows Server 2012.
Microsoft BitLocker uses the Advanced Encryption Standard (AES) encryption algorithm with either 128-bit or 256-bit keys. It is generally recommended to use 256-bit keys because of their superior strength.
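As a minimal sketch of enabling XTS-AES 256-bit BitLocker on a standalone data volume (the mount point is a placeholder; protecting Cluster Shared Volumes involves additional cluster-specific steps):
# Encrypt volume E: with XTS-AES 256, add a recovery password protector, then check encryption progress
Enable-BitLocker -MountPoint "E:" -EncryptionMethod XtsAes256 -RecoveryPasswordProtector
Get-BitLockerVolume -MountPoint "E:"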
|
|
Data Encryption Scope
Details
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: SANsymphony provides encryption for data-at-rest; it does not provide encryption for data-in-transit. Encryption can be enabled per individual virtual disk.
|
Hardware: N/A
Software vSAN: Data-at-rest + Data-in-transit
Software HyTrust: Data-at-rest + Data-in-transit
NEW
Hardware: N/A
Software: VMware vSAN 7.0 U1 encryption provides enhanced security for data on a drive as well as data in transit. Both are optional and can be enabled separately. HyTrust DataControl encryption also provides encryption for data-at-rest and data-in-transit.
|
Hardware: N/A
Software: Data-at-rest (BitLocker); Data-in-transit (SMB Encryption)
Hardware: N/A
Software: Microsoft BitLocker provides encryption for data-at-rest as well as data-in-transit during live migration of a VM; Microsoft SMB encryption provides encryption for data-in-transit.
|
|
Data Encryption Compliance
Details
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (vSAN); FIPS 140-2 Level 1 (HyTrust)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (Bitlocker)
Microsoft BitLocker has been validated for Federal Information Processing Standard (FIPS) 140-2 in March 2018.
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: No
Software: No
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
|
Hardware: N/A
Software: No (vSAN); Yes (HyTrust)
Hardware: N/A
Software vSAN: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software HyTrust: Because HyTrust DataControl is an end-to-end solution, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
Hardware: N/A
Software: No
Hardware: N/A
Software: Microsoft BitLocker can be used to provide whole-disk encryption on a deduplicated disk since BitLocker sits beneath the deduplication software, i.e. at the end of the write path.
|
|
|
|
Test/Dev |
|
|
|
Yes
Support for fast VM cloning via VMware VAAI and Microsoft ODX.
|
No
VMware vSAN itself does not include fast cloning capabilities.
Cloning operations actually copy all the data to provide a second instance. When cloning a running VM on vSAN, all the VMDKs on the source VM are snapshotted first before cloning them to the destination VM.
VMware vSphere however does provide Instant Clone technology, which enables you to clone a running VM instantly, both from a CPU and a memory standpoint.
|
No
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
Hyper-V to ESXi (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
|
ESXi to Hyper-V/Azure (external)
Microsoft provides tools to convert VMs from one hypervisor (mostly VMware vSphere) to another. Microsoft recommends Azure Site Recovery when performing large-scale conversions.
Microsoft Virtual Machine Converter (MVMC) is a stand-alone tool that can be used to:
- convert virtual machines and disks from VMware hosts to Hyper-V hosts and Microsoft Azure;
- convert physical machines and disks to Hyper-V hosts.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
Azure Site Recovery is a DR orchestration solution for Hyper-V or VMware (as well as physical servers). When a physical server or a VMware vSphere VM is replicated to Azure, the disks are also converted to VHD format. Next you can download the VHD to run the VM in your on-premises datacenter.
|
|
|
|
File Services |
|
|
|
Built-in (native)
SANsymphony delivers out-of-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. SANsymphony is capable of simultaneously handling highly-available block and file level services.
Raw storage is provisioned from within the SANsymphony GUI to the Microsoft file services layer, similar to provisioning Storage Spaces Volumes to the file services layer. This means any file services configuration is performed from within the respective Windows service consoles e.g. quotas.
More information can be found under: https://www.datacore.com/products/features/high-availability-nas-cluster-file-sharing.aspx
|
Built-in (native)
External (vSAN Certified)
NEW
vSAN 7.0 U1 has integrated file services. vSAN File Services leverages a scale-out architecture by deploying an Agent/Appliance VM (OVF templates) on individual ESXi hosts. Within each Agent/Appliance VM a container, or 'protocol stack', is running. The 'protocol stack' creates a file system that is spread across the VMware vSAN Virtual Distributed File System (VDFS), and exposes the file system as an NFS file share. The file shares support NFSv3, NFSv4.1, SMBv2.1 and SMBv3 by default. A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects. The minimum number of containers that need to be deployed is 3, the maximum is 32 in any given cluster. vSAN 7.0 File Services are deployed through the vSAN File Service wizard.
vSAN File Services currently has the following restrictions:
- not supported on 2-node clusters,
- not supported on stretched clusters,
- not supported in combination with vLCM (vSphere Lifecycle Manager),
- it is not supported to mount the NFS share from your ESXi host,
- no integration with vSAN Fault Domains.
The alternative to vSAN File Services is to provide file services through Windows guest VMs (SMB) and/or Linux guest VMs (NFS) on top of vSAN. These file services can be made highly available by using clustering techniques.
Another alternative is to use virtual storage appliances from a third-party to host file services on top of vSAN. The following 3rd party File Services partner products are certified with vSAN 6.7:
- Cohesity DataPlatform 6.1
- Dell EMC Unity VSA 4.4
- NetApp ONTAP Select vNAS 9.5
- Nexenta NexentaStor VSA 5.1.2 and 5.2.0VM
- Panzura Freedom Filer VSA 7.1.9.3
However, none of the mentioned platforms have been certified for vSAN 7.0 or 7.0U1 (yet).
|
Built-in (native; limited)
Although Storage Spaces Direct (S2D) supports Scale-out File Server, it is primarily meant to be used in Hyper-V and MS SQL use cases. In cases where standard file services (eg. home folders or shared departmental folders) are needed, Microsoft recommends to virtualize a Windows file server in Hyper-V.
Storing generic data on Scale-Out File Server is possible but not recommended by Microsoft. It is not recommended because Scale-Out File Server does not provide some of the common file services features such as quotas and DFS.
For these use cases Microsoft recommends to rely on Windows guest VMs (SMB) and/or Linux guest VMs (NFS) to provide file services on top of S2D. These file services can be made highly available by using clustering techniques.
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
Windows clients
Linux clients
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
Windows clients
Although Storage Spaces Direct (S2D) supports Scale-out File Server, it is primarily meant to be used in Hyper-V and MS SQL use cases. In cases where standard file services (eg. home folders or shared departmental folders) are needed, Microsoft recommends to virtualize a Windows file server in Hyper-V.
Storing generic data on Scale-Out File Server is possible but not recommended by Microsoft. It is not recommended because Scale-Out File Server does not provide some of the common file services features such as quotas and DFS.
For these use cases Microsoft recommends to rely on Windows guest VMs (SMB) and/or Linux guest VMs (NFS) to provide file services on top of S2D. These file services can be made highly available by using clustering techniques.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
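Because the file services layer is plain Windows Server, shares are created with the native Windows cmdlets; the share names, path and access group below are placeholders.
# Publish a folder on a SANsymphony-backed volume as an SMB share and as an NFS share
New-SmbShare -Name "Projects" -Path "F:\Shares\Projects" -FullAccess "DOMAIN\FileAdmins"
New-NfsShare -Name "ProjectsNfs" -Path "F:\Shares\ProjectsNfs"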
|
SMB
NFS
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
SMB
NFS is not supported for file services deployed on S2D.
Although Storage Spaces Direct (S2D) supports Scale-out File Server, it is primarily meant to be used in Hyper-V and MS SQL use cases. In cases where standard file services (eg. home folders or shared departmental folders) are needed, Microsoft recommends to virtualize a Windows file server in Hyper-V.
Storing generic data on Scale-Out File Server is possible but not recommended by Microsoft. It is not recommended because Scale-Out File Server does not provide some of the common file services features such as quotas and DFS.
For these use cases Microsoft recommends to rely on Windows guest VMs (SMB) and/or Linux guest VMs (NFS) to provide file services on top of S2D. These file services can be made highly available by using clustering techniques.
|
|
Fileserver Quotas
Details
|
Share Quotas, User Quotas
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
|
Share Quotas
vSAN 7.0 File Services supports share quotas through the following settings:
- Share warning threshold: When the share reaches this threshold, a warning message is displayed.
- Share hard quota: When the share reaches this threshold, new block allocation is denied.
|
N/A
Microsoft S2D Scale-out File Server does not provide any quota capabilities.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
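As an example of a folder-level quota inside a Windows guest VM, File Server Resource Manager (FSRM) can apply a hard limit; the path and size below are placeholders.
# Inside the Windows guest VM providing file services: install FSRM and apply a 10 GB quota to a folder
Install-WindowsFeature -Name FS-Resource-Manager
New-FsrmQuota -Path "D:\Shares\Home\jdoe" -Size 10GB -Description "Home folder quota"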
|
|
Fileserver Analytics
Details
|
Partial
Because SANsymphony leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
Partial
vSAN 7.0 File Services provide some analytics capabilities:
- Amount of capacity consumed by vSAN File Services file shares,
- Skyline health monitoring with regard to infrastructure, file server and shares.
|
N/A
Microsoft S2D Scale-out File Server currently does not have advanced file analytics capabilities.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
Microsoft S2D does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
Microsoft S2D does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
Microsoft S2D does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
|
|
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
SANsymphony's graphical user interface (GUI) is highly configurable to accommodate individual preferences and includes guided wizards and workflows to simplify administration. All actions available from the GUI may also be scripted with PowerShell cmdlets to orchestrate workflows with other tools and applications.
|
Centralized
vSAN management, capacity monitoring, performance monitoring and efficiency reporting is performed through the vSphere Web Client interface.
Other functionality such as backups and snapshots are also managed from the vSphere Web Client Interface.
|
Centralized
The Failover Clustering Manager has been enhanced in Windows Server 2016 to incorporate Storage Spaces Direct (S2D). However, not all features are accessible by GUI. For some actions you rely quite heavily on PowerShell.
System Center Virtual Machine Manager (SCVMM) also provides a GUI to manage the storage besides the VMs. SCVMM enables managing Storage QoS, automation of deployment, VM placement and Storage Spaces Direct (S2D) deployment.
Windows Admin Center provides centralized management for S2D clusters including provisioning as well as real-time monitoring and alerting.
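To illustrate the PowerShell-centric workflow mentioned above, a minimal sketch for standing up an S2D cluster and carving out a volume is shown below; the cluster name, node names, volume name and size are placeholders.
# Create the cluster without shared storage, enable Storage Spaces Direct and create a CSV volume
New-Cluster -Name "S2D-CL01" -Node "NODE1","NODE2","NODE3","NODE4" -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession "S2D-CL01"
New-Volume -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 2TB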
|
|
|
Single-site and Multi-site
|
Single-site and Multi-site
Centralized management of multicluster environments can be performed through the vSphere Web Client by using Enhanced Linked Mode.
Enhanced Linked Mode links multiple vCenter Server systems by using one or more Platform Services Controllers. Enhanced Linked Mode enables you to log in to all linked vCenter Server systems simultaneously with a single user name and password. You can view and search across all linked vCenter Server systems. Enhanced Linked Mode replicates roles, permissions, licenses, and other key data across systems. Enhanced Linked Mode requires the vCenter Server Standard licensing level, and is not supported with vCenter Server Foundation or vCenter Server Essentials.
|
Single-site and Multi-site
From a single Failover Clustering console, you can connect to several clusters by typing the name of each cluster in the connection window.
From SCVMM you can connect to several Failover Clusters. You can manage every cluster resource (compute, network, storage) from a single-pane-of-glass.
Windows Admin Center provides centralized management for S2D clusters including provisioning as well as real-time monitoring and alerting.
|
|
GUI Perf. Monitoring
Details
|
Advanced
SANsymphony has visibility into the performance of all connected devices including front-end channels, back-end channels, cache, physical disks, and virtual disks. Metrics include read/write IOPS, read/write MBps and read/write latency at all levels. These metrics can be exported to the Windows Performance Monitor (Perfmon) utility where other server parameters are being tracked (see the sketch below).
The frequency at which performance metrics are captured and reported is configurable: real time down to 1-second intervals, and long-term recording at 2-minute granularity.
When a trend analysis is required, an end user can simply enable a recording session to capture metrics over a longer period of time.
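A minimal sketch of picking those exported counters up with the standard Perfmon cmdlets on a DataCore server node; the 'DataCore*' counter-set wildcard is an assumption, so discover the actual set names with -ListSet first.

```powershell
# Discover which DataCore counter sets are registered with Perfmon
# (the 'DataCore*' wildcard is an assumption - adjust it to what -ListSet actually returns).
Get-Counter -ListSet 'DataCore*' | Select-Object CounterSetName

# Sample all counters of the first matching set every 2 seconds, 30 times,
# and store them in a .blg file that Perfmon can open for trending.
$paths = (Get-Counter -ListSet 'DataCore*' | Select-Object -First 1).Paths
Get-Counter -Counter $paths -SampleInterval 2 -MaxSamples 30 |
    Export-Counter -Path 'C:\Temp\ssy-perf.blg' -FileFormat BLG
```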
|
Advanced
NEW
Performance information can be viewed on the cluster level, the Host level and the VM level. Per VM there is also a view on backend performance. Performance graphs focus on IOPS, MB/s and Latency of Reads and Writes. Statistics for networking, resynchronization, and iSCSI are also included.
End-users can select saved time ranges in performance views. vSAN saves each selected time range when end-users run a performance query.
There is also a VMware vRealize Operations (vROps) Management Pack for vSAN that provides additional options for monitoring, managing and troubleshooting vSAN.
The vSphere 6.7 Client includes an embedded vRealize Operations (vROps) plugin that provides basic vSAN and vSphere operational dashboards. The vROps plugin does not require any additional vROps licensing. vRealize Operations within vCenter is only available in the Enterprise and Advanced editions of vSAN.
vSAN Observer is deprecated as of vSAN 6.6 but still included. In its place, vSAN Support Analytics delivers enhanced support capabilities, including performance diagnostics. Performance diagnostics analyzes previously executed benchmark tests, detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance Diagnostics requires participation in the Customer Experience Improvement Program (CEIP).
vSAN 6.7 U3 introduced a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere.
vSAN 7.0 introduces vSAN Memory metric through the performance service and the API for measuring vSAN memory usage.
vSAN 7.0 U1 introduces vSAN IO Insight for investigating the storage performance of individual VMs. vSAN IO Insight generates the following performance statistics which can be viewed from within the vCenter console:
- IOPS (read/write/total)
- Throughput (read/write/total)
- Sequential & Random Throughput (sequential/random/total)
- Sequential & Random IO Ratio (sequential read IO/sequential write IO/sequential IO/random read IO/random write IO/random IO)
- 4K Aligned & Unaligned IO Ratio (4K aligned read IO/4K aligned write IO/4K aligned IO/4K unaligned read IO/4K unaligned write IO/4K unaligned IO)
- Read & Write IO Ratio (read IO/write IO)
- IO Size Distribution (read/write)
- IO Latency Distribution (read/write)
|
Advanced
NEW
Both the Failover Clustering GUI and the SCVMM GUI show capacity, usage, volume state (degraded, recovering or OK), and which physical disks are used for which volume.
The GUI is limited in displaying performance-related information; you need to use PowerShell for detailed information (e.g. on IOPS). See the PowerShell sketch below.
Windows Admin Center provides centralized management for S2D clusters including provisioning as well as real-time monitoring and alerting.
Admin Center shows current performance statistics and introduces historical data capture for S2D clusters in Windows Server 2019. Performance history is collected automatically and stored on the cluster for up to one year.
Cluster storage performance metrics that can be viewed are: IOPS, Throughput (MBps) and Latency (ms). The metrics do not differentiate between Reads and Writes.
Volume storage performance metrics that can be viewed are: Read/Write IOPS, Read/Write Throughput (MBps) and Read/Write Latency (ms).
Physical Drive storage performance metrics that can be viewed are: Read/Write IOPS, Read/Write Throughput (MBps) and Average Latency (ms).
Virtual Drive storage performance metrics that can be viewed are: Read/Write IOPS, Read/Write Throughput (MBps) and Read/Write Latency (ms).
Virtual Machine storage performance metrics that can be viewed are: IOPS and Throughput (MBps). There is no differentiation between Reads/Writes (yet).
Server (Physical Host) storage performance metrics are not available yet.
Read Cache storage performance metrics that can be viewed are: Read hits, Read misses, Hit rate.
Write Cache storage performance metrics that can be viewed are: New writes, Cache size, % Full.
Windows Admin Center is complementary to Windows Server 2019 and Windows 10 and as such does not require separate licenses.
Windows Server 2019 introduces built-in drive outlier detection for Storage Spaces Direct, inspired by Microsoft Azure. Drives with abnormal behavior, whether it’s their average or 99th percentile latency that stands out, are automatically detected and marked in PowerShell and Windows Admin Center with an “Abnormal Latency” status.
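As a sketch of reading that performance history from PowerShell: the volume name is a placeholder and the series name follows the documented object.series naming, so verify it against the cmdlet's own output.

```powershell
# Run on any node of a Windows Server 2019 S2D cluster.
# Cluster-wide overview of the collected history (Get-ClusterPerf is the short alias).
Get-ClusterPerformanceHistory

# Read IOPS for a single volume over the last hour
Get-Volume -FriendlyName 'Volume01' |
    Get-ClusterPerformanceHistory -VolumeSeriesName 'Volume.IOPS.Read' -TimeFrame LastHour
```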
|
|
|
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
DataCore offers deep integration with VMware vSphere and Microsoft Hyper-V, as well as their respective systems management tools, vCenter and System Center.
SCVMM = Microsoft System Center Virtual Machine Manager
|
VMware HTML5 vSphere Client (integrated)
VMware vSphere Web Client (integrated)
vSAN 6.6 and up provide integration with the vCenter Server Appliance. End-users can create a vSAN cluster as they deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables end-users to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster.
vSAN 6.6 and up also support host-based vSAN monitoring. This means end-users can monitor vSAN health and basic configuration through the ESXi host client. This also allows end-users to correct configuration issues at the host level.
vSAN 6.7 introduced support for the HTML5-based vSphere Client that ships with vCenter Server. vSAN Configuration Assist and vSAN Updates are available only in the vSphere Web Client.
|
SCVMM 2016
Windows Admin Center (HCI only)
SCVMM 2016 provides wizards that allow you to configure and deploy both single-layer and dual-layer S2D clusters for use with Hyper-V.
The Create Hyper-V wizard allows deployment of S2D on hosts that already have Windows Server 2016 Datacenter installed as well as hosts that do not have an OS installed yet.
Storage QoS policies for S2D can also be configured from SCVMM.
Windows Admin Center provides centralized management for S2D clusters including provisioning as well as real-time monitoring and alerting.
Windows Admin Center is complementary to Windows Server and Windows 10 and as such does not require separate licenses.
|
|
|
|
Programmability |
|
|
|
Full
Using DataCore's native management console, Virtual Disk Templates can be leveraged to populate storage policies. Available configuration items: Storage profile, Virtual disk size, Sector size, Reserved space, Write-through enabled/disabled, Storage sources, Preferred snapshot pool, Accelerator enabled/disabled, CDP enabled/disabled.
Virtual Disk Templates integrate with System Center Virtual Machine Manager (SCVMM), VMware Virtual Volumes (VVol) and OpenStack. Virtual Disk Templates are also fully supported by the REST-API allowing any third-party integration.
Using Virtual Volumes (VVols) defined through DataCore’s VASA provider, VMware administrators can self-provision datastores for virtual machines (VMs) directly from their familiar hypervisor interface. This is possible even for devices in the DataCore pool that don’t natively support VVols and never will, as SANsymphony can be used as a storage-virtualization layer for these devices/solutions. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
Using Classifications and StoragePools defined through DataCore’s Storage Management Provider, Hyper-V administrators can self-provision virtual disks and pass-through LUNS for virtual machines (VMs) directly from their familiar SCVMM interface.
|
Full
Storage Policy-Based Management (SPBM) is a feature of vSAN that allows administrators to create storage profiles so that virtual machines (VMs) don't need to be individually provisioned/deployed and so that management can be automated. The creation of storage policies is fully integrated in the vSphere GUI.
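As an illustration, a minimal PowerCLI sketch of creating a vSAN storage policy and assigning it to a VM; the vCenter address, policy name and VM name are placeholders.

```powershell
# Connect to vCenter (address is a placeholder)
Connect-VIServer -Server 'vcenter.lab.local'

# Build a policy with Failures To Tolerate = 1 using the vSAN capability catalog
$rule    = New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate') -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name 'Gold-FTT1' -AnyOfRuleSets $ruleSet

# Assign the policy to an existing VM and its virtual disks
$policy = Get-SpbmStoragePolicy -Name 'Gold-FTT1'
Get-VM -Name 'app01' | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-VM -Name 'app01' | Get-HardDisk | Set-SpbmEntityConfiguration -StoragePolicy $policy
```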
|
Partial (Storage QoS)
Storage QoS policies for S2D can be configured from SCVMM.
You can also either define or skip the Storage Spaces configuration when creating the storage pool. If no configuration is specified during Storage Spaces creation, it takes the default configuration defined at the storage pool layer.
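A minimal sketch of the equivalent native PowerShell path for Storage QoS (SCVMM exposes the same policies through its GUI); the policy name, limits and VM name are placeholders.

```powershell
# Run against the S2D/Hyper-V cluster: create a dedicated Storage QoS policy
New-StorageQosPolicy -Name 'Gold' -MinimumIops 100 -MaximumIops 5000 -PolicyType Dedicated

# Attach the policy to a VM's virtual hard disk(s)
$policy = Get-StorageQosPolicy -Name 'Gold'
Get-VM -Name 'app01' | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```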
|
|
|
REST-APIs
PowerShell
The SANsymphony REST-API library includes more than 200 representational state transfer (REST) operations, so automation can be leveraged more extensively. The RESTful interfaces are used by products such as Lenovo XClarity, Cisco Embedded Resource Manager and Dell OpenManage to manage infrastructure in the enterprise.
SANsymphony provides its own PowerShell cmdlets.
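As an illustration of driving the REST interface from PowerShell, a minimal sketch; the base URI and resource path shown here are assumptions for illustration only, so take the actual endpoints from the SANsymphony REST API reference.

```powershell
# Invoke-RestMethod is standard PowerShell; only the DataCore endpoint layout below is assumed.
$base = 'https://ssy-rest.lab.local/RestService/rest.svc/1.0'   # placeholder base URI
$cred = Get-Credential                                          # SANsymphony administrator account

# Enumerate virtual disks (resource name assumed for illustration)
Invoke-RestMethod -Uri "$base/virtualdisks" -Method Get -Credential $cred
```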
|
REST-APIs
Ruby vSphere Console (RVC)
PowerCLI
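For instance, a short PowerCLI sketch of reading vSAN configuration, capacity and health; the vCenter address and cluster name are placeholders.

```powershell
Connect-VIServer -Server 'vcenter.lab.local'

# Current vSAN settings and space usage for one cluster
Get-VsanClusterConfiguration -Cluster (Get-Cluster -Name 'Cluster01')
Get-VsanSpaceUsage -Cluster 'Cluster01'

# Run the vSAN health check suite and return the summary
Test-VsanClusterHealth -Cluster (Get-Cluster -Name 'Cluster01')
```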
|
PowerShell
WMI
Public SDK (WAC)
Several PowerShell cmdlets have been developed to manage Storage Spaces Direct (S2D). These PowerShell cmdlets are extensive and provide a powerful tool to automate and troubleshoot an S2D solution (a short example follows below).
WAC = Windows Admin Center
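A minimal sketch of that cmdlet set in action, from validation to fault inspection; node names are placeholders and the commands assume execution on a cluster node.

```powershell
# Validate that the nodes are ready for (or still healthy after) S2D enablement
Test-Cluster -Node 'node1','node2','node3','node4' -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

# Enable S2D itself (normally run once at build time)
Enable-ClusterStorageSpacesDirect

# Surface any faults currently reported by the Health Service
Get-StorageSubSystem -FriendlyName 'Clustered*' | Debug-StorageSubSystem
```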
|
|
|
OpenStack
OpenStack: The SANsymphony storage solution includes a Cinder driver, which interfaces between SANsymphony and OpenStack, and presents volumes to OpenStack as block devices which are available for block storage.
DataCore SANsymphony programmability in VMware vRealize Automation and Microsoft System Center can be achieved by leveraging PowerShell and the SANsymphony-specific cmdlets.
|
OpenStack
VMware vRealize Automation (vRA)
OpenStack integration is achieved through VMware Integrated OpenStack v2.0.
vRealize Automation integration is achieved through vRA 7.0.
|
Azure Automation
System Center Orchestrator
System Center Orchestrator allows the orchestration of S2D management automation tasks. However, note that this product is to be deprecated in the near future. As an alternative, Azure Automation can be used to create S2D management workflows and automate tasks.
|
|
|
Full
The DataCore SANsymphony GUI offers delegated administration to secondary users through fine-grained Role-based Access Control (RBAC). The administrator is able to define Virtual Disk ownership as well as privileges associated with that particular ownership. Owners must have Virtual Disk privileges in an assigned role in order to perform operations on the virtual disk. Access can be very refined. For example, one owner may have the privilege to create a snapshot of a virtual disk, but not have the ability to serve or unserve the same virtual disk. Privilege sets define the operations that can be performed. For instance, in order for an owner to perform snapshot, rollback, or replication operations, they would require those privilege sets in an assigned role.
|
N/A (not part of vSAN license)
VMware vSAN does not provide any end-user self service capabilities of its own.
A self-service portal enables end users to provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license.
|
N/A (not part of S2D license)
Microsoft Storage Spaces Direct (S2D) does not provide any end-user self service capabilities of its own.
A self-service portal enables end users to provision and manage VMs from templates, eliminating administrator requests or activity.
Self-service functionality can, however, be enabled by leveraging Windows Azure Pack (WAP) or Microsoft Azure Stack. These solutions require separate licenses.
If you are using Windows Azure Pack (WAP) and SCVMM, you can deploy the solution on top of an S2D cluster. WAP and SCVMM rely on storage classifications that can be defined in SCVMM.
|
|
|
|
Maintenance |
|
|
|
Unified
All storage related features and functionality are built into the DataCore SANsymphony platform. This consolidation means that only one product needs to be installed and upgraded, and that minimal dependencies exist with other software.
Integrations with 3rd party systems (e.g. OpenStack, vSphere, System Center) are delivered separately but are free of charge.
|
Partially Distributed
For a number of features and functions vSAN relies on other components that need to be installed and upgraded next to the core vSphere platform. This is mostly relevant for backup/restore and file services. As a result, some dependencies exist with other software.
|
Partially Distributed
For a number of features and functions Storage Spaces Direct (S2D) relies on other components that need to be installed and upgraded next to the core Windows platform. Examples are backup/restore and advanced management software. As a result some dependencies exist with other software.
Windows Admin Center is starting to close the gap where day-to-day administration is concerned.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
Each SANsymphony update is packaged in an installation wizard which contains a fully guided upgrade process. The upgrade process checks all system requirements and performs a system health check before starting the upgrade and before moving from one node to the next.
The user can also decide to upgrade a SANsymphony cluster manually and follow all steps that are outlined in the Release Notes.
|
Rolling Upgrade (1-by-1)
Because vSAN is a kernel-based solution, upgrading vSAN requires upgrading the vSphere hypervisor. The VMware vSphere Update Manager (VUM) can be used to automate the upgrade process of hosts in a vSAN cluster. When upgrading, you can choose between two evacuation modes: Ensure Accessibility or Full Data Migration.
VMware Update Manager (VUM) builds recommendations for vSAN. Update Manager can scan the vSAN cluster and recommend host baselines that include updates, patches, and extensions. It manages recommended baselines, validates the support status from vSAN HCL, and downloads the correct ESXi ISO images from VMware.
vSAN 6.7 U1 performs a simulation of data evacuation to determine if the operation will succeed or fail before it starts. If the evacuation will fail, vSAN halts the operation before any resynchronization activity begins. In addition, the vSphere Client enables end users to modify the component repair delay timer.
vSAN 6.7 U3 includes an improved vSAN update recommendation experience from VUM, which allows users to configure the recommended baseline for a vSAN cluster to either stay within the current version and only apply available patches or updates, or upgrade to the latest ESXi version that is compatible with the cluster.
vSAN 6.7 U3 introduces vCenter forward compatibility with ESXi. vCenter Server can manage newer versions of ESXi hosts in a vSAN cluster, as long as both vCenter and its managed hosts have the same major vSphere version. Critical ESXi patches can be applied without updating vCenter Server to the same version.
vSAN 7.0 native File Services upgrades are also performed on a rolling basis. The file shares remain accessible during the upgrade because the file server containers running on the virtual machines undergoing the upgrade fail over to other virtual machines. Some brief interruptions may be experienced while accessing the file shares during the upgrade.
|
Rolling Upgrade (1-by-1)
The recommended way to upgrade a Storage Spaces Direct (S2D) cluster is Cluster-Aware Updating (CAU). CAU orchestrates the restart of nodes and checks the volume state (degraded or not) before upgrading a node. For operating system upgrades, Microsoft has developed Rolling Cluster Upgrade (RCU), which enables adding nodes with a different OS version to the same cluster.
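A minimal sketch of kicking off such a CAU run from PowerShell, using CAU's default Windows Update plug-in; the cluster name is a placeholder.

```powershell
# Patch the whole S2D cluster one node at a time; CAU drains each node,
# installs updates, reboots it, and waits for it to rejoin before moving on.
Invoke-CauRun -ClusterName 'S2D-CL01' -MaxFailedNodes 0 -MaxRetriesPerNode 2 -RequireAllNodesOnline -Force
```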
|
|
FW Upgrade Execution
Details
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
DataCore provides integrated firmware-control for FC-cards. This means the driver automatically loads the required firmware on demand.
|
Rolling Upgrade (1-by-1)
VMware vSAN provides 1-click GUI-based support for installing/updating firmware from a growing list of server hardware vendors. Currently this works for Dell, Lenovo, Fujitsu, and SuperMicro servers.
Some other server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With others, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
VMware Update Manager (VUM) builds recommendations for vSAN. Update Manager can scan the vSAN cluster and recommend host baselines that include updates, patches, and extensions. It manages recommended baselines, validates the support status from vSAN HCL, and downloads the correct ESXi ISO images from VMware.
HBA firmware update through VUM: Storage I/O controller firmware for vSAN hosts is now included as part of the VUM remediation workflow. This functionality was previously provided in a vSAN utility called Configuration Assist. VUM also supports custom ISOs that are provided by certain OEM vendors and vCenter Servers that do not have internet connectivity.
vSAN 7.0 introduces vSphere Lifecycle Manager (vLCM) support for Dell and HPE ReadyNodes. vSphere Lifecycle Manager (vLCM) uses a desired-state model that provides lifecycle management for the hypervisor and the full stack of drivers and firmware on ESXi hosts.
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
No
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer unified support for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Yes (most OEM vendors)
Most VMware OEM partners (e.g. Fujitsu, HPE, Dell) offer full support for both server hardware and VMware vSphere software including vSAN. When VMware vSphere is acquired through an OEM, customers buy both software and hardware from the server hardware vendor.
Because VMware vSAN is a software-only offering (SDS), end users still have the choice to acquire server hardware support and software support separately.
|
No (Yes for some Tier-1 server hardware vendors)
With regard to Microsoft Storage Spaces Direct (S2D) as a software-only offering, Microsoft does not offer unified support for the entire solution. This means storage software support (Microsoft Storage Spaces Direct) and server hardware support are separate.
Some Tier-1 server hardware vendors such as Dell EMC and DataON do offer combined hardware and software support in the case of S2D Ready Nodes.
S2D in Windows Server 2019 became fully supported in mid-January 2019, when validated hardware was officially added to the Windows Server Software-Defined (WSSD) solution list.
|
|
Call-Home Function
Details
|
Partial (HW dependent)
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer call-home for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Partial (vSAN Support Insight)
With regard to vSAN as a software-only offering (SDS), VMware does not offer call-home for the entire solution consisting of both storage software and server hardware. This means storage software support (vSAN) and server hardware support are separate.
VMware vSAN Support Insight: for end-user organizations participating in the VMware Customer Experience Improvement Program (CEIP), anonymized telemetry data is collected hourly to help VMware Global Support Services (GSS) and Engineering understand usage patterns of features and hardware, identify common product issues faced by end users, and understand and diagnose the end-user organization's configuration and runtime state to expedite support request handling. All information is gathered from VMware vCenter and as such provides an indirect view of the server hardware state.
VMware vSAN Support Insight is available at no additional cost for all vSAN customers with an active support contract running vSphere 6.5 Patch 2 or higher.
Some server hardware vendors provide call-home support for hardware-related failures, e.g. failed disks or faulty network interface cards (NICs). In some cases this type of proactive support requires an enhanced support contract.
|
Partial (HW dependent)
With regard to Storage Spaces Direct (S2D) as a software-only offering (SDS), Microsoft does not offer call-home for the entire solution. This means storage software support (Microsoft Storage Spaces Direct) and server hardware support are separate.
|
|
Predictive Analytics
Details
|
Partial
Capacity Management: DataCore SANsymphony Analysis and Reporting supports capacity depletion monitoring and complements pool space threshold warnings by regularly evaluating the rate of capacity consumption and estimating when space will be depleted. The regularly updated projections give you a chance to add more storage to the pool before you run out, and help you do a better job of capacity planning with fewer surprises. To help allocate costs, especially in private cloud and hosted cloud services, SANsymphony generates reports quantifying the storage resources consumed by specific hosts or groups of hosts. The reports tally several parameters.
Health Monitoring: A combination of system health checks and access to device S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) alerts help to isolate performance and disk problems before they become serious.
DataCore Insight Services (DIS) offers additional capabilities including log analytics for predictive failure analysis and actionable insights - including hardware.
DIS also provides predictive capacity trend analysis in order to pro-actively warn about licensing limitations being reached within x days and/or disk pools running out of capacity.
|
Full (not part of vSAN license)
vRealize Operations (vROps) provides native vSAN support with multiple dashboards for proactive alerting, heat maps, device and cluster insights, and streamlined issue resolution. vROps also provides forward trending and forecasting for the vSAN datastore as well as any other datastore (SAN/NAS).
vSAN 6.7 introduced 'vRealize Operations (vROps) within vCenter'. This provides end users with 6 dashboards inside the vCenter console, giving insights but not actions. Three of these dashboards relate to vSAN: Overview, Cluster View, Alerts. One of the widgets inside these dashboards displays 'Time remaining before capacity runs out'. Because this provides only some very basic trending information, a full version of the vROps product is still required.
'vRealize Operations (vROps) within vCenter' is included with vSAN Enterprise and vSAN Advanced.
The full version of vRealize Operations (vROps) is licensed as a separate product from VMware vSAN.
VMware vSAN 6.7 U3 introduced increased hardening during capacity-strained scenarios. This entails new robust handling of capacity usage conditions for improved detection, prevention, and remediation of conditions where cluster capacity has exceeded recommended thresholds.
|
Full
NEW
System Insights is a new predictive analytics feature in Windows Server 2019.
System Insights introduces four default capabilities focused on capacity forecasting:
- CPU capacity forecasting: Forecasts CPU usage.
- Networking capacity forecasting: Forecasts network usage for each network adapter.
- Total storage consumption forecasting: Forecasts total storage consumption across all local drives.
- Volume consumption forecasting: Forecasts storage consumption for each volume.
Each capability analyzes past historical data to predict future usage, and all of the forecasting capabilities are designed to forecast long-term trends rather than short-term behavior.
Other capabilities will be made available over time.
Windows Server version 1903 introduced the 'disk anomaly detection' capability. Disk anomaly detection highlights when disks are behaving differently than usual. While different isn't necessarily a bad thing, seeing these anomalous moments can be helpful when troubleshooting issues on your systems.
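A minimal sketch of working with these capabilities from PowerShell on a Windows Server 2019 machine; the capability name matches the defaults listed above.

```powershell
# List the installed System Insights capabilities and their status
Get-InsightsCapability

# Run the volume consumption forecast on demand and read its latest prediction
Invoke-InsightsCapability -Name 'Volume consumption forecasting'
Get-InsightsCapabilityResult -Name 'Volume consumption forecasting'
```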
|
|