|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Built for performance and robustness
- + Broad range of hardware support
- + Well suited for open private cloud platforms
|
- + Extensive data protection integration
- + Policy-based management
- + Fast streamlined deployment
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compression not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Only basic support for VMware and Hyper-V
- - No native dedup capabilities
- - No native encryption capabilities
|
- - Single server hardware support
- - No bare-metal support
- - No native file services
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of the products; because the core software is the same, the combined offering has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution designed specifically for file virtualization. The additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm; together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access to any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. DataCore has by far the longest track record in software-defined storage of all the SDS/HCI vendors on the WhatMatrix.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: StorPool Distributed Storage (StorPool Storage)
Type: Software-only (SDS)
Development Start: 2011
First Product Release: Nov 2012
StorPool was founded in 2011 and began to ship its first software-defined storage (SDS) platform, StorPool Distributed Storage, towards the end of 2012 to early access customers. In October 2013 the StorPool solution became Generally Available (GA). StorPool combines direct-attached storage drives from multiple standard servers to create a single pool of shared block storage. StorPool leverages a fully-distributed, shared-nothing architecture. All storage functions are performed by all servers that are part of the cluster on an equal peer-to-peer basis. StorPool can be used with any industry standard x86 server hardware.
StorPool is a mature SDS offering that has been running in production environments for over 7 years, at up to multi-petabyte scale.
The platform's customer install base is unknown. There are currently more than 30 employees working for StorPool.
|
Name: HPE SimpliVity 380
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2013
SimpliVity was founded late 2009 and began to ship its first Hyper Converged Infrastructure (HCI) solution, OmniStack, in 2013. The core of the SimpliVity solution is the OmniStack OS combined with the OmniStack Accelerator PCIe card. In January 2017 SimpliVity was acquired by HPE. In the second quarter of 2017 HPE introduced SimpliVity on HPE ProLiant server hardware and the platform was rebranded to HPE SimpliVity 380. In July 2018 HPE extended the HPE SimpliVity product family with the HPE SimpliVity 2600 featuring software-only deduplication and compression. In June 2019 HPE once again extended the SimpliVity product family with the HPE SimpliVity 325, a 1U server model featuring a single AMD EPYC CPU and all-flash storage.
In January 2018 HPE SimpliVity had a customer install base of approximately 2,000 customers worldwide. The number of employees working in the HPE SimpliVity division is unknown at this time.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
10th Generation software. DataCore currently has the most experience in SDS/HCI technology when comparing SANsymphony to other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
Release Dates:
SP 19.01: Jun 2019
SP 18.02: Jan 2018
SP 18.01: Dec 2017
SP 16.03: Dec 2016
SP 16.02: Aug 2016
SP 16.01: Mar 2016
SP 15.03: Nov 2015
SP 15.02: Mar 2015
SP 15.01: Jan 2015
SP 14.12: Dec 2014
SP 14.10: Oct 2014
SP 14.08: Aug 2014
SP 14.04: Apr 2014
SP 14.02: Feb 2014
SP 13.10: Oct 2013 (GA)
SP 20121217: Dec 2012 (Early access)
SP 20121119: Nov 2012 (Early access)
|
GA Release Dates:
OmniStack 4.0.1 U1: dec 2020
OmniStack 4.0.1: apr 2020
OmniStack 4.0.0: jan 2020
OmniStack 3.7.10: sep 2019
OmniStack 3.7.9: jun 2019
OmniStack 3.7.8: mar 2019
OmniStack 3.7.7: dec 2018
OmniStack 3.7.6 U1: oct 2018
OmniStack 3.7.6: sep 2018
OmniStack 3.7.5: jul 2018
OmniStack 3.7.4: may 2018
OmniStack 3.7.3: mar 2018
OmniStack 3.7.2: dec 2017
OmniStack 3.7.1: oct 2017
OmniStack 3.7.0: jun 2017
OmniStack 3.6.2: mar 2017
OmniStack 3.6.1: jan 2017
OmniStack 3.5.3: nov 2016
OmniStack 3.5.2: jul 2016
OmniStack 3.5.1: may 2016
OmniStack 3.0.7: aug 2015
OmniStack 2.1.0: jan 2014
OmniStack 1.1.0: aug 2013
4th Generation software on 9th and 10th Generation HPE server hardware. The HPE SimpliVity 380 platform has shaped up to be a full-featured platform in virtualized datacenter environments.
RapidDR 3.5.1: dec 2020
RapidDR 3.5.0: oct 2020
RapidDR 3.1.1: feb 2020
RapidDR 3.1: jan 2020
RapidDR 3.0.1: sep 2019
RapidDR 3.0: jun 2019
RapidDR 2.5.1: dec 2018
RapidDR 2.5: oct 2018
RapidDR 2.1.1: jun 2018
RapidDR 2.1: mar 2018
RapidDR 2.0: oct 2017
RapidDR 1.5: feb 2017
RapidDR 1.2: oct 2016
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
N/A
StorPool is sold as a software-only solution. Server hardware must be acquired separately.
Hardware is either procured by the customer, as per Hardware Compatibility List (HCL), or as a complete software+hardware solution from selected StorPool technology partners.
Hardware from any vendor can be used, as the HCL is on a component level (CPU, SSD, etc.) basis.
|
Per Node
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). Apart from the Business edition, which has a fixed price per TB, the price per TB decreases as more capacity is licensed in each edition.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
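As a purely illustrative aid, the sketch below (Python) checks a proposed Business (BZ) installation against the capacity rules described above; the thresholds are taken from this text, while the function itself is hypothetical and not part of any DataCore tooling.
```python
# Illustrative only: the Business (BZ) edition capacity rules described above
# (1 TB licensing steps, 10 TB minimum, 38 TB cap and 2 instances per installation).

def validate_bz_installation(instances: int, capacity_tb: float) -> list[str]:
    """Return any rule violations for a proposed Business (BZ) installation."""
    issues = []
    if capacity_tb != int(capacity_tb):
        issues.append("capacity is licensed in whole 1 TB steps")
    if capacity_tb < 10:
        issues.append("BZ requires a 10 TB minimum per installation")
    if capacity_tb > 38:
        issues.append("BZ is capped at a total of 38 TB per installation")
    if instances > 2:
        issues.append("BZ is limited to 2 instances per installation")
    return issues

print(validate_bz_installation(instances=2, capacity_tb=24))   # [] -> allowed
print(validate_bz_installation(instances=3, capacity_tb=50))   # two violations
```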
|
Capacity based (per TB)
Both OPEX (monthly recurring) and CAPEX (perpetual) options are available.
StorPool standard licensing is on a pay-as-you-grow monthly recurring basis and is “all inclusive”. This means it includes the right to use the software, 24/7 support, managed services, new versions & updates, pre-deployment consulting, fine tuning and proactive monitoring.
StorPool also provides perpetual licensing with 1-year and 3-year prepay packages at considerable discounts.
Per TB licensing is based on the amount of data within two separate performance tiers: HDD and SSD. The capacity of these two tiers can be mixed within the same solution.
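As a worked illustration of this capacity-based model, the sketch below computes a monthly figure from the amount of data in each tier; the per-TB rates used are invented placeholders and not StorPool list prices.
```python
# Illustrative arithmetic only: per-TB licensing across the two performance
# tiers (HDD and SSD) that can be mixed in one solution. The rates below are
# invented placeholders, not StorPool list prices.

ASSUMED_PRICE_PER_TB = {"ssd": 30.0, "hdd": 10.0}   # hypothetical monthly rates

def monthly_license_cost(ssd_tb: float, hdd_tb: float) -> float:
    """Capacity-based cost: amount of data in each tier times its per-TB rate."""
    return ssd_tb * ASSUMED_PRICE_PER_TB["ssd"] + hdd_tb * ASSUMED_PRICE_PER_TB["hdd"]

# e.g. a mixed pool with 40 TB of data on SSD and 200 TB on HDD
print(monthly_license_cost(ssd_tb=40, hdd_tb=200))
```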
|
Per Node (all-inclusive)
Add-ons: RapidDR (per VM)
There is no separate software licensing for most platform integrated features. By default all software functionality is available regardless of the hardware model purchased.
The HPE SimpliVity 380 per node software license is tied to the selected CPU and storage configuration.
License Add-ons:
- RapidDR feature
RapidDR uses a 'per VM' licensing model and is available in 25 VM and 100 VM license packs.
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Capacity based (per TB)
Support is either bundled with OPEX or separately offered with CAPEX. In both cases the price is capacity based (per TB).
Support always includes 24/7 software support as well as proactive monitoring and managed services.
|
Per Node
3-year HPE SimpliVity 380 solution support (24x7x365) is mandatory.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony software-defined storage services are focused on flexible deployment models. The range covers classical storage virtualization, converged and hybrid-converged setups, through to hyperconverged deployments, including seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Compute
Storage
Data Protection (limited)
Automation&Orchestration (limited)
StorPool provides primary, shared block-storage, consolidating all storage into one solution. StorPool is commonly used to build Hyper-Converged Infrastructure (HCI), as about half of the field deployments combine compute+storage within a single-layer architecture.
|
Compute
Storage
Data Protection (full)
Management
Automation&Orchestration (DR)
HPE is stack-oriented, whereas the SimpliVity 380 platform itself is heavily storage- and protection-focused.
HPE SimpliVity 380 aims to provide key components within a Private Cloud ecosystem as well as integration with existing hypervisors and applications.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
Standard Ethernet 10/25/40/50/100 GbE
Infiniband (EoL)
StorPool Distributed Storage (StorPool) supports Ethernet connectivity using 10 GbE or faster networks.
|
1, 10, 25 GbE
HPE SimpliVity 380 hardware models include Ethernet connectivity using either SFP+, Base-T or SFP28. HPE recommends at least 10 GbE to avoid the network becoming a performance bottleneck.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use cases because of its flexible technical architecture, but this flexibility also means there are many design choices to make. SANsymphony seeks to provide important capabilities either natively or tightly integrated, which keeps the design process relatively simple. However, because many SANsymphony features are optional and can be turned on or off, each one needs to be taken into consideration when preparing a detailed design.
|
Medium
StorPool is provided as a fully deployed working solution with 24/7 support and managed service included in the fee. The managed service covers solution design, hardware selection, deployment and help with the integration with 3rd party systems.
StorPool customers can choose to use StorPool's integrated data protection functionality or to work with specialized backup solution providers.
|
Low
HPE SimpliVity was developed with simplicity in mind, both from a design and a deployment perspective. HPE SimpliVity's uniform platform architecture is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities natively and on a per-VM basis. There are only a handful of storage building blocks to choose from, and many advanced capabilities like deduplication and compression are always turned on. This minimizes the number of design choices as well as the number of deployment steps.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
N/A (one internally-validated)
No externally validated test reports have been published in past years (2016-2020).
However, in May 2019 StorPool did publish an internally validated performance benchmark test:
Intel Data Center Builders (Jun 2019)
Title: 'The IOPS challenge is over. StorPool holds the new world record – 13.8 mln IOPS'
Workloads:
Benchmark Tool: FIO
Hardware: All-Flash x86-based servers, 12-node cluster, 25Gb Ethernet connected, SP v19, 4x Intel SSD DC P4510 8TB NVMe per node
For more information please go to:
https://storpool.com/blog/the-iops-challenge-is-over-storpool-holds-the-new-world-record-13-8-mln-iops
or
https://builders.intel.com/blog/the-iops-challenge-is-over-new-world-record-is-hit-13-8-million-iops/
|
ESG Lab (jul 2017)
Login VSI (dec 2018; jun 2017)
ESG Lab (Jul 2017)
Title: 'The All-flash HPE SimpliVity 380: Simplicity, Performance, and Resiliency'
Workloads: Mix (OLTP, DSS, MS Exchange, VDI, Linux Apps, Web)
Benchmark Tools: Vdbench (all)
Hardware: All-Flash SimpliVity 380, 2-node cluster, OmniStack ?
Login VSI (Jun 2017)
Title: 'HPE Reference Architecture for Citrix XenDesktop 7.13 on HPE SimpliVity 380'
Workloads: Citrix XenDesktop VDI
Benchmark Tools: Login VSI (VDI)
Hardware: HPE SimpliVity 380 Gen9 4-node storage cluster, OmniStack 3.7.2 + HPE ProLiant DL380 Gen9 4-node compute cluster
Login VSI (Dec 2018)
Title: 'Citrix Virtual Apps and Desktops 7 1808 on HPE SimpliVity 380 Gen10'
Workloads: Citrix Virtual Apps and Desktops
Benchmark Tools: Login VSI (VDI)
Hardware: HPE SimpliVity 380 Gen10 4-node storage cluster, OmniStack 3.7.6; HPE SimpliVity 4-node storage cluster, OmniStack 3.7.6 + HPE ProLiant DL380 Gen10, 4-node compute cluster
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16 TB) and time (30 days), all of which can be expanded upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Evaluation license (30-days)
StorPool offers evaluation licenses for 30-days. These licenses can be used in non-production environments.
A StorPool evaluation license provides full functionality and unlimited capacity. It is provided with a 'best effort' level of support.
|
Cloud Technology Showcase (CTS)
Proof-of-Concept (POC)
HPE offers no Community Edition or Free Trial edition of their hyperconverged software. This is in part due to the fact that HPE SimpliVity 380 uses a hardware accelerator card for deduplication/compression.
However, HPE maintains a cloud-based evaluation environment in which demos can be conducted and where potential customers can load up their own workloads to execute Proof-of-Concepts. This is called the Cloud Technology Showcase (CTS).
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- StorPool Distributed Storage (StorPool) is a service running on a Linux OS, next to the KVM hypervisor, containers or applications. This is a hyper-converged deployment (compute+storage offered by a single layer).
Dual-Layer:
- StorPool is running on dedicated Linux servers and provides virtual disks to external compute hosts – bare metal and / or hypervisors.
Mixed:
- Some of the servers are hyper-converged (compute+storage offered by a single layer), while other servers are dedicated storage nodes or dedicated compute nodes.
|
Single-Layer
Single-Layer: HPE SimpliVity 380 is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Although HPE SimpliVity 380 can serve in a dual-layer model by providing storage to non-HPE SimpliVity 380 hypervisor hosts, this would negate many of the platform's benefits as well as the financial business case. (Please view the compute-only scale-out option for more information.)
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
Turnkey (remote install)
StorPool delivers a working software-defined storage (SDS) solution. This includes testing of the customer's hardware, installing and configuring the software, tuning, and assisting in integrations with other applications. The goal is to provide a hassle-free solution that is guaranteed to be effective and efficient while delivering high performance.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by HPE SimpliVity 380, customer deployments can be executed in hours instead of days.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM, as it introduces virtualization layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Next to Hypervisor (KVM)
None (ESXi, Hyper-V, XenServer, OracleVM)
StorPool storage nodes can be deployed on bare metal only. The storage is presented to the compute nodes through a native StorPool block device driver for Linux-based clients or via iSCSI interface for non-Linux workloads.
StorPool Distributed Storage (StorPool) is accessed through the native StorPool client (initiator, block device driver) for Linux-based hypervisors (KVM).
VMware ESXi, Hyper-V, Xen, XenServer, OracleVM and other OS/hypervisors access storage presented by StorPool through iSCSI.
StorPool also features deep integrations with a number of Cloud Management Systems (OpenStack, CloudStack, OpenNebula, OnApp, etc.). Kubernetes is connected to StorPool through a K8s CSI driver.
|
Virtual Storage Controller
The HPE SimpliVity 380 OmniStack Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the HPE SimpliVity 380 storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor does not impede the I/O flow.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
KVM
VMware ESXi
Hyper-V
XenServer
OracleVM
|
VMware vSphere ESXi 6.5U2-6.7U3
Microsoft Hyper-V 2016 UR6-UR8
HPE SimpliVity 380 OmniStack 3.7.4 introduced support for Microsoft Hyper-V 2016.
HPE SimpliVity OmniStack 4.0.0 introduced support for VMware vSphere 6.7 Update 3.
Microsoft Hyper-V is only supported on HPE SimpliVity 380 Gen10 server hardware.
Currently the following features are supported for Hyper-V: VM Backup, VM Restore, VM Clone, VM Move, Backup Policies, HPE SimpliVity Data Virtualization Platform, and HPE SimpliVity Deployment Manager.
Currently the following features are not supported for Hyper-V: HPE SimpliVity Upgrade Manager, Intelligent Workload Optimizer, Compute (Access) Nodes, File Level Restore, Application Consistent Backups, DRS/Dynamic Optimization, Stretched Clusters, and Hypervisor Management System on HPE OmniStack.
UR = Update Rollup
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where the host uses a Converged Network Adapter (CNA) and the switch outputs Fibre Channel
|
Block device driver (KVM)
iSCSI (ESXi, Hyper-V, XenServer, OracleVM)
StorPool storage nodes can be deployed on bare metal only. The storage is presented to the compute nodes through a StorPool native block device driver for Linux-based clients or via iSCSI interface for non-Linux workloads.
StorPool Distributed Storage (StorPool) is accessed through the native StorPool client (initiator, block device driver) for Linux-based hypervisors (KVM).
VMware ESXi, Hyper-V, Xen, XenServer, OracleVM and other OS/hypervisors access storage presented by StorPool through iSCSI.
|
NFS
SMB
NFS is used as the storage protocol in vSphere environments.
SMB3 is used as the storage protocol in Hyper-V environments.
In virtualized environments, in-guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Microsoft Windows Server 2012R2-2019
CentOS 7, 8
Debian Linux 9
Ubuntu Linux 16.04 LTS, 18.04 LTS
Other Linux distributions (eg. RHEL, OEL, SuSE)
StorPool supports all 64-bit Linux distributions. The following platform versions have been tested thoroughly and are thus supported by default:
- CentOS 7, 8
- Debian Linux 9
- Ubuntu Linux 16.04 LTS, 18.04 LTS
Other Linux distributions (eg. RHEL, OEL, SuSE) can be supported.
The Windows Server OS is supported through iSCSI.
StorPool HCI nodes as well as StorPool storage-only nodes run one of the supported Linux distributions.
|
N/A
HPE SimpliVity 380 does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
Block device driver (Linux)
iSCSI (Windows Server)
The StorPool Block Device Driver for Linux (storpool_block) component is installed on application servers that are going to consume storage.
Additionally, storage can be exposed through iSCSI for consumption by Linux and non-Linux operating systems.
|
N/A
HPE SimpliVity 380 does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on GitHub.
|
Hypervisor: None
Bare metal (Linux): Block device driver
For containers in virtual machines StorPool relies on the container support delivered by the hypervisor platform.
With regard to bare metal container environments, StorPool provides Linux block devices that can be used as persistent storage for containers.
|
N/A
HPE SimpliVity 380 relies on the container support delivered by the hypervisor platform.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Most container platforms
StorPool can be used by any container platform that can leverage standard block devices as persistent storage.
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker containers to use storage persistently, in other words it enables SANsymphony data volumes to persist beyond the lifetime of a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
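For illustration, the minimal sketch below shows how a Docker volume plugin is typically consumed from the Docker SDK for Python; the driver name and driver options are assumed placeholders, and the actual values are documented with the plugin on Docker Hub.
```python
# Minimal sketch: creating a persistent volume through a Docker volume plugin
# using the Docker SDK for Python. The driver name and driver_opts below are
# placeholders -- consult the plugin documentation on Docker Hub for the
# actual driver name and supported options.
import docker

client = docker.from_env()

# Create a named volume backed by the (assumed) volume plugin driver.
vol = client.volumes.create(
    name="app-data",
    driver="datacore-sds-volume-plugin",   # placeholder driver name
    driver_opts={"size": "50GB"},          # placeholder option
)

# Run a container that mounts the volume; data persists beyond the container.
client.containers.run(
    "alpine",
    "sh -c 'echo hello > /data/hello.txt'",
    volumes={vol.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```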
|
Standard block devices
StorPool provides standard block devices that can be used by Docker containers as any other block device or SAN.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Bare-metal container hosts
The Kubernetes worker node participates as a client (initiator) in the StorPool cluster. Both dual-layer architectures and single-layer architectures are supported, meaning that StorPool can be leveraged as a storage-only or as a hyperconverged (compute+storage) node when serving storage to containers.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03 or higher can be used.
|
Linux
All Linux versions supported by Kubernetes.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes v1.13+
StorPool has supported Kubernetes since the container orchestration platform officially introduced its implementation of the Container Storage Interface (CSI) in January 2019.
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The DataCore CSI plugin integrates SANsymphony storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry-standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
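For illustration, the minimal sketch below creates the two Kubernetes objects typically involved when consuming CSI-provisioned block storage (a StorageClass and a PersistentVolumeClaim) using the official Kubernetes Python client; the provisioner name and parameters are assumed placeholders rather than the documented values of the DataCore plugin.
```python
# Minimal sketch of the Kubernetes objects typically involved when consuming a
# CSI-provisioned block volume: a StorageClass pointing at the CSI driver and a
# PersistentVolumeClaim that requests storage from it. The provisioner name and
# parameters are placeholders, not the documented values of any specific plugin.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="sds-block"),
    provisioner="csi.example.datacore.com",   # placeholder CSI driver name
    parameters={"fsType": "ext4"},             # placeholder parameter
)
client.StorageV1Api().create_storage_class(body=storage_class)

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="sds-block",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```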
|
Kubernetes CSI plugin
The StorPool CSI plugin integrates StorPool storage into Kubernetes for containers to consume.
StorPool has supported Kubernetes since the container orchestration platform officially introduced its implementation of the Container Storage Interface (CSI) in January 2019.
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volume Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
So far StorPool has not published any Reference Architecture whitepapers on VMware Horizon or Citrix XenDesktop.
|
VMware Horizon
Citrix XenDesktop (certified)
Workspot VDI
HPE SimpliVity OmniStack 3.7.8 introduced support for VMware Horizon Instant Clone provisioning technology for vSphere 6.7.
HPE SimpliVity 380 has published Reference Architecture whitepapers for VMware Horizon, Citrix XenDesktop and Workspot VDI platforms.
HPE SimpliVity 380 is qualified as Citrix Ready.
The Citrix Ready Program showcases verified products that are trusted to enhance Citrix solutions for mobility, virtualization, networking and cloud platforms. The Citrix Ready designation is awarded to third-party partners that have successfully met test criteria set by Citrix, and gives customers added confidence in the compatibility of the joint solution offering.
HPE SimpliVity 380 has been first to be validated on Citrix XenDesktop 7.11. HPE has also published an HPE SimpliVity 380 reference architecture for Citrix XenDesktop 7.13.
HPE SimpliVity 380 is an official Workspot Technology Partner.
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010, in which a 2-node SANsymphony cluster was able to sustain a load of 220 VMs in the Login VSI 2.0.1 benchmark.
|
N/A
StorPool has not published any VDI reference architecture whitepapers with LoginVSI benchmark numbers.
|
VMware: up to 190 virtual desktops/node
Citrix: up to 190 virtual desktops/node
Workspot: up to 150 virtual desktops/node
VMware Horizon 7.4: Load bearing number is based on Login VSI tests performed on all-flash HPE SimpliVity 380 on ProLiant Gen 10 using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix Virtual Apps and Desktops 7 1808: Load bearing number is based on Login VSI tests performed on all-flash HPE SimpliVity 380 on ProLiant Gen10 using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
Workspot VDI 2.0: Load bearing number is based on Login VSI tests performed on all-flash CN2400 model using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding reference architecture whitepapers.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports 64-bit x86.
DataCore provides minimum requirements for hardware resources.
|
Many
StorPool maintains a component-level HCL, including the most common current and previous generation server components (SSDs, NICs, CPUs). Servers from all major server OEMs comply with StorPool's System Requirements (HCL).
|
HPE
New deployments of HPE SimpliVity 380 are solely based on HPE ProLiant DL380 Gen10 server hardware. Gen 9 server hardware reached End-of-Life (EOL) status on December 31st 2018. This means that end users cannot acquire Gen 9 hardware any longer.
SimpliVity on non-HPE platforms (Cisco, Dell, Lenovo) and OmniCube legacy products are no longer supported as of OmniStack 4.0.0. HPE will support all existing support contracts through their full term. If a contract is valid beyond the end of support life (EOSL) date (September 30, 2021), HPE will provide maintenance releases and patches for critical issues and security issues for the remainder of the current contract.
|
|
|
Many
SANsymphony runs on all server hardware that supports 64-bit x86.
DataCore provides minimum requirements for hardware resources.
|
Many
StorPool maintains a component-level HCL, including the most common current and previous generation server components (SSDs, NICs, CPUs). Servers from all major server OEMs comply with StorPool's System Requirements (HCL).
|
7 models
HPE SimpliVity 380 hardware:
Gen10 (4000 and 6000)
Gen10 G
Gen10 H
The Gen10-6000 is best for high performance, IO intensive mixed workloads, whereas the Gen10-4000 is best for typical workloads (heavy reads/lower ratio of writes) at lower cost than the Gen10-6000. The difference between 4000/6000 is the SSD-type that is inserted in the server hardware.
HPE SimpliVity 380 Gen10 series offer 7 workload models:
Extra Small All SSD (SMB remote offices)
Small All SSD (Small Business and remote offices)
Medium All SSD (Mid-Size Enterprise Data Centers)
Large All SSD (Large Enterprise Data Centers)
Extra Large All SSD (Data-intensive Enterprise Data Centers)
G (Software-optimized for flexibility)
H (Storage optimized for backup and recovery)
Notes:
- The Extra Large workload model is only available for the Gen10-4000.
- Currently only the Extra Large workload model of the Gen10-4000 is not supported for Microsoft Hyper-V environments.
HPE SimpliVity 380 Gen 9 server hardware reached End-of-Life (EOL) status on December 31st 2018. This means that end users cannot acquire Gen 9 hardware any longer.
There are no HPE SimpliVity 380 Hybrid (SSD+HDD) models to choose from.
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro, for example, offers a 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
Because StorPool Distributed Storage (StorPool) is hardware agnostic, customers can opt for multiple server densities. Common configurations include 1 node per chassis and 4 nodes per chassis.
|
1 node per chassis
All available models are 2U building blocks.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specs are roughly comparable, there is no reason to insist on one hardware vendor over another. This is proven in practice: some customers run their production DataCore environments on comparable servers from different vendors.
|
Yes
StorPool Distributed Storage (StorPool) follows these hardware/software mixing rules:
1. Different server brands, models, generations with different hardware (eg. HDDs or SSDs) are supported within the same StorPool cluster.
2. Different hypervisors are supported within the same StorPool cluster.
3. Different deployment methods (eg. compute-only, storage-only and hyperconverged nodes) are supported within the same StorPool cluster.
|
Partial
For mixing HPE SimpliVity 380 nodes in a cluster, HPE recommends following these general guidelines:
- Only models of equal socket count are supported.
- All hosts should contain equal amounts of CPU & Memory.
- As a best practice, it’s recommended to use the same CPU model within a single cluster.
- Mixing server generations (Gen 9 and Gen 10) is allowed.
- Mixing Gen10 servers with legacy OmniCube/OmniStack platforms within the same cluster is allowed.
Heterogeneous Federation Support: Although HPE SimpliVity 380 nodes cannot be mixed with HPE SimpliVity 2600 nodes or legacy SimpliVity nodes within the same cluster, they can coexist with such clusters within the same Federation.
HPE OmniStack 3.7.9 introduces support for using different versions of the OmniStack software within a federation. Some clusters can have hosts using HPE OmniStack 3.7.9 and other clusters can have hosts all using HPE OmniStack 3.7.8 and above. The hosts in each datacenter and cluster must use the same version of the software.
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
StorPool provides minimal hardware requirements in its StorPool Distributed Storage (StorPool) documentation.
|
Flexible
HPE SimpliVity 380 on ProLiant Gen10/G/H server hardware:
Choice of 1st or 2nd generation Intel Xeon Scalable Silver, Gold and Platinum processors (1x or 2x per node), 8 to 28 cores selectable.
|
|
|
Flexible
|
Flexible
StorPool provides minimal hardware requirements in its StorPool Distributed Storage (StorPool) documentation.
|
Flexible
HPE SimpliVity 380 on ProLiant Gen10 server hardware:
144GB to 3.0TB per node selectable.
HPE SimpliVity 380 on ProLiant Gen10 G server hardware:
128GB to 3.0TB per node selectable.
HPE SimpliVity 380 on ProLiant Gen10 H server hardware:
192GB to 3.0TB per node selectable.
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
StorPool Distributed Storage (StorPool) supports magnetic disks (HDD), solid-state drives (SSD) as well as NVMe. Different types, models and size of storage devices can be mixed in a storage node.
Each StorPool node can have up to a maximum of 500TB of storage attached. The storage capacity can be entirely based on magnetic disks, solid-state drives, or a mix of both storage media types. Typically a StorPool NVMe storage node has 10x 8 TB NVMe drives, resulting in 80 TB raw per node.
StorPool provides minimal hardware requirements in its StorPool Distributed Storage (StorPool) documentation.
|
Fixed: number of disks + capacity
For HPE SimpliVity 380 Gen10 server hardware the following kits are selectable per node:
Extra Small (5 x 960GB SSD in RAID5)
Small (5 x 1.92TB SSD in RAID5)
Medium (9 x 1.92TB SSD in RAID6)
Large (12 x 1.92TB SSD in RAID6)
Extra Large (12x 3.84TB SSD in RAID6)
For HPE SimpliVity 380 Gen10 G server hardware the following kits are selectable per node:
6 x 1.92TB SFF SSD in RAID5
8x 1.92TB SFF SSD in RAID5
12x 1.92TB SFF SSD in RAID5
16x 1.92TB SFF SSD in RAID5
For HPE SimpliVity 380 Gen10 H server hardware, the following kits are selectable per node:
4x 1.92TB LFF SSD in RAID5 + 8x 4.0TB LFF HDD in RAID6 (Backup and Archive)
4x 1.92TB LFF SSD in RAID5 + 20x 1.2TB LFF HDD in RAID6 (General Purpose)
The always-on inline deduplication and compression by default allows HPE SimpliVity 380 to have much higher amounts of effective storage capacity on a single node than the raw disk capacity would indicate.
LFF = Large Form Factor (3.5")
SFF = Small Form Factor (2.5")
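As a rough illustration of that effect, the arithmetic sketch below converts the usable capacity of one of the kits listed above into an effective capacity figure; the 3:1 data efficiency ratio is an assumed example value only, as real ratios depend entirely on the workload.
```python
# Rough arithmetic sketch: usable RAID capacity vs. effective capacity after
# inline deduplication and compression. The 3:1 efficiency ratio is an assumed
# illustrative value only -- real ratios depend entirely on the workload.

def raid_usable_tb(drive_tb: float, drives: int, parity_drives: int) -> float:
    """Usable capacity of a single RAID group (ignoring formatting overhead)."""
    return drive_tb * (drives - parity_drives)

# e.g. the 'Small' kit above: 5 x 1.92 TB SSD in RAID5 (1 parity drive)
usable = raid_usable_tb(drive_tb=1.92, drives=5, parity_drives=1)

assumed_efficiency = 3.0  # assumed data efficiency ratio (dedup + compression)
effective = usable * assumed_efficiency

print(f"usable ~{usable:.2f} TB, effective ~{effective:.2f} TB at {assumed_efficiency}:1")
```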
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
StorPool Distributed Storage (StorPool) fully supports 10/25/40/100Gbps Ethernet networks. For production workloads a dual-redundant network is required (2 switches and 2 ports per node). Management and data storage traffic can be performed across the same IP network or across separate IP networks.
StorPool provides minimal hardware requirements in its StorPool Distributed Storage (StorPool) documentation.
|
Flexible: additional 1, 10, 25 Gbps
A 10/25GbE FlexLOM is required. This is in addition to the standard 4 port 1GbE LOM included in the base node.
HPE SimpliVity 380 on ProLiant Gen10 server hardware:
Three optional 1 Gbps, 10 Gbps or 25 Gbps PCI adapters can be added to a node in the case of dual socket with secondary riser.
From HPE SimpliVity 3.7.10 and up the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
StorPool does not restrict the use of GPUs; end-user organizations are free to leverage any GPUs that are available on the market.
|
NVIDIA Tesla (vSphere only)
HPE SimpliVity 380 offers a GPU option in dual CPU ProLiant Gen10 server hardware with secondary riser for leveraging vGPU in virtual desktop/application environments.
HPE SimpliVity 380 Gen10 supports the following GPUs in a single server:
1x NVIDIA Tesla M10
1x NVIDIA Tesla P40
1x NVIDIA Tesla T4
HPE SimpliVity 380 Gen10 G supports the following GPUs in a single server:
2x NVIDIA Tesla M10
2x NVIDIA Tesla P40
2x NVIDIA Tesla T4
HPE SimpliVity 380 Gen10 H supports the following GPUs in a single server:
2x NVIDIA Tesla M10
2x NVIDIA Tesla P40
HPE SimpliVity 380 does not support Intel and/or AMD GPUs at this time.
HPE SimpliVity 380 does not support GPUs in Hyper-V environments at this time.
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expansion of all server hardware resources.
|
CPU
Memory
Storage
GPU
The StorPool Distributed Storage platform allows for expansion of all server hardware resources.
|
CPU
Memory
Storage
Network
GPU
HPE SimpliVity 380 Gen10 small and medium server models support CPU add-on, memory and storage expansion.
Storage expansion kits are available in two sizes:
- small to medium;
- medium to large.
Expansions are a support-led procedure that also requires a factory reset of the server, which permanently erases all data and configuration settings.
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Storage+Compute
Compute-only
Storage-only
StorPool implements a fully distributed shared nothing architecture, where all nodes are equal. By adding nodes, the cluster scales linearly in both capacity and performance.
Storage+Compute: In a single-layer deployment existing StorPool clusters can be expanded by adding additional nodes running StorPool, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only StorPool clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because StorPool leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the StorPool cluster. This is also beneficial to migrations, since it allows for online migrations between StorPool and non-StorPool storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only StorPool clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only (NFS; vSphere)
Storage+Compute: Existing HPE SimpliVity 380 federations can be expanded by adding additional HPE SimpliVity 380 nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because HPE SimpliVity 380 leverages a file-level protocol (NFS;SMB3), storage can be presented to hypervisor hosts not participating in the HPE SimpliVity 380 cluster. This is also beneficial to migrations, since it allows for online storage vMotions between HPE SimpliVity 380 and non-HPE SimpliVity 380 storage platforms. Currently Compute-only is not supported in Hyper-V environments.
Storage-only: N/A; A HPE SimpliVity 380 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
3-63 nodes in 1-node increments
There is a maximum of 63 StorPool nodes within a single cluster, meaning there could be several Petabytes of data in a single cluster. Up to 64 clusters can be connected to a federation for larger capacity use-cases. Clusters can be scaled in 1-node increments.
|
vSphere: 1-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 1-96 storage nodes (Federation) in 1-node increments
Hyper-V: 4 storage nodes (cluster); 2-16 storage nodes (Federation)
vSphere: HPE SimpliVity 380 currently offers support for up to 16 storage nodes and 720 VMs within a single VSI cluster, and up to 8 storage nodes within a single VDI cluster. Up to 32 storage nodes are supported within a single Federation. Multiple Federations can be used in either single-site or multi-site deployments, allowing for a scalable as well as a flexible solution. Data protection can be configured to run in between federations.
Cluster scale enhancements (16 nodes instead of 8 nodes within a single cluster) apply to new as well as existing SimpliVity clusters that run OmniStack 3.7.7.
HPE SimpliVity supports up to 96 nodes within a single Federation. For specific use-cases a Request for Product Qualification (RPQ) process can be initiated to authorize more than 96 storage nodes within a single Federation. The largest known HPE SimpliVity 380 field deployment counts approximately 70 nodes within a single Federation.
HPE SimpliVity 380 also supports adding compute nodes to a storage node cluster in vSphere environments.
Stretched Clusters with Availability Zones remain supported for up to 8 HPE OmniStack hosts (4 per Availability Zone).
OmniStack 3.7.7 introduced support for deploying multiple hosts to a cluster at one time.
HPE OmniStack 3.7.8 introduced support for 48 clusters per Federation (previously 16 clusters) when using multiple vCenter Servers in Enhanced Linked Mode.
Hyper-V: HPE OmniStack 3.7.9 introduced support for up to 4 storage nodes within a single cluster and 16 storage nodes within a single Federation (4+4+4+4). Multiple Federations can be used in either single-site or multi-site deployments, allowing for a scalable as well as a flexible solution. Data protection can be configured to run in between federations. Currently for Microsoft Hyper-V environments (2-4 node clusters) the tie-breaker VM (aka 'arbiter') is a hard requirement.
HPE SimpliVity 380 currently does not support adding compute nodes to a storage node cluster in Hyper-V environments.
VDI = Virtual Desktop Infrastructure
VSI = Virtual Server Infrastructure
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In case the SANsymphony servers are fully operational but do not see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically and data corruption is prevented by freezing the vDisk and waiting for user input to solve the conflict. Conflict resolutions include declaring one side of the mirror to be the new active data set and discarding all tracked changes at the other side, or splitting the mirror and manually merging the two data sets into a third vDisk.
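The tracked-changes recovery can be pictured with a small illustrative model. The sketch below is not DataCore code; the block map, dirty-set and method names are assumptions used only to show the principle of tracking writes while the mirror link is down and replaying just those changes once the partner returns.

```python
# Illustrative model of mirror change-tracking and resync (not DataCore code).

class MirroredVDisk:
    def __init__(self):
        self.primary = {}        # block number -> data on the primary path
        self.secondary = {}      # block number -> data on the alternate path
        self.link_up = True      # state of the mirror connection
        self.dirty = set()       # blocks written while the link was down

    def write(self, block, data):
        """Writes always land on the primary copy; mirroring depends on the link."""
        self.primary[block] = data
        if self.link_up:
            self.secondary[block] = data   # synchronous mirror update
        else:
            self.dirty.add(block)          # track the change for later resync

    def link_lost(self):
        self.link_up = False               # mirroring interrupted, admin notified

    def link_restored(self):
        """Recover the mirror from the tracked changes only (no full re-copy)."""
        for block in sorted(self.dirty):
            self.secondary[block] = self.primary[block]
        self.dirty.clear()
        self.link_up = True


vdisk = MirroredVDisk()
vdisk.write(10, b"a")      # mirrored synchronously
vdisk.link_lost()
vdisk.write(11, b"b")      # written locally, only tracked
vdisk.link_restored()      # replays block 11 to the secondary copy
assert vdisk.primary == vdisk.secondary
```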
|
3 Node minimum
StorPool's smallest deployment contains 3 nodes, although these can be small, cost-efficient servers.
|
1 Node minimum (2 Nodes for local HA)
HPE SimpliVity 380 supports single-node configurations in a site, which can bring significant cost savings. In single-node deployments HPE SimpliVity 380's advanced backup features can be leveraged for data protection. HPE SimpliVity 380 also supports 2-node configurations without sacrificing any of the data reduction and data protection capabilities. The HPE SimpliVity 380 XS-model is ideal for ROBO deployments.
All the remote sites can be centrally managed from a single dashboard at the central site.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Block Storage Pool
StorPool only serves block devices to the supported OS platforms.
StorPool aggregates direct-attached storage from multiple x86 servers into one or more pools. There can be multiple pools in each StorPool cluster, e.g. an SSD pool and an HDD-only pool.
Volumes are striped widely across many drives in the pool. Copies (replicas) are guaranteed to be on different servers or different chassis; this ensures high availability of data access. Thus StorPool aggregates the capacity and performance of all drives in each pool.
StorPool uses a shared-nothing architecture where all servers participate on an equal basis - there are no metadata servers or active-standby roles, just servers on a flat network.
|
Parallel File System
on top of Object Store
Both the File System and the Object Store have been internally developed by HPE SimpliVity.
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but automation of such tasks is possible and can be integrated with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
None
Data locality is not used by default but is partially supported. In most cases it is statistically better not to use data locality, because of the higher performance available from the whole pool of drives in all servers. This can be configured on a per-volume basis.
Physical drives are grouped in one or more pools called placement groups. One disk can participate in more than one placement group. In the simplest configuration all disks reside in a single placement group. By default StorPool will distribute user data across all the disks in the cluster proportional to their size.
If for a particular volume it is preferred that data is stored only on a subset of the disks, then a separate placement group that includes the target disks only can be created and the volume can be configured to store one or all three copies of the data using this placement group.
Placement groups used by the volumes can be changed in real time, which causes the data to be migrated from one set of disks to another in the background, while the volume is in use, and without a noticeable performance impact.
There is no automated mechanism that changes data locality based on the current usage, because limiting the data to a subset of disks usually doesn't add any performance benefit. However, such functionality can be achieved by external logic through the StorPool API to change the volume settings in real time.
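As an illustration, such external logic would typically be a thin wrapper around the management API. The endpoint path, field names and authentication shown below are assumptions for the sake of the sketch, not documented StorPool API calls; the point is only the pattern of retargeting a volume's placement group while it stays online.

```python
# Hypothetical sketch of re-targeting a volume to another placement group
# through a JSON/HTTP management API (endpoint and field names are assumptions).
import requests

API = "http://storpool-mgmt.example.local:81"   # assumed management endpoint
AUTH = {"Authorization": "Bearer <token>"}      # assumed auth scheme

def move_volume(volume, placement_group):
    """Ask the cluster to keep the volume's data in the given placement group.
    Data is then migrated in the background while the volume stays online."""
    resp = requests.post(
        f"{API}/volumes/{volume}/update",        # assumed path
        json={"placeAll": placement_group},      # assumed parameter name
        headers=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: pin a latency-sensitive volume to an SSD-only placement group.
# move_volume("db-data-01", "ssd-only")
```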
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Full
When a VM is created, it is optimally placed on the best 2 nodes available. When data is written, it is deduplicated and compressed on arrival, and then stored on the local node as well as a dedicated partner node. To keep performance optimal throughout the VM's lifecycle, OmniStack automatically creates VMware vSphere DRS affinity rules and policies. This is called Intelligent Workload Optimization. VMware DRS is made aware of where the data of an individual VM is. In effect, the VM follows the data rather than having the data follow the VM, as this prevents heavy moves of data to the VM. When a HPE SimpliVity 380 node is added to the federation, the DRS rules related to OmniStack are automatically re-evaluated.
Currently HPE SimpliVity 380's DRS/Dynamic Optimization features are not available for Hyper-V environments.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
Direct-attached: The software takes ownership of the unformatted physical disks available on each node – SATA/SAS/NVMe.
|
Direct-attached (RAID)
The software takes ownership of the RAID groups provisioned by the server's hardware RAID controller.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
Magnetic-Only
Hybrid
All-Flash
Pools of different storage types (magnetic-only, hybrid and all-flash) can be created within the same StorPool Distributed Storage (StorPool) cluster.
|
Hybrid (Flash+Magnetic)
All-Flash (SSD-only)
All but one HPE SimpliVity 380 Gen10 model are all-flash models. These models facilitate a variety of workloads, including those that demand very high performance at low latency.
The HPE SimpliVity 380 Gen10 H is a hybrid model (SSD+HDD) and is meant for backup and recovery use cases.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SD, USB, DOM, SSD/HDD
StorPool storage nodes are Linux servers and thus any boot drive supported by Linux is supported by StorPool for root/boot.
|
SSD
2x 480GB SSDs are used for system boot.
|
|
|
|
Memory |
|
|
|
DRAM
|
DRAM
StorPool uses RAM in servers for caching.
|
NVRAM (PCIe card)
DRAM (VSC)
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
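The watermark behaviour can be pictured with a toy model. The sketch below is not SANsymphony code; capacity, threshold and structure are invented purely to illustrate how a write cache can fall back to write-through once dirty data exceeds a percentage of the total cache.

```python
# Toy model of watermark-based write-back/write-through switching (not DataCore code).

class WriteCache:
    def __init__(self, capacity_mb, watermark_pct=50):
        self.capacity_mb = capacity_mb
        self.watermark_pct = watermark_pct   # assumed threshold, for illustration only
        self.dirty_mb = 0                    # write data not yet destaged to disk

    def write(self, size_mb, destage_to_disk):
        """Accept a write; fall back to write-through while above the watermark."""
        if self.dirty_mb + size_mb > self.capacity_mb * self.watermark_pct / 100:
            destage_to_disk(size_mb)         # write-through: persist immediately
        else:
            self.dirty_mb += size_mb         # write-back: acknowledge now, destage later

    def destage_all(self, destage_to_disk):
        """Background destaging drains the dirty data (FIFO in the real product)."""
        destage_to_disk(self.dirty_mb)
        self.dirty_mb = 0


cache = WriteCache(capacity_mb=1024)
cache.write(100, destage_to_disk=lambda mb: None)   # cached (write-back)
cache.write(600, destage_to_disk=lambda mb: None)   # exceeds watermark -> write-through
```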
|
Metadata
Read Cache
Write-back Cache (optional)
Memory is used by StorPool for storing metadata, read caching, and proprietary write-back caching (WBC).
When HDDs are used, write operations are usually processed through a write-back cache to reduce the write latency and sequence the write operations to maximize the performance of each disk drive. In typical deployments, this write-back cache is implemented with a persistent storage layer such as power-loss protected memory in RAID controllers or an Optane NVMe drive. This approach guarantees consistent data and data-loss protection in unlikely events such as simultaneous power loss of the entire site.
In non-critical deployments with lower requirements, the write-back cache can be configured to be stored in the DRAM. This can reduce the hardware cost by eliminating the RAID controller with power-loss protected memory or Optane NVMe drive.
|
NVRAM (PCIe): Write Buffer
DRAM (VSC): Read Cache
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Configurable
StorPool Distributed Storage (StorPool) is designed to take a minimal and fixed amount of DRAM. As a rule of thumb 1 GB of DRAM is used per 1 TB of raw data per server. Additional DRAM in the server can be configured for caching. The remaining DRAM is available for use by other applications or virtual machines in single-layer deployments.
|
NVRAM (PCIe): Unknown
DRAM (VSC): 16-48GB for Read Cache
Each Virtual Storage Controller (VSC) is equipped with 48-100GB total memory capacity, of which 16-48GB is used as read cache. The amount of memory allocated is fixed (non-configurable) and model dependent (small, medium, large).
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, PCIe, NVMe
StorPool Distributed Storage (StorPool) supports industry-standard datacenter-grade SATA, SAS and NVMe SSDs for delivering sub-millisecond performance. StorPool does not support consumer-grade SSDs.
|
SSD
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Persistent Storage
Write-back Cache
StorPool Distributed Storage (StorPool) does not use flash as partial read cache, only as full primary storage. NVMe devices (including Intel Optane) can be used for write-back caching purposes in order to accelerate writes to HDDs.
|
All-Flash: Metadata + Persistent Storage Tier
Write buffer is provided by the PCIe card (NVRAM).
Read cache is not necessary in All-flash configurations.
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
No limit
There is no technological limit within the StorPool software architecture to the number of flash drives used, just a best practice of deployment. Typical building blocks (storage nodes) have 10-24 drives installed, typically using 4-8TB NVMe. The number and capacity are tailored to the requirements of the specific end-user organization's use case.
In general StorPool advises using smaller nodes with 10-12 SSDs each, rather than a small number of nodes with 36 or more drives each. The storage density and the overall cost are about the same; however, a StorPool storage cluster built of smaller nodes is both faster and safer, as it takes less time to rebuild a single node in case it fails.
|
All-Flash: 5-12 SSDs per node
All-Flash SSD configurations:
Extra Small 5x 960GB
Small 5x 960GB
Medium 9x 1.92TB
Large 12x 1.92TB
Extra Large 12x 3.84TB
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
SAS or SATA
StorPool supports HDDs in magnetic-only as well as hybrid storage configurations.
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
Hybrid: SATA
The HPE SimpliVity 380 Gen10 H is a hybrid model (SSD+HDD) and is meant for backup and recovery use cases.
|
|
|
Persistent Storage
|
Persistent Storage
The magnetic tier is either used as primary storage of data, or as secondary storage for storing backups (fault recovery) or remote replicas (disaster recovery).
|
Persistent Storage
The HPE SimpliVity 380 Gen10 H is a hybrid model (SSD+HDD) and is meant for backup and recovery use cases.
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw storage device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
No limit
StorPool is a highly scalable platform and supports an extensive amount of magnetic storage. Magnetic devices (HDDs) are not mandatory in a StorPool Distributed Storage (StorPool) solution.
The number and capacity of HDDs depend on the storage requirements. For example, when needed 36x 16TB HDDs can be installed in a single storage node.
|
8 SATA HDDs per host/node
For HPE SimpliVity 380 Gen10 H server hardware, the following HDD kit is selectable per node:
8x 4.0TB LFF HDD in RAID6
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in the DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, applications experience much faster write behavior.
Per default, the limit of dirty-write-data allowed per Virtual Disk is 128MB. This limit could be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is directly written to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
|
Hybrid configurations (optional): Intel Optane NVMe, "Pool" NVMe drive, Broadcom/LSI CacheVault or BBU
Write buffering (aka write-back cache) is optionally used with magnetic HDDs. StorPool supports Intel Optane drives for persistent write-back cache, the use of a small partition on large-capacity Pool NVMe drives, or the use of an LSI CacheVault (supercap) or BBU. This effectively protects against data loss even in the event of a full power outage of the entire data center.
In low cost / low performance use-cases write buffering for HDDs can be disabled, thus removing the requirement for a write-back cache device, at the cost of increased write latency (write operations wait for the HDD media).
Datacenter-grade SSDs and NVMe drives have integrated power-loss protection which StorPool uses, so for these types of devices StorPool doesn't require external write buffering.
|
NVRAM (PCIe card)
The HPE SimpliVity 380 proprietary PCIe-based HPE OmniStack Accelerator Card's NVRAM is used as a write buffer.
The HPE SimpliVity 380 proprietary PCIe-based HPE OmniStack Accelerator Card's NVRAM is not used in the read path.
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). The seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
0-2 Replicas (1N-3N)
StorPool uses replicas to guarantee data redundancy.
StorPool's implementation of replicas is called Copies:
- Maintaining 1 copy/replica (1N) means that data is kept only once and is not protected by another copy/replica.
- Maintaining 2 copies/replicas (2N) means that data is protected by writing 2 copies of the data to the StorPool cluster. Protection applies to both disk and node failures.
- Maintaining 3 copies/replicas (3N) means that data is protected by writing 3 copies of the data to the StorPool cluster. Protection applies to both disk and node failures.
StorPool recommends using 3 copies/replicas as a standard and using 2 copies/replicas for data that is less critical. Using the standard (3N) means that the StorPool Distributed Storage (StorPool) platform can withstand a failure of any two disks or any two nodes within the storage cluster.
Before any write is acknowledged to the host, it is synchronously replicated to the prescribed number of nodes. All nodes in the cluster participate in replication. This means that with 3N one instance of data that is written is stored on one node and other instances of that data are stored on two different nodes in the cluster. For all instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated copies/replicas is initiated in order to restore the desired number of copies/replicas.
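The fully distributed placement (no dedicated partner node, copies always on different servers) can be illustrated with a small sketch. The rendezvous-style hashing below is only an illustration of the principle and is not StorPool's actual placement algorithm, which works with placement groups as described elsewhere in this section.

```python
# Illustrative replica placement: N copies always land on N different servers.
import hashlib

SERVERS = ["node-a", "node-b", "node-c", "node-d", "node-e"]   # assumed cluster

def place_replicas(volume, block, copies=3):
    """Deterministically pick `copies` distinct servers for one block of a volume
    by ranking servers with a per-(volume, block, server) hash (rendezvous hashing)."""
    def score(server):
        key = f"{volume}:{block}:{server}".encode()
        return hashlib.sha256(key).hexdigest()
    return sorted(SERVERS, key=score)[:copies]

# Three copies of the same block always map to three different nodes.
print(place_replicas("vm-disk-7", block=42))
```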
|
1 Replica (2N)
+ Hardware RAID (5, 6 or 60)
HPE SimpliVity 380 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical disk fails, hardware RAID maintains data availability.
Only when more than 2 disks fail within the same node, data has to be read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node level redundancy is re-established based on a user initiated action.
The hardware RAID level that is applied depends on drive count in an individual node:
2 drives = RAID1
4-5 drives = RAID5
8-12 drives = RAID6
14-20 drives = RAID60 (2 per RAID6 set)
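The drive-count-to-RAID-level mapping above can be expressed as a small helper; the function below simply encodes the listed table and treats drive counts outside the listed ranges as unsupported (an assumption for the sake of the sketch).

```python
# Encode the documented drive-count -> hardware RAID level mapping.
def raid_level(drive_count: int) -> str:
    if drive_count == 2:
        return "RAID1"
    if 4 <= drive_count <= 5:
        return "RAID5"
    if 8 <= drive_count <= 12:
        return "RAID6"
    if 14 <= drive_count <= 20:
        return "RAID60"   # two RAID6 sets
    raise ValueError(f"unsupported drive count: {drive_count}")

print(raid_level(12))   # -> RAID6
```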
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). The seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
0-2 Replicas (1N-3N)
Node failure is not a critical event in StorPool Distributed Storage (StorPool) when using multiple copies/replicas (3N or 2N) for data protection. A node failure does not cause downtime or even partial unavailability. The system is self-healing: the StorPool cluster rebuilds only the changed/missing data when the failed node returns, or simply creates a new copy of the missing data when the failed node is not back within a pre-set time (e.g. 5 minutes, as most failures are transient).
StorPool uses replicas to guarantee data redundancy.
StorPool's implementation of replicas is called Copies:
- Maintaining 1 copy/replica (1N) means that data is kept only once and is not protected by another copy/replica.
- Maintaining 2 copies/replicas (2N) means that data is protected by writing 2 copies of the data to the StorPool cluster. Protection applies to both disk and node failures.
- Maintaining 3 copies/replicas (3N) means that data is protected by writing 3 copies of the data to the StorPool cluster. Protection applies to both disk and node failures.
StorPool recommends using 3 copies/replicas as a standard and using 2 copies/replicas for data that is less critical. Using the standard (3N) means that the StorPool Distributed Storage (StorPool) platform can withstand a failure of any two disks or any two nodes within the storage cluster.
Before any write is acknowledged to the host, it is synchronously replicated to the prescribed number of nodes. All nodes in the cluster participate in replication. This means that with 3N one instance of data that is written is stored on one node and other instances of that data are stored on two different nodes in the cluster. For all instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated copies/replicas is initiated in order to restore the desired number of copies/replicas.
|
1 Replica (2N)
+ Hardware RAID (5, 6 or 60)
HPE SimpliVity 380 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical node fails, VMs need to be restarted and data is read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node level redundancy is re-established based on a user initiated action.
The hardware RAID level that is applied depends on drive count in an individual node:
2 drives = RAID1
4-5 drives = RAID5
8-12 drives = RAID6
14-20 drives = RAID60 (2 per RAID6 set)
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design, allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Fault Sets
StorPool protects data by keeping a number of copies (1, 2 or 3) in different servers or racks. The default is 3N. This means that, for example, in a 5-node cluster any 2 nodes can be lost without impacting availability.
In larger StorPool clusters (e.g. 12 nodes in 3 chassis), StorPool can be configured to store replicas (copies) in different racks or different chassis, tolerating entire chassis or rack failure.
Fault Sets: By default StorPool uses a placement policy that distributes user’s data on as many physical drives and servers as possible in order to increase performance and minimize the impact in case of a node failure. When some storage nodes are interrelated and there is a higher chance to fail simultaneously, the placement policy can be tuned by defining 'Fault Sets' - a set of nodes that have a higher probability to fail simultaneously. When fault sets are defined, StorPool will place data on different Fault Sets - in other words there is only one copy of the data in a particular Fault Set. By leveraging Fault Sets a storage cluster can be arranged for example in racks, where each rack represents a separate Fault Set. If an entire rack fails, the placement policy will guarantee there are at least two available copies of the data that reside on the remaining racks.
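The Fault Set rule (at most one copy of a piece of data inside any single fault set) can be sketched as follows. The topology, node names and selection logic are assumptions used for illustration only, not StorPool's placement code.

```python
# Illustrative fault-set-aware placement: at most one replica per fault set.

# Assumed topology: fault set name -> nodes in that set (e.g. one set per rack).
FAULT_SETS = {
    "rack-1": ["n1", "n2", "n3", "n4"],
    "rack-2": ["n5", "n6", "n7", "n8"],
    "rack-3": ["n9", "n10", "n11", "n12"],
}

def place_copies(copies=3):
    """Pick one node from each of `copies` different fault sets."""
    if copies > len(FAULT_SETS):
        raise ValueError("not enough fault sets for the requested redundancy")
    placement = []
    for fault_set, nodes in list(FAULT_SETS.items())[:copies]:
        placement.append((fault_set, nodes[0]))   # simplistic: first node per set
    return placement

# One replica per rack -> the data survives the loss of an entire rack.
print(place_copies(3))
```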
|
Not relevant (1-node chassis only)
HPE SimpliVity 380 building blocks are based on 1-node chassis only. Therefore multi-node block (appliance) level protection is not relevant for this solution as Node Failure Protection applies.
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Fault Sets
StorPool protects data by keeping a number of copies (1, 2 or 3) in different servers or racks. The default is 3N. This means that, for example, in a 5-node cluster any 2 nodes can be lost without impacting availability.
In larger StorPool clusters (e.g. 12 nodes in 3 chassis), StorPool can be configured to store replicas (copies) in different racks or different chassis, tolerating entire chassis or rack failure.
Fault Sets: By default StorPool uses a placement policy that distributes user’s data on as many physical drives and servers as possible in order to increase performance and minimize the impact in case of a node failure. When some storage nodes are interrelated and there is a higher chance to fail simultaneously, the placement policy can be tuned by defining 'Fault Sets' - a set of nodes that have a higher probability to fail simultaneously. When fault sets are defined, StorPool will place data on different Fault Sets - in other words there is only one copy of the data in a particular Fault Set. By leveraging Fault Sets a storage cluster can be arranged for example in racks, where each rack represents a separate Fault Set. If an entire rack fails, the placement policy will guarantee there are at least two available copies of the data that reside on the remaining racks.
|
Group Placement
HPE SimpliVity 380's intelligent software features include rack failure protection. Both rack-level and site-level protection within a cluster are administratively determined by placing hosts into groups. Data is balanced appropriately to ensure that each VM is redundantly stored across two separate groups.
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
StorPool implements 3-way or 2-way synchronous replication, meaning multiple full copies/replicas of the data exist.
With 3N the raw storage capacity is approximately 300% of the stored data capacity, not accounting for space saving features that reduce space usage and not including overhead that increases space usage.
For each data copy/replica there is 10% capacity overhead that includes checksums (for end-to-end data integrity), metadata and copy-on-write/thin provisioning overheads and safety. The capacity overheads are taken into account when performing StorPool sizing exercises. When a StorPool quote states a solution supports 100TB of stored data, it is really able to store 100TB of data.
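The stated figures translate into a simple back-of-the-envelope sizing calculation. The sketch below merely restates the numbers from this section (the number of copies plus roughly 10% overhead per copy) and is not an official StorPool sizing tool.

```python
# Back-of-the-envelope raw-capacity estimate for a given amount of stored data.
def raw_capacity_needed(stored_tb, copies=3, per_copy_overhead=0.10):
    """Raw TB needed: each copy stores the data plus ~10% metadata/checksum overhead."""
    return stored_tb * copies * (1 + per_copy_overhead)

print(raw_capacity_needed(100))      # 100 TB stored with 3N -> ~330 TB raw
print(raw_capacity_needed(100, 2))   # 100 TB stored with 2N -> ~220 TB raw
```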
|
Replica (2N) + RAID5: 125-133%
Replica (2N) + RAID6: 120-133%
Replica (2N) + RAID60: 125-140%
The hardware RAID level that is applied depends on drive count in an individual node:
2 drives = RAID1
4-5 drives = RAID5
8-12 drives = RAID6
14-20 drives = RAID60 (2 per RAID6 set)
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks (end-to-end checksums)
Disk scrubbing
StorPool has incorporated a checksum based end-to-end data integrity feature. StorPool Distributed Storage (StorPool) protects data and guarantees data integrity by leveraging a 64-bit checksum and a version number for each sector maintained by StorPool. The mechanism is more extensive than those used by (many) other platforms. It checksums data in the initiator, i.e. it protects against errors in both hardware and the full software stack, not just the storage system itself. The checksum based end-to-end data integrity feature has been designed not to impact storage performance.
If a corrupted copy of data is detected, the copy is invalidated and restored by an undamaged copy of the data.
Data at rest is regularly checked (scrubbed) for errors and recovered in case any corruption is detected.
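The per-sector integrity mechanism can be sketched in a simplified form. The 64-bit checksum below (a truncated SHA-256) and the repair flow are illustrative assumptions and do not reflect StorPool's actual on-disk format.

```python
# Illustrative end-to-end integrity check: verify each sector against a stored
# 64-bit checksum and repair from another replica when a mismatch is found.
import hashlib

def checksum64(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def scrub(sectors, checksums, replica_sectors):
    """Compare every sector with its stored checksum; restore corrupted ones."""
    repaired = 0
    for i, data in enumerate(sectors):
        if checksum64(data) != checksums[i]:
            sectors[i] = replica_sectors[i]          # restore from an undamaged copy
            checksums[i] = checksum64(sectors[i])
            repaired += 1
    return repaired

sectors  = [b"sector-0", b"sector-1"]
sums     = [checksum64(s) for s in sectors]
replicas = list(sectors)
sectors[1] = b"bit-rot!"                             # simulate silent corruption
print(scrub(sectors, sums, replicas))                # -> 1 sector repaired
```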
|
Read integrity checks (CLI)
Disk scrubbing (software)
While writing data, checksums are created and stored as part of the inline deduplication process. When one of the underlying layers detects data corruption, a checksum comparison is performed and when required, another copy of the data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Read integrity checks can be enabled through the CLI.
Disk Scrubbing, termed 'RAID Patrol' by HPE SimpliVity 380, is a background process that is used to perform checksum comparisons of all data stored within the solution. This way stale data is also verified for corruption.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
Built-in (native)
StorPool Distributed Storage (StorPool) supports instantaneous, copy-on-write (CoW) storage snapshots and clones. Creating snapshots can be performed on a per-volume basis with deep chains (e.g. 100+ snapshots of an individual volume) without having a tangible impact on performance.
|
Built-in (native)
HPE SimpliVity 380 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (e.g. VMware Snapshots).
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
There is also the capability to pair the snapshot feature along with asynchronous replication which provides you with the ability to have a third site long distance remote copy in place with its own retention time.
|
Local + Remote
StorPool snapshots can be replicated to a remote StorPool cluster over an encrypted site-to-site (internet) link. After the first sync, StorPool only copies new or changed data rather than the entire data set. There is no fixed primary-secondary relationship between clusters. Snapshots and volumes on individual clusters have independent lifecycles and can be created and deleted without affecting other clusters or the data on them.
|
Local + Remote
HPE SimpliVity 380 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (e.g. VMware Snapshots).
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
Seconds (workload dependent)
Snapshots are created on request via CLI or API. There is no default frequency.
There is a separate service that can create regular snapshots and apply retention policies on the local or remote cluster; the frequency is defined in the snapshot policy. While StorPool can create snapshots with no performance degradation, practical use cases involve hourly and daily snapshots.
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Backups can be scheduled.
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per Volume (LUN)
Per VM/container (e.g. OpenStack, Kubernetes)
StorPool supports per-volume (LUN) snapshots that are fine-grained (4KB) and crash-consistent by default. Higher levels of data consistency can be achieved by orchestration e.g. you can get application-consistent snapshots by first instructing the application to freeze, then taking the snapshot and unfreezing the application.
In many deployments where cloud orchestration platforms are used, there is 1-to-1 relationship between a virtual disk and a volume. In these cases snapshots are being created per virtual disk. Examples of relevant cloud orchestration platforms are OpenStack, CloudStack with KVM, OnApp, OpenNebula and Kubernetes (for persistent volumes).
StorPool also supports crash-consistent snapshots of multiple volumes (LUNs).
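The freeze/snapshot/unfreeze orchestration mentioned above is typically scripted externally. In the sketch below the application freeze/unfreeze hooks and the snapshot endpoint are assumptions used only to show the ordering of the steps, not actual StorPool API calls.

```python
# Hypothetical application-consistent snapshot orchestration (names/endpoints assumed).
import contextlib
import requests

API = "http://storpool-mgmt.example.local:81"        # assumed management endpoint

@contextlib.contextmanager
def frozen(app):
    """Quiesce the application for the duration of the snapshot."""
    app.freeze()                                      # e.g. flush buffers + suspend writes
    try:
        yield
    finally:
        app.unfreeze()                                # resume normal I/O

def snapshot_volume(volume, snap_name):
    """Take a point-in-time snapshot of one volume (assumed endpoint/parameters)."""
    resp = requests.post(f"{API}/volumes/{volume}/snapshot",   # assumed path
                         json={"name": snap_name}, timeout=30)
    resp.raise_for_status()

# Usage sketch:
# with frozen(my_database):
#     snapshot_volume("db-data-01", "db-data-01-snap1")   # application-consistent
```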
|
Per VM
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
|
Built-in (native)
StorPool Distributed Storage (StorPool) provides integrated backup/restore functionality controlled through REST-API or CLI.
For backup purposes commands in the API/CLI can be used to create a crash-consistent snapshot and send the snapshot to a remote site that is also running a StorPool cluster.
For restore purposes commands in the API/CLI can be used to create a volume in the local StorPool cluster based on the contents of a local or remote (backed up) snapshot.
Some of the end-user organizations that leverage StorPool rely entirely on maintaining local and remote snapshots for data protection purposes and thus do not use an external backup/restore application. They have built their own scripting or orchestration around the StorPool API. Other end-user organizations that leverage StorPool use an independent backup/restore solution for data protection purposes.
Remote snapshots are fully independent of local storage and the snapshots stored on it. Snapshots on the remote site can have an independent retention policy. There is no need to keep any snapshot on the local site in order to have a backup (snapshot) on the remote site.
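A backup-and-restore round trip as described here could look roughly like the sketch below. All endpoint paths and parameter names are assumptions made for illustration; the actual calls are documented in the StorPool API reference.

```python
# Hypothetical backup/restore flow against a StorPool-style management API.
import requests

LOCAL = "http://storpool-site-a.example.local:81"    # assumed local cluster API
REMOTE_LOCATION = "site-b"                           # assumed remote cluster id

def backup(volume, snap_name):
    """Create a crash-consistent snapshot locally, then ship it to the remote site."""
    requests.post(f"{LOCAL}/volumes/{volume}/snapshot",          # assumed path
                  json={"name": snap_name}, timeout=30).raise_for_status()
    requests.post(f"{LOCAL}/snapshots/{snap_name}/export",       # assumed path
                  json={"location": REMOTE_LOCATION}, timeout=30).raise_for_status()

def restore(snap_name, new_volume):
    """Instantiate a (local or remote) snapshot as a fresh read/write volume."""
    requests.post(f"{LOCAL}/volumes/create",                     # assumed path
                  json={"name": new_volume, "parent": snap_name},
                  timeout=30).raise_for_status()

# backup("db-data-01", "db-data-01-daily")
# restore("db-data-01-daily", "db-data-01-restored")
```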
|
Built-in (native)
HPE SimpliVity 380 provides native backup capabilities. Its backup feature supports remote-replication, is deduplication aware and data is compressed over the wire.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
Local or Remote
StorPool can create snapshots on a frequent basis and send these snapshots between sites securely and efficiently.
|
Locally
To other SimpliVity sites
To Service Providers
Backup remote-replication is deduplication aware + data is compressed over the wire.
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
Seconds (workload dependent)
Snapshots are created on request via CLI or API. There is no default frequency.
There is a separate service that can create regular snapshots and apply retention policies on the local or remote cluster; the frequency is defined in the snapshot policy. While StorPool can create snapshots with no performance degradation, practical use cases involve hourly and daily snapshots.
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
Crash Consistent (also Group Consistency)
All snapshots in StorPool are crash-consistent. StorPool supports atomic snapshots of multiple volumes – e.g. all virtual disks of a VM can be snapshotted at a single point in time, providing consistent backup and restore for multi-disk systems.
The StorPool REST-API allows creating a snapshot of multiple volumes with a single call by specifying the names of the volumes. The snapshots created in this way store the respective volumes at exactly the same point in time, thus preserving data consistency across an application. There is no requirement to group volumes in advance.
|
vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows)
Hyper-V: File System Consistent (Windows)
HPE SimpliVity 380 provides the option to enable Microsoft VSS integration when configuring a backup policy or when initiating manual backups using the CLI backup command. This ensures application-consistent backups are created for MS Exchange and MS SQL database environments.
In OmniStack 3.6.1 support was added for VSS on virtual machines running SQL Server 2012/2016 on the Windows Server 2012 R2 operating system.
HPE SimpliVity 380 also still provides the option to create crash-consistent backups by setting consistency to 'none'.
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire Volume
Commands in the API/CLI can be used to create a volume in the local cluster that is based on the contents of a local or remote (backed up) snapshot.
In addition snapshots can be instantiated (copy-on-write cloned) as read-write volumes. When doing so an end-user gains file-level access to the backed-up filesystem of a specific volume.
|
vSphere: Entire VM or Single File
Hyper-V: Entire VM
With HPE SimpliVity 380 snapshots and backups are the same.
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire Volume: API or CLI
Single File: Multi-step
Backed up snapshots are read-only when accessed from the original site. Restoring a volume from a backup is a matter of instantiating (cloning) the snapshot in the original site. This is controlled through API or CLI.
Backed up snapshots can also be transferred to a remote site when needed. This is controlled through API or CLI as well.
|
Entire VM: GUI, CLI and API
Single File: GUI
Single File restores can be performed entirely from the vSphere Web Client GUI due to the plugin integration.
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (native)
DataCore SANsymphony's remote replication function, Asynchronous Replication, is called upon when secondary copies will be housed beyond the reach of Synchronous Mirroring, as in distant Disaster Recovery (DR) sites. It relies on a basic IP connection between locations and works in both directions. That is, each site can act as the disaster recovery facility for the other. The software operates near-synchronously, meaning that it does not hold up the application waiting on confirmation from the remote end that the update has been stored remotely.
|
Built-in (native)
StorPool Distributed Storage (StorPool) provides a native remote replication capability for disaster recovery (DR) purposes. This capability uses the same snapshot and transfer techniques as the backup functionality, but differs in terms of what snapshots stored in the DR location are used for.
When a DR event occurs, snapshots are instantiated as volumes and compute workloads (VMs/containers) are started from those volumes. Fail-over is typically triggered manually and the exact DR procedure depends on the application landscape and cloud management platform (CMP) in use.
|
Built-in (native)
HPE SimpliVity 380 provides native DR and replication capabilities.
|
|
Remote Replication Scope
Details
|
To remote sites
To MS Azure Cloud
On-premises deployments of DataCore SANsymphony can use Microsoft Azure cloud as an added replication location to safeguard highly available systems. For example, on-premises stretched clusters can replicate a third copy of the data to MS Azure to protect against data loss in the event of a major regional disaster. Critical data is continuously replicated asynchronously within the hybrid cloud configuration.
To allow quick and easy deployment a ready-to-go DataCore Cloud Replication instance can be acquired through the Azure Marketplace.
MS Azure can serve only as a data repository. This means that VMs cannot be restored and run in an Azure environment in case of a disaster recovery scenario.
|
To remote sites
Single Site DR = 1-to-1
Multiple Site DR = 1-to many, many-to 1
|
To remote sites
|
|
Remote Replication Cloud Function
Details
|
Data repository
All public clouds can only serve as data repository when hosting a DataCore instance. This means that VMs cannot be restored and run in the public cloud environment in case of a disaster recovery scenario.
In the Microsoft Azure Marketplace there is a pre-installed DataCore instance (BYOL) available named DataCore Cloud Replication.
BYOL = Bring Your Own License
|
DR-site (several cloud providers)
Several, mostly European, cloud providers such as MetaNet, CloudSigma, Amito and Dustin leverage StorPool. Therefore on-premises StorPool clusters operated by end-user organizations are able to replicate to the StorPool clusters operated by these public cloud providers. This allows for setting up Disaster Recovery scenarios.
|
N/A
HPE SimpliVity 380 does not support replication to hyperscale public cloud targets (AWS, Azure, GCP) at this time.
|
|
Remote Replication Topologies
Details
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to many, many-to 1
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to many, many-to 1
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to many, many-to 1
HPE SimpliVity 380 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
|
|
Remote Replication Frequency
Details
|
Continuous (near-synchronous)
SANsymphony Asynchronous Replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes). End-users can inject custom consistency checkpoints based on CDP technology which has no minimum time slot/frequency.
|
Seconds (workload dependent)
There is no default frequency. For the minimum RPO on a particular data set a continuous remote snapshot can be created. This means that after the transfer of the last snapshot completes, the next snapshot is created immediately. Depending on the size of the cluster, size of the volumes, amount of write operations and bandwidth of the link to the remote cluster, the time between snapshots can be from several seconds to several minutes.
This Backup/DR process can be implemented by the customer with external automation tools leveraging the StorPool REST-API or configured in the included snapshot/remote backup scheduling and retention module.
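External automation of this 'continuous remote snapshot' pattern amounts to a simple loop: as soon as one transfer completes, the next snapshot is taken and sent. In the sketch below the snapshot and transfer functions are placeholders for the real StorPool API/CLI calls, and retention is reduced to keeping the last N snapshots; it is an illustration, not production tooling.

```python
# Illustrative continuous-replication loop (functions stand in for real API/CLI calls).
import time
from collections import deque

def take_snapshot(volume):            # placeholder for the real snapshot call
    return f"{volume}-{int(time.time())}"

def send_to_remote(snapshot):         # placeholder for the real transfer call
    time.sleep(1)                     # transfer time depends on delta size and bandwidth

def replicate_forever(volume, keep=24):
    history = deque(maxlen=keep)      # crude retention: keep only the last 24 snapshots
    while True:
        snap = take_snapshot(volume)  # the next snapshot starts right after the
        send_to_remote(snap)          # previous transfer finished (minimum RPO)
        history.append(snap)

# replicate_forever("db-data-01")     # run as a long-lived service
```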
|
GUI: 10 minutes (Asynchronous)
CLI: 1 minute (Asynchronous)
Continuous (Stretched Cluster)
Although setting the remote replication frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
|
Remote Replication Granularity
Details
|
VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per Volume (LUN)
Remote replication can be setup for an individual volume or a volume set containing multiple volumes.
StorPool detects and sends only the deltas, i.e. the data that has changed since the last successful remote replication run. Deltas are discovered by examining the metadata in RAM, so the remote replication mechanism is a very lightweight operation for the primary StorPool cluster.
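Sending only the deltas boils down to comparing the block metadata of the last successfully replicated snapshot with that of the new one. The toy comparison below (block maps as plain dictionaries) is an assumption for illustration; the real mechanism works on in-RAM metadata as described above.

```python
# Toy delta computation between two snapshot block maps (block -> content version).
def changed_blocks(previous_map, current_map):
    """Return the blocks that must be shipped to bring the remote copy up to date."""
    deltas = []
    for block, version in current_map.items():
        if previous_map.get(block) != version:   # new or rewritten block
            deltas.append(block)
    return deltas

prev = {0: "v1", 1: "v1", 2: "v1"}
curr = {0: "v1", 1: "v2", 2: "v1", 3: "v1"}      # block 1 rewritten, block 3 added
print(changed_blocks(prev, curr))                 # -> [1, 3]
```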
|
VM
|
|
Consistency Groups
Details
|
Yes
SANsymphony provides the option to use Virtual Disk Grouping to enable end-users to restore multiple Virtual Disks to the exact same point-in-time.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
Yes
All snapshots in StorPool are crash-consistent. StorPool supports atomic snapshots of multiple volumes – e.g. all virtual disks of a VM can be snapshotted at a single point in time, providing consistent backup and restore for multi-disk systems.
The StorPool REST-API allows creating a snapshot of multiple volumes with a single call by specifying the names of the volumes. The snapshots created in this way store the respective volumes at exactly the same point in time, thus preserving data consistency across an application. There is no requirement to group volumes in advance.
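A minimal sketch of what such an atomic multi-volume snapshot request could look like is shown below. The URL and payload layout are hypothetical placeholders; only the concept (one call, many volumes, a single point in time) follows the description above.

```python
# Sketch of an atomic multi-volume snapshot request (hypothetical endpoint).
import requests

API = "https://storpool-mgmt.example.com/api"    # hypothetical endpoint
AUTH = {"Authorization": "Bearer <token>"}       # placeholder credentials

def snapshot_vm_disks(vm_name: str, volumes: list[str]) -> dict:
    """Snapshot all virtual disks of a VM in one atomic call."""
    payload = {
        "volumes": volumes,                      # e.g. every vdisk of the VM
        "tags": {"vm": vm_name, "purpose": "crash-consistent-backup"},
    }
    resp = requests.post(f"{API}/snapshots/group", json=payload, headers=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()                           # snapshot names, one per volume

if __name__ == "__main__":
    print(snapshot_vm_disks("sql01", ["sql01-os", "sql01-data", "sql01-log"]))
```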
|
No
Protection is on a per-VM basis only; no logical groupings of VMs can be created.
|
|
|
VMware SRM (certified)
DataCore provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). DataCore SRA 2.0 (SANsymphony 10.0 FC/iSCSI) shows official support for SRM 6.5 only. It does not support SRM 8.2 or 8.1.
There is no integration with Microsoft Azure Site Recovery (ASR). However, SANsymphony can be used with the control and automation options provided by Microsoft System Center (e.g. Operations Manager combined with Virtual Machine Manager and Orchestrator) to build a DR orchestration solution.
|
N/A
StorPool does not provide any native DR orchestration tooling and is not supported by VMware Site Recovery Manager (SRM).
|
RapidDR (native; VMware only)
NEW
HPE SimpliVity has developed its own DR orchestration software named 'RapidDR'. The current version is RapidDR 3.5.1, released in December 2020. Microsoft Hyper-V and VMware NSX are not supported.
HPE SimpliVity RapidDR includes an intuitive planning guide that allows for the creation of DR workflows in five easy steps. RapidDR provides one-click activation for recovery of all virtualized workloads according to the plan. RapidDR also creates detailed historical reports for compliance audits. Lastly, RapidDR provides the ability to test existing recovery plans without impacting running workloads. This allows an organization to confirm the IT readiness for a disaster by testing recovery plans in advance.
HPE SimpliVity RapidDR v2.0 introduced:
- priority-based parallel recovery of VMs, significantly reducing failover time (up to 80% compared to sequential failover time).
- setting recovery order priority at both the recovery group level and the individual VM level, where the recovery group level priority takes precedence.
- proactive assessment of the source site configuration.
- choice of four recovery actions if VM recovery fails.
HPE SimpliVity RapidDR v2.1 introduced:
- automated failback functionality.
- increased maximum number of VMs per recovery plan from 150 to 600.
- increased number of recovery groups per recovery plan from 20 to 50.
- increased number of VMs in the entire recovery environment from 300 to 1000.
HPE SimpliVity RapidDR v2.5 introduced:
- new and improved user interface with expandable menu options and intuitive workflows.
- option to validate failover and failback settings (check and report inconsistencies between the recovery configuration file and the recovery site).
HPE SimpliVity RapidDR v2.5.1 introduced:
- automated PowerCLI configuration during installation
- support for RPO functionality (minimum RPO is 10 minutes)
- simplified VM and Recovery Group settings page
HPE SimpliVity RapidDR v3.0 introduced:
- support for Microsoft Hyper-V hypervisor
- support for 50 VMs per recovery plan in a Hyper-V environment
- Quick Plan Editor, which allows editing of VM login credentials in recovery plans created for VMware
- option to generate Audit Report pdf which contains the entire recovery execution sequence
- significant improvement of recovery times for VMware based recovery plans
- improved error reporting and troubleshooting information
HPE SimpliVity RapidDR v3.0.1 introduced software fixes and no new features.
HPE SimpliVity RapidDR v3.1 introduced:
- encrypted passwords in recovery plans providing enhanced security
- revamped user interface for enhanced user experience
- recovery of Windows guest VMs by using non-administrator user accounts
- Log Mode button to choose the log level for all RapidDR logs.
- a single-click log collection button for downloading all of the RapidDR logs in zip format
- export/import of recovery configuration settings for VMs from an Excel sheet during plan creation or modification
- support for centrally managed federation
HPE SimpliVity RapidDR v3.5.0 introduces:
- NIC specific gateway and DNS to be used during recovery
- validation of a guest VM's network settings after it is recovered.
- copying or moving backups from HPE SimpliVity clusters to HPE StoreOnce.
- recovery from external store (HPE StoreOnce appliance) backups
- user intuitive UX features for seamless navigation within and across workflows
- viewing recovery plan details and all plan activity
- Microsoft Hyper-V is no longer supported (!)
HPE SimpliVity RapidDR v3.5.1 introduces:
- support for DVS
- support for CentOS 8 and RHEL 8 guest VMs
- support for creation of VM backups in Test Failover and Test Failback workflows
HPE SimpliVity RapidDR is optional and requires separate software licenses.
Alternatively, DR orchestration can also be built using vRealize Automation (vRA).
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
DataCore SANsymphony is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/kb/2149740.
|
Linux KVM: Yes
Stretched StorPool clusters are possible in both single-layer (compute+storage) deployments and dual-layer (storage-only) deployments.
Stretched clusters have very strict requirements with regard to latency and reliability of the network between cluster nodes. Stretched clusters are created on a case-by-case basis after examining the exact network properties.
|
VMware vSphere: Yes (certified)
Microsoft Hyper-V: No
HPE SimpliVity is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/s/article/51462
HPE SimpliVity offers the ability to convert existing non-stretched cluster HPE SimpliVity deployments to stretched cluster deployments. Workloads can be automatically distributed among availability zones. Availability Zone management can be performed from the HPE SimpliVity vCenter tab.
|
|
|
2+sites = two or more active sites, 0/1 or more tie-breakers
Theoretically up to 64 sites are supported.
SANsymphony does not require a quorum or tie-breaker in stretched cluster configurations, but one can be deployed as an optional component. The Virtual Disk Witness can provide a tie-breaker role if, for instance, redundant inter-site paths are not implemented. The tie-breaker node (server or device) must be other than the two nodes presenting a virtual disk. Access to the Virtual Disk Witness determines storage node behavior.
There are 3 ways to configure the stretched cluster without any tie-breakers:
1. Default: in a split-brain scenario both sides stay active, allowing upper infrastructure layers (OS/database/application) to make a decision (eg. clustering principles). In any case SANsymphony prevents a merge when there is a risk to data integrity, and the end-user has to decide how to proceed next (i.e. which side holds the authoritative data).
2. Select one side to go inaccessible
3. Select both sides to go inaccessible.
|
2+sites = two or more active sites
A single StorPool cluster can be stretched to more than one physical location, provided there is high-bandwidth, low-latency connectivity between the nodes. Two, three, or more sites are possible. When a stretched cluster is designed, maintaining the cluster quorum in case of site or network failure is considered. This means that more than 50% of the storage nodes have to be available when such a failure occurs.
|
3-sites: two active sites + tie-breaker in 3rd site (optional)
HPE SimpliVity calls the tie-breaker 'arbiter'. The arbiter automates failover decisions in order to avoid split-brain scenarios caused by network partitions and remote site failures. The arbiter can be either on-premises or hosted as a cloud instance, and is recommended but not a hard requirement. The arbiter is a small Windows service, so it can run almost anywhere, as long as it has network access to both active sites. A single arbiter may be shared by multiple clusters in the federation. Installing an additional arbiter for every 4,000 virtual machines helps ensure best performance and distributes workloads.
In VMware vSphere environments the arbiter is a hard requirement for 2-node clusters and stretched cluster configurations. Furthermore, HPE recommends using the arbiter in non-stretched cluster configurations with 4 HPE OmniStack hosts.
Currently for Microsoft Hyper-V environments (2-node clusters) the arbiter is a hard requirement.
The HPE OmniStack 3.7.9 version of Arbiter introduced support for clusters with hosts that use HPE OmniStack 3.7.8 and other clusters with hosts that use HPE OmniStack 3.7.9. The federation can contain a mix of clusters with those two versions. However, all the hosts within the same cluster must use the same version.
|
|
|
<=5ms RTT (targeted, not required)
RTT = Round Trip Time
In practice, the user/application with the lowest tolerable write latency defines the acceptable RTT and therefore the distance between sites.
|
<=1ms RTT (targeted, not required)
Typical use cases for StorPool are those where high-performance storage with minimal latency is required. While StorPool can tolerate higher network latency, the added latency inevitably impacts the performance of the storage system. In the typical use cases, latency between nodes is kept below 100 microseconds round-trip-time.
|
<=5ms RTT
RTT = Round Trip Time
An RTT of 5ms or less is required between the active sites.
|
|
|
<=32 hosts at each active site (per cluster)
The maximum is per cluster. The SANsymphony solution can consist of multiple stretched clusters with a maximum of 64 nodes each.
|
<=32 nodes at each site in a synchronous stretched cluster configuration
Maximum number of nodes per cluster is 63, max 64 clusters in federation.
Two different technologies can be used to deploy a storage system over multiple sites.
The first is a simple stretched cluster where nodes of one cluster are installed at different sites. This requires persistent reliable low-latency connectivity between all nodes. This is used in small deployments, typically with 3 sites with 3, 6, or 9 nodes.
When a larger storage system is needed, a 'federated' deployment is used. This is a federation of multiple storage clusters that can operate independently, but with a common namespace, which allows all clusters to operate like one large cluster. This mode allows up to 64 clusters with 63 nodes each (the supported maximum is 4,032 nodes = 64 clusters x 63 nodes). Depending on the specific requirements, a single cluster or a federation of multiple clusters may be the better fit.
|
<=8 hosts at each active site
HPE SimpliVity 380 allows up to 16 nodes to be placed across two datacenters.
|
|
SC Data Redundancy
Details
|
Replicas: 1N-2N at each active site
DataCore SANsymphony provides enhanced stretched cluster availability by offering local fault protection with In Pool Mirroring. With In Pool Mirroring you can choose to mirror the data inside the local Disk Pool as well as mirror the data across sites to a remote Disk Pool. In the remote Disk Pool data is then also mirrored. All mirroring happens synchronously.
1N-2N: With SANsymphony Stretched Clustering, there can be either 1 instance of the data at each site (no In Pool Mirroring) or 2 instances of the data at each site (In Pool RAID-1 Mirroring).
|
Replicas: 1N-2N at each active site
Although two-way replication (2N) is supported, it is recommended only in limited use cases due to the lower data protection level.
Stretched clusters behave exactly the same as non-stretched clusters. Replication policies and data locality can be managed using Fault Sets and Placement Groups. Fault Sets can be defined to store copies of the data on nodes in different sites.
|
Replicas: 1N at each active site
+ Hardware RAID (5, 6 or 60)
In the case of stretched clustering, 1N means that there is only one instance of the data available at each of the active sites.
With hardware RAID (5 or 6) implemented, data is protected across cluster nodes within each active site.
|
|
|
Data Services
|
|
|
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
Software (integration)
NEW
SANsymphony provides integrated and individually selectable inline deduplication and compression. In addition, SANsymphony is able to leverage post-processing deduplication and compression options available in Windows 2016/2019 as an alternative approach.
|
N/A
StorPool Distributed Storage (StorPool) does not have any native deduplication and/or compression capabilities. StorPool has implemented only those space saving features that do not compromise storage performance or take significant amounts of RAM/CPU. This means performance is the primary consideration.
StorPool provides the following space saving features:
- thin provisioning;
- TRIM/discard;
- zeroes detection.
The measured level of space saving delivered by these features is up to 6 times (minimum measured gain is 2 times) for virtualized public/private environments.
|
Hardware (PCIe)
HPE SimpliVity 380 has its deduplication- and compression engine embedded into a PCIe card, consisting of FPGA, NVRAM and DRAM, thus all deduplication/compression is hardware accelerated and fully offloaded from the hypervisor host.
|
|
Dedup/Compr. Function
Details
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
N/A
StorPool Distributed Storage (StorPool) does not have any native deduplication and/or compression capabilities.
|
Efficiency and Performance
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
HPE SimpliVity 380 focusses on both aspects.
|
|
Dedup/Compr. Process
Details
|
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
DataCore SANSymphony 10 PSP12 and above leverage both inline deduplication and compression, as well as post process deduplication and compression techniques.
With inline deduplication incoming writes first hit the memory cache of the primary host and are replicated to the cache of a secondary host in an un-deduplicated state. After the blocks have been written to both memory caches, the primary host acknowledges the writes back to the originator. Each host then destages the written blocks to the persistent storage layer. During destaging, written blocks are deduplicated and/or compressed.
Windows Server 2019 deduplication is performed outside of IO path (post-processing) and is multi-threaded to speed up processing and keep performance impact minimal.
|
N/A
StorPool provides the following space saving features:
- thin provisioning;
- TRIM/discard;
- zeroes detection.
Discard commands are initiated by the user of the storage system.
Write operations containing zeroes are detected as early as possible and converted to a metadata-only write operation.
Thin provisioning is on all the time.
|
Deduplication: Inline (on-ack)
Compression: Inline (on-ack)
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
|
Dedup/Compr. Type
Details
|
Optional
NEW
By default, deduplication and compression are turned off. For both inline and post-process, deduplication and compression can be enabled.
For inline deduplication and compression the feature can be turned on per node. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization. Either deduplication or compression or both can be selected per individual vDisk. Pools can host both capacity optimized and non-capacity optimized vDisks at the same time. The optional capacity optimization settings can be added/changed/removed during operation for each vDisk.
For post-processing the feature can be enabled per pool. All vDisks in that pool would be deduplicated and compressed. Each pool is an independent deduplication domain. This means only data in the pool is capacity optimized, but not across pools. Additionally, for post-processing capacity optimization can be scheduled so admins can decide when deduplication should run.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
N/A
StorPool's space saving features (thin provisioning, TRIM/discard, zeroes detection) do not have a negative performance impact.
|
Always-on
HPE SimpliVity 380's data deduplication and compression features are always on and cannot be disabled, as they are an integral component of the platform architecture providing both performance and efficiency. This also provides end-user simplicity.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
|
N/A
StorPool Distributed Storage (StorPool) does not have any native deduplication and/or compression capabilities.
|
All data (memory-, flash- and persistent data layers)
Each node maintains its own deduplication database in order to preserve data availability when a node in the Federation fails.
|
|
Dedup/Compr. Radius
Details
|
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
NEW
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
For inline deduplication and compression raw physical disks are added to a capacity optimization pool. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization.
The post-processing capability provided through Windows Server 2016/2019 is highly scalable and can be used with volumes up to 64 TB and files up to 1 TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
N/A
StorPool Distributed Storage (StorPool) does not have any native deduplication and/or compression capabilities.
|
Federation
HPE SimpliVity 380 inline deduplication works globally, which means that deduplication happens across the entire data set within a federation (a single federation consists of one or multiple clusters). The data set includes primary data as well as backup copies.
HPE SimpliVity 380 inline deduplication also works across sites. This means that SimpliVity OmniStack will talk to the other site and only send changed blocks.
Because the HPE SimpliVity 380 inline deduplication and compression performed by the hardware accelerator card provides essential value and does not incur any performance penalty, it cannot be turned off.
|
|
Dedup/Compr. Granularity
Details
|
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
NEW
With inline deduplication and compression, the data is organized in 128 KB segments. Depending on the optimization setting, a write into such a segment first gets compressed (when compression is selected) and then a hash is generated. If the hash is unique, the 128 KB segment is written back and the hash is added to the deduplication hash-table. If the hash is not unique, the segment is referenced in the deduplication hash table and discarded. The smallest chunk in the segment can be 4 KB.
For post-processing the system leverages deduplication in Windows Server 2016/2019: files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
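To make the inline mechanism described above more tangible, the toy model below compresses a 128 KB segment, hashes the result, and either stores the segment or merely references an existing identical one. This is an explanatory sketch, not DataCore's actual implementation; the hash function and in-memory tables are illustrative choices.

```python
# Toy illustration of the inline deduplication/compression flow: compress a
# 128 KB segment, hash it, then either store it or only keep a reference.
import hashlib
import zlib

SEGMENT_SIZE = 128 * 1024           # 128 KB segments

dedup_table: dict[str, bytes] = {}  # hash -> stored (compressed) segment
references: list[str] = []          # logical layout: one hash per written segment

def write_segment(segment: bytes, compress: bool = True) -> None:
    data = zlib.compress(segment) if compress else segment
    digest = hashlib.sha256(data).hexdigest()
    if digest not in dedup_table:   # unique: persist the segment
        dedup_table[digest] = data
    references.append(digest)       # duplicate: only a reference is kept

if __name__ == "__main__":
    block = b"A" * SEGMENT_SIZE
    for _ in range(10):             # ten identical logical writes...
        write_segment(block)
    print(len(references), "logical segments,", len(dedup_table), "stored")  # 10 vs 1
```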
|
N/A
StorPool Distributed Storage (StorPool) does not have any native deduplication and/or compression capabilities.
|
4-8 KB variable block size
HPE SimpliVity deduplication uses 4KB - 8KB variable block segments.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Microsoft provides the Deduplication Evaluation Tool (DDPEVAL) to assess the data in a particular volume and predict the dedup ratio.
|
N/A
StorPool Distributed Storage (StorPool) does not have any native deduplication and/or compression capabilities.
|
90% (10:1) capacity savings across storage and backup combined
Capacity space savings are due to deduplication+compression and include both storage and backups.
90% savings is the equivalent of 10:1 efficiency in the Datacenter panel in the HPE SimpliVity tab within the vSphere Client. Efficiency is calculated across all HPE SimpliVity systems in a VMware Datacenter. It’s the ratio of storage capacity that would have been used on a comparable traditional storage solution to the physical storage that is actually used in the HPE SimpliVity hyperconverged infrastructure. ‘Comparable traditional solutions’ are storage systems that provide VM-level synchronous replication for storage and backup and do not include any deduplication or compression capability.
The savings/efficiency are based on the assumption that you configure a backup policy to take at least one HPE SimpliVity backup per day of every virtual machine on every HPE SimpliVity system in a given VMware Datacenter with those backups retained for 30 days. If backups are performed more frequently and/or retained for a longer period, you will enjoy even greater efficiency. The data change rate is assumed to be up to 5% per day with up to 30% growth rate of the data over a duration of 30 contiguous days.
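For illustration, the snippet below runs the kind of back-of-the-envelope calculation implied by the description above: the capacity a comparable traditional solution would need (synchronous replica plus one backup per day for the retention period, without deduplication or compression) divided by the physical capacity actually used. The model and the input numbers are simplified assumptions for the example only.

```python
# Illustrative efficiency-ratio calculation; the model and figures are
# simplified assumptions, not HPE's exact accounting.
def efficiency_ratio(primary_tb: float, daily_change_rate: float,
                     retention_days: int, physical_used_tb: float) -> float:
    # Traditional solution: primary data plus a synchronous replica...
    traditional_tb = primary_tb * 2
    # ...plus one backup per day, each adding roughly the daily changed data.
    traditional_tb += primary_tb * daily_change_rate * retention_days
    return traditional_tb / physical_used_tb

if __name__ == "__main__":
    # 20 TB of VM data, 5% daily change, 30-day retention, 6 TB physically used.
    print(f"{efficiency_ratio(20, 0.05, 30, 6):.1f}:1")   # ~11.7:1 in this example
```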
|
|
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. It depends on the specific use case and end-user environment if this makes sense. When end-users want to isolate new workloads and corresponding data on new nodes, data rebalancing is not used.
|
Full
When one or more nodes or drives are added or removed from the cluster, data is automatically redistributed evenly across all available nodes and drives.
|
Partial
Data that is already present before adding a node is not rebalanced across all nodes within a Federation. This is in accordance with HPE SimpliVity 380's data locality strategy. However, data is automatically rebalanced across all nodes before removing/evicting a node from a Federation.
Data can be rebalanced at any time, but currently requires an HPE SimpliVity 380 Support engagement. Some customers have regular, support-initiated cadences for rebalancing.
|
|
|
Yes
DataCore SANsymphony's Auto-Tiering is a real-time intelligent mechanism that continuously positions data on the appropriate class of storage based on how frequently the data is accessed. Auto-Tiering leverages any combination of Flash and traditional disk technologies, whether internal or array based, with up to 15 different storage tiers that can be defined.
As more advanced storage technologies become available, existing tiers can be modified as necessary and additional tiers can be added to further diversify the tiering architecture.
|
Yes
Volumes can be moved online (no service interruption) between storage tiers and disk pools.
|
N/A
The HPE SimpliVity 380 and HPE SimpliVity G storage architecture is based on a single storage layer (SSD) and hence does not include multiple persistent storage layers to distribute data across.
The HPE SimpliVity 380 H storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
|
|
|
Performance |
|
|
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
DataCore SANsymphony iSCSI and FC are fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero
Note: DataCore SANsymphony does not support Thick LUNs.
DataCore SANsymphony is also fully qualified for Microsoft Hyper-V 2012 R2 and 2016/2019 ODX and UNMAP/TRIM.
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
|
OpenNebula, OnApp, OpenStack, CloudStack
StorPool has deep integrations with OpenNebula, OnApp, OpenStack, CloudStack and other cloud orchestration platforms. Cloning and snapshotting are performed in the StorPool storage system and offloaded from the cloud orchestration platform.
SCSI UNMAP is supported for block device driver and iSCSI.
|
vSphere: VMware VAAI-NAS (Limited)
GUI integrated tasks/commands
HPE SimpliVity 380 is qualified for: File Cloning.
HPE SimpliVity 380 offers an alternative task/command set through the vSphere management interface that provides instant offloading.
|
|
|
IOPs and/or MBps Limits
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical clients/hosts.
2. Ability to set guarantees to ensure service levels for mission-critical clients/hosts.
SANsymphony currently supports only the first method. Although SANsymphony does not provide support for the second method, the platform does offer some options for optimizing performance for selected workloads.
For streaming applications which burst data, it’s best to regulate the data transfer rate (MBps) to minimize their impact. For transaction-oriented applications (OLTP), limiting the IOPs makes most sense. Both parameters may be used simultaneously.
DataCore SANsymphony ensures that high-priority workloads competing for access to storage can meet their service level agreements (SLAs) with predictable I/O performance. QoS Controls regulate the resources consumed by workloads of lower priority. Without QoS Controls, I/O traffic generated by less important applications could monopolize I/O ports and bandwidth, adversely affecting the response and throughput experienced by more critical applications. To minimize contention in multi-tenant environments, the data transfer rate (MBps) and IOPs for less important applications are capped to limits set by the system administrator. QoS Controls enable IT organizations to efficiently manage their shared storage infrastructure using a private cloud model.
More information can be found here: https://docs.datacore.com/SSV-WebHelp/quality_of_service.htm
In order to achieve consistent performance for a workload, a separate Pool can be created where selected vDisks are placed. Alternatively 'Performance Classes' can be assigned to differentiate between data placement of multiple workloads.
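As a rough sketch of how such a cap could be automated, the example below throttles a lower-priority host group over REST. The base URL, resource path and field names are hypothetical placeholders; in practice these limits are set through the SANsymphony console, its PowerShell cmdlets or its REST-APIs.

```python
# Sketch of capping a lower-priority host group via QoS limits.
# The REST path and field names are hypothetical placeholders.
import requests

API = "https://sansymphony.example.com/RestService/rest.svc/1.0"  # hypothetical base URL
AUTH = ("dcsadmin", "password")                                    # placeholder credentials

def cap_host_group(group_id: str, max_iops: int, max_mbps: int) -> None:
    """Limit both IOPS and data transfer rate for a group of hosts."""
    payload = {"MaxIOPs": max_iops, "MaxDataTransferRateMBps": max_mbps}
    resp = requests.put(f"{API}/hostgroups/{group_id}", json=payload,
                        auth=AUTH, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    # Throttle a test/dev host group so it cannot crowd out production I/O.
    cap_host_group("TestDevHosts", max_iops=5000, max_mbps=200)
```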
|
IOPs and/or MBps Limits
Fair-sharing (built-in)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
StorPool supports only the first method currently and focusses on setting IOPS and MB/s limits per volume. It is not possible to guarantee a certain amount of IOPS for any given volume, unless all volumes in the system are IOPS-limited.
Furthermore, in StorPool there is built-in fairness between volumes. There are no privileged volumes and all IOPS operations are completed as fast as possible.
The QoS in StorPool is achieved by templates by which it is possible to set various limitations (IOPS and MB/s) for volumes.
StorPool Distributed Storage can apply hard limits on IOPS and bandwidth (MB/s) per volume. For example, if there are 10 virtual machines (VMs) connected to 10 virtual disks, you can configure IOPS and MB/s QoS limits for all 10 virtual disks. As long as there is sufficient storage/network capacity to handle the total IOPS and MB/s set for all the virtual disks, StorPool Distributed Storage fairly shares the available storage/network resources between the volumes. However, if components of the network equipment or underlying hardware used for storage fail, StorPool cannot guarantee that these limits will be kept.
In addition to IOPS and MB/s, StorPool provides volume-based QoS settings for size, number of replicas and data placement policy.
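A minimal sketch of applying per-volume limits programmatically is shown below. The endpoint and parameter names are hypothetical placeholders; in StorPool such limits are normally carried by the template a volume is associated with.

```python
# Sketch of applying per-volume IOPS/MB/s caps (hypothetical endpoint/fields).
import requests

API = "https://storpool-mgmt.example.com/api"    # hypothetical endpoint
AUTH = {"Authorization": "Bearer <token>"}       # placeholder credentials

def limit_volume(volume: str, iops: int, mbps: int) -> None:
    """Apply hard IOPS and bandwidth limits to a single volume."""
    payload = {"iops": iops, "bw": mbps * 1024 * 1024}   # bandwidth in bytes/s
    resp = requests.post(f"{API}/volumes/{volume}/update", json=payload,
                         headers=AUTH, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    # Cap every vdisk of a noisy tenant at 2,000 IOPS and 100 MB/s.
    for vdisk in ("tenant7-web", "tenant7-db", "tenant7-scratch"):
        limit_volume(vdisk, iops=2000, mbps=100)
```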
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
HPE SimpliVity 380 currently does not offer any QoS mechanisms.
|
|
|
Virtual Disk Groups and/or Host Groups
SANsymphony QoS parameters can be set for individual hosts or groups of hosts as well as for groups of Virtual Disks for fine grained control.
In a VMware VVols (=Virtual Volumes) environment a vDisk corresponds 1-to-1 to a virtual disk (.vmdk). Thus virtual disks can be placed in a Disk Group and a QoS Limit can then be assigned to it. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and QoS Limits can be applied on the virtual disk level. The 1-to-1 alignment is realized by installing the DataCore Storage Management Provider in SCVMM.
|
Per volume
Per vdisk (CMPs)
Because StorPool presents block-based storage volumes, QoS Policies can be applied to VMware datastores as well as Raw Device Mappings (RDMs). This also extends to individual vdisks when Cloud Management Platforms (eg. OpenNebula, OpenStack, CloudStack, and OnApp) are leveraged. For a demonstration of this please view: https://www.youtube.com/watch?v=WZ6UggyVEfg
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
HPE SimpliVity 380 currently does not offer any QoS mechanisms.
|
|
|
Per VM/Virtual Disk/Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
In SANsymphony 'Flash Pinning' can be achieved using one of the following methods:
Method #1: Create a flash-only pool and migrate the individual vDisks that require flash pinning to the flash-only pool. When using a VVOL configuration in a VMware environment, each vDisk represents a virtual disk (.vmdk). This method guarantees all application data will be stored in flash.
Method #2: Create auto-tiering pools with at least 1 flash tier. Assign the Performance Class “Critical” to the vDisks that require flash pinning and place them in the auto-tiering pool. This will effectively and intelligently put as much of the data that resides in the vDisk in the flash tier, as long as the flash tier has enough space available. Therefore this method is on a best-effort basis and dependent on correct sizing of the flash tier(s).
Methods #1 and #2 can be used side-by-side in the same DataCore environment.
|
Yes
Every volume (LUN) can be dynamically re-assigned to different performance tiers.
Every volume in StorPool is associated with a 'template'. The template specifies the replication level, the drive pools (aka placement groups) to use, and the default IOPS and MB/s limits. Multiple templates can be configured in a StorPool storage system. For example one template could be created for an NVMe storage pool, another template could be for an SSD-HDD hybrid pool (with 1-2 copies on SSDs and 1-2 copies on HDDs) and yet another template could be for HDD-only storage.
A volume can be moved between templates online, without interrupting the storage service. In most cases a volume (LUN) in StorPool corresponds to one virtual disk of a VM or one persistent volume of a container/pod.
Drive pools (placement groups) can be changed online, without interrupting the service, for example to add or remove storage nodes and individual drives (SSDs, HDDs) from them. Templates can also be changed to refer to different drive pools. For example if the user decides that a large group of volumes needs to be moved to different media.
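The sketch below illustrates the workflow of moving a volume to a different tier by changing its template online. Endpoint and field names are hypothetical placeholders for illustration only.

```python
# Sketch of re-tiering a volume by changing its template online
# (hypothetical endpoint and fields).
import requests

API = "https://storpool-mgmt.example.com/api"    # hypothetical endpoint
AUTH = {"Authorization": "Bearer <token>"}       # placeholder credentials

def retier_volume(volume: str, template: str) -> None:
    """Re-associate a volume with another template (e.g. hybrid -> all-NVMe).

    The change is applied online; data moves to the placement groups of the
    new template without interrupting the storage service.
    """
    resp = requests.post(f"{API}/volumes/{volume}/update",
                         json={"template": template}, headers=AUTH, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    retier_volume("analytics-scratch", template="nvme-3copies")
```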
|
Not relevant (All-Flash only)
The HPE SimpliVity 380 platform is not available as a hybrid (flash+magnetic) configuration and as such has no need for a 'Flash Pinning' feature.
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
SANsymphony 10.0 PSP9 introduced native encryption when running on Windows Server 2016/2019.
|
N/A
StorPool does not have native data encryption capabilities. Adding data encryption requires encryption-capable storage hardware (e.g. self-encrypting drives).
|
Built-in (native)
|
|
Data Encryption Options
Details
|
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: In SANsymphony deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
Software: SANsymphony provides software-based data-at-rest encryption that is XTS-AES 256bit compliant.
|
Hardware: Self-encrypting drives (SEDs)
Software: N/A
Hardware: In StorPool deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
|
Hardware: Smart array controller
Software: HyTrust DataControl (validated); Vormetric VTE (validated)
Hardware: HPE SimpliVity 380 on ProLiant Gen 10 server hardware supports HPE Smart Array controller data-at-rest encryption in order to offer protection against data theft. Smart Array based encryption can only be enabled before a system is deployed.
Software: HPE SimpliVity 380 has validated the interoperability of HyTrust DataControl as well as Vormetric Transparent Encryption (VTE) software encryption with its OmniStack platform.
Currently HPE SimpliVity 380 does not provide native software-based data-at-rest encryption.
|
|
Data Encryption Scope
Details
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: SANsymphony provides encryption for data-at-rest; it does not provide encryption for data-in-transit. Encryption can be enabled per individual virtual disk.
|
Hardware: Data-at-rest
Software: N/A
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
|
Hardware: Data-at-rest
Software: Data-at-rest + Data-in-transit
Hardware: HPE Smart Array Controller provides encryption for data-at-rest, but does not provide encryption for data-in-transit.
Software: HyTrust and Vormetric encryption solutions do provide both encryption for data-at-rest and encryption for data-in-transit.
|
|
Data Encryption Compliance
Details
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: FIPS 140-2 Level 1 (Smart array controller)
Software: FIPS 140-2 Level 1 (HyTrust, VTE)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: No
Software: No
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
|
Hardware: No
Software: N/A
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
|
Hardware: No
Software: Yes
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Because HyTrust and Vormetric are end-to-end solutions, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
|
|
|
Test/Dev |
|
|
|
Yes
Support for fast VM cloning via VMware VAAI and Microsoft ODX.
|
Yes
|
Yes
The cloning process takes advantage of the global deduplication and compression.
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
Hyper-V/ESXi/XenServer to KVM (external)
Migration between hypervisors depends on the cloud orchestration platform or external tools.
StorPool provides free tools for migrating VMs with minimal downtime from Hyper-V, VMWare ESXi and XenServer to KVM.
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
|
|
|
File Services |
|
|
|
Built-in (native)
SANsymphony delivers out-of-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. SANsymphony is capable of simultaneously handling highly-available block and file level services.
Raw storage is provisioned from within the SANsymphony GUI to the Microsoft file services layer, similar to provisioning Storage Spaces Volumes to the file services layer. This means any file services configuration is performed from within the respective Windows service consoles e.g. quotas.
More information can be found under: https://www.datacore.com/products/features/high-availability-nas-cluster-file-sharing.aspx
|
N/A
StorPool does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
N/A
HPE SimpliVity 380 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
N/A
StorPool does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
N/A
HPE SimpliVity 380 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
|
N/A
StorPool does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
N/A
HPE SimpliVity 380 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Quotas
Details
|
Share Quotas, User Quotas
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
|
N/A
StorPool does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
N/A
HPE SimpliVity 380 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Analytics
Details
|
Partial
Because SANsymphony leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
N/A
StorPool does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
N/A
HPE SimpliVity 380 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
StorPool does not provide any object storage serving capabilities of its own.
Inside a Guest VM an object storage service could be leveraged to provide these services to the clients.
|
N/A
HPE SimpliVity 380 does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
StorPool does not provide any object storage serving capabilities of its own.
Inside a Guest VM an object storage service could be leveraged to provide these services to the clients.
|
N/A
HPE SimpliVity 380 does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
StorPool does not provide any object storage serving capabilities of its own.
Inside a Guest VM an object storage service could be leveraged to provide these services to the clients.
|
N/A
HPE SimpliVity 380 does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
|
|
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
SANsymphony's graphical user interface (GUI) is highly configurable to accommodate individual preferences and includes guided wizards and workflows to simplify administration. All actions available from the GUI may also be scripted with PowerShell Commandlets to orchestrate workflows with other tools and applications.
|
Centralized
StorPool provides a dashboard with a high-level view of the cluster status, real time statistics as well as information about the components, services and resources.
For all counters both real-time and historic data are captured; data is stored with up to 1-second granularity.
The information is provided through StorPool's REST-API and short-term historical data is shown in the GUI.
The following information can be found in the StorPool Management GUI:
- Real-time at-a-glance status
- Detailed status, list/filter volumes, snapshots, etc.
- Performance Statistics
|
Centralized
OmniStack management, capacity monitoring, performance monitoring and efficiency reporting is performed through the vSphere Web Client interface.
The HPE SimpliVity HTML5 Plug-in for vSphere Client supports all of the HPE SimpliVity features.
HPE SimpliVity OmniStack 4.0.0 introduces a role-based access control (RBAC) structure that allows defining users that can perform crash consistent backups, search for, and restore the crash consistent backups when using the HPE OmniStack REST API or the HPE SimpliVity Plug-in.
HPE SimpliVity OmniStack 4.0.0 also introduces a new, centralized federation management type. This optional management type supports high-scale configurations. The centrally managed federation includes a virtual machine called the Management Virtual Appliance. The Management Virtual Appliance is a dedicated virtual machine that provides centralized management and coordination of operations across clusters of HPE SimpliVity hosts for high-scale configurations. In a centrally managed federation, up to 96 HPE SimpliVity hosts can be deployed in a single vCenter Server.
|
|
|
Single-site and Multi-site
|
Single-site and Multi-site (read-only)
The GUI is capable of displaying data from different StorPool clusters. The data is read-only, as the GUI is currently not intended for management purposes.
|
Single-site and Multi-site
Centralized management of one or multiple Federations is performed from a single dashboard. This means that global implementations with multiple sites in multiple countries can be easily managed.
|
|
GUI Perf. Monitoring
Details
|
Advanced
SANsymphony has visibility into the performance of all connected devices including front-end channels, back-end channels, cache, physical disks, and virtual disks. Metrics include Read/write IOPs, Read/write MBps and Read/Write Latency at all levels. These metrics can be exported to the Windows Performance Monitoring (Perfmon) utility where other server parameters are being tracked.
The frequency at which performance metrics can be captured and reported on is configurable: real-time recording down to 1-second intervals and long-term recording at 2-minute granularity.
When a trend analysis is required, an end-user can simply enable a recording session to capture metrics over a longer period of time.
|
Advanced
Real-time as well as historical data are available graphically, in a table format and through the REST-API interface. Data is available at different levels – aggregated for the storage cluster, per node, per client, per volume, per drive, per CPU. User defined queries and reports can be generated.
Some of the most important metrics are:
- CPU and memory - usage per service, node
- Per drive statistics - drive utilization, IOPS, bandwidth, busy/free time, ops latency, request size
- Per volume statistics - IOPS, bandwidth, latency, request size, queue size
- Network utilization - per node and per interface
- Per client load - CPU, memory, requests, latency, requests in flight, IOPS, bytes/sec
- Aggregated cluster metrics - IOPS, BW, latency, request in flight, used and available space
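As an example of consuming these metrics programmatically, the sketch below polls per-volume counters and flags volumes whose latency exceeds a threshold. The endpoint path and JSON field names are hypothetical placeholders; the metric set mirrors the list above.

```python
# Sketch of a per-volume latency report (hypothetical endpoint and fields).
import requests

API = "https://storpool-mgmt.example.com/api"    # hypothetical endpoint
AUTH = {"Authorization": "Bearer <token>"}       # placeholder credentials

def volumes_over_latency(threshold_ms: float) -> list[str]:
    """Return the volumes whose current average latency exceeds a threshold."""
    stats = requests.get(f"{API}/stats/volumes", headers=AUTH, timeout=30).json()
    offenders = []
    for vol in stats["volumes"]:
        if vol["latency_ms"] > threshold_ms:
            offenders.append(f'{vol["name"]}: {vol["iops"]} IOPS, '
                             f'{vol["latency_ms"]:.2f} ms')
    return offenders

if __name__ == "__main__":
    for line in volumes_over_latency(threshold_ms=2.0):
        print(line)
```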
|
Advanced
HPE SimpliVity 380 provides out-of-the-box performance monitoring functionality through the VMware vSphere Web Client plug-in. Performance metrics can be viewed at the datacenter, cluster and VM level. Performance graphs focus on IOPS (#), Throughput (MB/s) and Latency (ms), both for Reads and Writes. The GUI allows for adjusting the timescale from minutes to years in multiple steps.
|
|
|
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
DataCore offers deep integration with VMware vSphere and Microsoft Hyper-V, as well as their respective systems management tools, vCenter and System Center.
SCVMM = Microsoft System Center Virtual Machine Manager
|
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
StorPool integrates with a number of private cloud platform interfaces.
StorPool does not integrate with the GUI of hypervisor platforms such as VMware and Hyper-V.
|
VMware vSphere HTML5 Client (plugin)
SCVMM 2016/2019 (add-in)
The HPE SimpliVity OmniStack 3.7.10 HTML5 Plug-in for vSphere Client supports all of the HPE SimpliVity features. The HPE SimpliVity Plug-in for vSphere Web Client (Flex) is no longer supported.
HPE SimpliVity OmniStack 4.0.0 introduced support for SCVMM 2019 with Hyper-V Server 2016.
HPE SimpliVity OmniStack 3.7.7 introduced plug-in support for vSphere HTML5 Client with the following functionality:
- View backup policies in a federation
- View cluster storage efficiency
- View virtual machines in a cluster
- View datastores in a cluster
- View hosts in a cluster
- View virtual machines and templates on a host
- View virtual machines or templates assigned to a backup policy
- View datastores assigned to a backup policy
- Create a backup policy
- Set a backup policy for datastore
HPE SimpliVity OmniStack 3.7.8 introduced the following expanded functionality for the vSphere HTML5 Client plug-in:
- Create backup policy with settings for backup days, backup type, and server start and stop times
- Rename backup policy and change or delete rules
- Delete a backup policy
- Identify space savings and see alerts when cluster storage reaches 20% of free space
- View cluster performance (IOPS, MBps, Latency)
- Enable or disable HPE OmniWatch for the federation
- Set proxy server for HPE OmniWatch agent
HPE SimpliVity OmniStack 3.7.9 introduced the following expanded functionality for the vSphere HTML5 Client plug-in:
- Access details on the federation through the following views: HPE SimpliVity Federation Home, Connected Clusters (topology), Backup Limits, Support Capture Monitor, and About
- Access options to view hardware components, create a Support Capture file, shut down the Virtual Controller to safely shut down an HPE OmniStack host, remove a host, share an HPE SimpliVity datastore with a standard ESXi host, and delete a datastore
- Calculate unique backup size
- Rename a backup
- Copy a backup to another cluster
HPE SimpliVity OmniStack 3.7.4 introduced add-in support for Microsoft System Center Virtual Machine Manager (SCVMM) 2016.
HPE SimpliVity OmniStack 3.7.7 introduced the following expanded functionality for the Hyper-V add-in:
- Safe Shutdown of Virtual Controllers
- Display of top virtual machine contributors in the Cluster view
- Policy change impact chart when setting or editing a backup policy
- Backup impact chart in the Federation view
HPE SimpliVity OmniStack 3.7.8 introduced the following expanded functionality for the Hyper-V add-in:
- Suspend and resume policy-based backups
- Remove HPE OmniStack hosts from a Federation
- Built-in event viewer
- Ability to back up HPE SimpliVity VMs outside of the HPE OmniStack Add-in for Hyper-V
HPE SimpliVity OmniStack 3.7.9 introduced the following expanded functionality for the Hyper-V add-in:
- Filtering events by time and severity
HPE SimpliVity OmniStack 4.0.0 introduced the following expanded functionality for the Hyper-V add-in:
- Registering with HPE InfoSight
- Selecting external stores (HPE StoreOnce Catalyst) as a destination for cost-effective secondary backups.
- Viewing a dashboard that allows viewing the status of Virtual Controllers and the impact of backups.
- Unregister an external store.
- Change the login credentials for an external store.
- Select an external store as a destination when creating a backup policy rule.
- Copy a backup from an external store to an HPE SimpliVity cluster.
- Cancel a backup that has an external store destination.
- Dashboard that displays which backups failed during the last two weeks.
|
|
|
|
Programmability |
|
|
|
Full
Using DataCore's native management console, Virtual Disk Templates can be leveraged to populate storage policies. Available configuration items: Storage profile, Virtual disk size, Sector size, Reserved space, Write-through enabled/disabled, Storage sources, Preferred snapshot pool, Accelerator enabled/disabled, CDP enabled/disabled.
Virtual Disk Templates integrate with System Center Virtual Machine Manager (SCVMM), VMware Virtual Volumes (VVol) and OpenStack. Virtual Disk Templates are also fully supported by the REST-API allowing any third-party integration.
Using Virtual Volumes (VVols) defined through DataCore’s VASA provider, VMware administrators can self-provision datastores for virtual machines (VMs) directly from their familiar hypervisor interface. This is possible even for devices in the DataCore pool that don’t natively support VVols and never will, as SANsymphony can be used as a storage-virtualization layer for these devices/solutions. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
Using Classifications and StoragePools defined through DataCore’s Storage Management Provider, Hyper-V administrators can self-provision virtual disks and pass-through LUNs for virtual machines (VMs) directly from their familiar SCVMM interface.
|
Partial
StorPool does not provide a policy-based experience for every storage-related function, but offers an extensive REST-API/CLI instead.
|
Partial (Protection)
HPE SimpliVity 380's VM-level management significantly reduces administrative overhead and the consumption of system resources by allowing policies for functions such as replication and backup to be specified both for individual VMs and for groups of VMs.
|
|
|
REST-APIs
PowerShell
The SANsymphony REST-APIs library includes more than 200 new representational state transfer (REST) operations, so automation can be leveraged more extensively. RESTful interfaces are used by products such as Lenovo XClarity, Cisco Embedded Resource Manager and Dell OpenManage to manage infrastructure in the enterprise.
SANsymphony provides its own PowerShell cmdlets.
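As an illustration of driving the REST operations described above from a script, the following is a minimal, hedged sketch in Python. The REST server address, base path, credentials and response field names are assumptions made for illustration only; the authoritative URI scheme and authentication details are defined in the SANsymphony REST API reference.

```python
# Hedged sketch: listing virtual disks through the SANsymphony REST API.
# Base path, resource name, credentials and JSON field names are assumptions.
import requests

REST_SERVER = "https://ssy-rest.example.local"        # assumed REST server address
BASE = f"{REST_SERVER}/RestService/rest.svc/1.0"       # assumed base path

session = requests.Session()
session.auth = ("dcsadmin", "secret")                  # assumed credentials
session.verify = False                                 # lab setup; use proper certificates in production

resp = session.get(f"{BASE}/virtualdisks", timeout=30) # assumed resource path
resp.raise_for_status()

# Response assumed to be a JSON list of virtual disk objects.
for vdisk in resp.json():
    print(vdisk.get("Caption"), vdisk.get("Size"))     # field names are assumptions
```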
|
REST-API
CLI
All StorPool Distributed Storage (StorPool) features are exposed through the JSON API and end-user organizations usually leverage bash/shell/other scripts to automate operational workflows.
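For example, a simple automation step against the JSON API might look like the hedged Python sketch below. The management address/port, API path, authentication header and response shape are placeholders rather than authoritative StorPool API definitions; consult the StorPool API documentation for the exact request format.

```python
# Hedged sketch: listing StorPool volumes via the JSON API.
# Endpoint, path, auth header format and response layout are assumptions.
import requests

MGMT = "http://storpool-mgmt.example.local:81"         # assumed management endpoint
API_TOKEN = "0000.example-token"                        # assumed API token

resp = requests.post(
    f"{MGMT}/ctrl/1.0/VolumesList",                     # assumed path for listing volumes
    headers={"Authorization": f"Storpool v1:{API_TOKEN}"},  # assumed header format
    json={},                                            # empty JSON request body (assumption)
    timeout=10,
)
resp.raise_for_status()

# Response assumed to wrap results in a "data" list.
for volume in resp.json().get("data", []):
    print(volume.get("name"), volume.get("size"))
```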
|
REST-APIs
XML-APIs
PowerShell (community supported)
CLI
HPE SimpliVity 380 provides an extensive REST API command set that can be used to automate many operational activities (see the hedged sketch below).
HPE OmniStack REST API v1.13 changes GET /hosts and GET /virtual_machines functionality.
HPE OmniStack REST API v1.6 extended functionality by adding new fields to several object types.
vRealize Operations for example can make use of the XML-API to automate operational tasks.
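As a hedged illustration of the REST-APIs, the Python sketch below obtains an OAuth token and calls GET /hosts, one of the operations mentioned above. The token endpoint, 'simplivity' client id, media type and response field names follow the commonly documented pattern for the HPE OmniStack REST API, but treat them as assumptions and verify them against the REST API reference for the OmniStack version in use.

```python
# Hedged sketch: querying HPE OmniStack hosts through the REST API.
import requests

OVC = "https://ovc.example.local"                  # assumed Virtual Controller address
VCENTER_USER = "administrator@vsphere.local"       # assumed vCenter credentials
VCENTER_PASS = "secret"

# Obtain an OAuth bearer token (token endpoint and client id assumed).
token_resp = requests.post(
    f"{OVC}/api/oauth/token",
    auth=("simplivity", ""),
    data={"grant_type": "password", "username": VCENTER_USER, "password": VCENTER_PASS},
    verify=False,                                  # lab setup; use proper certificates in production
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# GET /hosts, one of the operations referenced above.
hosts = requests.get(
    f"{OVC}/api/hosts",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.simplivity.v1+json",  # media type assumed; match your API version
    },
    verify=False,
)
hosts.raise_for_status()
for host in hosts.json().get("hosts", []):         # "hosts" wrapper assumed
    print(host.get("name"), host.get("state"))
```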
|
|
|
OpenStack
OpenStack: The SANsymphony storage solution includes a Cinder driver, which interfaces between SANsymphony and OpenStack, and presents volumes to OpenStack as block devices which are available for block storage.
DataCore SANsymphony programmability in VMware vRealize Automation and Microsoft System Center can be achieved by leveraging PowerShell and the SANsymphony-specific cmdlets.
|
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
OpenStack: The StorPool storage solution provides a Cinder driver that allows StorPool Distributed Storage to be used as a backend for OpenStack block storage volumes.
CloudStack: StorPool deeply integrates with CloudStack and has a native driver in the host OS which provides block devices as raw disk images for qemu/KVM.
OnApp: StorPool provides and maintains integration with OnApp; each VM disk becomes a separate volume in the StorPool storage system. The integration replaces the traditional LVM layer in OnApp.
OpenNebula: StorPool enables OpenNebula to use a StorPool storage system for storing disk images by leveraging its native datastore driver.
Kubernetes: StorPool provides persistent volumes for bare-metal Kubernetes clusters through its CSI driver (see the sketch below).
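As a hedged illustration of the Kubernetes integration, the Python sketch below uses the official kubernetes client to request a PersistentVolumeClaim through a StorageClass assumed to be provided by the StorPool CSI driver; the class name 'storpool-nvme', namespace and size are placeholders.

```python
# Hedged sketch: requesting a persistent volume from a CSI-backed StorageClass.
from kubernetes import client, config

config.load_kube_config()                          # uses the local kubeconfig
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="pg-data"),  # placeholder claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="storpool-nvme",        # assumed StorageClass backed by the StorPool CSI driver
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; the CSI driver provisions the backing StorPool volume.")
```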
|
VMware vRealize Automation (vRA)
Cisco UCS Director
There is no OpenStack support for HPE SimpliVity 380.
|
|
|
Full
The DataCore SANsymphony GUI offers delegated administration to secondary users through fine-grained Role-based Access Control (RBAC). The administrator is able to define Virtual Disk ownership as well as privileges associated with that particular ownership. Owners must have Virtual Disk privileges in an assigned role in order to perform operations on the virtual disk. Access can be very refined. For example, one owner may have the privilege to create a snapshot of a virtual disk, but not have the ability to serve or unserve the same virtual disk. Privilege sets define the operations that can be performed. For instance, in order for an owner to perform snapshot, rollback, or replication operations, they would require those privilege sets in an assigned role.
|
N/A
StorPool Distributed Storage (StorPool) does not provide any end-user self service capabilities of its own.
A self service portal enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging a Cloud Management Platform (CMP). When not using an open-source CMP, this requires separate licenses.
|
N/A
HPE SimpliVity 380 does not provide any end-user self service capabilities of its own.
A self service portal enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license. HPE SimpliVity 380 officially supports vRealize Automation (vRA) and vRealize Orchestration (vRO) through a reference architecture.
|
|
|
|
Maintenance |
|
|
|
Unified
All storage-related features and functionality are built into the DataCore SANsymphony platform. This consolidation means that only one product needs to be installed and upgraded, and minimal dependencies exist with other software.
Integrations with 3rd-party systems (e.g. OpenStack, vSphere, System Center) are delivered separately but are free of charge.
|
Unified
All storage-related features and functionality are built into the StorPool platform. This consolidation means that only one product needs to be installed and upgraded, and minimal dependencies exist with other software.
|
Unified
A few minor components aside (e.g. plugin, SRA), all storage-related features and functionality are built into the HPE SimpliVity 380 platform. This type of consolidation means that only one product needs to be installed and upgraded, and minimal dependencies exist with other software.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
Each SANsymphony update is packaged in an installation wizard that provides a fully guided upgrade process. The upgrade process checks all system requirements and performs a system health check before starting the upgrade and before moving from one node to the next.
The user can also decide to upgrade a SANsymphony cluster manually and follow all steps that are outlined in the Release Notes.
|
Rolling Upgrade (1-by-1)
StorPool performs in-service rolling upgrades: nodes are upgraded one by one while workloads remain accessible and storage operations keep running. Usually the software upgrade does not noticeably impact performance.
|
Rolling Upgrade (1-by-1)
OmniStack upgrades are initiated against the Federation and are completed in an automated, rolling fashion. OmniStack Accelerator Card upgrades are included in the OmniStack upgrade process. Reboots of a host are only required if there is an associated OmniStack Accelerator Card firmware update. The updating of the firmware only happens occasionally.
HPE SimpliVity 380 provides a Fast Upgrade Manager that manages the entire upgrade process, from detection to execution.
HPE SimpliVity 380 OmniStack 3.7.7 provides Upgrade Manager support for simultaneously upgrading hosts in a cluster when the hosts do not have powered-on guest VMs, accelerating the upgrade process.
HPE SimpliVity 380 OmniStack 3.7.9 provides Upgrade Manager support for upgrading HPE OmniStack software at the cluster level. This feature only works when upgrading from HPE OmniStack 3.7.8 to HPE OmniStack 3.7.9.
|
|
FW Upgrade Execution
Details
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
DataCore provides integrated firmware-control for FC-cards. This means the driver automatically loads the required firmware on demand.
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
|
Rolling Upgrade (1-by-1)
In HPE SimpliVity OmniStack 3.7.10 the Upgrade Manager allows upgrading host firmware (SVT Service Pack for Proliant) on supported platforms.
Previously, server firmware and drivers had to be updated by rebooting the server from an ISO that automatically installed the Service Pack for Proliant (SPP). Because this is an offline procedure, it was required to migrate VMs and put the node in maintenance mode first.
Upgrade Manager now allows you to generate firmware upgrade reports for individual hosts or clusters. These reports show version (previous and current) and state information for the individual firmware components.
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
No
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer unified support for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Yes (limited)
There are bundles available at selected channel partners where the end-user organization can procure a complete hardware/software solution, e.g. https://s3s.eu/solutions/storpool
|
Yes
HPE provides unified support for the entire native solution. This means HPE is the single point-of-contact for any storage software (HPE SimpliVity 380) and server hardware (HPE Proliant) related issues.
|
|
Call-Home Function
Details
|
Partial (HW dependent)
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer call-home for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Yes
StorPool has live monitoring enabled on StorPool Distributed Storage (StorPool) production clusters and reacts proactively to incidents to prevent issues. For example, if StorPool detects a drive failure or latency peaks, the impacted end-user organization is alerted, or StorPool can step in to tackle the issue before it escalates. For end-user organizations that do not want StorPool to have full access to their clusters, StorPool coordinates with these organizations and schedules updates and troubleshooting when there is an open support ticket.
|
Full
HPE SimpliVity 380's call-home function is called 'OmniWatch' and is fully integrated into the platform. Enabling the feature is simple and straightforward and requires very few clicks.
OmniWatch is a proactive and preventative service that continuously monitors and evaluates the health of a customer’s hyperconverged infrastructure. OmniWatch is able to identify problems before they impact the business by filing a support case and alerting an HPE SimpliVity 380 support expert of potential issues or risks.
OmniWatch is a basic support service and is included with every support plan.
HPE SimpliVity 3.7.10 introduces the following functionality for Hyper-V environments:
- Ability to disable or enable HPE OmniWatch data collection after deployment.
- Ability to configure a proxy server for one or more clusters.
HPE SimpliVity OmniStack 4.0.0 introduces support for HPE InfoSight cloud service. HPE InfoSight automatically monitors the health of each HPE SimpliVity host in the federation. Once a day, HPE InfoSight sends a report that includes information on the system status and significant events. It also contains details on:
- Cluster, host, virtual machine, and Virtual Controller names
- Host serial numbers
- Host IP addresses
- Virtual machine sizes
- Datastore details (name, physical capacity, free space, memory size)
The report does not contain any user identifying information such as user names or virtual machine IP addresses.
|
|
Predictive Analytics
Details
|
Partial
Capacity Management: DataCore SANsymphony Analysis and Reporting supports capacity depletion monitoring. It complements pool space threshold warnings by regularly evaluating the rate of capacity consumption and estimating when space will be depleted. The regularly updated projections give you a chance to add more storage to the pool before you run out of space, and they make capacity planning easier with fewer surprises. To help allocate costs, especially in private cloud and hosted cloud services, SANsymphony generates reports quantifying the storage resources consumed by specific hosts or groups of hosts. The reports tally several parameters.
Health Monitoring: A combination of system health checks and access to device S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) alerts help to isolate performance and disk problems before they become serious.
DataCore Insight Services (DIS) offers additional capabilities including log analytics for predictive failure analysis and actionable insights - including hardware.
DIS also provides predictive capacity trend analysis in order to pro-actively warn about licensing limitations being reached within x days and/or disk pools running out of capacity.
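The kind of capacity depletion projection described above can be illustrated with a simple linear extrapolation. The Python sketch below is a generic illustration of the concept only, not DataCore's actual algorithm; the sample values are made up.

```python
# Generic sketch: fit a linear consumption trend to recent pool usage samples
# and estimate when the pool runs out of space.
from datetime import datetime

def days_until_full(samples, pool_capacity_gb):
    """samples: list of (datetime, used_gb) measurements, oldest first."""
    (t0, used0), (t1, used1) = samples[0], samples[-1]
    elapsed_days = (t1 - t0).total_seconds() / 86400
    growth_per_day = (used1 - used0) / elapsed_days   # GB/day consumption rate
    if growth_per_day <= 0:
        return None                                   # usage flat or shrinking: no depletion forecast
    return (pool_capacity_gb - used1) / growth_per_day

samples = [
    (datetime(2021, 4, 1), 6200),
    (datetime(2021, 4, 15), 6550),
    (datetime(2021, 5, 1), 6900),
]
remaining = days_until_full(samples, pool_capacity_gb=10000)
print(f"Estimated days until pool depletion: {remaining:.0f}")
```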
|
Partial
As part of the real-time monitoring system, automatic notifications are generated based on metrics that can predict hardware failures, such as an increase in the read/write error rate or increased disk latency over time. Automatic notifications are also generated on storage depletion, which covers almost-full states for individual disks.
|
Full
HPE SimpliVity 380's predictive analytics function is called 'OmniView'. OmniView is a Software-as-a-Service (SaaS) offering that runs in the HPE SimpliVity 380 Support cloud and is accessible through a native web interface.
OmniView provides advanced insight into running HPE SimpliVity 380 deployments through customizable dashboards and reports, allowing for easy visualization. OmniView keeps track of historical data about the deployment and uses predictive analytics to spot trends and make forecasts that can be used for resource planning. Resources include Host/VM CPU, Host/VM Memory, Host/VM Storage Performance (IOPS/Latency) and Host/VM Storage Capacity (Primary and Backup TBs).
OmniView offers the ability to drill down into multiple vCenter and HPE SimpliVity 380 data points in great detail, enabling users to diagnose and troubleshoot any issues that may come up.
Currently OmniView is only included with mission critical support.
|
|