|
|
|
Storage |
|
|
Supported Storage
Details
|
|
=EA65
|
DAS, SAS, iSCSI, NAS, SMB, FC, FCoE, openFCoE
NEW
Citrix Hypervisor data stores are called Storage Repositories (SRs). They support locally connected IDE, SATA, SCSI (physical HBA as well as software initiator) and SAS drives, and remotely connected iSCSI, NFS, SMB, SAS and Fibre Channel storage.
Background: The SR and VDI abstractions allow advanced storage features such as thin provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not support these operations directly, a software stack based on Microsoft's Virtual Hard Disk (VHD) specification implements them.
SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
Reference: https://tinyurl.com/y222m23o
Also refer to the XenServer Hardware Compatibility List (HCL) for more details.
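As an illustration of those SR/VDI operations, the following minimal sketch uses the XenAPI Python binding (shipped with the XenServer SDK) to enumerate the SRs in a pool and the VDIs they contain; the host URL and credentials are placeholder assumptions.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")  # placeholder pool master
session.xenapi.login_with_password("root", "secret")       # placeholder credentials
try:
    for sr_ref, sr in session.xenapi.SR.get_all_records().items():
        print(sr["name_label"], sr["type"], sr["physical_size"])
        for vdi_ref in sr["VDIs"]:
            vdi = session.xenapi.VDI.get_record(vdi_ref)
            print("  VDI:", vdi["name_label"], vdi["virtual_size"])
finally:
    session.xenapi.session.logout()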
|
|
|
|
=EA66
|
Yes
NEW
Dynamic multipathing support is available for Fibre Channel and iSCSI storage arrays (round robin is the default balancing mode). Citrix Hypervisor also supports the LSI Multi-Path Proxy Driver (MPP) for the Redundant Disk Array Controller (RDAC); this driver is disabled by default.
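For reference, a hedged sketch of enabling multipathing on a host through the XenAPI Python binding is shown below. It uses the legacy other-config keys (multipathing/multipathhandle) documented for older XenServer releases; newer Citrix Hypervisor versions expose a dedicated multipathing host setting, so verify the keys against your release. The host should be in maintenance mode with its storage unplugged while the change is made, and the URL/credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]            # pick the host to reconfigure
    oc = session.xenapi.host.get_other_config(host)
    oc.update({"multipathing": "true",                 # legacy keys; verify for your release
               "multipathhandle": "dmp"})
    session.xenapi.host.set_other_config(host, oc)
finally:
    session.xenapi.session.logout()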
|
|
Shared File System
Details
|
|
=EA67
|
Yes (SR)
Citrix Hypervisor uses the concept of Storage Repositories (disk containers/data stores). These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. All hosts in a single resource pool must have at least one shared SR in common. NAS, iSCSI (Software initiator and HBA are both supported), SAS, or FC are supported for shared storage.
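As a sketch of how a shared SR is created programmatically, the snippet below uses the XenAPI Python binding to create an NFS SR that is shared across the pool. The NFS server, export path, credentials and SR name are placeholder assumptions; the SR.create argument order follows the published XenAPI signature (host, device_config, physical_size, name_label, name_description, type, content_type, shared, sm_config).
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]                  # any pool member
    device_config = {"server": "nfs.example.com",            # placeholder NFS server
                     "serverpath": "/export/xen-sr"}         # placeholder export path
    sr = session.xenapi.SR.create(host, device_config, "0",  # int64 values travel as strings
                                  "Shared NFS SR", "Created via XenAPI",
                                  "nfs", "user", True, {})
    print("Created shared SR:", session.xenapi.SR.get_uuid(sr))
finally:
    session.xenapi.session.logout()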
|
|
|
|
=EA68
|
Yes (iSCSI, FC, FCoE)
Boot from SAN depends on SAN-based disk arrays with either hardware Fibre Channel or HBA iSCSI adapter support on the host. For a fully redundant boot from SAN environment, you must configure multiple paths for I/O access. To do so, ensure that the root device has multipath support enabled. For information about whether multipath is available for your SAN environment, consult your storage vendor or administrator. If you have multiple paths available, you can enable multipathing in your Citrix Hypervisor deployment upon installation.
XenServer 7.x adds Software-boot-from-iSCSI for Cisco UCS
Yes for XenServer 6.1 and later (XenServer 5.6 SP1 added support for boot from SAN with multi-pathing support for Fibre Channel and iSCSI HBAs)
Note: Rolling Pool Upgrade should not be used with Boot from SAN environments. For more information on upgrading boot from SAN environments see Appendix B of the XenServer 7.5 Installation Guide: http://bit.ly/2sK4wGn
|
|
|
|
=EA69
|
No
While several unofficial approaches are documented, flash drives are not officially supported as boot media for Citrix Hypervisor.
|
|
Virtual Disk Format
Details
|
|
=EA70
|
vhd, raw disk (LUN)
Citrix Hypervisor supports file-based VHD (NAS, local), block device-based VHD (FC, SAS, iSCSI) using a Logical Volume Manager on the Storage Repository, and full LUNs (raw disks).
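To illustrate, the sketch below creates a 10 GiB VHD-backed virtual disk on an existing SR using the XenAPI Python binding; the SR name, connection details and disk label are placeholder assumptions.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    sr = session.xenapi.SR.get_by_name_label("Shared NFS SR")[0]   # placeholder SR name
    vdi = session.xenapi.VDI.create({
        "name_label": "example-disk",
        "name_description": "10 GiB virtual disk",
        "SR": sr,
        "virtual_size": str(10 * 1024 ** 3),   # bytes, passed as a string (int64)
        "type": "user",
        "sharable": False,
        "read_only": False,
        "other_config": {},
        "xenstore_data": {},
        "sm_config": {},
        "tags": [],
    })
    print("Created VDI:", session.xenapi.VDI.get_uuid(vdi))
finally:
    session.xenapi.session.logout()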
|
|
|
|
=EA71
|
2TB (16TB with GFS2)
For Citrix Hypervisor 8 the maximum virtual disk sizes are:
- NFS: 2 TB minus 4 GB
- LVM (block): 2 TB minus 4 GB
- GFS2: 16 TB
Reference: https://tinyurl.com/y5o67fdj
|
|
Thin Disk Provisioning
Details
|
|
=EA72
|
Yes
Thin provisioning for shared block storage is of particular interest in the following cases:
- You want increased space efficiency. Images are sparsely allocated rather than thickly allocated.
- You want to reduce the number of I/O operations per second on your storage array. The GFS2 SR is the first SR type to support storage read caching on shared block storage.
- You use a common base image for multiple virtual machines. The images of individual VMs will then typically utilize even less space.
- You use snapshots. Each snapshot is an image and each image is now sparse.
- Your storage does not support NFS and only supports block storage. If your storage supports NFS, we recommend you use NFS instead of GFS2.
- You want to create VDIs that are greater than 2 TiB in size. The GFS2 SR supports VDIs up to 16 TiB in size.
The shared GFS2 type represents disks as a filesystem created on an iSCSI or HBA LUN. VDIs stored on a GFS2 SR are stored in the QCOW2 image format.
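The sketch below shows how a thin-provisioned GFS2 SR on an iSCSI LUN might be created with the XenAPI Python binding. Clustering must already be enabled on the pool, and the target address, IQN and SCSI ID are placeholder assumptions; the device-config keys mirror the documented xe sr-create example for GFS2, so verify them against your release.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]
    device_config = {"provider": "iscsi",                        # placeholders throughout
                     "target": "10.0.0.10",
                     "targetIQN": "iqn.2019-01.com.example:storage",
                     "SCSIid": "36001405f0e1e2d3c4b5a69780000000"}
    sr = session.xenapi.SR.create(host, device_config, "0",
                                  "Thin GFS2 SR", "", "gfs2", "user", True, {})
    print("Created GFS2 SR:", session.xenapi.SR.get_uuid(sr))
finally:
    session.xenapi.session.logout()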
|
|
|
|
=EA73
|
No
There is no NPIV support in Citrix Hypervisor.
|
|
|
|
=EA74
|
Yes - Clone on boot, clone, PVS, MCS
XenServer 6.2 introduced Clone on Boot
This feature supports Machine Creation Services (MCS), which is shipped as part of XenDesktop. Clone on boot allows rapid deployment of hundreds of transient desktop images from a single source, with the images being automatically destroyed and their disk space freed on exit.
General cloning capabilities: When cloning VMs based on a single VHD template, each child VM forms a chain in which new changes are written to the child VM and unchanged blocks are read directly from the parent template (see the cloning sketch below). When this is done with a file-based VHD (NFS), the clone is thin provisioned. Chains up to a depth of 30 are supported, but be aware of the performance implications.
Comment: Citrix's desktop virtualization solution (XenDesktop) provides two additional technologies that use image sharing approaches:
- Provisioning Services (PVS) provides a (network) streaming technology that allows images to be provisioned from a single shared-disk image. Details: https://tinyurl.com/yy37zxvt
- With Machine Creation Services (MCS) all desktops in a catalog will be created off a Master Image. When creating the catalog you select the Master and choose if you want to deploy a pooled or dedicated desktop catalog from this image.
Note that neither PVS (for virtual machines) nor MCS is included in the base XenServer license.
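A minimal cloning sketch using the XenAPI Python binding is shown below; the template/VM name and connection details are placeholder assumptions. On file-based (NFS) SRs the resulting clone is thin provisioned, as described above.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    src = session.xenapi.VM.get_by_name_label("golden-image")[0]  # halted VM or template
    clone = session.xenapi.VM.clone(src, "desktop-001")
    if session.xenapi.VM.get_is_a_template(clone):
        session.xenapi.VM.provision(clone)        # instantiate disks if the source was a template
    session.xenapi.VM.start(clone, False, False)  # start_paused=False, force=False
finally:
    session.xenapi.session.logout()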
|
|
SW Storage Replication
Details
|
|
|
No
There is no integrated (software-based) storage replication capability available within XenServer
|
|
|
|
=CR76
|
IntelliCache
XenServer 6.5 introduced a read caching feature that uses host memory in the 64-bit Dom0 to reduce IOPS on storage networks and improve LoginVSI scores, with VMs booting up to 3x faster. The read cache feature is available to XenDesktop & XenApp Platinum users who have an entitlement to it.
With XenServer 7.0, LoginVSI scores of 585 have been attained (Knowledge Worker workload on Login VSI 4.1).
IntelliCache is a Citrix Hypervisor feature that can (only!) be used in a XenDesktop deployment to cache temporary and non-persistent operating-system data on the local XenServer host. It is of particular benefit when many Virtual Machines (VMs) all share a common OS image. The load on the storage array is reduced and performance is enhanced. In addition, network traffic to and from shared storage is reduced as the local storage caches the master image from shared storage.
IntelliCache works by caching data from a VM's parent VDI in local storage on the VM host. This local cache is then populated as data is read from the parent VDI. When many VMs share a common parent VDI (for example, by all being based on a particular master image), the data pulled into the cache by a read from one VM can be used by another VM. This means that further access to the master image on shared storage is not required.
Reference: https://tinyurl.com/yyjqom4p
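A hedged sketch of switching IntelliCache on with the XenAPI Python binding follows. It assumes the host was installed with thin provisioning (an EXT local SR), that the host is in maintenance mode while caching is enabled, and that the VM owning the VDI is shut down; the SR/VDI names and credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]
    cache_sr = session.xenapi.SR.get_by_name_label("Local storage")[0]  # EXT local SR used as cache
    session.xenapi.host.enable_local_storage_caching(host, cache_sr)

    vdi = session.xenapi.VDI.get_by_name_label("golden-image-disk")[0]  # placeholder VDI
    session.xenapi.VDI.set_allow_caching(vdi, True)   # cache reads on local storage
    session.xenapi.VDI.set_on_boot(vdi, "reset")      # non-persistent: discard changes at shutdown
finally:
    session.xenapi.session.logout()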
|
|
|
|
=CR77
|
No
There is no specific Storage Virtualization appliance capability other than the abstraction of storage resources through the hypervisor.
|
|
Storage Integration (API)
Details
|
|
=EA78
|
Integrated StorageLink (deprecated)
Integrated StorageLink was retired in XenServer 6.2.
Background: XenServer 6 introduced Integrated StorageLink Capabilities. It replaces the StorageLink Gateway technology used in previous editions and removes the requirement to run a VM with the StorageLink components. It provides access to existing storage array-based features such as data replication, de-duplication, snapshots and cloning. Citrix StorageLink integrates with existing storage systems, provides a common user interface across vendors and speaks the language of the storage array, i.e. it exposes the array's native feature set. StorageLink also provides a set of open APIs that link XenServer and Hyper-V environments to third party backup solutions and enterprise management frameworks.
|
|
|
|
=EA79
|
Basic
Virtual disks on block-based SRs (e.g. FC, iSCSI) have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI.
Note: Bear in mind that QoS settings are applied to virtual disks accessing the LUN from the same host. QoS is not applied across hosts in the pool!
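The setting is exposed through the VBD's qos_algorithm fields; the hedged sketch below applies an ionice-style priority to a VM's first virtual disk via the XenAPI Python binding. The "sched"/"class" parameter keys mirror the xe vbd-param-set examples in the storage documentation and should be verified for your release; the VM name and credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    vm = session.xenapi.VM.get_by_name_label("example-vm")[0]   # placeholder VM
    vbd = session.xenapi.VM.get_VBDs(vm)[0]                     # first attachment (may be the CD drive; filter in real code)
    session.xenapi.VBD.set_qos_algorithm_type(vbd, "ionice")
    session.xenapi.VBD.set_qos_algorithm_params(vbd, {"sched": "rt", "class": "5"})
finally:
    session.xenapi.session.logout()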
|
|
|
|
Networking |
|
|
Advanced Network Switch
Details
|
|
=EA80
|
Yes (Open vSwitch) - vSwitch Controller
Open vSwitch is fully supported.
The vSwitch brings visibility, security, and control to XenServer virtualized network environments. It consists of a virtualization-aware switch (the vSwitch) running on each XenServer and the vSwitch Controller, a centralized server that manages and coordinates the behavior of each individual vSwitch to provide the appearance of a single vSwitch.
The vSwitch Controller supports fine-grained security policies to control the flow of traffic sent to and from a VM and provides detailed visibility into the behavior and performance of all traffic sent in the virtual network environment. A vSwitch greatly simplifies IT administration within virtualized networking environments, as all VM configuration and statistics remain bound to the VM even if it migrates from one physical host in the resource pool to another.
Details in the Citrix Hypervisor docs: https://tinyurl.com/y6nq33o4
|
|
|
|
=EA81
|
Yes
XenServer 6.1 added the following functionality, maintained with XenServer 6.5 and later:
- Link Aggregation Control Protocol (LACP) support: enables the use of industry-standard network bonding features to provide fault-tolerance and load balancing of network traffic.
- Source Load Balancing (SLB) improvements: allows up to 4 NICs to be used in an active-active bond. This improves total network throughput and increases fault tolerance in the event of hardware failures. The SLB balancing algorithm has been modified to reduce load on switches in large deployments.
Background:
XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding modes supported vary according to the network stack:
• LACP bonding is only available for the vSwitch, while active-active and active-passive are available for both the vSwitch and Linux bridge.
• When the vSwitch is the network stack, you can bond either two, three, or four NICs.
• When the Linux bridge is the network stack, you can only bond two NICs.
XenServer 6.1 provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:
• Active/Active mode, with VM traffic balanced between the bonded NICs.
• Active/Passive mode, where only one NIC actively carries traffic.
• LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the server.
Reference: https://tinyurl.com/y2qbordz
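As an illustration, the sketch below creates an LACP bond of two NICs on a host through the XenAPI Python binding. The NIC device names, network label and credentials are placeholder assumptions, and the five-argument Bond.create form (network, members, MAC, mode, properties) is the one documented for XenServer 6.1 and later; verify it against your release.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]
    net = session.xenapi.network.create({"name_label": "bond0",
                                         "name_description": "LACP bonded network",
                                         "MTU": "1500", "other_config": {}})
    members = [p for p, r in session.xenapi.PIF.get_all_records().items()
               if r["host"] == host and r["VLAN"] == "-1"
               and r["device"] in ("eth0", "eth1")]          # placeholder NIC names
    session.xenapi.Bond.create(net, members, "", "lacp", {})  # "" = take MAC from first member
finally:
    session.xenapi.session.logout()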
|
|
|
|
=EA82
|
Yes
NEW
VLANs are supported with XenServer. Switch ports configured as 802.1Q VLAN trunk ports can be used with the Citrix Hypervisor VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case, the Citrix Hypervisor server performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
Citrix Hypervisor VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. You can connect Citrix Hypervisor networks to the PIF representing the physical NIC to see all traffic on the NIC. Alternatively, connect networks to a PIF representing a VLAN to see only the traffic with the specified VLAN tag. You can also connect a network such that it only sees the native VLAN traffic, by attaching it to VLAN 0.
Please refer to https://tinyurl.com/y4wqtsld
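A minimal sketch of creating a pool-wide VLAN network with the XenAPI Python binding follows; the NIC device name, VLAN tag and credentials are placeholder assumptions.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    vlan_net = session.xenapi.network.create({"name_label": "VLAN 100",
                                              "name_description": "guest VLAN",
                                              "MTU": "1500", "other_config": {}})
    pif = [p for p, r in session.xenapi.PIF.get_all_records().items()
           if r["device"] == "eth0" and r["VLAN"] == "-1"][0]      # untagged PIF on eth0
    session.xenapi.pool.create_VLAN_from_PIF(pif, vlan_net, "100")  # tag passed as a string
finally:
    session.xenapi.session.logout()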
|
|
|
|
=EA83
|
No
XenServer does not support PVLANs.
Please refer to https://tinyurl.com/y4wqtsld
|
|
|
|
=EA84
|
Yes (guests only)
Guest VMs hosted on Citrix Hypervisor can use any combination of IPv4 and IPv6 configured addresses.
However, Citrix Hypervisor doesn’t support the use of IPv6 in its Control Domain (Dom0). You can’t use IPv6 for the host management network or the storage network. IPv4 must be available for the Citrix Hypervisor host to use.
|
|
|
|
=EA85
|
SR-IOV
NEW
Experimental support for SR-IOV-capable network cards delivers high-performance networking for virtual machines, catering for workloads that need this type of direct access.
|
|
|
|
=EA86
|
Yes
You can set the Maximum Transmission Unit (MTU) for a XenServer network in the New Network wizard or for an existing network in its Properties window. The possible MTU value range is 1500 to 9216.
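Jumbo frames can also be set programmatically by changing a network's MTU through the XenAPI Python binding, as sketched below. The network name and credentials are placeholders; the physical NICs and switches must support the larger frame size, and attached hosts/VMs typically need the network re-plugged for the change to take effect.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    net = session.xenapi.network.get_by_name_label("Storage network")[0]  # placeholder network
    session.xenapi.network.set_MTU(net, "9000")   # jumbo frames; value passed as a string
finally:
    session.xenapi.session.logout()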
|
|
|
|
=EA87
|
Yes (TSO)
TCP Segmentation Offload can be enabled, see https://tinyurl.com/y3d7ktyg
By default, Large Receive Offload (LRO) and Generic Receive Offload (GRO) are disabled on all physical network interfaces. Though unsupported, you can enable them manually: https://tinyurl.com/ychkx3cn
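One way to toggle these offloads persistently is through ethtool-* keys in a PIF's other-config map, as sketched below with the XenAPI Python binding. The exact key names (ethtool-tso, ethtool-gro) follow the ethtool-* convention described in the Citrix networking documentation but are an assumption to verify for your release; the change generally requires the PIF to be replugged or the host rebooted, and enabling GRO/LRO remains unsupported.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]
    pif = [p for p, r in session.xenapi.PIF.get_all_records().items()
           if r["host"] == host and r["device"] == "eth0"][0]   # placeholder NIC
    oc = session.xenapi.PIF.get_other_config(pif)
    oc.update({"ethtool-tso": "on",     # assumed key names; verify before relying on them
               "ethtool-gro": "on"})
    session.xenapi.PIF.set_other_config(pif, oc)
finally:
    session.xenapi.session.logout()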
|
|
|
|
=EA88
|
Yes (outgoing)
QoS of network transmissions can be applied either at the VM level (basic), by setting a kB/sec limit on the virtual NIC, or at the vSwitch level (global policies). With the DVS you can select a rate limit (with units) and a burst size (with units). Traffic to all virtual NICs included in this policy level (e.g. you can create VM groups) is limited to the specified rate, with individual bursts limited to the specified number of packets. To prevent inheriting existing enforcement, the QoS policy at the VM level should be disabled.
Background:
To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.
The QoS value limits the rate of transmission from the VM. As with many QoS approaches the QoS setting does not limit the amount of data the VM can receive. If such a limit is desired, Citrix recommends limiting the rate of incoming packets higher up in the network (for example, at the switch level).
Depending on the networking stack configured in the pool, you can set the Quality of Service (QoS) value on VM virtual interfaces (VIFs) in one of two places: either a) on the vSwitch Controller or b) in Citrix Hypervisor (using the CLI or XenCenter).
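When set in Citrix Hypervisor itself, the limit lives in the VIF's qos_algorithm fields; a minimal sketch using the XenAPI Python binding is shown below. The VM name, limit and credentials are placeholders, and the "ratelimit"/"kbps" values correspond to the documented xe vif-param-set settings.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    vm = session.xenapi.VM.get_by_name_label("example-vm")[0]   # placeholder VM
    vif = session.xenapi.VM.get_VIFs(vm)[0]                     # first virtual NIC
    session.xenapi.VIF.set_qos_algorithm_type(vif, "ratelimit")
    session.xenapi.VIF.set_qos_algorithm_params(vif, {"kbps": "4096"})  # ~4 MB/s outbound cap
finally:
    session.xenapi.session.logout()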
|
|
Traffic Monitoring
Details
|
|
=EA89
|
Yes (Port Mirroring)
The Citrix Hypervisor vSwitch has traffic mirroring capabilities. The Remote Switched Port Analyzer (RSPAN) policies support mirroring traffic sent or received on a VIF to a VLAN in order to support traffic monitoring applications. Use the Port Configuration tab in the vSwitch Controller UI to configure policies that apply to the VIF ports.
|