Initially this release was planned as version 5.5.1, but with so many new features a “+0.1.0” release is more than justified.
The short summary – 5.6 release feature highlights:
- Prism Central (5.6)
  - Microsegmentation Policies GA
  - API v3 GA
  - Scale-Out Prism Central
- AOS (5.6)
  - Support for 80 Terabytes of Storage per Node
  - Two-Node Cluster
  - Guest VM-Initiated Power Operations
  - Load Balancing vDisks in a Volume Group
- AFS 3.0
  - NFS v4 Support
  - CFT Support
  - File Activity Monitoring & Global Name Space
And here are the details:
Prism Central 5.6
Microsegmentation Policies GA
Microsegmentation provides a stateful distributed firewall that is natively integrated into AHV and Prism Central, and allows you to:
- apply policy without changes to the underlying network, and
- extend it with service chaining capabilities (third-party virtual firewalls)
API v3 GA
The Nutanix v3 API is based on an “intentful API” philosophy: the machine handles the programming instead of the user, freeing the datacenter administrator to focus on other tasks. You specify the desired end state of an entity, and the system compiles and executes a series of steps to achieve that end state. Progress toward the desired state is tracked through waits and events.
You can find the details of the v3 API here: http://developer.nutanix.com/reference/prism_central/v3/
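To make the intentful model concrete, here is a minimal sketch of what a v3 request could look like. The Prism Central address, credentials, VM name, and field values are all placeholders; check the exact payload schema against the v3 API reference linked above.

```shell
# Sketch only: describe the VM's desired end state instead of scripting
# each step. Address, credentials, and all values are placeholders.
PC="https://prism-central.example.com:9440"

PAYLOAD='{
  "metadata": { "kind": "vm" },
  "spec": {
    "name": "demo-vm",
    "resources": {
      "num_sockets": 2,
      "memory_size_mib": 4096,
      "power_state": "ON"
    }
  }
}'

# Sanity-check the payload locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# The request itself (not executed here) would look roughly like:
# curl -k -u admin:password -H "Content-Type: application/json" \
#      -X POST "$PC/api/nutanix/v3/vms" -d "$PAYLOAD"
```

The point of the sketch: the payload describes *what* you want (a powered-on VM with 2 vCPUs and 4 GiB of memory), and the system works out the series of steps needed to get there.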
Scale-Out Prism Central
Previously, a Prism Central instance was limited to a single VM. You can now expand a Prism Central instance to three VMs to increase the capacity and resiliency of Prism Central. Scale-out Prism Central is supported on AHV and ESXi clusters.
Support 80 Terabytes Storage (10 x 8 TB) Per Node
AOS and AHV now support up to 80 TB of storage per node (yes, per node!). Use cases for this storage capacity include Acropolis File Services, backup, and cluster capacity expansion.
Two-Node Cluster
A traditional Nutanix cluster requires a minimum of three nodes, but Nutanix now offers the option of a two-node cluster for small implementations that need a lower-cost yet highly resilient option. A two-node cluster can still provide many of the resiliency features of a three-node cluster by adding an external Witness VM in a separate failure domain to the configuration.
Guest VM-initiated Power Operations
You can initiate graceful power operations, such as soft shutdown and restart, of VMs running on AHV hosts by using the aCLI. You can also create a pre-shutdown script that you can optionally run before a shutdown or restart; include in it any tasks or checks that you want performed before the VM goes down.
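What such a pre-shutdown script contains is up to you. The sketch below is a hypothetical example (the marker file and log path are invented, and the assumption that a non-zero exit vetoes the operation should be verified against the AHV admin guide) that refuses to proceed while a backup is still running:

```shell
#!/bin/sh
# Hypothetical pre-shutdown script run inside the guest before a
# soft shutdown or restart initiated via aCLI. Paths are placeholders.
LOG=/tmp/pre-shutdown.log

echo "$(date): pre-shutdown checks starting" >> "$LOG"

# Example check: veto the shutdown while a backup job is still running.
# The marker file stands in for a real readiness check.
if [ -e /tmp/backup-in-progress ]; then
    echo "$(date): backup still running, vetoing shutdown" >> "$LOG"
    exit 1   # non-zero exit: assumed to abort the shutdown/restart
fi

echo "$(date): checks passed, OK to shut down" >> "$LOG"
# Falling off the end returns 0: proceed with the shutdown.
```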
Load Balancing vDisks in a Volume Group
AHV hosts now support the load balancing of vDisks in a volume group for user VMs. Load balancing of vDisks in a volume group enables IO-intensive VMs to utilize resources such as the CPU and memory of multiple Controller VMs (CVMs). vDisks belonging to a volume group are distributed across the CVMs, helping to improve performance and prevent bottlenecks. However, each vDisk still utilizes the resources of a single CVM.
AFS 3.0 Release
NFS v4 Support
AFS now supports the NFSv4 protocol, which enables you to manage a collection of NFS exports distributed across multiple file server VMs (FSVMs). With NFS support, users can now access AFS from Linux and UNIX clients. This feature is also hypervisor agnostic. AFS supports two types of NFS exports:
- Distributed. A distributed export (“sharded”) means the data is spread across all FSVMs to help improve performance and resiliency. A distributed export can be used for any application.
- Non-distributed. A non-distributed export (“non-sharded”) means all data is contained in a single FSVM. A non-distributed export is used for any purpose that does not require a distributed structure.
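On the client side, either type of export is consumed like any other NFSv4 export. As a hypothetical illustration (server name and paths are invented), the corresponding /etc/fstab entry on a Linux client could look like:

```
# Hypothetical /etc/fstab entry for an AFS NFSv4 export;
# server name and mount point are placeholders.
afs.example.com:/share1  /mnt/share1  nfs  vers=4,_netdev  0  0
```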
CFT (Changed File Tracker) Support
AFS provides an API that allows third-party developers to implement changed file tracking for backup servers. When a third-party backup solution is enabled, the API lets the application record and collect information about any changes to the files in each snapshot sent to the backup server, providing a log of all file changes across snapshots.
File Activity Monitoring & Global Name Space
AFS also allows third-party developers to implement file activity monitoring in their applications. The API allows an application to collect information about every action on each file in a file server and supports two use cases:
- File monitoring. The API allows applications to record and collect system logs for each file server and make those logs accessible to an administrator through, for example, a syslog server. This provides an external auditing record of all file events and operations for each file server.
- Global name space. When changes are made to files within the file server, the changes can be replicated between multiple remote and home sites. The API allows applications to distribute a global name space where users in one site can create files that are replicated to all sites. Although AFS is local to one site, it can distribute the name space across multiple sites.