Last week I had updates from two start-ups, both tackling the problem of keeping virtual storage fully in sync with the virtual servers in a VMware environment - but using radically different approaches to achieve different ends. Here I will focus on one of them - Nutanix.
A huge headache when implementing VMware is allocating storage every time a new virtual machine (VM) is created. At the same time, companies that have moved to a virtual environment gain hugely improved server utilisation but invariably see disappointing storage performance and utilisation compared with non-virtual access. The reasons boil down to storage virtualisation having been developed separately, and in a different way, from the virtualised servers that need to use it.
Now imagine a large data centre running thousands of virtual machines (VMs) with their own storage - without a SAN or even NAS in sight. That is what the Nutanix Complete Cluster hardware and software building block (appliance) approach can deliver.
"Consolidated storage is not delivering," Nutanix's President and CEO Dheeraj Pandey told me, as he described the problems of aligning the storage to the VMs. "A medium enterprise is understaffed for IT. They want price-performance." He added that Nutanix provides a 2U self-contained data centre.
He was referring to the 2U high x86 Nutanix box that converges the servers (compute) and data (storage) as a single data centre infrastructure. One building block has four nodes, each with 5TB of disk and 0.5TB of flash (SSD).
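The scaling arithmetic implied by those figures can be sketched quickly - this is a back-of-the-envelope calculation using only the numbers quoted above (four nodes per 2U block, 5TB disk and 0.5TB flash per node), not anything from Nutanix's own tooling:

```python
# Raw-capacity arithmetic for the building-block figures quoted above.
NODES_PER_BLOCK = 4     # four nodes per 2U appliance
DISK_TB_PER_NODE = 5.0  # 5TB hard disk per node
FLASH_TB_PER_NODE = 0.5 # 0.5TB flash (SSD) per node

def cluster_capacity(blocks: int) -> dict:
    """Raw capacity and rack space for a given number of 2U building blocks."""
    nodes = blocks * NODES_PER_BLOCK
    return {
        "nodes": nodes,
        "disk_tb": nodes * DISK_TB_PER_NODE,
        "flash_tb": nodes * FLASH_TB_PER_NODE,
        "rack_units": blocks * 2,
    }

print(cluster_capacity(1))   # one block: 4 nodes, 20TB disk, 2TB flash, in 2U
print(cluster_capacity(10))  # ten blocks: 40 nodes, 200TB disk, 20TB flash
```

So a single block packs 20TB of disk and 2TB of flash into 2U, and each additional block adds the same again.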
"Normally, the VM to storage is five hops away. But [with Nutanix] the network has a smaller role," said Pandey. This is because the server-storage network traffic is self-contained, which by itself has a big impact on performance.
The company claims a 40-60% reduction in capital expenditure (CAPEX) just by eliminating the network. Operational expenditure (OPEX) savings come from reduced admin (with a simplified top-down VM management view), and lower power and cooling costs.
Performance is optimised partly by careful use of flash. Fusion-io PCIe flash is used for metadata and for the most active data; Intel SSDs with a SATA interface are used primarily to fast-boot each VM host, and high-capacity 7,200rpm SATA drives take the rest of the storage. The cluster's converged backplane/frontplane interconnect is 10 Gigabit Ethernet, with all switching virtualised in software.
However, the scale-out capability is probably the one thing that could make this a truly disruptive architecture, as Pandey claims. Scale-out performance is linear from very small configurations up to thousands of VMs - far beyond what other scale-out approaches support.
Flash-based metadata manages all the storage, which is pooled across the server nodes and appears to the VMs as iSCSI-accessed vDisks - each one dedicated to a single VM. Each node incorporates both flash and hard disk and runs its own scale-out storage controller.
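The one-vDisk-per-VM model described above can be illustrated with a minimal sketch - this is purely an illustration of the mapping, not Nutanix's implementation, and the pool name, class and IQN format are invented for the example:

```python
# Minimal model of a pooled store carved into per-VM vDisks, each exposed
# over iSCSI to exactly one VM. Illustrative only - not a real iSCSI stack.
class StoragePool:
    def __init__(self, capacity_tb: float):
        self.capacity_tb = capacity_tb
        self.vdisks = {}  # vm_name -> allocated TB

    def create_vdisk(self, vm_name: str, size_tb: float) -> str:
        """Allocate a dedicated vDisk for one VM; return an iSCSI target name."""
        if vm_name in self.vdisks:
            raise ValueError(f"{vm_name} already has a dedicated vDisk")
        if sum(self.vdisks.values()) + size_tb > self.capacity_tb:
            raise ValueError("pool exhausted")
        self.vdisks[vm_name] = size_tb
        # Hypothetical IQN-style name, for illustration only.
        return f"iqn.example.pool:{vm_name}"

pool = StoragePool(capacity_tb=20.0)  # one building block's worth of disk
print(pool.create_vdisk("web01", 0.5))  # iqn.example.pool:web01
```

The point of the design is that the mapping is one-to-one: each VM sees a private block device, while the capacity behind it remains pooled across nodes.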
The cluster itself has out-of-the-box features to get a business up and running from scratch in 30 minutes, with a similar time needed to add another building block.
All in all, I can see Nutanix Complete Clusters being very attractive to data centres and private clouds, especially in medium enterprises, but facing an uphill struggle to penetrate large EMC or NetApp sites - even though the clusters can coexist in such environments and might grow there by proving to offer better value. So Nutanix, which is just starting out in Europe and growing fast in the US, is definitely a company to watch.