Analysis

A flash in the SAN?
By: Clive Longbottom, Head of Research, Quocirca
Published: 22nd July 2013
Copyright Quocirca © 2013

Persistent storage has changed little over the years. Sure, there has been a move away from tape to disk, direct attached storage (DAS) has given way to storage area networks (SANs) and network attached storage (NAS), and disk drives have become larger and faster, but the fact remains that what we have been working with over the last few decades has been a disk-based subsystem.

As the move to a more virtualised, data-orientated technical platform has occurred, the problems with spinning magnetic disk-based systems have become more noticeable. Disk has struggled to keep pace with the input/output operations per second (IOPS) that modern workloads demand, and this, combined with other factors such as network interconnects and system latency, has made storage the main constraint in many compute platforms.
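
To put some rough numbers on the IOPS problem, a back-of-the-envelope calculation shows the ceiling on random IOPS for a single spinning disk. This is a minimal sketch: the spindle speed and average seek time are assumed figures for a typical enterprise drive, used purely for illustration.

    # Illustrative estimate of the random IOPS a single spinning disk can serve.
    # The 15,000 rpm spindle speed and 3.5 ms average seek time are assumptions,
    # not measurements of any particular product.
    AVG_SEEK_MS = 3.5
    RPM = 15_000
    rotational_latency_ms = 60_000 / RPM / 2   # on average, half a revolution
    service_time_ms = AVG_SEEK_MS + rotational_latency_ms
    iops_per_disk = 1_000 / service_time_ms
    print(f"Approximate random IOPS per disk: {iops_per_disk:.0f}")   # roughly 180

A few hundred random IOPS per spindle is easily swamped when a virtualised host mixes many workloads onto shared storage, which is why the disk subsystem so often becomes the bottleneck.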

Attempts have been made to overcome some of these issues. The main one has been the use of dynamic random access memory (DRAM) as a non-persistent means of dealing with some operations at high speed. This has worked at one level, as the price of DRAM has fallen and the likes of Oracle (with its TimesTen acquisition), SAP (with HANA), Pentaho and QlikTech have moved to in-memory capabilities. However, as DRAM is non-persistent, it is prone to data loss on failure, and it needs to be backed up with battery or other power capabilities (such as supercapacitors) so that data in memory can be flushed to persistent storage should there be a problem. The other problem is knowing what to load into DRAM: the growth in data volumes is still outstripping the amount of DRAM that can be thrown at the data, and getting the in-memory content wrong still requires fast disk behind it in order to support swapping data in and out, along with very intelligent software.
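
The trade-off can be sketched with a toy write-through cache; this is a hypothetical design for illustration, not any vendor's product. Reads of data already held in DRAM are fast, every write still has to reach persistent storage to survive a failure, and a read of anything that was not loaded into memory falls back to the slower persistent tier.

    from collections import OrderedDict

    # Minimal sketch of a write-through DRAM cache in front of persistent storage.
    # Real in-memory databases are far more sophisticated; this only shows the
    # shape of the problem described above.
    class WriteThroughCache:
        def __init__(self, capacity, backing_store):
            self.capacity = capacity        # how many items fit in DRAM
            self.backing = backing_store    # persistent (disk or flash) store
            self.cache = OrderedDict()      # LRU order: oldest entry first

        def write(self, key, value):
            self.backing[key] = value       # always persisted, survives power loss
            self._cache_put(key, value)

        def read(self, key):
            if key in self.cache:           # hot data: served at DRAM speed
                self.cache.move_to_end(key)
                return self.cache[key]
            value = self.backing[key]       # cold data: pay the disk/flash penalty
            self._cache_put(key, value)     # and swap it into DRAM
            return value

        def _cache_put(self, key, value):
            self.cache[key] = value
            self.cache.move_to_end(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used item

    backing = {}
    cache = WriteThroughCache(capacity=2, backing_store=backing)
    cache.write("a", 1); cache.write("b", 2); cache.write("c", 3)
    print("a" in cache.cache)   # False: "a" was evicted from DRAM
    print(cache.read("a"))      # 1, fetched back from the slower backing store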

The biggest change over the last couple of years has been the move to flash-based systems. Here, the persistent storage is no longer based on spinning disk, but is housed in the same sort of memory modules as are found in every mobile phone, tablet, camera and other consumer device with storage requirements. This memory, called flash memory, is quick: there is no mechanical latency in finding the data, unlike a spinning disk, where the read head has to wait until the data passes under it before it can be read. Data can be pulled from anywhere in a flash device at any time, and at a basic level it is therefore not much slower than DRAM.

To start with, however, flash memory had a few problems. It was expensive and could only be produced in small capacities. Longevity was also questionable: flash cells deteriorate in a completely different way to magnetic disk, and the way in which data writes are spread across the whole device (wear levelling) is critical to ensuring a suitable working life. Even so, for certain storage workloads it seemed to make sense.
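
Wear levelling can be illustrated with a minimal sketch. The policy below (always write to the least-worn block) is a simplification: real flash controllers also handle garbage collection, over-provisioning and bad-block management.

    # Toy wear-levelling sketch: each write goes to the block with the fewest
    # program/erase cycles, so that no single flash block is exhausted early.
    NUM_BLOCKS = 8
    erase_counts = [0] * NUM_BLOCKS

    def wear_levelled_write():
        block = erase_counts.index(min(erase_counts))   # pick the least-worn block
        erase_counts[block] += 1
        return block

    for _ in range(80):
        wear_levelled_write()

    print(erase_counts)   # [10, 10, 10, 10, 10, 10, 10, 10]: wear spread evenly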

With standard storage tiering, there has been the concept of tier 1 (the latest, fastest disks) supporting the 'hot' data that is used the most and needs the fastest possible IOPS and throughput. Behind this would sit progressively slower disk systems (tiers 2 and 3) and, in many cases, tape for longer-term archival storage, dealing with data that was not so mission critical. However, this also requires intelligent data management; some systems were less intelligent than others, and the act of managing the data's position across the tiers could itself result in an overall slower system than before.
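
A highly simplified sketch of that data management, assuming a basic access-count policy, is shown below; real tiering engines also weigh recency, I/O size and cost, and move data asynchronously in the background.

    from collections import Counter

    # Toy tiering policy: the most frequently accessed extents are promoted to
    # tier 1, everything else stays on the slower tiers. Illustration only.
    TIER1_CAPACITY = 2                  # number of extents tier 1 can hold
    access_counts = Counter()

    def record_io(extent):
        access_counts[extent] += 1

    def rebalance():
        hot = [e for e, _ in access_counts.most_common(TIER1_CAPACITY)]
        cold = [e for e in access_counts if e not in hot]
        return hot, cold

    for extent in ["db-log", "db-log", "db-log", "vm-boot", "archive", "vm-boot"]:
        record_io(extent)

    tier1, tier2 = rebalance()
    print("tier 1 (fast):", tier1)      # ['db-log', 'vm-boot']
    print("tier 2 (slow):", tier2)      # ['archive']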

Replacing tier 1 spinning disk with like-for-like flash-based solid state drives (SSDs) seemed to be the answer: tier 1 storage becomes SSD, tier 2 the fastest spinning disk, and so on. For some, this still was not fast enough, and a 'tier 0' persistent storage layer was introduced, using on-board PCIe flash cards (as offered by Fusion-io and Violin Memory, amongst others) to bring the data as close to the CPU as possible. However, this means that tier 0 storage is dedicated to that CPU; it is not possible to virtualise this layer and, as such, it really serves only a very small niche of the high-performance data market.

SSDs as tier 1 storage do give massive performance improvements, and are becoming common in the disk subsystems sold by the main vendors, including HP, Dell and HDS. It is getting difficult to find a disk subsystem that does not include SSDs as part of its offering, and newer vendors such as Tintri and Tegile have come to market with hybrid flash/spinning disk systems aimed at specific workloads. EMC's acquisition of XtremIO gives it the capability to play in the hybrid spinning disk/SSD market as well as in the pure SSD array market.

Other vendors are making a play for all-flash SSD-based systems. For example, PureStorage states that, through intelligent use of consumer-grade flash, it can offer a complete flash-based system more cheaply than others can offer similar capacity with spinning disk, but with the massive uplift in performance that flash provides. SolidFire and Greenbytes have similar approaches.

But SSDs are still based on a disk approach, and this brings with it some of the problems that spinning disks have. Starting from a clean sheet and designing a flash-based storage array from the ground up should provide the capability to really optimise flash. This is the approach that Violin Memory has taken, a company that will be covered in further detail in another post. Engineered flash-based arrays are also why IBM acquired Texas Memory Systems: it believes that the longer-term future lies in such arrays, rather than in spinning-disk-replacement SSDs.

Flash storage is coming: vendors are moving fast to use flash for part or all of their storage subsystems. Understanding the differences between the various flash-based systems available, and how they need to work together, will be key.
