Storage technology trends to veer off course in 2013?
By: Peter Williams, Practice Leader - IT Infrastructure Mgmt., Bloor Research
Published: 2nd January 2013
Copyright Bloor Research © 2013

At this time of year analysts often predict the technologies and trends to watch for the year ahead, but I am taking a slightly different slant on data storage in 2013, offering another perspective on how things may not pan out as most expect.

Solid State Disk (SSD) technology will still move on apace in 2013 (with, for instance, multi-level cell (MLC) cache pushing up capacities); no surprise there. For now, SSD (flash) arrays will continue to be accessed as though they are spinning disk, purchased for tier 0/1 performance; the decision to buy will be based on up-front price (still high), running costs and space (lower than spinning disk). Yet this is a short-term phenomenon. The real SSD prize is flash being widely accessed as 'memory' (which in truth it is), bypassing multiple layers of superfluous and complex disk access. It will be simpler to manage and inherently faster. (If you see the name 'server-side cache', it refers to this way of storing data.) Be aware that the varieties of flash best suited to 'pretend' disk storage may not be best for use as 'memory', so cannot simply be migrated from one role to the other.
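The difference between going through a disk-style I/O stack and addressing storage directly as memory can be loosely illustrated with a memory-mapped file. This is a sketch only: an ordinary file stands in for a flash device, and `mmap` stands in for the memory-style access path.

```python
import mmap
import os
import tempfile

# An ordinary file standing in for a flash device (illustrative only).
path = os.path.join(tempfile.mkdtemp(), "flash.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Disk-style access: every read is a seek plus a read() call through
# the file I/O stack.
with open(path, "rb") as f:
    f.seek(128)
    block_view = f.read(16)

# Memory-style access: the file is mapped into the address space once,
# then bytes are addressed directly, like RAM -- no per-access read().
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)
    mem[128:144] = b"persistent data!"   # write as if to memory
    direct_view = mem[128:144]           # read as if from memory
    mem.close()

print(direct_view)  # b'persistent data!'
```

Once mapped, the data is simply addressed; there is no per-access traversal of the block layer, which is the simplification (and speed) the 'memory' model promises.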

Big Data concerns how data is accessed for analytical purposes and is a response to the data explosion of recent times; now, comparatively small enterprises may need access techniques previously available only to the largest multi-nationals. This broadens the market for analytics software, and new access techniques are also being introduced. Yet a dichotomy remains which will probably mean Big Data fails to fulfil its 2013 expectations. Enterprises want more flexible access to their main data as part of becoming more agile, so as to react to business changes, but they need this data in a different format to that required for analytics (as has always been true). So data still needs to be extracted, transformed and loaded (ETL) into some variety of data warehouse or mart before meaningful analysis takes place. I dream of the day when Big Data deals in real time with a single agile dataset.
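The extract-transform-load step described above can be sketched minimally as follows. The data, field names and the monthly "mart" are all hypothetical; the point is only that operational records must be reshaped before analysis, rather than being analysed in place.

```python
from collections import defaultdict
from datetime import date

# Extract: order rows as an operational system might hold them
# (hypothetical data for illustration).
operational_orders = [
    {"order_id": 1, "sold_on": date(2013, 1, 5),  "pence": 1250},
    {"order_id": 2, "sold_on": date(2013, 1, 20), "pence": 800},
    {"order_id": 3, "sold_on": date(2013, 2, 2),  "pence": 4300},
]

# Transform: convert units and derive the analysis dimension (month).
transformed = [
    {"month": row["sold_on"].strftime("%Y-%m"), "pounds": row["pence"] / 100}
    for row in operational_orders
]

# Load: aggregate into an in-memory "mart", ready for analysis.
mart = defaultdict(float)
for row in transformed:
    mart[row["month"]] += row["pounds"]

print(dict(mart))  # {'2013-01': 20.5, '2013-02': 43.0}
```

Every pass through this pipeline is work (and delay) that a single agile dataset would make unnecessary.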

Cloud storage usage is growing but adoption is not great; the main reason is companies' concern about the risks of losing control of their data storage - and this will persist in 2013. For the same reason, the biggest take-up is of in-house or private cloud storage, yet clouds controlled in-house are little more than virtualised storage, often complicated by the variety of legacy storage hardware that must be accessed under the covers. Public clouds, of necessity, need stringent firewall-type technology to isolate each firm's data - but are less burdened by legacy equipment. They can also offer backup, remote replication and disaster recovery (DR) services - the latter obviating the costly need for firms to maintain their own DR sites. (Cloud providers should also be well placed to exploit SSD memory.) So public clouds should grow faster in 2013, but the test for providers will be to demonstrate more convincingly that an enterprise's data is at least as secure as it is in-house.

Backup and recovery advances have been many, not least to keep pace with data growth against shrinking or non-existent backup windows. Yet the picture is confused and solutions invariably piecemeal: a) changes to existing backup solutions handle the typical mix of physical and virtual storage, b) snapshots in some cases now substitute for main data to speed the process, c) de-duplication solutions massively shrink the backup data footprint (though de-duping primary data as it is received is not yet widely adopted), d) this enables tape to be relegated to deep archive, and e) WAN optimisation greatly speeds remote data transmission. Yet this is complicated, expensive and involves multiple vendors - which users do not like. Moreover, it tackles the symptom rather than the underlying problem: that 90% of the data being backed up (and recovered) has no benefit to the business (albeit identifying which 90% is hard). It is high time someone tackled the underlying cause and, unless and until they do, backup vendors can expect only low-value 'point' solution sales to keep enterprises ticking over.
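The de-duplication idea in item c) can be sketched with content-addressed chunks: each chunk is stored once under its hash, so a second backup that repeats most of the first writes almost nothing. This is a toy illustration, not any vendor's implementation; chunk sizes and the repository are invented for the example.

```python
import hashlib

def dedupe_backup(chunks, store):
    """Store only chunks not already present; return (manifest, bytes written).

    `store` maps SHA-256 digest -> chunk, standing in for a backup repository.
    The manifest of digests is enough to reconstruct the backup stream.
    """
    written = 0
    manifest = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # only new content costs space
            store[digest] = chunk
            written += len(chunk)
        manifest.append(digest)
    return manifest, written

# Two nightly backups that share most of their data (hypothetical chunks).
night1 = [b"block-A", b"block-B", b"block-C"]
night2 = [b"block-A", b"block-B", b"block-D"]  # only one chunk changed

store = {}
_, w1 = dedupe_backup(night1, store)
_, w2 = dedupe_backup(night2, store)
print(w1, w2)  # 21 7 -- the second night writes only the changed chunk
```

The footprint shrinks because unchanged chunks cost nothing on subsequent nights; applied to primary data as it arrives, the same idea would attack the problem further upstream.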

This last item highlights a fundamental difference between servers and storage. The number of physical servers needs to expand only in line with business expansion, but storage capacity is still skyrocketing out of control. For years I have suggested that resources need re-directing towards software that examines and removes unwanted data at every stage from the moment it is received, to achieve a similar equilibrium: the amount of data newly stored becomes matched by the amount being removed (at the very least to off-line archive "just in case"). This would interface with policy management software which sets the rules for data removal and archiving. By this means storage costs would plummet and stay low, energy costs would be kept under control, capacity planning would become a doddle, and risky "emergency" technology purchase decisions would become a thing of the past.
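The kind of policy-driven disposition described above might look something like this in outline. The age thresholds, rule table and actions are all invented for illustration; the point is that simple, ordered rules decide what stays on primary storage, what goes to off-line archive, and what is removed.

```python
from datetime import date

# Hypothetical policy rules: (maximum age in days, action), checked in order.
POLICY = [
    (90,   "keep"),      # recent data stays on primary storage
    (365,  "archive"),   # older data moves to off-line archive "just in case"
    (None, "remove"),    # anything older is deleted outright
]

def disposition(last_used, today):
    """Apply the policy rules to one item and return the action to take."""
    age = (today - last_used).days
    for max_age, action in POLICY:
        if max_age is None or age <= max_age:
            return action
    return "keep"  # fall-back: never lose data by default

today = date(2013, 1, 2)
print(disposition(date(2012, 12, 1), today))  # keep
print(disposition(date(2012, 6, 1), today))   # archive
print(disposition(date(2010, 1, 1), today))   # remove
```

Run continuously against incoming data, rules of this kind are what would keep the stored total in equilibrium rather than skyrocketing.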

I do not expect anyone to truly crack this for the benefit of businesses the world over in 2013 - but if any company does, they deserve to make a killing.

Here's to 2013 being a year of pleasant surprises for storage users.


Published by: IT Analysis Communications Ltd.
T: +44 (0)190 888 0760 | F: +44 (0)190 888 0761