Published/updated: February 2015
By Dale Vile
The flash storage market has been developing rapidly, with solutions maturing, prices coming down, and both niche and established vendors competing aggressively. In order to make objective decisions, it's important to understand the nature of the technology, the options available, and the right questions to ask.
A time of rapid change
Storage solutions are evolving rapidly with new offerings coming to market very quickly. Keeping yourself up to date with what's available and how emerging technologies can help within your business takes time and effort. This is especially the case when new storage solutions appear to take the world by storm, as has happened with flash.
When flash and SSD (solid state disk) based systems first became widely available they were perceived to be expensive and to have a short usable lifetime, making them suitable for only a few specific purposes. Initially these were applications requiring very high performance, notably desktop virtualisation, high-transaction applications and some real-time analytics systems.
Today things have changed dramatically, even to the point where some vendors claim that developments in flash mean it is now an economically effective choice for any type of workload. Some make this argument very convincingly based on core technology attributes, but there's a lot more to making investment decisions in the real world, where things like management, integration, openness, future-proofing and return on investment also need to be considered. There is then the fact that other storage technologies such as disk and tape are evolving quickly too.
It's against this background that we look at the role of flash storage in the modern data centre, and examine some of the practicalities you need to consider when evaluating options and specifying solutions.
Back to basics
Before continuing, it's worth quickly reviewing some of the basics in relation to flash.
As has been mentioned, the most prominent attribute of flash technology is speed, which manifests itself in two ways. Firstly, flash is capable of supporting much higher rates of data reads and writes than traditional disk, and hence enables significantly greater raw throughput. This has obvious benefits in terms of scalability.
Perhaps more importantly, flash delivers much lower latency, which basically means dramatically better response times. This can make a huge difference when the user experience is dependent on the storage system responding quickly to requests for data. Virtual desktops are a good example, but users will perceive an enhanced experience across a broad range of applications if you reduce system lag.
Lower latency can also improve backend efficiency. A good example here is reducing the amount of time that a server CPU is sitting idle (but tied up) waiting for reads and writes to complete in a relational database environment. Alleviating this bottleneck can allow the same work to be done with fewer processors and cores, which potentially reduces both server hardware costs and database software license fees.
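To make that server-side saving concrete, here is a rough back-of-the-envelope sketch in Python. All of the figures (I/O wait fractions, workload size) are assumptions chosen purely for illustration, not measurements from any particular system:

```python
# Illustrative only: how I/O wait affects the number of database cores
# needed for the same useful work. All figures are assumed, not measured.
def cores_needed(useful_core_seconds, io_wait_fraction):
    """Cores required when each core does useful work only while not waiting on I/O."""
    effective_throughput_per_core = 1.0 - io_wait_fraction
    return useful_core_seconds / effective_throughput_per_core

work = 12.0  # assumed core-seconds of useful work needed per second

print(cores_needed(work, io_wait_fraction=0.40))  # high-latency disk: 20 cores
print(cores_needed(work, io_wait_fraction=0.04))  # low-latency flash: 12.5 cores
```

Since database software is often licensed per core, a reduction on this scale flows through to licence fees as well as hardware cost.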
The other significant advantage of flash is that performance doesn't degrade as you load more data onto a device. Unlike traditional disk technologies, you can therefore use pretty much all of the 'advertised' space available; you don't need to worry about leaving enough spare capacity to avoid things slowing down.
In terms of developments, the significant advances in flash are not to do with speed. Early experience demonstrated that in most deployment scenarios, throughput and latency levels were more than adequate, sometimes even overkill. If anything, flash manufacturers have throttled back on raw speed in favour of increased durability for solutions aimed at mainstream use.
The most important advances in flash are to do with the way in which systems are architected and controlled. Modern controllers and management software have played a big part in achieving better durability and thus increased lifetime of devices. As a simple example, more effective techniques have been developed for ensuring that the load placed on a device is spread as broadly as possible across it to avoid failures resulting from hot spot related wear and tear.
Such advances have in turn allowed manufacturers to reduce the amount of hidden spare capacity incorporated into a flash drive to accommodate cell failures and provide space for system level housekeeping. A year or two ago, for example, your flash drive might have been shipped with up to 100% more capacity than you ordered, which would have been invisible to you but would still have added to the cost. Today the level of capacity over-provisioning has typically dropped to less than 30% (even as low as 10%) with the latest generation of devices.
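The arithmetic behind those over-provisioning figures can be sketched as follows, applied to a hypothetical 1 TB drive; the percentages are the ones quoted above:

```python
# Illustrative sketch: how much raw flash must be manufactured for a given
# advertised capacity at different over-provisioning levels.
def raw_capacity_needed(advertised_tb, overprovision_fraction):
    """Raw flash shipped = advertised capacity plus hidden spare cells."""
    return advertised_tb * (1 + overprovision_fraction)

print(raw_capacity_needed(1.0, 1.00))  # 2.0 TB raw per 1 TB sold (older ~100% level)
print(raw_capacity_needed(1.0, 0.30))  # 1.3 TB raw at 30% over-provisioning
print(raw_capacity_needed(1.0, 0.10))  # 1.1 TB raw at 10%
```

Since the customer pays for the raw flash one way or another, halving the hidden spare capacity takes a large slice out of the cost per usable terabyte.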
This is one of the factors behind the considerable drop in prices we have seen in the flash marketplace over the past year or so. Reduced manufacturing costs as volumes have increased, along with the incorporation of data compression and de-duplication into flash configurations, have further changed the economics of flash, making it much more cost-effective than it used to be even a short while ago.
Most common deployment options
Flash has found its way into storage systems in a number of different ways. The most popular at the time of writing include:
- Hybrid arrays, in which flash and traditional disk are combined and managed within a single system
- All-flash arrays, which are built exclusively around flash media
Laid out like this, it all looks very simple; it's easy to understand the fundamental differences in the high level shape of these configurations, and you can have a good guess at how they might vary in terms of performance characteristics. The problem is that the devil is in the detail, so it's important to look at how solutions have been implemented at the next level down.
Solution practicalities to consider
In a hybrid system, one of the main questions is how much the manufacturer has modified the array controller and associated software to treat flash and disk components differently from an access and management perspective. You can't just slot flash drives into a standard array and expect to get good durability and optimum performance and capacity utilisation. When talking to potential suppliers, it's therefore important to look for reassurance in these areas and understand any relevant constraints. Also look for adequate warranties on the flash components.
Beyond these basics, you need to understand whether compromises have been made in terms of the broader feature set of the array. Do capabilities such as automated quality of service management, data replication, data-set snapshotting and other advanced features work as you would expect, or even exist within the native feature set? Beware of manufacturers who have removed (or not developed) functionality to reduce the engineering effort required to get to market with a flash-enabled solution.
This last point is particularly pertinent to niche manufacturers of all-flash arrays, where the richness of functionality you will have become used to with enterprise class disk-based systems may not be there. The reality is that the flash market was initially driven predominantly by the 'need for speed', and early adopters were often willing to compromise the management feature set in exchange for raw performance increases. As mature storage vendors, more used to meeting enterprise management expectations, have entered the market, such trade-offs are rapidly becoming unnecessary if you work with the right suppliers.
The importance of this due diligence must not be underestimated. The chances are that you have a lot of established policies and processes that assume functionality in the array. If this is not there, there are implications in terms of re-engineering the way you do things, which can represent significant disruption, distraction and risk. There may even be additional direct costs, e.g. if you lose array-level replication capability, you may need to invest in replication tools that work further up the stack, which in turn increase management and systems complexity and overhead.
We have touched on the question of economics a number of times so far, but this is a topic that is worth looking at in more detail. From a pure capital cost perspective, flash typically looks more expensive, but this is only part of the story.
The inherent speed of flash means there is plenty of headroom to implement in-line compression and deduplication of data. As a conservative estimate, you can therefore assume that a flash-based system will deliver at least twice the usable capacity of a traditional disk array, i.e. 1 TB of physical flash will store the equivalent of 2 TB of disk; and that's without taking into account the need to keep disk utilisation below the level at which performance starts to degrade.
Of course compaction ratios can vary significantly depending on the type of data being handled, and some manufacturers exploit this by quoting extremely high ratios (sometimes even up to 10:1) without stating the obvious caveats. Nevertheless, the difference in effective storage capacity is generally significant, and this needs to be borne in mind when comparing options.
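As a simple illustration of why effective capacity matters when comparing options, the sketch below computes cost per usable terabyte. All prices, reduction ratios and utilisation limits are assumed figures for the sake of the arithmetic, not vendor data:

```python
# Hypothetical cost-per-usable-TB comparison. Every number here is an
# assumption for illustration only, not a real price or measured ratio.
def cost_per_usable_tb(price_per_raw_tb, data_reduction_ratio, max_utilisation):
    # Usable capacity per raw TB = reduction ratio x fraction you can safely fill.
    return price_per_raw_tb / (data_reduction_ratio * max_utilisation)

# Flash: assume 2:1 in-line compression/dedup and near-100% of space usable.
flash = cost_per_usable_tb(1000.0, 2.0, 1.0)
# Disk: no in-line reduction, kept below ~75% full to preserve performance.
disk = cost_per_usable_tb(300.0, 1.0, 0.75)

print(flash, disk)  # 500.0 vs 400.0: the headline price gap narrows considerably
```

Even with these deliberately conservative assumptions, a headline price difference of more than 3x shrinks to around 25% once effective capacity is factored in.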
From a management overhead perspective, all-flash arrays can look extremely attractive because you don't have to worry about data placement, utilisation-related performance degradation, and so on; you just need to make sure enough capacity is available to keep up with growth. However, advances in automated data placement/migration and 'lights out' quality of service management in traditional disk arrays and hybrid setups mean the difference here is minimal if you're comparing all-flash arrays with modern alternatives.
The same principle applies when it comes to performance. If you have an especially performance-sensitive application, particularly one that requires extremely low latency, then an all-flash array may be the obvious answer, even if it works out significantly more expensive. But with a big trend towards shared storage systems, which by definition mean very mixed workloads, a modern disk-based or hybrid configuration could deliver the performance required at a lower cost, with additional benefits that stem from a richer and more mature feature set.
If you broaden the scope of your calculations, itís then often possible to identify indirect benefits and cost savings that mitigate the difference in capital outlay. The above-mentioned reduction in server hardware and database license requirements as a result of reduced latency in certain scenarios is a good example of this. Another illustration of broader savings is when flash storage represents the difference between whether a VDI implementation is considered viable or not on user experience grounds. Savings here come in the form of reduced and simplified infrastructure on the desktop itself, and lower desktop administration overheads.
From the above it will be clear that there is no simple answer to the question of whether any particular type of storage is more or less cost-effective; it really depends on the context and what you're trying to achieve. With this in mind, the reality is that it's impossible to generalise when it comes to giving advice on where and how flash could and should be deployed. Like many things in enterprise IT, you need to assess your requirements, then look at the pros, cons and trade-offs in that context.
The bottom line
Flash technology is developing at a very fast pace. Prices are tumbling, solutions are maturing and the chances are that you will find opportunities to exploit offerings in this space in the not too distant future if you aren't doing so already.
However, it would be extremely surprising if you ever got to the point in the short to medium term of concluding that flash represents the answer to all of your storage problems. Indeed when you consider advances in disk, tape and even cloud-based storage options, and look to address the huge problem of how to manage longer term storage and archiving requirements against the backdrop of relentless data growth, it is clear that there is a potential role for all types and classes of solution. In a rapidly changing market, however, it is important to define your own requirements, conduct adequate due diligence, and work with the right suppliers.