Is dedicated storage still the way to go, or can my server take over this role?

For most IT pros working in small and mid-size businesses, ‘normal’ storage means either direct-attached disks or RAID units, or a disk array presented as a NAS or SAN system. This is then presented to servers as file systems, disk volumes and logical partitions. Now, though, there is an alternative: software-defined storage (SDS). Whether implemented as a virtual SAN (VSAN) or in one of a number of other configurations, this involves layers of server-based software which abstract, or separate, the logical storage seen by servers and applications from the physical disks and other devices that actually store the data. Like many advanced technologies, software-defined storage started out as a large-enterprise play, but versions of it are now available to businesses of almost any size. Whether you choose SDS or a more traditional storage array will of course depend on a range of factors, so let’s look at some of the arguments for and against each approach.

Normal arrays

For: As the saying goes, “Better the devil you know than the devil you don’t.” This is familiar and mature technology: it may not be perfect, but most of us know how to deal with it. It is straightforward to implement, and well supported by operating systems and the management tools that come with them. In addition, the operational processes needed to run and maintain it are well understood and backed by years of experience, as are the tools and processes for data protection.

Against: The problem with maturity is that it can also signify technology that is past its prime. For example, RAID, the traditional way of protecting physical disks, is running out of steam as drives grow ever larger. RAID allows a failed drive’s data to be rebuilt from the other drives in its set, but rebuilding that data onto a new drive takes time. With ever-bigger drives, the rebuild time gets ever longer – and critically, during the rebuild the array is no longer redundant, so it is vulnerable to a further drive failure.
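To put rough numbers on the rebuild problem (the throughput figure is an illustrative assumption, not a vendor specification): a rebuild must write an entire replacement drive, so even the best-case time scales linearly with capacity. A minimal sketch:

```python
# Illustrative only: best-case RAID rebuild time vs drive capacity.
# The ~200 MB/s sustained hard-drive throughput is an assumed figure.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float = 200.0) -> float:
    """Minimum time to rewrite a whole drive at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / throughput_mb_s / 3600

for tb in (2, 8, 16):
    print(f"{tb:>2} TB drive: ~{rebuild_hours(tb):.1f} h minimum rebuild")
```

In practice rebuilds run slower still, because the array is serving application I/O at the same time – and that whole window is the period of vulnerability described above.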
A few storage arrays offer a fast-rebuild option that significantly reduces the recovery time, but they are the exceptions. Others use alternative RAID schemes to preserve redundancy, for example by using two extra drives per set instead of one, but these don’t solve the problem of ever-longer rebuilds – and of course application performance will suffer while the array is busy rebuilding.

There are other limiting factors on traditional disks and arrays too. For example, you typically pay a premium for enterprise-grade storage, meaning the most reliable drives. And most, though not all, vendors will also charge extra if an SMB wants enterprise-class features such as automation and high availability on an entry-level storage system.

Software-defined

For: Because SDS abstracts the physical storage from its logical presentation, typically by assembling virtual volumes from a pool of physical blocks, it removes many of the traditional physical limitations. Most notably, like server virtualisation, it makes the underlying hardware invisible to the servers. And because the virtual volumes are really just software, they can include other software-based elements, such as data replication, which means that consumer-grade hardware running SDS can provide enterprise-grade availability and performance. It also steps around the RAID rebuild problem: fault tolerance is handled at a deeper level, via error correction within the storage pool, so there is little need for redundancy at the virtual-volume level. The underlying storage can be geographically distributed too, bringing resilience benefits, and there is no requirement for it to be homogeneous, which means new technology can be brought in and old technology retired without downtime. One other consequence of storage being defined in software is, of course, that you can also re-define it in software.
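One way to picture the pooling just described (a toy model, not any vendor’s implementation – all names and sizes here are invented): blocks from many physical devices go into a single free pool, and virtual volumes are allocated from that pool without caring which device each block lives on.

```python
# Toy model of SDS-style pooling; purely illustrative.

class StoragePool:
    """Free blocks pooled from many devices; volumes draw from the pool."""

    def __init__(self):
        self.free = []          # list of (device, block_index)
        self.volumes = {}       # volume name -> list of allocated blocks

    def add_device(self, name: str, blocks: int):
        # A new (or replacement) device simply adds blocks to the pool:
        # no downtime, and no requirement that devices match each other.
        self.free.extend((name, i) for i in range(blocks))

    def create_volume(self, name: str, blocks: int):
        if blocks > len(self.free):
            raise ValueError("pool exhausted")
        self.volumes[name] = [self.free.pop() for _ in range(blocks)]

pool = StoragePool()
pool.add_device("ssd0", 100)
pool.add_device("hdd3", 400)      # mixed hardware in one pool
pool.create_volume("vm-images", 250)
print(len(pool.free))             # blocks remaining after allocation
```

Because the volume is just a software mapping, re-defining the storage – resizing a volume, retiring a device, changing where blocks live – becomes an operation on these structures rather than on hardware.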
Whether it is changing the protection level, upgrading or downgrading performance, or moving a virtual machine, it all becomes easier when it can be managed entirely in software, with no need to adjust hardware parameters, physically move data, and so on.

Against: SDS makes things simpler for the user or administrator, but it does so by adding complexity under the surface. There are more layers of software, for instance, and therefore potentially more latency. Fast processors mean this is not a problem under normal circumstances, but as scale grows there is a risk that latency becomes inconsistent. The complex interactions between the different software layers also mean there is more to go wrong, especially as the management capabilities are still evolving. However, the technology will mature, and even if you don’t have the skills in-house today, a good storage partner can help you develop them. SDS is new and scary, just as virtual servers were a few years ago, but VMs are now well understood and mainstream. SDS will win friends in time, as the people who work with storage find projects where they can try it out, and discover that ‘virtual’ really can do the same job as ‘physical’.


Bryan is a technology enthusiast and industry veteran. He has been analysing, explaining and writing about IT and business in a highly engaging manner for around three decades. His experience spans the early days of minicomputers and PC technology, through the emergence of cellular data and smart mobile devices, to the latest developments of the software-defined age in which we all live today. Over his career, Bryan has seen at first hand how IT changes the world – and how the world changes IT – and he brings that extensive insight to his role as an industry analyst.
