The real value of NVMe is end-to-end
September 17, 2019

Analyst Blog

Although much is made of NVMe’s role as the modern successor to SAS and SATA, in truth it is a lot more than that. Yes, it is a faster drive interface, with layers of software latency stripped out and no concessions made for legacy rotating media.

But that’s like the early days of SSDs, when Flash storage was simply used to emulate a hard disk. You can get a useful speed boost that way, but nowhere near as much as when you use Flash as Flash, without asking it to pretend to be something else!

So just as Flash really started to shine once it was built into devices such as All-Flash Arrays, with the disk metaphor largely discarded, the real value of NVMe will come through as we stop focusing on local storage and take it end-to-end.

It’s still very useful as a local storage interface, of course, and NVMe storage for a server (or a PC) is relatively cheap, with a shrinking price premium over a SAS/SATA SSD of the same size and quality. Increasingly, NVMe is becoming the norm for local SSDs in many use cases. However, the reality is that in almost every respect, an NVMe SSD is still just a faster local hard drive.

See the bigger picture

NVMe can be so much more than that, though. Shared low-latency storage that bridges the performance gap between memory and disk drives enables whole new ways of thinking when it comes to designing and building applications and services. That means we need to think about the whole solution, not just what’s inside the server. Besides, what’s inside a server is usually accessible only to that server (although there are proprietary schemes to virtualize, share and pool direct-attached NVMe).

In addition, there is the question of the weakest link in the chain: where is it? Except in a few cases – applications that are especially sensitive to latency, for example – it is unlikely to be the direct-attached storage in the servers. Speeding that up might bring some nice benefits in terms of boot speed and server performance, but those rarely amount to solid business benefits. It’s far more likely that the bottleneck will be somewhere else in the networking and storage infrastructure.

There are caveats, of course, not least that when we talk about end-to-end NVMe we are very likely to be talking about NVMe over Fabrics (NVMe-oF), and that imposes requirements of its own. In particular, it may need a new or upgraded network fabric – but once that’s in place it is available to all servers on the network, with no need to open up each server and install new cards or storage devices.

And fundamentally, as our research repeatedly shows, if your current infrastructure isn’t adequate then it’s better to build anew wherever possible, rather than trying to expand and stretch what you already have in place. If you can’t refresh everything in one go, put in as much new as you can manage, then move apps over as it becomes necessary and possible.

Re-use what you can, of course, including the Ethernet, Fibre Channel or InfiniBand cabling if it’s sufficiently up to date. However, you may well need to renew some or all of the switching infrastructure and interface cards to support your chosen flavor of NVMe-oF, which will most likely also require additional protocols such as RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).
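To make that a little more concrete, here is a minimal sketch of what connecting a server to a shared NVMe-oF target can look like in practice, driving the standard Linux nvme-cli utility from Python. It assumes a Linux host with nvme-cli installed and an RDMA-capable (RoCE or iWARP) path to the target; the address, port and subsystem name (NQN) shown are purely illustrative, not a specific product or recommended configuration.

    # Minimal sketch: discover and connect to a hypothetical NVMe-oF target
    # over an RDMA transport, by calling the nvme-cli utility from Python.
    # The target address, port and NQN below are illustrative only.
    import subprocess

    TARGET_ADDR = "192.168.10.20"                    # example storage target portal
    TARGET_PORT = "4420"                             # default NVMe-oF service port
    TARGET_NQN = "nqn.2019-09.com.example:array1"    # example subsystem NQN

    def run(cmd):
        """Run a command, print its output, and raise if it fails."""
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        print(result.stdout)

    # Ask the target which NVMe subsystems it exports over the RDMA transport.
    run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT])

    # Connect to one subsystem; its namespaces then appear locally as /dev/nvmeXnY,
    # just like a direct-attached NVMe drive.
    run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
         "-a", TARGET_ADDR, "-s", TARGET_PORT])

The point is not the commands themselves but what they imply: once the fabric is in place, presenting shared NVMe capacity to another server is a connection operation, not a hardware upgrade.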

All that should allow you to bring in new hardware, software and systems as necessary. As ever, the key is to think ahead and, whenever possible, build for the future rather than rebuilding the past. End-to-end NVMe should be part of that future; the only question is when.

Other blog posts in the series

Are you ready for NVMe?

Storage investment woes

