We need to think differently about data migration. It used to be something that happened mainly at end-of-life: a system was being retired and replaced, so its data had to be moved to its successor – a task that could be a right pain, requiring specialist conversion software and considerable work, especially if the new system ran different application software.
Today, the key words are no longer data migration; instead they are data movement and, increasingly, data mobility. When commercial computing started, data was intrinsic to the system that held it. Now, data can be an organization’s most valuable property, and its use is no longer restricted to the system and application that spawned it.
To put it another way, where migration used to mean a specific project – a movement from A to B – mobility is an ongoing process or requirement. We’ve generated that data, it has business value, and we want – regulations and governance permitting, of course! – to use it in other places and for other purposes.
The problem, of course, is that “data has gravity”, as the saying goes – it is slow to move, and hard to move while it is in use. One of the places where this can still bring major pain is virtual machine (VM) migration.
Initially, VM migration was an awkward, one-way process, because each VM’s network and storage connections had to be reconfigured after the move.
That all changed with the invention of the ‘live migration’ technology we’re used to today. When this first appeared, it was ground-breaking. Need to reduce overloading on a host server or take it down for maintenance? Simply move its VMs to another host, without the users even noticing.
And live migration technology has continued to advance – now we expect VMs (and of course containers) to be truly virtualized and mobile. Orchestration software automatically moves them around for load-balancing, or fails over the VMs from a dead server to a hot spare or to others in its pool. It can even move them automatically to a public cloud, for example if there’s a shortage of on-site capacity or for disaster recovery.
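To make that concrete, here is a minimal sketch of the kind of placement decision an orchestrator makes on fail-over. The host names, VM names and memory-only capacity model are all hypothetical, invented for illustration – real orchestrators weigh many more factors:

```python
# Minimal sketch of an orchestrator's fail-over placement logic.
# Host records, VM names and the memory-only capacity model are hypothetical.

def free_capacity(host):
    """Unused memory (GB) remaining on a host."""
    return host["mem_total"] - sum(vm["mem"] for vm in host["vms"])

def fail_over(dead_host, pool):
    """Re-place every VM from dead_host onto whichever live host has the most room."""
    # Place the biggest VMs first, so they get the emptiest hosts.
    for vm in sorted(dead_host["vms"], key=lambda v: v["mem"], reverse=True):
        target = max(pool, key=free_capacity)
        if free_capacity(target) < vm["mem"]:
            raise RuntimeError("pool has no room for " + vm["name"])
        target["vms"].append(vm)
    dead_host["vms"] = []

pool = [
    {"name": "hostB", "mem_total": 64, "vms": [{"name": "b1", "mem": 8}]},
    {"name": "hostC", "mem_total": 64, "vms": []},
]
dead = {"name": "hostA", "mem_total": 64,
        "vms": [{"name": "a1", "mem": 16}, {"name": "a2", "mem": 8}]}
fail_over(dead, pool)
# The 16 GB VM lands on the emptiest host; the smaller one fills in elsewhere.
```

The point of the sketch is how little the compute side of the decision involves: pick a host, move the VM. The hard part, as the next paragraph notes, is the data underneath.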
The sticking point remains the physical data. It’s one thing to seamlessly move a VM to another data center or into a cloud, but if its virtualized storage adapter is still linking back to its original physical data halfway around the world, then the move could make things worse, not better.
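Some rough arithmetic shows why. All the figures below are illustrative assumptions rather than measurements, but physics alone puts a floor under remote I/O latency – light in optical fibre covers roughly 200,000 km per second:

```python
# Back-of-envelope latency for storage left "halfway around the world".
# All figures are rough assumptions for illustration.

fibre_speed_km_s = 200_000   # light in glass: about two-thirds of c
path_km = 15_000             # roughly halfway around the world
local_nvme_s = 0.0001        # ~100 microseconds for a local NVMe read

rtt_s = 2 * path_km / fibre_speed_km_s   # 0.15 s per round trip
print(f"remote I/O is ~{rtt_s / local_nvme_s:.0f}x slower than local")
```

Even before queuing and protocol overheads, every synchronous I/O pays that round trip – which is why moving the compute without moving the data can make things worse.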
At the storage layer, we have tools to enable data mobility to varying degrees, from API-managed and software-defined infrastructures through scalable storage to faster connectivity schemes such as RoCE, NVMe-oF and more.
The challenge for anyone building a truly fluid and agile infrastructure is to take that broader concept of ‘permanent data mobility’ and apply it to the VM and container world. In effect, it’s to virtualize not just the storage adapter or connection, but the actual dataset itself. Thanks in part to ever-faster networking, the technology to do this already exists – look for terms such as Live Dataset Migration.
Be warned though that the big change for some will be the mindset: in this world, data is no longer the physical foundation, the rock upon which we build. Instead, data is a moveable asset that goes where it is needed – and that is exactly the right way to see it, if you want to build an organization for the information age.
Bryan is a technology enthusiast and industry veteran. He has been analysing, explaining and writing about IT and business in a highly engaging manner for around three decades. His experience spans the early days of minicomputers and PC technology, through the emergence of cellular data and smart mobile devices, to the latest developments of the software-defined age in which we all live today. Over his career, Bryan has seen first-hand how IT changes the world – and how the world changes IT – and he brings that extensive insight to his role as an industry analyst.