Are data centres heading towards a future of technology hardware monocultures?

What’s the future for ‘heterogeneous’ platforms?


Published/updated: February 2015

By Tony Lock

You have probably heard the representatives of at least a few vendors say IT is moving towards a future where the core technologies that underpin the services you run will be based on ‘commodity hardware’ components. But will you really end up with IT environments based entirely on x86 chipsets, where all of your storage is composed of commodity disks managed by clever storage software?

If you are like many of your colleagues, you currently run many different platforms in your data centres and computer rooms. On the server side you will likely have a large estate of x86 / x64 servers running applications on Microsoft Windows or Linux operating systems.

There is a good chance you also make use of servers running a traditional UNIX OS, significant numbers of which are hosted on a non-x86 chip architecture such as IBM Power, Sun / Fujitsu SPARC or Intel Itanium.

Further, you may also have mainframe computers, which rarely utilise x86 technology, running business-critical workloads.

Clearly, the servers in your data centres today are anything but a technology monoculture. The same is true of your storage estates, where you may well have platforms from many suppliers based on a variety of technologies: hard disk drives of various connectivity options and speeds, rapidly expanding volumes of Flash / SSD storage, and probably at least a smattering of traditional tape.

Over recent years you probably took steps to rationalise your platforms, perhaps starting with the virtualisation of servers. This has had a dramatic impact on your x86 estate, and perhaps your UNIX environments too. More recently you may have started to consolidate your storage platforms.

Our research [1] shows that you are likely to be well on the way to moving from storage dedicated to a single application or service towards environments composed of storage pools shared between multiple applications. More of our research [2] indicates that this trend is likely to continue as you modernise your data centres and move towards more ‘service centric’ operations (Figure 1).

Figure 1

To facilitate this infrastructure modernisation, you are likely to consider organising resources into flexible pools that can be managed automatically by policy.

This drive has helped vendors promote the concepts grouped under the ‘software defined’ banner, especially the software defined data centre (SDDC) and software defined storage (SDS). These approaches often combine virtualisation solutions and hardware management into a cohesive entity, allowing you, at least in theory, to manipulate the server, storage and networking characteristics you deploy to support particular applications and services without physical intervention (Figure 2).

Figure 2

The ultimate goal is for you to use tools that automatically modify the resources deployed, guided by policy, to ensure service levels are maintained as usage changes. Until now much of the development on the server / SDDC side has been built around the x86 platform, which converged systems also typically utilise.
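To make the idea of policy-driven pool management concrete, here is a minimal sketch in Python. Everything in it — the pool class, the utilisation threshold, the service names and sizes — is invented purely for illustration; real SDDC and SDS tooling is far more sophisticated.

```python
# Hypothetical sketch: a shared storage pool grows a service's allocation
# automatically when its utilisation crosses a policy threshold, instead of
# an administrator intervening physically. All names and numbers are invented.

class ResourcePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}            # service name -> allocated GB

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, service, gb):
        if gb > self.free_gb():
            raise RuntimeError("pool exhausted")
        self.allocations[service] = self.allocations.get(service, 0) + gb

def enforce_policy(pool, usage_gb, threshold=0.8, step_gb=100):
    """Grow any allocation whose utilisation exceeds the policy threshold."""
    for service, allocated in pool.allocations.items():
        if usage_gb[service] / allocated > threshold:
            pool.allocate(service, step_gb)   # may raise if pool is exhausted

pool = ResourcePool(capacity_gb=1000)
pool.allocate("billing", 200)
pool.allocate("crm", 300)
enforce_policy(pool, {"billing": 180, "crm": 100})   # billing at 90% -> grows
print(pool.allocations)   # {'billing': 300, 'crm': 300}
```

The point of the sketch is simply that the policy, not a human, decides when and where capacity moves within the pool — the essence of the ‘software defined’ proposition.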

At the time of writing, SDDC systems are still in their infancy and have a long way to go before they offer you a real opportunity to radically change your physical infrastructure. But even as they mature, you will still need to face the fundamental question of whether your organisation could operate the wide range of services your users require with applications running only on x86 systems.

It is fair to assume that, at least in the short and medium terms, you are likely to need applications that run on architecturally different server platforms. Each platform has its own characteristics that make it better suited to some applications than others. This is unlikely to change fundamentally for some time to come.

Hence while you may well end up with flexible server pools, perhaps one with x86 systems and another running your UNIX systems, you will still have significant infrastructure heterogeneity. If you currently also run a mainframe, which is in most ways already a self-contained flexible resource platform, you are very likely still to be doing so in five years, maybe even in ten or twenty. I doubt we will see a server chipset monoculture anytime soon.

A similar argument stands when you look at your storage requirements. Some vendors maintain that all of your primary storage will migrate to Flash as prices fall and management technologies are enhanced. However, it is very unlikely that Flash and spinning disk will reach price parity for some time to come, if ever. And this price difference will have an impact on the decisions you make when you acquire new storage.

Equally, I also see a role for tape storage going forwards, even if only as a long-term instrument to hold ‘cold’ or ‘archive’ data that you cannot delete but that isn’t accessed regularly by your users.
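The tiering logic implied above can be sketched very simply: place each dataset on the fastest tier its access rate justifies, letting colder data fall back to cheaper media. The tier names, per-GB costs and access thresholds below are invented for the sketch, not real market figures.

```python
# Hypothetical storage tiering sketch: hot data earns its place on Flash,
# warm data sits on spinning disk, and cold/archive data falls back to tape.
# Costs and thresholds are illustrative only.

TIERS = [
    # (name, cost per GB per month, minimum accesses/month to justify it)
    ("flash", 0.20, 100),
    ("disk",  0.04, 1),
    ("tape",  0.01, 0),    # archive data that is rarely, if ever, read
]

def choose_tier(accesses_per_month):
    """Return the fastest tier whose access threshold the dataset meets."""
    for name, cost, min_access in TIERS:
        if accesses_per_month >= min_access:
            return name
    return TIERS[-1][0]

print(choose_tier(500))   # hot data -> 'flash'
print(choose_tier(12))    # warm data -> 'disk'
print(choose_tier(0))     # archive -> 'tape'
```

As long as the cost gap between the tiers persists, a policy like this keeps all three media types in play — which is precisely why a storage monoculture looks unlikely.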

The bottom line is not to get caught up in industry rhetoric - trust your instincts. Always keep practicality and your own operational requirements and business agenda in mind when planning investments. I see different technologies being used in your data centres many years from now. Do you?

References

[1] Creating the Storage Advantage.

[2] A Vision for the Data Centre. Are you a Mover, Dreamer or Traditionalist?
