Yes, you read that right. For much of the last two decades, it’s felt as if everyone has been talking about the impending demise of the mainframe, whilst simultaneously attempting to emulate as many of its key operational characteristics as possible.
Originally this emulation was via industry-standard servers, but in the last few years “the cloud” has taken up the challenge. It began with cloud computing promising the same scalability, flexibility and operational efficiency that mainframe systems have long provided, and, on scalability, promising to go somewhat further. For a while this was more words than reality, but cloud capabilities are now (finally) getting close to what mainframe users have long taken for granted.
More recently, attention in cloud circles has turned to other attributes we might regard as core to the mainframe, such as security, privacy, resilience and failover. Whether you believe the marketing of cloud providers on this is up to you (as with any vendor marketing message), but verifying such claims certainly requires very careful reading of the service level guarantees and contractual small print.
Today much of the focus of cloud services has switched to support for specialist workloads, and again we see cloud following in the footsteps of the mainframe by using dedicated offload engines designed to optimise workload performance and, in many cases, to minimise software licensing costs as well. But it has always seemed as if cloud has been in catch-up mode, with the mainframe remaining in the lead. Which leads to the question: has the cloud now caught up?
Has the cloud caught up?
In many ways the answer is “yes”, but it is a qualified yes. When it comes to scalability, throughput, operational efficiency, and arguably even resilience and failover, cloud has caught up with the mainframe of the 1990s or early 2000s. But there are other factors to bear in mind, as the mainframe has not stood still.
For example, it is fair to say that cloud providers have made great strides on security and privacy, but in reality the mainframe is still recognised as the gold standard, with security baked into every layer of the systems stack.
Then there are questions such as latency and data location. With the mainframe, there is no doubt where the data resides and who can access it; managing these details and the associated operational policies has been part of the platform for over fifty years. As for latency, the mainframe usually sits very close to the data you are working with, keeping system response times as low as possible, an advantage reinforced by the system’s very powerful processors and its sophisticated, mature partitioning capabilities.
And the mainframe environment is getting even stronger, as the announcements at the recent launch of the IBM z16 show. These include quantum-safe cryptography to protect against future quantum computers capable of breaking current encryption standards, on-chip AI acceleration to speed up ML and AI execution, and flexible capacity combined with on-demand workload transfer across multiple locations to further reduce the chance of service disruption.
But there are places where things are arguably closer, one of which is workload optimisation, although the two environments are developing in different ways. The mainframe strives to deliver a consistent environment that can handle a wide range of workloads, all managed through the same set of frameworks and tools. The cloud, on the other hand, lets you spin up dedicated specialised environments, e.g. for AI or analytics.
What about developers?
Which leaves the question: where is “the cloud” ahead of the mainframe? The obvious place to start is the diverse geographic distribution of the major public clouds, which spread across the globe with huge resources that no mainframe or mainframe cluster can match. But this advantage is no longer quite so great, given that IBM will shortly be making “mainframe as a service” available from its IBM Cloud data centres around the world.
Not quite as a corollary, it is also fair to say that cloud was ahead for a while on modern software delivery methods such as DevOps and various agile delivery approaches. But the gap has closed quickly, because the fundamental principles underlying things like DevOps, containers, microservices and APIs have been intrinsic to the mainframe environment for decades, indeed pretty much since its beginning. In addition, IBM and the other software vendors in the mainframe ecosystem, such as Broadcom and BMC, have developed their offerings to such a degree that today there is near-complete parity.
In essence, today’s mainframe environment is one where the latest generation of developers should not feel out of place. It uses the same standards-based, open tools they handle daily. And with mainframe as a service soon to be available, developers will be able to build code wherever they like and run it on the mainframe with a few clicks, with no need to build a complex environment.
This is good news for the mainframe, but having the technological capabilities is less than half of the challenge. What is really needed is for the mainframe to catch the eye of modern developers. IBM needs to ensure that developers understand that the mainframe is not a new and alien place, but is ready for them to exploit using the tools they are already comfortable with.
Some final thoughts
Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the most well informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know.