Published/updated: March 2011
IT architects and CIOs must weigh a number of factors when selecting where to run workloads and how to design systems for efficient operation over extended periods of time.
Chief amongst these are the nature of the workloads themselves, the operating systems on which they are supported and the middleware they require in order to function. These may in turn dictate the hardware platforms on which they could function. Ultimately, everything should relate directly to business requirements.
When looking at platform "hardware" selection, the choices that are front of mind for new applications are typically based on x86 or some kind of RISC-based architecture. If a mainframe is in place, that might be considered, but it is often assumed that it is there primarily to run traditional applications that are native to that platform. However, with mainframes such as the IBM System z now capable of supporting Windows and Linux, and even the native environment now supporting modern techniques, standards and programming environments, it makes sense to include the ’big iron’ option when looking at new application requirements too.
In order to evaluate the best options when placing workloads, it is also essential to consider where data currently resides, by system and geography, along with the interfaces available to facilitate systems interoperability. It is equally important to look closely at what workload management tools, if any, are available to handle operations across multiple platforms. The role of standards and the openness of platforms, especially around data integration and access, is becoming important in ensuring that workloads can be moved effectively around the broader infrastructure.
So, if you have a System z environment, which after all represents a significant investment and a high value asset, how do you assess whether it makes sense to drive additional returns by deploying new workloads on it?
One thing to bear in mind is that whilst people like the idea of making logical decisions based on objective criteria, it is fair to say that many choices, in all areas of business (not just IT), are made using less than complete sets of considerations. In addition, people being what they are, some of the justification may rest on ’convenient’ selective evidence, or on judgements and weightings that may be more than a little subjective. For the purposes of this discussion, however, let’s assume you want to make the right decisions for the right reasons. With this in mind, what is required is an application architecture that delivers the information users require, whenever and wherever they need it, without being overly complex to manage or difficult to secure. A key question here is whether a given workload is best suited to run on a mainframe, on a hybrid mainframe / open systems platform, or purely in an open systems environment.
This is no easy decision, not least because the mainframe itself, in the shape of the IBM System z, now has the ability to host not only traditional z/OS workloads but also those that run on Linux and Unix platforms. It will also, in the near future, support Microsoft Windows environments through the use of a variety of offload engines. However, there are some rules of thumb that can help.
For example, situations that point towards a System z approach include:
•Where significant sources of data (e.g. data warehouses, transactional systems, operational data stores) are held in System z data sources such as DB2, VSAM and IMS;
•There are existing System z and associated skills available and the organisation is prepared to continue to invest in them/expand them;
•Mission critical situations where “Management”, “Security” and “Risk” drive application platforming policies;
•Organisations where System z is operationally connected to major data repositories;
•Scenarios with highly variable workload demand;
•Where continuous access to data resources and reports is essential for people, other systems and business processes to operate effectively.
Operational situations where combining a mainframe system with open systems in a hybrid approach might be appropriate include:
•Systems where the majority of data sources and business information is held on a variety of platforms including mainframes, Unix / Linux and Windows systems;
•When geographic distribution significantly improves performance for users who are remote from centralised mainframe resources;
•When a cost/benefit analysis determines that the complexity of a multi-platform environment is offset by the mixed price/performance profiles of the systems involved. In these situations it is now possible that use of mainframe offload engines could provide an alternative to traditional hybrid approaches.
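The rules of thumb above can be turned into a rough scoring exercise to force the "active thought" this kind of decision deserves. The sketch below is purely illustrative: the criteria are drawn from the lists above, but the weights, scores and platform names are hypothetical assumptions invented for the example, not figures from any research.

```python
# Illustrative weighted-scoring sketch for workload placement.
# All weights and scores below are hypothetical examples; in practice
# they would come from your own cost/benefit analysis.

def score_platform(weights, scores):
    """Sum of weight * score across the placement criteria."""
    return sum(weights[c] * scores[c] for c in weights)

# How much the organisation cares about each criterion (weights sum to 1).
weights = {
    "data_gravity": 0.30,      # bulk of data already on System z?
    "skills": 0.20,            # relevant skills available and funded?
    "qos_security": 0.25,      # management / security / risk policies
    "variable_demand": 0.15,   # highly variable workload demand
    "geo_distribution": 0.10,  # users remote from the central site
}

# How well each candidate deployment satisfies each criterion (0-10).
candidates = {
    "system_z": {"data_gravity": 9, "skills": 7, "qos_security": 9,
                 "variable_demand": 8, "geo_distribution": 4},
    "hybrid":   {"data_gravity": 7, "skills": 6, "qos_security": 7,
                 "variable_demand": 7, "geo_distribution": 8},
    "open":     {"data_gravity": 3, "skills": 8, "qos_security": 6,
                 "variable_demand": 5, "geo_distribution": 9},
}

ranked = sorted(candidates,
                key=lambda p: score_platform(weights, candidates[p]),
                reverse=True)
for p in ranked:
    print(f"{p}: {score_platform(weights, candidates[p]):.2f}")
```

The point is not the arithmetic but the discipline: making weights and scores explicit exposes the "convenient selective evidence" problem discussed earlier, because anyone can challenge the numbers.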
It should also be borne in mind that employing a hybrid delivery model can make sense in scenarios where workloads span a number of platforms but where it is important to deliver high quality of service. Such situations are becoming more common as composite applications are created reusing pre-existing functionality already in place in different applications or data stores. The mainframe is now a pretty good citizen and can play a full and often central role in an SOA environment.
But forcing a solution where it doesn’t fit applies as much to the mainframe as to other platforms. There are IT solution scenarios where it is clear that, outside of exceptional circumstances, making use of a mainframe approach does not make sense. We won’t go into detail here, as architects generally don’t have a problem dismissing the mainframe option; suffice it to say that there will be many situations in which placing workloads on distributed platforms is clearly the correct approach to take.
In all scenarios there are likely to be multiple deployment options available for workload platform selections and no system will be a perfect match for everything. The important thing is to ensure that all appropriate options are given due consideration rather than simply deploying workloads without active thought or because "that’s the way we have always done this".