Published/updated: March 2012
A prompt to move to the next level of management?
The early days of x86 server virtualisation were full of promise. Consolidation of physical servers often led to immediate hardware cost savings and a reduction in administrative overhead. Around three years ago, however, our research surveys started telling us that much of the early adoption activity was hitting a wall.
Despite the initial benefits, we were hearing reports of issues arising from the uncontrolled proliferation of virtual machine images. In effect, the ease with which images could be created and cloned meant the physical sprawl that consolidation initiatives were originally helping to tackle was rearing its ugly head again in a different guise – virtual server sprawl.
As a result, some were reporting that challenges around activities such as configuration management, server-side security and software asset management were actually being aggravated as virtualisation activity was scaled up. Not only were there more servers to manage, albeit virtual rather than physical ones, but you also had the problem of tracking, patching and otherwise maintaining dormant images, and wading through the complexities and uncertainties of licensing software in a virtual environment.
The lesson often learned was that you can only take virtualisation so far without running into complexity issues, which then start working against you unless you directly address them. At some point, this generally translates to revisiting your management environment, and beefing up both your tools and processes to deal with a combined physical and virtual server estate in a coherent manner.
But this is easy to say and hard to do if you are not starting from a sound management footing in the first place. The truth is that even before virtualisation-related challenges are taken into account, IT professionals are already typically relying on a fragmented and disjointed set of facilities and procedures to keep things running and to implement changes. Ironically, new management solutions introduced specifically to deal with virtual servers therefore often just add to the tooling tangle.
Apart from increased management friction over time, it also becomes more difficult to identify candidate applications for virtualising as consolidation initiatives progress. Once you have picked the low-hanging fruit of small-footprint departmental and workgroup-related applications, you might move on to some of the larger virtualisation-friendly packages and systems. But the more you push into the bigger and more critical systems space, the more dubious the returns, and there’s no point in virtualising for the sake of it.
No wonder then that we see so many organisations reaching a certain point then essentially stalling with their virtualisation programmes. Whether it’s at the 50%, 60% or 70% fully-virtualised level, management complexity together with tools-related constraints and fewer obvious targets means the law of diminishing returns ultimately kicks in and puts a stop to significant further activity.
So where do you go once things start to stall? Or, more to the point for the majority who are still in the relatively early days of virtualisation, how do you prevent things from getting to that stage?
A lot of people are talking about ‘private cloud’ as the natural next step that follows traditional x86 virtualisation initiatives. The idea is to pool servers and storage to form a single logical resource that can be used to support everything from small-footprint apps that would historically run on a single server, to large-scale workloads requiring the power of many servers. With an ability to rapidly allocate resources to any given workload, or reclaim resources from it, private cloud architecture enables genuinely dynamic workload management, resource optimisation and resiliency.
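To make the pooling idea concrete, the allocate-and-reclaim cycle described above can be sketched in a few lines of Python. This is purely illustrative – the names (ResourcePool, Workload identifiers and so on) are hypothetical and do not correspond to any vendor’s API:

```python
# Minimal sketch of a shared resource pool serving mixed workloads.
# All names here are illustrative, not a real private cloud API.

class ResourcePool:
    def __init__(self, total_cpus, total_gb):
        self.free_cpus = total_cpus
        self.free_gb = total_gb
        self.allocations = {}

    def allocate(self, workload, cpus, gb):
        # Rapidly assign capacity to a workload, if the pool can supply it.
        if cpus > self.free_cpus or gb > self.free_gb:
            return False
        self.free_cpus -= cpus
        self.free_gb -= gb
        self.allocations[workload] = (cpus, gb)
        return True

    def reclaim(self, workload):
        # Return a workload's capacity to the shared pool for reuse.
        cpus, gb = self.allocations.pop(workload)
        self.free_cpus += cpus
        self.free_gb += gb

pool = ResourcePool(total_cpus=64, total_gb=512)
pool.allocate("departmental-app", cpus=2, gb=8)      # small-footprint app
pool.allocate("analytics-cluster", cpus=32, gb=256)  # large-scale workload
pool.reclaim("departmental-app")                     # capacity goes back
```

The point of the sketch is simply that small and large workloads draw from, and return to, one logical resource rather than being tied to dedicated physical servers.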
It is beyond the scope of this article to go into the anatomy of a private cloud, but suffice it to say that a lot of it boils down to clever management and automation of provisioning, configuration and resource optimisation when complex dependencies exist between servers, storage, networking, platform software and applications. And this is not dissimilar to the situation people stumble into a short time before their traditional virtualisation activity becomes complexity-bound and stalls.
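One small illustration of why automation matters here: ordering provisioning steps correctly when dependencies exist between storage, networking, platform software and applications is essentially a dependency-ordering problem. The step names below are hypothetical, and a real orchestrator does far more, but the core ordering task looks like this:

```python
# Hypothetical provisioning steps and their prerequisites; the ordering
# problem a private cloud orchestrator must solve is a topological sort.
from graphlib import TopologicalSorter

dependencies = {
    "storage": [],
    "network": [],
    "virtual-machine": ["storage", "network"],
    "platform-software": ["virtual-machine"],
    "application": ["platform-software"],
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # storage and network come first, the application last
```

Doing this by hand for a handful of servers is tedious; doing it by hand across a pooled estate is exactly the complexity wall described above.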
The smart money is therefore now on recognising that moving as swiftly as possible to dynamic workload management with as much automation as possible, which is really what private cloud represents, is the real key to achieving sustainable results. It really isn’t necessary, or even helpful, to ‘finish’ your consolidation-centric virtualisation programme before getting into private cloud. In fact, it’s arguable that you could save yourself a lot of interim hassle by simply cutting to the chase.
The prerequisite, however, is a willingness to adopt a new and more holistic approach to systems design and management, which often means creating a parallel environment that lives alongside traditionally architected systems. And while we might think of running multiple virtual machines on a single physical server as new or modern, it isn’t really all that different in principle to the way things were done before. Most existing virtual environments can therefore be put into the ‘traditionally architected’ bracket when viewed from either a management or execution perspective.
So, no matter where you are with virtualisation, even if you haven’t done that much of it so far, it’s worth taking a look at what private cloud has to offer, and whether it’s worth jumping straight to that stage.
By Dale Vile