Provisioning – how do you approach it?

Buying new physical servers has always taken time and effort. Unfortunately, virtualisation has managed to create the perception that the provisioning of virtual machines is quick, easy and – very unfairly – free of charge. How has this expectation changed the processes required when new physical servers have to be acquired?

Ask any IT manager and they will tell you that when it comes to acquiring new physical servers, it takes time to get new systems delivered, never mind getting through the interminable internal sign-off procedures required to spend any money in the first place. With the spotlight still firmly on keeping a tight grip on capital spend, how is it possible today to specify the physical characteristics of a server, in an era when such machines may be called upon to support a wide variety of services over the course of their lifetime?

In days gone by, the process was straightforward, or at least relatively so. You looked at the application to be run, estimated (usually via guesswork) how many users would have to be supported concurrently, spoke with the ISV and did some rough and ready calculations. These defined the processor speed, memory, disk space and I/O characteristics needed, to which the prudent administrator would add a “contingency” factor. Naturally enough, this took time.
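For illustration, the arithmetic was rarely more sophisticated than the sketch below. Every per-user figure and the contingency factor here are invented assumptions for the example, not numbers any particular ISV would have quoted:

```python
# Back-of-envelope server sizing, in the spirit of the old approach.
# All figures are illustrative assumptions, not vendor guidance.

CONCURRENT_USERS = 400    # guesswork, as ever
CPU_PER_USER_MHZ = 12     # assumed per-user CPU cost, from ISV conversations
RAM_PER_USER_MB = 24      # assumed per-user memory footprint
BASE_RAM_MB = 4096        # OS and application baseline
CONTINGENCY = 1.30        # the prudent administrator's 30% headroom

cpu_mhz = CONCURRENT_USERS * CPU_PER_USER_MHZ * CONTINGENCY
ram_mb = (BASE_RAM_MB + CONCURRENT_USERS * RAM_PER_USER_MB) * CONTINGENCY

print(f"CPU needed:    {cpu_mhz / 1000:.1f} GHz aggregate")
print(f"Memory needed: {ram_mb / 1024:.1f} GB")
```

Crude as it looks, a calculation of this shape, plus similar ones for disk space and I/O, was often the entire sizing exercise.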

Next in line was a large chunk of time and labour to get through the internal procurement and vendor approval processes, get the purchase order signed off and the order sent to the supplier. Finally came the long, long wait for hardware delivery and, perhaps, for an engineer to do the installation work.

Clearly this methodology is not entirely appropriate when it comes to working out just what configuration of server is needed to support variable virtualised workloads. Is it possible to work out what is likely to be needed and size accordingly? Or is it a better idea to buy the biggest server that fits the available budget and work on the premise that workloads will inevitably grow to fill the beast?

Buying the biggest server possible has much to commend it, assuming that the way IT projects are financed makes it possible for such acquisitions to be funded. If a large machine is purchased, it makes sense to be certain beforehand that its physical resources can be managed effectively, especially now that the operational costs of systems are coming under more scrutiny.

We know, from your feedback, that a significant number of organisations (though far from the majority) are now approaching application and server deployments with consolidation and virtualisation in mind. Hence service deployment and delivery are slowly becoming separated from decisions concerning hardware acquisition.

But of course this requires some form of internal cross-charging model, and a sufficiently far-sighted and determined IT manager or CIO to make it happen. There are still companies, some of whom should perhaps know better, that cling to the one application, one server, one budget philosophy and cannot provision anything much inside of a couple of months.

One area where good management is becoming more important concerns assessing whether the available tools allow physical resources to be powered down when they are not required to run a workload. Can disks be spun down? Can unused processors be powered off? Perhaps more importantly, are there monitoring tools available on the server that highlight underutilised resources, allowing administrators to actively manage the physical resources of large servers? These are challenges that will face more and more IT professionals as more powerful x86 servers are deployed in computer rooms and data centres, especially as business and external pressures mount to control carbon footprints and electricity bills.
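To show what the monitoring side of that might look like in its simplest form, here is a minimal sketch using the third-party psutil Python library. The idle threshold is an arbitrary assumption for the example, and actually spinning disks down or powering off processors remains an OS- and hardware-specific matter that no generic script covers:

```python
# A minimal sketch of a utilisation check that flags resources which
# look underused; thresholds are illustrative assumptions only.
import psutil

IDLE_CPU_THRESHOLD = 10.0   # percent; below this a core looks underused

# Sample per-core CPU utilisation over one second.
per_core = psutil.cpu_percent(interval=1, percpu=True)
idle_cores = [i for i, pct in enumerate(per_core) if pct < IDLE_CPU_THRESHOLD]
if idle_cores:
    print(f"Cores looking underutilised: {idle_cores}")

# Memory headroom that might justify consolidating further workloads.
mem = psutil.virtual_memory()
print(f"Memory in use: {mem.percent:.0f}% of {mem.total / 2**30:.0f} GB")

# Per-disk I/O counters give a crude view of disks that rarely do work.
for disk, io in psutil.disk_io_counters(perdisk=True).items():
    print(f"{disk}: {io.read_count + io.write_count} total I/O operations")
```

In practice such checks would run continuously and feed a capacity management dashboard rather than print to a console, but the underlying question is the same: which resources are earning their electricity?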

Another approach is to buy smaller servers or blades that are capable of hosting moderate workloads without carrying excess capacity, and that can be bought as resource demands grow, assuming the supplier still delivers such kit. Clearly, if a workload requires more physical resources than smaller servers can host on their own, some form of resource-pooling virtualisation technology will have to be deployed.

There is no doubt that the physical provisioning of servers is becoming more complex as the choices available expand. How are you managing things in a world where the virtualisation vendors have succeeded in building an expectation that workload provisioning is nearly instantaneous? Have you found a good way to keep expectations under control? Please let us know in the comment section below.


Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the most well informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know.