Feedback on virtualization experiences from those participating in our latest online workshop has so far been generally very positive.
Much of the comment has focused on server consolidation and the cost savings that come with it, and some of the results achieved are pretty impressive:
We’re running about 15 VMs per server: a mix of Windows and FreeBSD mostly, some high power (e.g. mail), some low power, but there’s still plenty of room for more. Virtualization rocks.
Yeah, it’s great. We’ve squished over 50 intermittent-use/low-load internal servers into 6U of space.
I work for local government, and we have consolidated close to 25:1 on x86 Windows servers over a 2 year period. We have also been re-deploying virtualised hardware servers (if decent spec/age) instead of purchasing new hardware servers.
We have been studying the benefits of virtualization and started deploying it at larger scale last year. We were planning for a 10:1 to 16:1 consolidation ratio; I believe we are now targeting 14:1 to 20:1.
A couple of readers came back with a challenge, however, asking whether high consolidation ratios are more an indication of how poorly the server environment was previously run:
How were people running boxes at <5% capacity before this? If you do run lightly loaded apps, though, you don’t need a VM to run several of them on the same physical server; you just install them directly. For managing resources, then, VMs seem to gain you nothing. So what’s the win? Do departments just get sloppy with resource management or what?
Those with experience stood their ground, though, coming back with points highlighting some of the operational, risk and quality-of-service benefits:
…Which is all well and good until two vendors’ packages conflict. Or until you have to tell the people using the other 10 applications you installed, ‘Sorry, rebooting the server; nothing to do with your stuff, it’s the other guy’s, but it’s all on the same box…’. Virtualization minimizes the hardware while still keeping each vendor’s tech support happy and minimizing conflicts and single points of failure.
25 apps on the same OS install, with overlapping ports, libraries, web servers, drivers… one vulnerability in 1 app and a hacker has all of your infrastructure, nice. Need to do a hardware update? Your entire business is down while you re-install 25 apps. Have a poor app with a memory leak and you crash the entire business for a while, instead of taking 1 app down.
The benefit of virtualization to our disaster recovery solution can’t be overstated. We back up virtual machine folders to disk and tape. Simple, fast, no expensive ‘backup agents’ or other complexity required, and they can be restored onto any hardware.
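To make that last point concrete, here is a minimal sketch of the kind of folder-level backup the reader is describing. It is illustrative only: the datastore and backup paths are hypothetical, and it assumes each VM is powered off or snapshotted so its files are in a consistent state.

```python
import tarfile
from datetime import date
from pathlib import Path

# Hypothetical locations -- adjust for your own environment. VM folders sit
# on a datastore and are archived to local disk before being spooled to tape.
VM_ROOT = Path("/vmfs/volumes/datastore1")
BACKUP_ROOT = Path("/mnt/backup")

def backup_vm_folder(vm_name: str) -> Path:
    """Archive a whole VM folder (config files plus virtual disks) as one tarball."""
    src = VM_ROOT / vm_name
    dest = BACKUP_ROOT / f"{vm_name}-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(dest, "w:gz") as tar:
        # Keep the folder name inside the archive so a restore is a plain extract.
        tar.add(src, arcname=vm_name)
    return dest

if __name__ == "__main__":
    # Assumes each VM is powered off or snapshotted first, so files are consistent.
    for vm_dir in VM_ROOT.iterdir():
        if vm_dir.is_dir():
            print("backed up", backup_vm_folder(vm_dir.name))
```

Because each archive contains the complete VM folder, a restore is simply an extract onto whichever host has capacity, which is exactly the hardware independence the reader is highlighting.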
Other benefits highlighted included the ability to run a higher proportion of your applications on high availability hardware, something which can be cost-prohibitive when your server estate is fragmented across a large number of under-utilised single-function servers. The ability to spin up test and development environments on demand, and to respond quickly and easily to requirements for other tactical applications without having to procure and provision new servers, was also cited.
We also shouldn’t forget that in a smaller environment with relatively light loads, there is nothing wrong with being left with spare capacity following a consolidation exercise. The point is that the kind of operational and service delivery benefits called out by readers still exist, even if you aren’t squeezing every last usable cycle out of your kit:
Currently running 4-5 VMs per physical server and seeing very low usage (12-15% CPU), as we have a small user base.
But the feedback wasn’t all positive. One of the downsides mentioned during the discussion, for example, was the real cost of implementing a virtualized environment, which sometimes only becomes apparent down the line:
Didn’t spend enough on the disk storage, and now we have run out. Upgrading this is going to cost lots, possibly more than the initial roll-out. We [also] maxed out our memory at the time. Sadly we have used pretty much all of it, and again this will cost lots to upgrade.
…we may save tons of money on the server hardware, but we spend the savings on the software and supporting [network and storage] hardware… every four physical servers we buy needs to come with a networked disk, and a switch (or two)
The only blockage at the moment is cost; once it is cheaper to virtualise all our kit than to maintain the existing setup, we’ll probably end up with one massive NAS and virtual suite, with backups the only part remaining external 🙂
The points coming across here are important for anyone just getting into the virtualization game to take on board. While it can be very easy to get going with free or low-cost hypervisors offering basic functionality, as you scale up your activity in a production environment, significant additional demands will be placed on both your storage and network infrastructure, which may require upgrades.
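As a rough illustration of why storage deserves thought up front, the back-of-envelope sketch below shows how quickly shared storage demand adds up. All figures are illustrative assumptions, not recommendations:

```python
# Back-of-envelope storage sizing for a consolidation exercise.
# Every figure below is a hypothetical assumption for illustration.

vms_per_host = 15        # e.g. the per-server VM count one reader reported
hosts = 4
avg_vm_disk_gb = 60      # assumed average virtual disk size per VM
growth_headroom = 1.5    # spare capacity factor, to avoid running out early

raw_gb = vms_per_host * hosts * avg_vm_disk_gb    # 3600 GB
provisioned_gb = raw_gb * growth_headroom         # 5400 GB

print(f"{vms_per_host * hosts} VMs need ~{raw_gb} GB raw; "
      f"provision ~{provisioned_gb:.0f} GB of shared storage")
```

Running the same arithmetic for memory and network ports before the initial roll-out helps avoid the “we maxed out and now it will cost lots to upgrade” situation described above.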
It is also worth thinking ahead about some of the operational implications in terms of execution and administration. If you get to the point where you need load balancing and enhanced management capability – e.g. to avoid a lot of manual overhead – then you may need to stump up for licences for more advanced versions of software and tools.
With Microsoft and Citrix now challenging VMware in this space, one of the biggest discussions in the industry at the moment is how vendors will package, bundle and price the various components required for customers as they virtualize on a larger scale.