Virtualisation and security – the two-edged sword

Every new innovation in IT is a double-edged sword – with the benefits come challenges and unintended consequences. Server virtualisation is no exception, although it does have a number of security advantages over running software directly on physical servers. While these advantages are worth considering, it is also worth weighing them against the challenges, particularly given the relative immaturity of the technology.

To be fair, virtualisation has been around ever since the dawn of computing – what is an electronic computer other than a virtual environment? I did get into trouble a few years back for crying foul when Microsoft claimed, “We’ve been doing virtualisation for many years,” but to an extent they were right – as soon as there is layering or abstraction in a computer system, we have something that could be termed ‘virtualisation’. So, we have virtual memory, virtual disks, and indeed virtual machines.

It’s this latter form of virtualisation that’s garnering most interest at the moment, and to be more specific still, virtualisation applied to x86 (i.e. commodity) servers. Until this side of the millennium, such servers didn’t really have the horsepower to run multiple virtual machines (mainframes did, of course, but were still a bit pricey – a factor which is notably changing). Now, with multi-core processors that build in virtualisation hooks (essentially, hardware support that lets instructions issued by virtual machines run pretty much as fast as they would on physical machines), server virtualisation has crossed into the mainstream.
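
For the technically curious, here is a quick way to see whether a server’s processors expose those hardware hooks. This is purely an illustrative sketch of my own, assuming a Linux host, where the relevant CPU flags are vmx (Intel VT-x) and svm (AMD-V):

```python
# Minimal sketch (illustrative only, assumes a Linux host): detect hardware
# virtualisation support by checking CPU flags in /proc/cpuinfo.
# vmx = Intel VT-x, svm = AMD-V.

def has_hw_virtualisation(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return bool(flags & {"vmx", "svm"})
    return False

if __name__ == "__main__":
    print("Hardware virtualisation hooks present:", has_hw_virtualisation())
```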

From a security perspective, virtualisation has a number of advantages. The first, almost a by-product, is how virtualisation adds to the fundamental security principle of ‘defence in depth’. The virtualisation layer provides an additional level of abstraction which needs to be cracked if the core application is to be reached. In this way it’s a bit like Network Address Translation (NAT) in that it keeps core applications one step further away from the bad guys.

Virtualisation also offers what’s referred to as a ‘separation of concerns’. That is, different workloads (i.e. applications) can be run within their own virtual machines, such that if there is a problem with one, the others should not be affected. Building on both of these concepts, security features can – in principle (see below) – be built into the virtualisation layer.

However, virtualisation does have its security downsides. I’ve already mentioned the additional virtualisation layer – this can exist either as a hypervisor (for example, those from VMware or Microsoft) or as an extension to an operating system kernel (for example, KVM in Linux). For that additional layer to be effective, it needs to be secure – in some ways more secure than the operating systems and applications it hosts, given that if it gets hacked or goes down, they all go down with it.
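
To make the distinction concrete, here is a small sketch of my own (not anything the vendors supply) that checks, on a Linux box, whether the KVM kernel extension is available and whether the operating system is itself running as a guest under some hypervisor:

```python
# Minimal sketch (illustrative only, assumes Linux):
# - /dev/kvm existing means the kernel's KVM extension is available here.
# - The 'hypervisor' flag in /proc/cpuinfo means this OS is itself running
#   as a guest on top of a hypervisor.
import os

def kvm_available() -> bool:
    return os.path.exists("/dev/kvm")

def running_as_guest(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    try:
        with open(cpuinfo_path) as f:
            return any("hypervisor" in line.split()
                       for line in f if line.startswith("flags"))
    except OSError:
        return False

if __name__ == "__main__":
    print("KVM kernel support on this host:", kvm_available())
    print("This OS appears to be a guest:", running_as_guest())
```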

Without dwelling too long on specific vulnerabilities (there’s a handy summary of some here), suffice it to say that the presence of an additional layer adds to the security burden rather than reducing it. Not only is it necessary to secure the hypervisor, but also the management tools that go with it (which may, for example, be susceptible to brute-force login attempts). There are a number of ways of mitigating these risks, both by patching against specific vulnerabilities and by building security into the virtual architecture with appropriate use of firewalls and other protective measures. Baking such capabilities into the virtualisation layer is still, admittedly, a work in progress, as illustrated by recent announcements such as VMsafe.
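
The brute-force point boils down to throttling along the lines sketched below. This is a generic illustration of my own – not the interface of any actual management product – showing the idea of locking out a source address after too many failed logins in a short window:

```python
# Minimal sketch of brute-force mitigation for a management login:
# lock out a source address after too many failed attempts in a window.
# A generic illustration only, not the API of any real management tool.
import time
from collections import defaultdict, deque

MAX_FAILURES = 5        # failed attempts tolerated...
WINDOW_SECONDS = 300    # ...within this many seconds
LOCKOUT_SECONDS = 900   # how long to lock the source out

failures = defaultdict(deque)   # source address -> recent failure timestamps
locked_until = {}               # source address -> lockout expiry time

def is_locked(source, now=None):
    now = time.time() if now is None else now
    return locked_until.get(source, 0.0) > now

def record_failure(source, now=None):
    now = time.time() if now is None else now
    window = failures[source]
    window.append(now)
    # drop failures that have aged out of the window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_FAILURES:
        locked_until[source] = now + LOCKOUT_SECONDS

# Example: after five quick failures the source is locked out.
for _ in range(5):
    record_failure("10.0.0.99", now=1000.0)
print(is_locked("10.0.0.99", now=1000.0))   # True
```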

There are some additional risks resulting from the increased flexibility that virtualisation brings. For example, a virtual machine may be moved from one highly protected server to another, far less protected one, without it being at all clear that anything untoward has happened. This scenario becomes even more likely if there are insufficient controls over the provisioning and/or management of virtual machines. A virtual machine could even be moved off-site, onto a third-party server (at a hosting site or ‘in the cloud’, to coin a phrase).
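
One way to picture the kind of control that is often missing is a pre-move policy check. The sketch below is purely illustrative – the host names and security ‘tiers’ are hypothetical, not taken from any product – but it shows the principle: a virtual machine should only land on a host that is at least as well protected as its workload requires.

```python
# Minimal sketch of a pre-move policy check (illustrative only; the hosts
# and 'tiers' are hypothetical, not from any real virtualisation product).
SECURITY_TIER = {        # higher number = more protected host
    "dmz-host-01": 1,
    "internal-host-07": 2,
    "pci-host-03": 3,
}

def move_allowed(vm_required_tier: int, destination_host: str) -> bool:
    """A VM may only be moved to a host whose tier meets its requirement."""
    return SECURITY_TIER.get(destination_host, 0) >= vm_required_tier

# Example: a VM needing tier-3 protection must not drift onto a DMZ host.
assert move_allowed(3, "pci-host-03") is True
assert move_allowed(3, "dmz-host-01") is False
```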

Perhaps one of the biggest security risks at the moment is that organisations are deploying virtualisation without always considering the security implications. At a panel I hosted at Infosecurity Europe a few weeks ago, one security pro in the audience explained that his organisation was bringing in virtualisation primarily for cost reasons (nothing wrong there), but that the rush towards savings meant security was not taken into account (for example, by costing it into the business case). Security comes at a cost, and like fault tolerance and other risk management approaches, it never works quite as well when it is retrofitted. The knock-on effects of rushing towards virtualisation may also include a proliferation of virtual machines, resulting in a more complex (and therefore riskier) environment.

This is borne out by recent Freeform Dynamics research suggesting that fewer than a quarter of organisations feel they are operating at ‘expert level’ when it comes to virtualisation – the implication being that knowledge of security best practice for virtualisation will still be lacking in many places.

In conclusion, then, it is important to remember that these are still early days. Virtualisation undoubtedly has its benefits, not least from a security perspective. However, organisations adopting virtualisation today would do well to ensure they do not increase the level of security risk they face. A simple risk assessment at the start of any virtualisation deployment, together with an appropriate level of vendor and product due diligence from a security perspective, could be the stitch in time that saves a lot of heartache later.
