So far in this series we’ve looked at where virtualisation is at, and where it’s going in terms of both benefits and operational challenges.
Like many newly adopted technologies, the law of unintended consequences comes into play – virtualisation will undoubtedly be used for a raft of previously unimagined things.
Similarly, however, it creates a whole raft of risks considered by neither its original designers nor those putting virtualisation in place.
Some of these topics came to light during a panel session at Infosec Europe this year. The highly participatory audience raised risks such as the widespread assumption that the virtual world is somehow secure by default. While the hypervisor may provide an additional, securable layer, the downside is that if the hypervisor (or indeed, the management tool overseeing a number of hypervisors) is compromised, the virtual environment can be left wide open to attack.
Risks do not have to be purely technological either. One benefit of virtualisation is that logical workloads no longer have to be tied to specific physical machines.
However, this constraint has traditionally been used to security’s advantage – for example, by locking specific servers in a machine room and granting access only to specific staff. It is not hard to imagine a virtual machine being relocated to a server that is less well protected; indeed, if one is to believe the rhetoric, a high-risk workload could even end up allocated to the public access terminal in the foyer!
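One way to guard against that scenario is a placement policy: before a workload is moved, check that the destination host is at least as trusted as the workload is sensitive. The sketch below illustrates the idea only – the host names, trust levels and function are all hypothetical, not any real hypervisor's API.

```python
# Hypothetical placement-policy check: a VM may only be moved to a host
# whose trust level meets or exceeds the workload's sensitivity.
# All names and values here are illustrative.

TRUST_LEVELS = {
    "locked-machine-room": 3,  # physically secured, restricted access
    "office-rack": 2,          # general office environment
    "public-foyer": 0,         # the worst case from the example above
}

def placement_allowed(workload_sensitivity: int, destination: str) -> bool:
    """Allow the move only if the destination host is trusted enough.

    Unknown hosts default to trust level 0, i.e. untrusted.
    """
    return TRUST_LEVELS.get(destination, 0) >= workload_sensitivity

# A high-risk workload (sensitivity 3) stays off the foyer terminal.
print(placement_allowed(3, "locked-machine-room"))  # True
print(placement_allowed(3, "public-foyer"))         # False
```

The point is not the code itself but where such a check sits: enforced centrally by the management layer, it restores the guarantee that physical lock-and-key used to provide for free.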
Surely it is just a case of preventing such things from happening, I hear you say. But as we discussed in a previous article, the level of knowledge and experience around managing virtualisation is currently quite low. Couple this with a comment from one participant at the Infosec panel, a security manager, who said that as virtual environments were implemented in his organisation, he had to fight a rearguard action to ensure that any holes were shored up. In the drive to cut costs as quickly as possible, a number of risks may be left untreated.
What virtualisation brings is an additional layer, which itself needs to be secured, managed and operated. This is not all bad, however – while virtualisation does increase the attack surface of the IT environment, it can also bring a number of security advantages over purely physical systems.
One particular benefit comes almost as a spin-off. To reach a given workload, not only does the physical environment have to be breached, but so does the virtual layer. As already mentioned, such additional layers are not an automatic protection against a sustained attack. However, they do provide a layer of abstraction which can reduce the chances of data leaks or ‘accidental’ prying.
Virtualisation also brings with it an additional degree of resilience. Virtual environments can be configured to incorporate fail-safe mechanisms, so if a virtual machine goes down, it can be started up elsewhere (or indeed, two machines can be running in parallel with replicated state). VMs can also deliver what is known in security parlance as ‘separation of concerns’ – specific applications can be run in their own virtual machines, meaning that if one is compromised or goes down, it is less likely to bring others down with it.
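The fail-safe mechanism described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical management layer that knows where each VM runs and whether it is healthy; the names and the health check are invented for the example.

```python
# Illustrative fail-over sketch (not a real hypervisor API): poll each
# VM's health, and restart any failed VM on another available host.

def recover(placement: dict, hosts: list, healthy) -> dict:
    """Return a new VM-to-host placement after failing over crashed VMs.

    placement -- current mapping of VM name to host name
    hosts     -- all hosts available to the management layer
    healthy   -- callable reporting whether a given VM is still running
    """
    new_placement = {}
    for vm, host in placement.items():
        if healthy(vm):
            new_placement[vm] = host  # leave healthy VMs where they are
        else:
            # pick any other host as the fail-over target
            new_placement[vm] = next(h for h in hosts if h != host)
    return new_placement

placement = {"web": "host-a", "db": "host-a"}
hosts = ["host-a", "host-b"]

# Suppose the 'db' VM has crashed on host-a: it is restarted on host-b,
# while the healthy 'web' VM stays put.
result = recover(placement, hosts, healthy=lambda vm: vm != "db")
print(result)  # {'web': 'host-a', 'db': 'host-b'}
```

Real platforms add state replication so the restarted VM resumes where it left off, but the principle – decoupling the workload from any one physical machine – is the same.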
Virtualisation is a work in progress, particularly with respect to security, as both technology vendors and implementors agree. It brings a number of benefits that are too compelling to ignore. But we need to face facts: organisations are not yet in a position where they can claim to understand all the risks of operating a virtual environment, whether within their own data centres, across the organisation or using external, cloud-based resources.
Perhaps the only guidance that can really be given at this stage is around due diligence – at the heart of security best practice is the eyes-wide-open mindset, in which risks are clearly understood and appropriately dealt with.