It used to be the case that the role of the operating system (OS) was pretty well defined as a layer of software responsible for controlling the use of and access to physical machine assets such as CPU, memory, disk, network, and so on.
As the industry has evolved, however, so too has the role of the OS. Today, for example, when you install an OS, a whole range of high-level features, functions and tools often come along with it, from enhanced security and access, through various management and admin tools, to full-blown application and web serving.
While this much more comprehensive and coherent approach to delivering platform capability has made many aspects of the lives of IT professionals much easier, the gradual 'raising of the water line' in terms of what's included in the OS raises some interesting questions when we move into the world of virtualization.
There are a number of dimensions to this.
Firstly, there is the question of efficiency. One of the big advantages of virtualization is being able to run multiple workloads on the same box, each supported by an appropriately configured software stack running in a discrete virtual machine (VM). If each VM is required to run a general-purpose OS, even though that VM is essentially single-purpose in nature, that arguably represents unnecessary complexity and overhead that must be resourced and managed.
Using 'leaner' versions of operating systems, which is now a possibility with both Linux and Windows Server, for example, supports the notion of building simpler and more efficient stacks when the job at hand is very specific. The counter-argument, however, is that consistency has its advantages, and that implementing too many OS variants creates a different set of complexities and management issues. Provided unused functionality is not consuming an excessive amount of resource, perhaps it's better to live with it.
That is, of course, not the whole picture. Unnecessary services sitting there just idling can increase the attack surface of an operating system from a security perspective, so there is clearly a balance to be struck between stripping the OS down and simply configuring unused functionality out of harm's way.
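To make that concrete, the kind of audit involved need not be complicated. What follows is a minimal sketch, assuming a systemd-based Linux guest with Python available; the audit-then-disable approach shown is illustrative rather than a prescribed hardening procedure.

```python
import subprocess

# List the services enabled at boot on a systemd-based Linux guest,
# as a starting point for judging which are genuinely needed.
result = subprocess.run(
    ["systemctl", "list-unit-files", "--type=service",
     "--state=enabled", "--no-legend", "--no-pager"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    unit = line.split()[0]  # first column is the unit name
    print(unit)

# Anything judged unnecessary could then be switched off, e.g.:
#   systemctl disable --now <unit-name>
```

Even a crude listing like this tends to surprise people with how much is running by default, which is really the point of the strip-down argument.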
The efficiency argument also comes into play when considering the way in which hypervisors are implemented. Intuitively, running a stand-alone hypervisor on 'bare metal', i.e. directly on the hardware, would seem to be the best option from a performance perspective. Some argue, however, that there is little or no practical difference in performance between this and having the hypervisor sitting on top of (or embedded within) a host operating system.
But again we need to consider the management dimension. Bare-metal hypervisors represent independent entities in the infrastructure that need to be managed as such, which is why some recommend dedicated management tools for the virtualized environment. Hosted hypervisors can often be managed via the operating system upon (or within) which they sit, allowing at least a basic level of management via the tools and processes already in use, with additional capability coming from extending rather than duplicating management solutions.
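As a simple illustration of that 'extension rather than duplication' point, consider KVM, a hosted hypervisor that lives inside the Linux kernel. The sketch below assumes a Linux host running KVM/QEMU with the libvirt Python bindings installed (the qemu:///system URI is libvirt's standard address for the system-level hypervisor); it shows guests being enumerated through the same host-level plumbing that tools such as virsh already use, and is an illustration rather than a recommendation of any particular toolset.

```python
import libvirt  # libvirt Python bindings: pip install libvirt-python

# Connect to the system-level hypervisor (KVM/QEMU) managed by the host OS.
conn = libvirt.open("qemu:///system")

# Enumerate all defined guests and report their state, via the same
# libvirt layer that standard host admin tools sit on top of.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
    print(f"{dom.name():20} {status}")

conn.close()
```

The point is not the specific commands, but that the hosted model lets existing host-level skills and tooling carry over, whereas a bare-metal hypervisor typically brings its own management stack.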
Unfortunately, there are no black-and-white answers to any of the above questions, and the choices people make often come down to context, familiarity and philosophy.
If you are a smaller IT shop staffed predominantly by multi-skilled generalists, then the embedded or hosted route might make sense because it is relatively straightforward and more likely to fit with what you are already doing. If you are lucky enough to have a lot of specialist resource, as is typical of larger enterprise IT environments, and areas of your infrastructure that are totally virtualized, then a finely tuned bare-metal approach with dedicated management tools might be more appropriate.
Even this is a generalisation though, as options can be mixed and matched, sometimes easily, sometimes less so, based on specific need.
With this in mind, we would be interested in what you, the readers, think on this topic. What, in your experience, are the pros and cons of bare-metal versus hosted hypervisors? What are the performance and management implications, for example? And have you developed a philosophy that is being used as the basis for your virtualization investments and initiatives?
Coming back to where we started, perhaps you even regard the bare-metal hypervisor as the operating system of the future? And to throw one last idea into the mix, do you see a role for so-called 'application virtualization', whereby applications are captured in a container that plugs onto a hypervisor?