Published/updated: March 2012
Freedom to move, not bogging things down
The road to hell, they say, is paved with good intentions, and never more so than when it comes to virtualisation.
Many companies embark on virtualisation because they think it will make IT better, cheaper and faster. There is no denying that it helps initially, reducing costs through consolidating servers and making other areas such as rebuilds and backup easier.
But our research shows that unless steps are taken early on to manage the shift that accompanies virtualisation, then the outcome can actually be a more complex and fragile infrastructure that doesn’t respond well to change.
A common result is that companies reach a natural plateau where their skills, tools and operational processes are overwhelmed by virtual machine sprawl and unpredictability.
With this in mind, we will focus on some of the key lessons gleaned from those who have already suffered the pain of virtualisation and emerged victorious.
Now’s your chance
For a start, it is important early on to change the footing on which IT projects are planned. In the world of physical systems, hardware and software are usually funded as part of a dedicated project budget.
Virtualisation breaks this dependency and is an opportunity to separate the underlying hardware from the end customer – but unless you take advantage of this shift you risk losing control. Rather than seizing the initiative to provide better services, you may find cost cutting is imposed.
One way to approach this was outlined to me by a CIO who foresaw that virtualisation was the ideal pretext to change the way IT provided services to the business.
Rather than just consolidating the company’s systems, passing the savings back and looking like a hero in the short term, he fought to work the anticipated cost reduction into a business case for investing in something more future-proof.
He proposed the creation of a new virtualised service pool containing servers, storage and networking, with licensing optimised for highly virtualised workloads. All of this was underpinned by integrated management and comprehensive monitoring and reporting.
This enabled him to go back to the application owners knowing what it cost to provide IT services, both physically and virtually.
Dive into the pool
Instead of force-fitting applications onto highly consolidated servers, the IT department gave service owners a choice: they could continue to fund their own projects and systems in the old manner using dedicated kit, or they could run them in the new virtual pool.
The cost difference between the two meant that unless there was some compelling counter argument, most services quickly moved to the new virtual infrastructure, which was more manageable and flexible than the old static one.
This highlights two other areas to consider when choosing the virtual infrastructure route. The first is that when things are shared, costs and service expectations can quickly become a political hot potato.
Complete visibility into what is being delivered, and what it costs to deliver it, is needed when demonstrating to the business the implications of various requests.
Our research has shown that putting in place at least a basic billing or cost-reporting capability can go a long way towards creating a much better experience all round.
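To make the idea concrete, a basic cost-reporting capability can be as simple as tagging each virtual machine with its owning service and rolling up allocated resources against unit rates. The sketch below is a minimal illustration of that approach; all the rates, services and VM records are hypothetical, and a real implementation would pull allocations from the virtualisation platform's own inventory.

```python
# Minimal sketch of per-service cost reporting for a shared virtual pool.
# Unit rates and VM records below are hypothetical illustrations.

# Monthly unit rates for pooled resources (hypothetical figures)
RATES = {"vcpu": 15.0, "ram_gb": 5.0, "disk_gb": 0.10}

# Per-VM allocations, each tagged with the owning business service
VMS = [
    {"service": "payroll", "vcpu": 2, "ram_gb": 8,  "disk_gb": 100},
    {"service": "payroll", "vcpu": 4, "ram_gb": 16, "disk_gb": 200},
    {"service": "crm",     "vcpu": 2, "ram_gb": 4,  "disk_gb": 50},
]

def monthly_costs(vms, rates):
    """Roll up each VM's allocated resources into a per-service charge."""
    costs = {}
    for vm in vms:
        charge = sum(vm[resource] * rate for resource, rate in rates.items())
        costs[vm["service"]] = costs.get(vm["service"], 0.0) + charge
    return costs

for service, cost in sorted(monthly_costs(VMS, RATES).items()):
    print(f"{service}: {cost:.2f}")
```

Even a crude report like this gives service owners a number to react to, which is usually enough to start the conversation about what their requests actually cost.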
The second point is that when it comes to service delivery in a virtualised infrastructure, nothing matters more than the experience at the point of consumption.
Whatever service level agreements are in place on individual components of the service, what really needs to be monitored and managed is what is actually being delivered to the business.
We touched on this briefly in the previous article in this series, but few companies have proactive service monitoring in place.
When a service is provided by dedicated physical systems, those systems can be sized reasonably effectively and do not have to contend constantly for resources to meet their targets.
But when things are shared and changed regularly, failure to take the potential impact into account can cause real issues for users or customers.
Without timely feedback, it is difficult to see the real service situation until the phone starts ringing.
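One way to get that timely feedback is to probe the service the way a user would and flag breaches against the agreed target before the phone rings. The sketch below is a hypothetical illustration of this consumption-side check: the threshold and simulated delay are invented, and a real monitor would make an actual request to the service rather than sleeping.

```python
# Minimal sketch of proactive, consumption-side service monitoring.
# The SLA target and simulated delays are hypothetical.
import time

SLA_TARGET_SECONDS = 2.0  # hypothetical end-user response-time target

def probe_service(simulated_delay=0.0):
    """Time one end-to-end request; stands in for a real service call."""
    start = time.monotonic()
    time.sleep(simulated_delay)  # a real probe would call the service here
    return time.monotonic() - start

def check_sla(response_time, target=SLA_TARGET_SECONDS):
    """Classify what the user actually experienced against the target."""
    return "OK" if response_time <= target else "BREACH"

print(check_sla(probe_service(0.01)))  # a fast response passes
print(check_sla(3.5))                  # a slow one is flagged as a breach
```

The point is that the check measures the experience at the point of consumption, not the health of any individual component underneath it.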
By Richard Edwards
By Dale Vile