A study we undertook in 2008 confirmed what we all knew to be true: contrary to the hype, IT wasn’t in fact broken/on-fire/rubbish; it was actually doing OK.
However, those working in the field readily acknowledged that things could be better. The burden IT operates under doesn’t look like much when broken down into a set of individual concerns, but when those concerns are rolled up and examined at a macro level, it’s easy to confuse the difficulty of making IT work with what IT actually delivers.
Individually, day-to-day challenges such as security, desktop maintenance, help desk activities and information management are a bearable pain. En masse, they cause significant headaches for the majority of IT departments around the world. To the uninitiated, the sum of all these ‘things’ is simply ‘what IT is and does’. To those in the know, however, it’s the difference between the outputs delivered and the effort required to achieve and administer them.
One of the major causes of inertia is the lack of joined-up IT management capability. The risk of missing incidents as they arise, and of duplicating effort, is high. Working across multiple disparate tools and systems is inefficient, requires a broad range of skills and costs more. Then there is the negative impact this fragmentation has on capability and performance. But as IT professionals told us, it’s not that IT doesn’t work – more that there’s a pretty thick ‘overhead’ layer to get through.
And now we’re bringing virtualisation into the mix. Judging from the feedback we’ve been getting, there’s a significant body of organisations gearing up to take their virtualisation activities to the next level.
However, without making some quite fundamental changes to the way they manage their IT environments, they risk adding a whole new layer of complexity – one which demands clarity, consistency and expedience in management terms – into an environment which generally lacks these capabilities.
So the question to round off this section of the workshop is a pretty simple one: when adding virtualisation into the mix beyond testing or other closed-off, IT-only initiatives, what consideration have you given to the management side of things?
There could be a few angles to cover here: there are the tools on offer from the virtualisation vendors, then there are tools from the established systems management software vendors, and there are those from the start-ups looking to fill the gaps between the two. Then there’s all the stuff you already have.
Of course, some might say this is a solved problem – there are management frameworks and best practices galore, such as ITIL and COBIT. But as my colleague Jon has been pondering (there are no hard and fast rules here yet): are such frameworks really up to the job if they were formulated in a time when physical systems held sway?