Published/updated: June 2008
by Dale Vile and Jon Collins
The modern business is highly dependent on IT. When systems go down, the disruption can be widely felt, and even lead to tangible damage to the business or its brand. Against this background, it doesn’t make sense to gamble with systems availability. So why do so many take risks?
Systems failures occur frequently and impact the business in multiple ways
When more than 1,200 IT professionals were asked about the frequency with which IT systems failures impact their businesses, more than half (57%) reported disruptions occurring on at least a monthly basis. The end result is a direct hit on business productivity, increased IT overhead and knock-on effects as delays impact processes, schedules and plans. Beyond this general disruption, one in five organisations suffers brand damage or tangible financial loss on at least a quarterly basis.
Application availability hotspots differ by organisation size
Larger enterprises are more inclined to identify core business applications as an availability hotspot, as highly integrated in-house developed systems and heavily customised software packages create a complex landscape with many potential points of failure. Small and medium-sized organisations call out horizontal applications such as email as being particularly troublesome from an availability perspective, as a result of rapid growth in demand and underinvestment in platforms.
Lack of resiliency planning often leads organisations to gamble on availability
Much of the exposure leading to high failure rates comes about because system availability is only considered towards the end of the project lifecycle. This often results in having to choose the lesser of two evils: either slipping delivery times to retrofit resiliency measures, or taking the gamble and putting the system live with vulnerabilities. Even if the will is there to do the right thing, unfortunately the money may not be, as the cost of implementing resiliency will not have been budgeted for.
Dealing with the challenges requires a balanced approach
Whether it’s poor planning or simply a lack of appreciation of the need to invest, in most organisations a significant gap exists between the resiliency measures the business requires and those that are actually in place. Issues range from the fundamental, such as inadequate controls during the application lifecycle leading to software that isn’t ‘operations ready’, to simpler things like the absence of failover solutions for key applications or the lack of effective monitoring to pre-empt potential failures. While the research suggests that addressing such issues individually will pay back significantly, the real aim has to be incorporating resilience and availability into all aspects of IT.
But don’t try to boil the ocean; start with the simple stuff
An obvious step to take, if you have not already done so, is to involve IT operations staff early in the project lifecycle. This will highlight resiliency requirements and allow dependencies and conflicts with the existing infrastructure to be understood up front so plans and budgets can be set appropriately. Addressing some of the hotspots identified above is also a good move. Simply stabilising an email or collaboration system, for example, will be a step in the right direction, freeing up resources and getting the business to appreciate the value of uptime, which is a great foundation to lay for the future.
The research on which this report is based was designed, executed and interpreted independently by Freeform Dynamics. Feedback was gathered via an online survey (1223 respondents, predominantly IT professionals from the UK, USA, and other geographies). The study was sponsored by Neverfail.