Published/updated: June 2008
by Dale Vile and Jon Collins
The modern business is highly dependent on IT. When systems go down, the disruption can be widely felt, and even lead to tangible damage to the business or its brand. Against this background, it doesn't make sense to gamble with systems availability. So why do so many take risks?
Systems failures occur frequently and impact the business in multiple ways
When more than 1,200 IT professionals were asked about the frequency with which IT systems failures impact their businesses, more than half (57%) reported disruptions occurring on at least a monthly basis. The end result is a direct hit on business productivity, increased IT overhead and knock-on effects as delays impact processes, schedules and plans. Beyond this general disruption, one in five organisations suffers brand damage or tangible financial loss on at least a quarterly basis.
Application availability hotspots differ by organisation size
Larger enterprises are more inclined to identify core business applications as an availability hotspot, as highly integrated in-house developed systems and heavily customised software packages create a complex landscape with many potential points of failure. Small and medium-sized organisations call out horizontal applications such as email as being particularly troublesome from an availability perspective, as a result of rapid growth in demand and underinvestment in platforms.
Lack of resiliency planning often leads organisations to gamble on availability
Much of the exposure leading to high failure rates comes about because system availability is only considered towards the end of the project lifecycle. This often results in having to choose the lesser of two evils: either slipping delivery times to retrofit resiliency measures, or taking the gamble and putting the system live with vulnerabilities. Even if the will is there to do the right thing, unfortunately the money may not be, as the cost of implementing resiliency will not have been budgeted.
Dealing with the challenges requires a balanced approach
Whether it's poor planning or simply a lack of appreciation of the need to invest, in most organisations, a significant gap exists between the resiliency measures the business requires and those that are actually in place. Issues range from the fundamental, such as inadequate controls during the application lifecycle leading to software that isn't 'operations ready', to simple things like the absence of failover solutions for key applications or the lack of effective monitoring to pre-empt potential failures. While the research suggests that addressing such issues individually will pay back significantly, the real aim has to be incorporating resilience and availability into all aspects of IT.
But don't try to boil the ocean; start with the simple stuff
An obvious step to take, if you have not already done so, is to involve IT operations staff early in the project lifecycle. This will highlight resiliency requirements and allow dependencies and conflicts with the existing infrastructure to be understood up front so plans and budgets can be set appropriately. Addressing some of the hotspots identified above is also a good move. Simply stabilising an email or collaboration system, for example, will be a step in the right direction, freeing up resources and getting the business to appreciate the value of uptime, which is a great foundation to lay for the future.
The research on which this report is based was designed, executed and interpreted independently by Freeform Dynamics. Feedback was gathered via an online survey (1223 respondents, predominantly IT professionals from the UK, USA, and other geographies). The study was sponsored by Neverfail.
By Bryan Betts and Dale Vile
Yesterday's software delivery processes are not up to dealing with today's demands, but modernising your approach is not just about implementing Agile, or even creating a DevOps culture. You need to focus on some specific, hard-core principles. ...more
By Dale Vile & Jack Vile
Cloud services are increasingly becoming part of the IT delivery mix, but a recent study of 378 senior IT professionals suggests a parallel commitment to ongoing investment in the datacentre. This in turn shines a light on the key role of modern application platforms. ...more
By Tony Lock & Dale Vile
Despite the advent of cloud computing, the datacentre remains central to corporate IT. But with demands continuing to escalate, how do you ensure your infrastructure is powered robustly and efficiently? ...more
By Bryan Betts
Many are exploiting cloud computing to drive business advantage, while others are enjoying the flexibility and efficiency of DevOps. But what happens if you use both together in a coordinated manner? The answer is a significant amplification of the benefits of each. ...more
By Dale Vile
Securing the applications and services that underpin your online and mobile presence is one thing, but keeping them secure on an ongoing basis is another. How well do your business execs understand this? ...more
By Dale Vile & Jack Vile
Keeping up with escalating storage demands is not just about managing the growth in data. A survey of over 360 senior IT professionals tells us that speed, efficiency and predictability are also important to keep up with evolving application needs. ...more
By Bryan Betts
In today's digital age, an information archive should not be the equivalent of a dusty tomb for documents. Active archives make data held in long-term storage more accessible so it can remain an integral and valuable part of the business. ...more