When it comes to supporting business activities, the primary role of IT is to ensure that the applications on which the organisation depends are available and operating at an acceptable level of service. Beyond this there is the unsung, but absolutely essential, matter of protecting both the application and the data it collects, creates and stores over their collective lifetime.
Traditionally, more emphasis has been placed on the protection of data than applications, with the standard approach being backup to tape. This takes time to perform (the ‘backup window’) and, to ensure data consistency, it often requires that access to the data be suspended while the backup takes place.
This can be quite a challenge, particularly for transactional applications (i.e. most of them) – it requires stopping the application, allowing all in-flight transactions to complete and then initiating the protection process. Once the backup completes, the application is restarted and users can resume work as part of their business processes.
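The cycle above can be sketched in a few lines of code. This is a purely illustrative model – the `Application` class, its hooks and the file copy stand in for whatever quiesce and backup mechanisms a real system provides – but it makes the shape of the 'backup window' explicit.

```python
import os
import shutil

# Hypothetical application controller; the names and behaviour here are
# illustrative stand-ins, not tied to any real product or API.
class Application:
    def __init__(self, data_file):
        self.data_file = data_file
        self.running = True
        self.in_flight = 2  # pretend transactions still completing

    def stop_accepting_work(self):
        # Step 1: refuse new transactions.
        self.running = False

    def drain_transactions(self):
        # Step 2: wait for outstanding transactions to commit,
        # so the data set on disk is logically consistent.
        while self.in_flight:
            self.in_flight -= 1

    def restart(self):
        # Step 4: reopen for business.
        self.running = True

def backup_with_window(app, backup_dir):
    """Classic tape-style protection cycle: quiesce, copy, restart."""
    app.stop_accepting_work()
    app.drain_transactions()
    # Step 3: the 'backup window' – users are locked out for as long
    # as this copy takes, which is the core problem with the approach.
    dest = os.path.join(backup_dir, os.path.basename(app.data_file))
    shutil.copy2(app.data_file, dest)
    app.restart()
    return dest
```

Everything between `stop_accepting_work()` and `restart()` is downtime, which is why the length of the copy step matters so much in practice.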
When recovery of data is required, the reverse process takes place, with administrators seeking to restore data sets from one or more tapes. Both processes can take a considerable amount of time, even in the best of cases.
Today’s pressures for data to be available for longer have put additional constraints on the backup window, at the same time as reinforcing the need to recover data quickly. As a consequence, a number of alternative technologies have sprung up for use in data protection and recovery processes. These include backing up to disk, continuous data protection (CDP) systems, point-in-time copies (snapshots), and replication solutions. Each offers different capabilities and suits a different range of data protection scenarios.
Despite these advances, the application still needs to be in a logically consistent state prior to the data protection process commencing, both to enable a recovery process to be operated and to leave users in a position to carry on work.
Today it is becoming unacceptable to bring down some applications to allow a backup, snapshot or replication process to take place, so another means must be found to ensure that a self-consistent data set is protected. The challenge is exacerbated further by complex, composite applications that may run using data from multiple systems, some of which may extend beyond the borders of the enterprise.
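One common pattern for getting a consistent copy without a full outage is to briefly freeze every component of the application, take the point-in-time copy while writes are paused, then thaw everything. The sketch below assumes hypothetical `freeze`/`thaw` hooks on each component; real systems expose equivalents through mechanisms such as database backup modes or filesystem freezes.

```python
from contextlib import contextmanager

# Illustrative component with freeze/thaw hooks; the class and its
# methods are assumptions for the sketch, not a real interface.
class Component:
    def __init__(self, name):
        self.name = name
        self.frozen = False

    def freeze(self):
        # Flush buffers and pause writes so the on-disk state is consistent.
        self.frozen = True

    def thaw(self):
        self.frozen = False

@contextmanager
def consistent_view(components):
    """Freeze every component, yield for the snapshot, thaw in reverse order."""
    frozen = []
    try:
        for c in components:
            c.freeze()
            frozen.append(c)
        # The snapshot or replication pass runs here, while every part of
        # the composite application is quiescent at the same moment.
        yield
    finally:
        # Thaw in reverse so dependencies resume in a safe order, and thaw
        # even if the snapshot step fails.
        for c in reversed(frozen):
            c.thaw()
```

The pause lasts only as long as the snapshot takes to initiate – typically seconds rather than the hours of a full copy – which is what makes this viable where a traditional backup window is not.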
This raises the question: who is responsible for ensuring application and data consistency in data protection and recovery scenarios? As applications become more complex architecturally, and in line with business demands for high availability, IT infrastructure administrators and application developers/implementation consultants need to find ways to work together to ensure that rapid application protection and recoverability are built into systems right from the word go.
From our research, we know how much of a challenge this can be. In a study conducted last year, it became clear just how much of a gamble organisations were prepared to take with the reliability of their IT systems. But more recent work has shown us the gulf between different factions in IT, which can only exacerbate the situation. We’re not going to attempt glib answers – although if you have any, do let us know – but we can say that all the tools in the world won’t help if the right pieces aren’t being put in place from the outset.
Where to start? Probably the developers themselves, who will need to know what the data protection/recovery solution requires of the application, so that the storage administrators can ensure that the desired quality of service is achieved with minimum risk and, naturally, the minimum of cost. It’s time to make planning for recoverability an integral part of the system design and build process, and development is as good a place to start as any.