The overlooked element of DevOps
Many words have been written about the need for automation, for DevOps, for Continuous Delivery, and for all the other buzzwords that people use when they talk about evolving IT and managing change. Yet research suggests that few have paid enough attention to why change fails: most often, the cause is a failure to engage the people, rather than the technology.
In some respects, IT infrastructure has changed rapidly over the last five years – in particular, virtualisation and Cloud have become widely accepted, almost standard approaches. Yet significant numbers of organisations are still struggling to get to grips with how these are changing their data centres. Another major change beginning to impact data centres is the adoption of DevOps. But while DevOps and Continuous Delivery garner considerable media attention, the results of a survey recently carried out by Freeform Dynamics show they are still relatively young in their widespread adoption in the enterprise (Figure 1).
As DevOps usage grows and organisations seek greater flexibility from their IT systems, it brings with it the need to automate a considerable number of processes that were formerly labour intensive. These include the provisioning of infrastructure resources to services as they are deployed, potentially re-provisioning them far more frequently than in the past and, perhaps the most overlooked element of all, de-provisioning them after perhaps just a few days or weeks of use. Of course, all of these processes must be managed with a strong focus on security.
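The provision/re-provision/de-provision cycle described above can be sketched in code. The following is a minimal illustrative example only – the `Resource` and `LifecycleManager` names are invented for this sketch, not taken from any particular tool – showing how attaching a time-to-live at provisioning time lets de-provisioning become an automated check rather than a forgotten manual task:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Resource:
    name: str
    provisioned_at: datetime
    ttl: timedelta  # how long the resource should live before de-provisioning

class LifecycleManager:
    """Tracks provisioned resources and flags those due for de-provisioning."""

    def __init__(self) -> None:
        self.resources: dict[str, Resource] = {}

    def provision(self, name: str, ttl_days: int, now: datetime) -> Resource:
        # Record the expiry policy at the moment of provisioning
        res = Resource(name=name, provisioned_at=now, ttl=timedelta(days=ttl_days))
        self.resources[name] = res
        return res

    def expired(self, now: datetime) -> list[str]:
        # Resources past their TTL become candidates for automated de-provisioning
        return [n for n, r in self.resources.items()
                if now >= r.provisioned_at + r.ttl]

    def deprovision(self, name: str) -> None:
        # A real system would also revoke credentials and release storage here
        self.resources.pop(name, None)
```

For example, a test environment provisioned with a seven-day TTL would show up in `expired()` on day ten, where a scheduled job could tear it down automatically.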
These new requirements are already presenting challenges in the data centre, as shown in the results of another survey, where the flexibility required of IT resources is highlighting the weaknesses in many existing monitoring tools, especially when the impact of Cloud usage is considered (Figure 2).
With so much change happening, how can data centre managers and IT professionals ensure everything runs smoothly? Clearly, attention and investment are required to bring monitoring up to scratch in this dynamic new world. The same can be said for the management and security tools used to control the data centre’s IT infrastructure.
But most important of all, it is people and process matters that require immediate attention. In many, if not most, organisations with large IT teams, it has long been recognised that the days of narrow specialist teams looking after particular technology silos are coming to an end, so that staff are able to handle a broader range of workloads. But the results of the survey mentioned earlier (Figure 1) tell us that getting different parts of IT working together well is a challenge for many (Figure 3).
The survey returns show that a clear majority of IT teams or groups have experienced trust issues with other teams. This is despite almost every respondent acknowledging that it is critically important for different groups to work together to meet existing challenges, especially in terms of security. There are many possible reasons for the lack of trust; sometimes it might simply be down to history or personal issues. But it is also very likely that many organisations do not have good processes in place to enable people to work together effectively.
It could be that Operations and Developers rarely speak or meet, that Operations and Security teams only get in touch with each other when something has ‘gone wrong’, or simply that Security professionals and Developers only communicate in terms of commands. In the dim and distant past when IT systems were relatively static, communications may not have needed to be open all of the time, but even then, there was a need to believe that what one group told another could be trusted as a base from which to act. In the far more dynamic environments that are being built today, good communications and trust are even more crucial if serious problems are to be avoided.
In essence, “feedback” must be at the heart of any dynamic system, and the DevOps approach to creating, modifying and running software and systems is a case in point. Operations needs to understand the type of infrastructure that developers require to deploy their systems, and it has to put in place the monitoring tools to keep that infrastructure functioning effectively and to highlight any issues that need to be addressed. Data centre Ops staff must then be able to feed back the results of monitoring and managing the systems they run, allowing developers to fix bugs and optimise code that isn’t running well or is consuming more physical resources than expected.
These people skills and processes are often overlooked but are becoming increasingly important. The benefits can be difficult to measure exactly, but the consequences of getting them wrong will become visible almost immediately. Until these people and process issues are tackled, problems will continue (Figure 4).
The Bottom Line
Complexity in the data centre continues to increase. Keeping everything operational needs not just good technology but processes that have been modernised to handle the challenges of today, not those of a decade ago. Communication, both verbal and electronic, is essential to improving IT service delivery and avoiding failures and outages. But improving how Ops speaks with Devs, Devs speak with Ops, and Security speaks with everyone can also allow each team to have a positive influence. People and communications are easy to overlook. Don’t.
Article originally published on DCS UK
Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the best-informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know.