How to gear up for dynamic service delivery

If servers and applications form the heart of the IT infrastructure, then the network is the lifeblood that carries the oxygen of information through the organs of the business, allowing it to think, respond and adapt. As organisations make more use of IT for enabling business processes, the network is becoming an increasingly core element of successful service delivery. Whether you’re upgrading your network infrastructure after years of budget restraint, or continually investing over time, the nature of IT applications and service delivery is changing, and keeping up will demand new ways of looking at the network.

For years, many companies took the approach of “set it and forget it” when it came to networks. The architecture was decided early on and – once implemented – was difficult and costly to change; applications just had to make do with what was on offer.

Networking equipment tends to have a long lifespan, typically six to seven years. Recently there has been a tendency to stretch this even longer. The end result is that architectures have tended to be set in place for a significant period of time. This approach worked reasonably well in the era of static workloads, where applications ran on dedicated servers and change was infrequent.

The trend in modern datacentres is towards a more dynamic environment. Virtualisation is now firmly established as the preferred approach to workload deployment, with most of the companies Freeform Dynamics talks to having consolidated many of their servers. This consolidation is having a knock-on effect on the network.

With consolidation ratios of 10:1 being common, and ratios of 20:1 and higher not unheard of, individual servers now push far more traffic onto the network. With so many services running on each host, a network failure has a widespread and highly visible impact. Attempting to move to high consolidation ratios without changing the approach to the networking infrastructure is likely to result in bottlenecks and an increasingly unmanageable workload environment.

To support the move to consolidation, we see a need to implement 10Gbit/s and in some cases even 40Gbit/s interfaces to cater for the new computing demands, especially as virtualisation enhancements in areas such as I/O drive even greater utilisation of the network. Arguably as important is the requirement that reliability and serviceability improve too.
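
To put rough numbers on this, the back-of-the-envelope sketch below estimates the peak traffic a single consolidated host might generate. The per-workload traffic figure and burst multiplier are purely illustrative assumptions rather than research findings, but they show why 1Gbit/s links run out of headroom quickly as consolidation ratios climb, while 10Gbit/s still copes at typical ratios.

```python
# Back-of-the-envelope check of how consolidation concentrates traffic on a
# single host's network links. All figures are illustrative assumptions.

AVG_MBPS_PER_WORKLOAD = 80      # assumed steady-state traffic per VM, Mbit/s
PEAK_FACTOR = 3                 # assumed burst multiplier at busy periods

def host_traffic_gbps(consolidation_ratio: int) -> float:
    """Estimated peak traffic (Gbit/s) generated by one consolidated host."""
    return consolidation_ratio * AVG_MBPS_PER_WORKLOAD * PEAK_FACTOR / 1000

for ratio in (1, 10, 20, 30):
    peak = host_traffic_gbps(ratio)
    fits_1g = "yes" if peak <= 1 else "no"
    fits_10g = "yes" if peak <= 10 else "no"
    print(f"{ratio:>2}:1 consolidation -> ~{peak:4.1f} Gbit/s peak "
          f"(fits 1GbE: {fits_1g}, fits 10GbE: {fits_10g})")
```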

We’ve seen that consolidation is well under way, but there is also a small but growing trend towards a dynamic IT infrastructure in which resources may be pooled together or become fully flexible, able to move around the datacentre from server to server across the network. The traditional three-tier architecture – with access, aggregation and core layers – that served well in a static environment will be a struggle to manage and service in a dynamic situation. These tiers result in additional ports, power, latency, unpredictability and management overhead, significantly increasing the cost of buying and operating the network. The effect is to make the job of moving workloads seamlessly from one region of the network to another tougher than it needs to be.
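As a simple illustration of the overhead those extra tiers introduce, the sketch below compares the switch hops a packet crosses between servers in different racks under a classic three-tier design and a flatter two-tier fabric. The per-hop latency figure is an assumed, illustrative value, not a measurement of any particular product.

```python
# Compare switch hops (and the latency they add) between two servers in
# different racks under a three-tier design versus a flatter two-tier fabric.
# The per-hop latency is an illustrative assumption.

PER_SWITCH_LATENCY_US = 4   # assumed average per-hop switching latency, microseconds

designs = {
    "three-tier (access > aggregation > core)": 5,  # access, agg, core, agg, access
    "two-tier (leaf > spine)": 3,                   # leaf, spine, leaf
}

for name, hops in designs.items():
    print(f"{name}: {hops} switch hops, "
          f"~{hops * PER_SWITCH_LATENCY_US} us added switching latency")
```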

When looking to modernise the core of the network, it’s advisable to try to simplify the architecture, reducing the tier count where possible by using higher-capacity networking equipment, even if this initially means a larger capital investment. This can help to make the network more flexible and responsive, mirroring the changes that have been happening in the application environment. This approach will also help to overcome the issue of being locked into an inflexible architecture for half a decade or more in a fast-changing world.

A further complication to consider is the role of the storage network, which invariably inches towards the top of the list of datacentre challenges for many IT managers. The traditional SAN will remain relevant and popular for years, if not decades, to come. However, it does add to the complexity of the network architecture and can hamper flexibility. When looking at new storage investments, it is worth considering moving to a converged data and storage network now that the Ethernet technologies to support it, such as Fibre Channel over Ethernet (FCoE), are becoming proven and mature.

Arguably the biggest thing to consider is how the move to a more dynamic infrastructure is placing new demands on the manageability and security of the network. At the lowest level, the main challenge is to get the management of the network onto a more tightly integrated footing, so that most – if not all – of the elements such as switches and routers can be seamlessly configured, and hardware feature and compatibility issues that can hinder flexibility are minimised. Security should ideally be baked in as a core function supporting the network rather than implemented as a separate layer.
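
In practice, this kind of tighter integration usually means driving switch configuration programmatically from a single management point rather than box by box. The sketch below is purely illustrative: the controller URL, endpoint and payload shape are hypothetical stand-ins for whatever API a given network management platform exposes; the point is the shape of the workflow, not any specific product.

```python
# Illustrative sketch only: the controller URL, endpoints and payload shape
# are hypothetical, standing in for whatever API a given network management
# platform exposes. The idea is that switch configuration is driven from one
# place rather than device by device.

import json
import urllib.request

CONTROLLER = "https://netmgmt.example.local/api"   # hypothetical management endpoint

def provision_vlan(vlan_id: int, name: str, switches: list) -> None:
    """Ask the (hypothetical) management platform to roll a VLAN out to a set of switches."""
    payload = {"vlan_id": vlan_id, "name": name, "switches": switches}
    req = urllib.request.Request(
        f"{CONTROLLER}/vlans",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(f"VLAN {vlan_id} provisioning accepted: HTTP {resp.status}")

# Example: make the same VM network available on every access switch, so a
# workload can be migrated to any host without manual per-switch changes.
provision_vlan(210, "vm-production", ["leaf-01", "leaf-02", "leaf-03"])
```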

At a higher level, tight integration of network management into the overall service management platform, including performance management, workload migration and virtual machine management, can enable the network to play an equal role in delivering IT services to agreed levels.

One of the major difficulties in modernising the network is choosing the appropriate scale of change. In most companies, change is gradual and the move to dynamic IT starts as a small evolution driven by new projects rather than as a big-bang or fork-lift upgrade. This presents the opportunity to create small “islands” of dynamic IT with the new network architecture, and then grow them over time as more workloads are implemented and as the skills and experience mature to run them effectively.


Content Contributors: Andrew Buss

