Published/updated: August 2011
If servers and applications form the heart of the IT infrastructure, then the network is the lifeblood that carries the oxygen of information through the organs of the business, allowing it to think, respond and adapt. As organisations make more use of IT for enabling business processes, the network is becoming an increasingly core element of successful service delivery. Whether you’re upgrading your network infrastructure after years of budget restraint, or continually investing over time, the nature of IT applications and service delivery is changing and will demand new ways of looking at the network to keep up with these changes.
For years, many companies took the approach of “set it and forget it” when it came to networks. The architecture was decided early on and – once implemented – was difficult and costly to change; applications just had to make do with what was on offer.
Networking equipment tends to have a long lifespan, typically six to seven years. Recently there has been a tendency to stretch this even longer. The end result is that architectures have tended to be set in place for a significant period of time. This approach worked reasonably well in the era of static workloads, where applications ran on dedicated servers and change was infrequent.
The trend in modern datacentres is towards a more dynamic environment. Virtualisation is now firmly established as the preferred approach to workload deployment, with most of the companies Freeform Dynamics talks to having consolidated many of their servers. This consolidation is having a knock-on effect on the network.
With consolidation ratios of 10:1 being common, and ratios of 20:1 and higher not unheard of, individual servers tend to work the network hard in terms of traffic. With so many services running simultaneously, network failures have a widespread and noticeable impact. Attempting to move to high consolidation ratios without changing the approach to the networking infrastructure is likely to result in bottlenecks and an increasingly unmanageable workload environment.
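A back-of-envelope calculation shows why consolidation stresses the network. The figures below are purely illustrative assumptions (not survey data): suppose each workload previously averaged 120 Mbit/s on its own 1GbE-attached server.

```python
# Illustrative sizing sketch, using assumed traffic figures:
# estimate the aggregate network demand one physical host must carry
# after server consolidation.

def post_consolidation_demand(ratio, avg_mbps_per_workload):
    """Aggregate steady-state traffic (Mbit/s) a single physical host
    must carry when `ratio` formerly-dedicated servers are consolidated
    onto it."""
    return ratio * avg_mbps_per_workload

for ratio in (10, 20):
    demand = post_consolidation_demand(ratio, 120)
    fits = "fits within" if demand <= 1000 else "exceeds"
    print(f"{ratio}:1 consolidation -> ~{demand} Mbit/s per host, "
          f"which {fits} a single 1GbE link")
```

Even with these modest assumed per-workload figures, a 10:1 ratio already pushes a host past what a single gigabit interface can carry, which is the arithmetic behind the move to 10Gbit/s interfaces discussed next.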
To support the move to consolidation, we see a need to implement 10Gbit/s and in some cases even 40Gbit/s interfaces to cater for the new computing demands, especially as virtualisation enhancements in areas such as I/O enable even greater utilisation of the network. Arguably just as important is the requirement that reliability and serviceability improve too.
We’ve seen that consolidation is well under way, but there is also a small but growing trend towards a dynamic IT infrastructure where resources may be pooled together or become fully flexible, able to move around the datacentre from server to server across the network. The traditional three-tier architecture – with access, aggregation and core layers – that served well in a static environment will be a struggle to manage and service in a dynamic one. These tiers add ports, power, latency, unpredictability and management overhead, significantly increasing the cost of buying and operating the network. The effect is to make the job of moving workloads seamlessly from one region of the network to another tougher than it needs to be.
When looking to modernise the core of the network, it’s advisable to try to simplify the architecture, reducing the tier count where possible using higher capacity networking equipment even if initially it ends up being a larger capital investment. This can help to make the network more flexible and responsive, mirroring the changes that have been happening in the application environment. This approach will also help to overcome the issue of being locked into an inflexible architecture for half a decade or more in a fast-changing world.
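The latency benefit of reducing the tier count can be sketched with simple hop counting. The model below is a hypothetical illustration, not vendor data: it assumes server-to-server traffic between different access switches must travel up to the topmost shared tier and back down again.

```python
# Hypothetical hop-count comparison to illustrate why flattening
# tiers reduces latency and management overhead; the topology model
# is an assumption for illustration, not measured data.

def vm_to_vm_switches(tiers):
    """Number of switches traversed by traffic between two servers on
    different access switches: up through each tier to the shared top
    layer, then back down the other side."""
    return 2 * tiers - 1

three_tier = vm_to_vm_switches(3)  # access -> aggregation -> core -> aggregation -> access
two_tier = vm_to_vm_switches(2)    # access -> core -> access

print(f"three-tier path: {three_tier} switches; "
      f"flattened two-tier path: {two_tier} switches")
```

Each switch in the path adds latency, a potential failure point, and a device to configure, so dropping from five traversed switches to three compounds across every east-west flow in the datacentre.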
A further complication to consider is the role of the storage network, which invariably inches towards the top of the list of datacentre challenges for many IT managers. The traditional SAN will remain relevant and popular for years, if not decades, to come. However, it does add to the complexity of the network architecture and can hamper flexibility. When looking at new storage investments, it is worth considering moving to a converged data and storage network now that the Ethernet technologies to support it, such as Fibre Channel over Ethernet (FCoE), are becoming proven and mature.
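When weighing a converged network, the basic question is whether one Ethernet link can carry both storage and data traffic with headroom. The sketch below uses assumed example loads (a 4Gbit/s Fibre Channel SAN load plus 3Gbit/s of LAN traffic on a 10GbE link); the figures are illustrative, not a sizing recommendation.

```python
# Simple sanity check, with assumed traffic figures, of whether a
# converged 10GbE link carrying FCoE has capacity for both storage
# and LAN traffic.

def converged_headroom(storage_gbps, lan_gbps, capacity_gbps=10.0):
    """Remaining capacity (Gbit/s) on a converged link; a negative
    result means the link would be oversubscribed."""
    return capacity_gbps - (storage_gbps + lan_gbps)

# Example: migrating a 4Gbit/s FC SAN load onto a 10GbE link that
# also carries 3Gbit/s of LAN traffic.
print(converged_headroom(4.0, 3.0))
```

In practice FCoE also depends on lossless Ethernet features (such as priority-based flow control) rather than raw bandwidth alone, so headroom is a necessary but not sufficient check.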
Arguably the biggest thing to consider is how the move to a more dynamic infrastructure is placing new demands on the manageability and security of the network. At the lowest level, the main challenge is to get the management of the network on a more tightly integrated footing, so that most – if not all – of the elements such as switches and routers can be seamlessly configured, and so that hardware features and compatibility issues that can hinder flexibility are minimised. Security should ideally be baked in as a core function supporting the network rather than implemented as a separate layer.
At a higher level, tight integration of network management into the overall service management platform, including performance management, workload migration and virtual machine management, can enable the network to play an equal role in delivering IT services to agreed levels.
One of the major difficulties in modernising the network is choosing the appropriate scale of change. In most companies, change is gradual and the move to dynamic IT starts as a small evolution driven by new projects rather than as a big-bang or fork-lift upgrade. This presents the opportunity to create small “islands” of dynamic IT with the new network architecture, and then grow them over time as more workloads are implemented and as the skills and experience mature to run them effectively.