The modern enterprise network is at the heart of IT service delivery, and consideration needs to be given not only to the impact that new architectures, applications or services will have on it, but also to the changes needed to service these new requirements. A number of developments, though still in their early days, have the potential to radically change the economics, demands and architecture of the networks of the future.
While still in the early phases of adoption, the uptake of video content has the potential to radically alter the nature of networks and – if the adoption is not thought through – to overwhelm them. Video is an immersive medium, and has many applications. It has a role as part of an interactive communications and collaboration solution, building on email, IP telephony and collaboration tools in general.
Video may also serve as a content delivery system, carrying internal content or material from external parties. Content delivery is also likely to have a major role in education and training, augmenting or in some cases replacing instructor-led classes. Sites such as YouTube started by serving low-quality personal clips but have begun to morph into content aggregators and delivery channels, with much of their material becoming available in high definition (HD).
Video files tend to be large, especially in HD. Videos are often streamed rather than played from a local copy, so every viewing generates network traffic and risks congestion and contention with other services. Streams demand a lot of bandwidth and are sensitive to service interruption and latency. Ultimately, the impact will be a lot of extra traffic that – compared to “traditional” business data – has a high bit count and relatively low intrinsic value: once delivered it has been consumed, and must be re-transmitted should it need to be viewed again.
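To put the scale in context, a rough back-of-the-envelope calculation shows how quickly streamed HD video dwarfs typical business traffic. The bitrate and viewer count below are illustrative assumptions, not figures from a specific codec or deployment:

```python
# Sketch: aggregate bandwidth for concurrent HD streams.
# Figures are illustrative assumptions (e.g. ~5 Mbit/s for a 1080p stream).
HD_STREAM_MBITPS = 5.0       # assumed per-stream bitrate
CONCURRENT_VIEWERS = 200     # assumed number of simultaneous viewers

def aggregate_mbitps(per_stream, viewers):
    """Total bandwidth if every viewer pulls a separate stream."""
    return per_stream * viewers

total = aggregate_mbitps(HD_STREAM_MBITPS, CONCURRENT_VIEWERS)
print(f"{total:.0f} Mbit/s")  # 200 viewers x 5 Mbit/s = 1000 Mbit/s
```

Because the content is re-transmitted on each viewing rather than cached locally, this cost recurs every time the material is watched.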
A challenge will be integrating this wave of traffic with the existing base without having to architect and maintain different physical or logical networks, and without increasing the cost of the network prohibitively. Modern, HD video collaboration services, such as Halo or TelePresence, typically require a managed service and dedicated network. This may be acceptable at the high end, but for mainstream implementations these services will need to run on the same networks as the rest of the enterprise traffic. Comprehensive management and monitoring capabilities together with quality of service (QoS) will be of paramount importance in supporting the move to video networks.
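As one hedged illustration of the kind of QoS control involved, a Linux edge device could use `tc` with an HTB queueing discipline to guarantee video a share of the link while capping it. The interface name, rates and class IDs below are placeholder assumptions, not a recommended policy:

```shell
# Sketch only: HTB-based QoS on a hypothetical 1Gbit/s uplink (eth0).

# Root HTB qdisc; unclassified traffic falls into class 1:30.
tc qdisc add dev eth0 root handle 1: htb default 30

# Parent class at the assumed link rate.
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000mbit

# Video class: guaranteed 300 Mbit/s, allowed to borrow up to 600 Mbit/s.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 300mbit ceil 600mbit

# Everything else: guaranteed the remainder, may use the full link when idle.
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 700mbit ceil 1000mbit

# Treat packets already marked with DSCP AF41 (ToS byte 0x88) as video.
tc filter add dev eth0 parent 1: protocol ip u32 match ip tos 0x88 0xfc flowid 1:10
```

The point of the sketch is that video and business data can share one physical network, with contention resolved by policy rather than by separate cabling.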
Video poses additional problems, too. Consumption of video is very much client driven. These clients come in all shapes and sizes, and also connect in very different ways. More and more devices are connecting wirelessly, from PCs, through projectors and displays, to the new generations of tablets, making wireless video delivery a critical part of the solution. Many of the devices will need power, either directly, or for charging, and perhaps in inconvenient places, making Power over Ethernet (PoE) an important consideration.
More change is coming in other areas. Virtualisation is very much part of the IT infrastructure. Even in its most common form of server consolidation, there is an impact on the network.
Multiple virtual machines work the server hard, generating increased network as well as storage traffic. With multiple workloads, the reliability and availability demands on the network also increase, as a single failure will affect many different workloads. The end result will be a shift towards higher-bandwidth interfaces, accelerating the adoption of 10Gbit/s and beyond, combined with advanced features such as in-service upgrades and maintenance that decrease downtime. Network interfaces will be shared, making proactive management a vital constituent of service delivery.
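The arithmetic behind the shift to faster interfaces is straightforward. As an illustrative sketch (the VM count, per-VM rate and burst factor are assumptions), a modestly consolidated host can exceed a single 1GbE link at peak:

```python
# Sketch: why consolidation pushes hosts past 1Gbit/s networking.
# All figures are illustrative assumptions.
VMS_PER_HOST = 20
AVG_VM_MBITPS = 80     # assumed average network demand per VM
PEAK_FACTOR = 3        # assumed burst multiplier at peak load

def host_demand_mbitps(vms, avg_mbitps, peak_factor):
    """Peak network demand for a consolidated host, in Mbit/s."""
    return vms * avg_mbitps * peak_factor

peak = host_demand_mbitps(VMS_PER_HOST, AVG_VM_MBITPS, PEAK_FACTOR)
print(peak)  # 4800 Mbit/s: far beyond a single 1GbE interface
```

Storage traffic carried over the same converged fabric only adds to this figure, reinforcing the case for 10Gbit/s and shared, proactively managed interfaces.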
Moving beyond consolidation are new cloud approaches that are enabled by virtualisation. For most companies, cloud will mean the “internal private cloud”, which in reality is a dynamic IT infrastructure. This approach envisions a service-based network that is as free as possible from topology constraints and physical form, shaped instead by the needs of the moment and by moving workloads to where it makes most sense from a performance, cost or risk perspective.
Removing topology constraints will take some time due to existing investments, architectures and experience, not to mention politics. One of the fundamental changes is the convergence of different physical networks, such as data and storage, onto a common fabric. This will be a multi-year journey for most, rather than a big bang, but in the long term it is likely to become an accepted part of the mainstream. Moving traffic of all types onto a common fabric has the potential to simplify the architecture, as well as enable a level of dynamic flexibility that is either not achievable or not affordable with separate and rigid topologies.
The network has an essential role to play in the dynamic infrastructure. Previously, network architecture and topology defined elements such as security, policy and performance. In a dynamic architecture, where workloads are able to move with freedom, the network must be able to adjust and support these activities, reconfiguring in real time to enable seamless transitions – and to do so in an automated manner, without the intervention of administrators except to deal with exceptional issues.
Looking beyond the company’s infrastructure are “external cloud” services, both hosted and public. These have the potential to relieve some of the burden on the datacentre, but they do have an impact on how we have to think about networking and traffic. In the internal network, traffic is essentially free once the kit is in place.
Moving that traffic to the cloud is a different matter. Bandwidth is usually far less available, and increasingly is a variable and significant cost. Latency suffers, with the result that optimisation is far more difficult to achieve.
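One reason latency hurts so much is the classic TCP window limit: with a fixed receive window, a single flow's throughput is bounded by window size divided by round-trip time. A quick sketch (the 64KB window and the RTT figures are assumptions) shows how a link that feels fast on the LAN collapses over a WAN path to a cloud provider:

```python
# Sketch: TCP throughput ceiling = window size / round-trip time.
WINDOW_BYTES = 64 * 1024  # assumed 64KB receive window (no window scaling)

def max_throughput_mbitps(window_bytes, rtt_seconds):
    """Upper bound on single-flow TCP throughput, in Mbit/s."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

lan = max_throughput_mbitps(WINDOW_BYTES, 0.001)  # assumed ~1ms LAN RTT
wan = max_throughput_mbitps(WINDOW_BYTES, 0.080)  # assumed ~80ms RTT to a cloud region
print(f"LAN: {lan:.0f} Mbit/s, WAN: {wan:.1f} Mbit/s")
```

The same flow that could in principle reach hundreds of Mbit/s locally is capped at a few Mbit/s across the WAN, which is why techniques such as window scaling and WAN optimisation become necessary, and harder, once traffic moves outside the building.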
Content Contributors: Andrew Buss