
Opinion: Interconnectivity must scale in line with bandwidth demand

22 Nov 18

Article by Corning Optical Communications technical sales manager Clive Hogg

As network traffic in data centres grows, data centre interconnectivity needs to scale seamlessly to keep pace with the massive increase in demand for bandwidth. The data centre interconnect (DCI) application has emerged as a critical and fast-growing segment in the network landscape.

This article will explore several of the reasons for this growth, including market changes, network architecture evolution, and technology changes. 

The growth of data has driven the construction of data centre campuses, notably hyperscale data centres. To keep information flowing between the data centres in a single campus, each data centre could be transmitting to other data centres at capacities of up to 200 Tbps today, with higher bandwidths necessary for the future (see Figure 1).

Figure 1. Conceptual campus layout. DCI requirements and distances are unique. Bandwidth demands can range as high as 100 Tbps and even 200 Tbps.

Traditionally, data centre architecture has been a three-tiered topology consisting of core routers, aggregation routers, and access switches. This three-tiered architecture doesn’t address the increasing workload and latency demands of hyperscale data centre campus environments. Today’s hyperscale data centres are migrating to a spine-and-leaf architecture (see Figure 2), in which the network is divided into two stages: the spine aggregates and routes packets towards their final destination, and the leaf connects end-hosts and load balances connections across the spine.

The large spine switches are connected to a higher-level spine switch, often referred to as a campus or aggregate spine, to tie all the buildings in the campus together. By adopting flatter network architecture and high-radix switches, we expect the network to get bigger, more modular, and more scalable.

Figure 2. Spine-and-leaf architecture and high-radix switches require massive interconnects in the data centre fabric.
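To make the scale of those interconnects concrete, the sketch below counts the leaf-to-spine links in a simple two-tier fabric where every leaf connects to every spine. The switch counts and the 100G link speed are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: link count in a simple two-tier leaf-and-spine fabric.
# The switch counts and per-link speed below are hypothetical examples,
# not values taken from the article.

def leaf_spine_links(num_leaves: int, num_spines: int) -> int:
    """Each leaf connects to every spine, so the fabric needs
    num_leaves * num_spines point-to-point links."""
    return num_leaves * num_spines

def fabric_bandwidth_tbps(links: int, link_speed_gbps: int) -> float:
    """Aggregate leaf-to-spine capacity in Tbps."""
    return links * link_speed_gbps / 1000

if __name__ == "__main__":
    leaves, spines = 64, 16          # hypothetical switch counts
    link_speed = 100                 # Gbps per leaf-spine link (assumed)
    links = leaf_spine_links(leaves, spines)
    print(f"{links} leaf-spine links, "
          f"{fabric_bandwidth_tbps(links, link_speed):.1f} Tbps aggregate")
```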

Which mix of technology is required to deliver DCI connectivity?

Multiple approaches have been evaluated to deliver transmission rates at this level, but the prevalent model is to transmit at lower rates over many fibres. To reach 200 Tbps using this method requires more than 3000 fibres for each data centre interconnection. When you consider the necessary fibres to connect each data centre in a single campus, densities can easily surpass 10,000 fibres. 
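The fibre counts quoted above can be sanity-checked with simple arithmetic. The sketch below assumes duplex (two-fibre) links running at 100 Gbps each, in the spirit of 100G CWDM4 optics; both values are assumptions, and the result lands in the same several-thousand-fibre range as the figures in the article.

```python
# Back-of-the-envelope check on the fibre counts quoted above.
# Assumes duplex (2-fibre) links and a per-link rate of 100 Gbps;
# both figures are assumptions chosen for illustration.
import math

def fibres_required(target_tbps: float, link_gbps: float = 100,
                    fibres_per_link: int = 2) -> int:
    """Fibres needed to carry target_tbps over parallel duplex links."""
    links = math.ceil(target_tbps * 1000 / link_gbps)
    return links * fibres_per_link

if __name__ == "__main__":
    for target in (100, 200):
        print(f"{target} Tbps -> about {fibres_required(target)} fibres")
```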

A common question is when to use DWDM or other technologies to increase the throughput on every fibre, as opposed to continually increasing the number of fibres. Data centre interconnect applications up to 10 km often use 1310-nm transceivers, which don’t match the 1550-nm transmission wavelengths of DWDM systems. As a result, these massive interconnects are supported by running high-fibre-count cables between data centres.

The next question becomes when to replace 1310-nm transceivers with pluggable DWDM transceivers in the edge switches by adding a mux/demux unit. The answer: when, or if, DWDM becomes cost-effective for these on-campus data centre interconnect links. Once this happens, the same bandwidth can be achieved with DWDM transceivers and much lower-fibre-count cables.

The current prediction is that connections based on fibre-rich 1310-nm architectures will continue to be cheaper for the foreseeable future (see Figure 3).

Figure 3. Pluggable DWDM transceivers vs. 100G CWDM4.
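The cost trade-off behind Figure 3 can be framed as a simple crossover comparison. The sketch below contrasts a fibre-rich 1310-nm build with a DWDM build for a short on-campus link; every price, channel count, and distance in it is a placeholder assumption chosen only to show the shape of the calculation, not real market pricing.

```python
# Hypothetical cost-crossover comparison between a fibre-rich 1310-nm build
# and a DWDM build for a campus DCI link. Every price below is a placeholder;
# the point is the shape of the comparison, not the numbers.

def parallel_fibre_cost(links: int, km: float,
                        transceiver=300.0,        # per 1310-nm transceiver (assumed)
                        fibre_per_km=15.0):       # per fibre, per km (assumed)
    # Two transceivers and two fibres per duplex link.
    return links * (2 * transceiver + 2 * fibre_per_km * km)

def dwdm_cost(links: int, km: float,
              transceiver=2500.0,                 # per pluggable DWDM transceiver (assumed)
              mux_demux=4000.0,                   # per mux/demux pair (assumed)
              channels_per_pair=40,               # DWDM channels per fibre pair (assumed)
              fibre_per_km=15.0):
    fibre_pairs = -(-links // channels_per_pair)  # ceiling division
    return (links * 2 * transceiver
            + fibre_pairs * (mux_demux + 2 * fibre_per_km * km))

if __name__ == "__main__":
    links, km = 1000, 2.0   # a hypothetical 2 km on-campus DCI
    print(f"parallel fibre: ~${parallel_fibre_cost(links, km):,.0f}")
    print(f"DWDM:           ~${dwdm_cost(links, km):,.0f}")
```

With short on-campus distances, the fibre cost term stays small, so the cheaper 1310-nm transceivers dominate the comparison; as distance or per-fibre cost grows, the DWDM option closes the gap.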

Best practices to build extreme-density networks

It is important to understand the best ways to build out extreme-density networks as these networks present new challenges in both cabling and hardware. For example, using loose tube cables and single-fibre splicing is not scalable.

New cable and ribbon designs have doubled fibre capacity from 1728 fibres to 3456 fibres within the same cable diameter or cross-section. These generally fall into two approaches: one approach uses standard matrix ribbon with more closely packable subunits, and the other uses standard cable designs with a central or slotted core design with loosely bonded net design ribbons that can fold on each other (see Figure 4).

Figure 4. Different ribbon cable designs for extreme-density applications.

Leveraging these extreme-density cable designs enables a higher fibre concentration in the same duct space. Figure 5 illustrates how different combinations of the new extreme-density cables enable network owners to achieve the fibre densities that hyperscale data centre interconnections require.

Figure 5. Using extreme-density cable designs to double fibre capacity in the same duct space.
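One way to quantify that doubling is to compare fibre packing density across the two cable generations. The sketch below assumes both designs share roughly the same outer diameter; the ~28 mm figure used here is an illustrative value, not a published specification.

```python
# Simple fibre-packing-density comparison for the two cable generations
# described above. The 28 mm outer diameter is an illustrative assumption.
import math

def fibre_density(fibre_count: int, cable_od_mm: float) -> float:
    """Fibres per square millimetre of cable cross-section."""
    cross_section = math.pi * (cable_od_mm / 2) ** 2
    return fibre_count / cross_section

if __name__ == "__main__":
    cable_od = 28.0  # mm, assumed to be the same for both designs
    for fibres in (1728, 3456):
        print(f"{fibres} fibres in a {cable_od} mm cable: "
              f"{fibre_density(fibres, cable_od):.1f} fibres/mm^2")
```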

When leveraging new ribbon cable designs, network owners should consider the hardware and connectivity options that can adequately handle and scale with these very high fibre counts. It can be easy to overwhelm existing hardware, and there are several key areas to think through as you develop your complete network. 

If you are currently using 288-fibre ribbon cables in your inside plant environment, your hardware must be able to adequately accommodate 12 to 14 cables. Your hardware also must manage 288 separate ribbon splices. Using any single-fibre type cables and a single-fibre splicing method in this application is not feasible or advisable because of massive prep times and unwieldy fibre management. 
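The hardware figures above follow directly from the fibre counts. The sketch below works through splicing a 3456-fibre outside-plant cable onto 288-fibre inside-plant cables, assuming standard 12-fibre ribbons, and shows why ribbon splicing rather than single-fibre splicing is the only practical option.

```python
# Quick check on the hardware numbers above: landing a 3456-fibre outside-plant
# cable on 288-fibre inside-plant cables. The 12-fibre ribbon size is a common
# convention and is assumed here.
import math

def inside_plant_load(osp_fibres: int = 3456,
                      isp_cable_fibres: int = 288,
                      fibres_per_ribbon: int = 12):
    cables = math.ceil(osp_fibres / isp_cable_fibres)   # inside-plant cables to land
    ribbon_splices = osp_fibres // fibres_per_ribbon    # mass-fusion ribbon splices
    single_splices = osp_fibres                         # if spliced fibre by fibre
    return cables, ribbon_splices, single_splices

if __name__ == "__main__":
    cables, ribbons, singles = inside_plant_load()
    print(f"{cables} x 288-fibre cables, {ribbons} ribbon splices "
          f"(vs {singles} single-fibre splices)")
```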

Another challenge is keeping track of fibres to ensure the correct splicing. Fibres need to be labelled and sorted immediately after the cable is opened because of the magnitude of fibres that must be tracked and routed. In most installations, redoing cable prep is manageable. In the case of extreme-density networks, a mistake could cost a week’s delay for just one location.

What does the future hold for extreme-density networks?

The most important question right now is whether counts will stop at 3456 fibres or go higher. Current trends suggest there will be requirements for fibre counts beyond 5000. With fibre packing density already approaching its physical limits, the options for reducing cable diameters in a meaningful way become more challenging.

Development has also focused on how to provide data centre interconnect links to locations spaced much farther apart rather than co-located within the same campus. In most data centre campus environments, data centre interconnect lengths are 2 km or less. These relatively short distances enable one cable to provide connectivity without any splicing points. With edge data centres being deployed around metropolitan areas to reduce latency, distances can approach 75 km. At these lengths, extreme-density cable designs make less financial sense because of the cost of connecting so many fibres over a long distance. In these cases, more traditional DWDM systems will continue to be the preferred choice, running over fewer fibres at 40G and higher.

As network owners prepare for 5G, demand for extreme-density cabling will migrate from data centre environments to access markets. It will continue to be a challenge in the industry to develop products that can scale effectively to reach the required fibre counts while not overwhelming existing duct and inside plant environments.

The next generation of these systems is oriented around support for up to 600G per wavelength and would be used only in shorter-reach metro applications with flex-grid line-system capability. Data centre network operators are adopting 100G inter-data-centre networking technologies today and will soon move to adopt 400G.
