Article by Corning Optical Communications technical sales manager Clive Hogg
Over the past 10 years, the rapid advancement of technology has resulted not only in an explosion of data, but in a reliance on data centres never before seen. The emergence of disruptive technologies such as virtualisation, IoT devices and 5G is driving enormous bandwidth demands, requiring data centres to evolve quickly to meet escalating requirements and support latency-sensitive communications.
For data centres to provide consistent performance to customers and easily meet their ever-growing demands, data centre operators need to bring in new technologies and plan for their future capacity needs. Ensuring that the data centre is future-ready requires intelligent design and implementation of new technology.
Data centre and enterprise network facilities today are under unrelenting pressure to deliver higher-capacity, highly reliable systems that will remain technologically robust into the future. High data rate scalability, reduced pathway and space utilisation, low latency, and ease of testing and installation are all critical in meeting these demands.
Demand for capacity is constantly increasing. By not planning for future capacity demands, data centre operators risk “death by patch cord”: constantly adding new, higher-capacity fibre cables into the same space. While this increases capacity in the short term, in practice it erodes the data centre’s long-term ability to meet future needs. Eventually no space will remain to add more cables, and the operator will need to overhaul the facility’s entire cabling infrastructure.
Data centre operators need to understand the throughput required between facilities, from the point-of-presence (POP) room into the data centre itself. This interconnection is where density is key and capacity is required, so it is important to leverage options that deliver high capacity within a single cable. High fibre count (HFC) trunks with 1728 fibres in a single trunk, and now extreme-density cables with 3456 fibres, provide future-ready capacity while reducing duct utilisation in the outside plant.
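To put those trunk sizes in perspective, here is a minimal sketch of the arithmetic. The fibre counts (864, 1728, 3456) come from this article; the fibres-per-link figures are typical values for duplex and 8-fibre parallel optics, assumed here for illustration.

```python
# Illustrative arithmetic: how many complete links a high-fibre-count
# trunk can support. Fibre counts are the article's figures; the
# fibres-per-link values (2 for duplex, 8 for parallel optics such as
# PSM4) are typical assumptions, not vendor specifications.

def links_per_trunk(fibre_count: int, fibres_per_link: int) -> int:
    """Number of complete links a trunk of the given fibre count supports."""
    return fibre_count // fibres_per_link

for trunk in (864, 1728, 3456):
    duplex = links_per_trunk(trunk, 2)  # duplex transceivers use 2 fibres
    psm4 = links_per_trunk(trunk, 8)    # 8-fibre parallel optics
    print(f"{trunk} fibres: {duplex} duplex links or {psm4} parallel links")
```

A 3456-fibre extreme-density cable, for example, carries 1728 duplex circuits or 432 eight-fibre parallel links through a single duct.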
As we look to the future, data centre operators need to start exploring technology solutions to address the increasing need for greater capacity.
Vertical-cavity surface-emitting lasers (VCSELs) have long supported the low-cost deployment of multimode fibre in the data centre. 10G and 25G lanes can be run in parallel via quad small form-factor pluggable (QSFP) transceivers to efficiently achieve 40G and 100G links suited for breakout into individual lanes. Short wavelength division multiplexing (SWDM) and bidirectional (BiDi) options exist to maximise legacy OM3/OM4 infrastructure; however, these do not support breakout. Roadmaps to 400G exist, and distances are typically less than 150 metres, placing these optics generally within the server area of the data centre.
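The four-lane QSFP arithmetic described above can be sketched as follows; this is an illustrative model of lane aggregation and breakout, not any vendor's API.

```python
# Sketch of QSFP parallel-lane arithmetic: a QSFP transceiver carries
# 4 lanes, so 4 x 10G lanes form a 40G link and 4 x 25G lanes form
# 100G; an aggregate link can also break out into its individual lanes.

QSFP_LANES = 4

def aggregate_rate(lane_rate_gbps: int, lanes: int = QSFP_LANES) -> int:
    """Aggregate link rate from parallel lanes of equal rate."""
    return lane_rate_gbps * lanes

def breakout(link_rate_gbps: int, lanes: int = QSFP_LANES) -> list[int]:
    """Split an aggregate link back into its individual lane rates."""
    assert link_rate_gbps % lanes == 0, "rate must divide evenly into lanes"
    return [link_rate_gbps // lanes] * lanes

print(aggregate_rate(10))  # 40 (4 x 10G)
print(aggregate_rate(25))  # 100 (4 x 25G)
print(breakout(100))       # [25, 25, 25, 25]
```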
When connecting at greater distances, single mode offers better options. Within the single mode camp there are two transceiver styles available: duplex and parallel. Dense and coarse wavelength division multiplexing (DWDM, CWDM) can deliver very high levels of traffic up to 10 km. While the cost of these transceivers is falling, it remains a multiple of that of the 8-fibre parallel single mode 4-lane (PSM4) option, which offers connectivity up to 2 km. For large data centre campuses, PSM4 is the favoured option, driving facility-connectivity fibre counts up towards extreme-density cables.
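The reach-based choice between these optics can be sketched as a simple selection rule. The distance thresholds (150 m multimode, 2 km PSM4, 10 km CWDM/DWDM) are the typical figures quoted in this article, used here as assumptions rather than hard standards limits.

```python
# Hedged sketch of reach-based transceiver selection using the
# article's typical distances. Real deployments would also weigh
# cost, fibre count and link budget.

def suggest_transceiver(distance_m: float) -> str:
    if distance_m <= 150:
        return "multimode VCSEL (parallel, breakout-capable)"
    if distance_m <= 2_000:
        return "parallel single mode (PSM4, 8 fibres)"
    if distance_m <= 10_000:
        return "duplex single mode (CWDM/DWDM)"
    return "long-haul optics required"

print(suggest_transceiver(80))      # in-row/server area
print(suggest_transceiver(1_500))   # campus interconnect
print(suggest_transceiver(9_000))   # metro-scale link
```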
The traditional three-tier switching model is giving way to two-tier spine-and-leaf architecture across the data centre industry. Spine-and-leaf architecture facilitates faster movement of data across physical links in the network, significantly reducing latency when accessing data. Every spine switch is connected to every leaf switch, so cabling density is high and additional links can be deployed easily when required. Spine-and-leaf is increasingly the networking architecture of choice for cloud providers because it is massively scalable, future-ready infrastructure.
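Because every spine connects to every leaf, the physical link count grows as the product of the two switch counts, which is what drives the fibre demand discussed next. A minimal sketch, using hypothetical switch counts (not figures from the article):

```python
# Full-mesh spine-and-leaf link arithmetic: every spine switch links
# to every leaf switch, so links = spines x leaves. The switch counts
# below are invented examples; the 8-fibre-per-link assumption matches
# parallel optics such as PSM4.

def fabric_links(spines: int, leaves: int) -> int:
    """Physical links in a full-mesh spine-and-leaf fabric."""
    return spines * leaves

def fabric_fibres(spines: int, leaves: int, fibres_per_link: int = 8) -> int:
    """Total fibre demand for the fabric under a fibres-per-link assumption."""
    return fabric_links(spines, leaves) * fibres_per_link

print(fabric_links(8, 32))   # 256 links
print(fabric_fibres(8, 32))  # 2048 fibres
```

Even a modest fabric of 8 spines and 32 leaves needs 256 links; with 8-fibre parallel optics that is already over 2000 fibres, which is why campus fibre counts are climbing so quickly.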
While spine-and-leaf architecture offers smarter, faster systems, the migration dramatically increases the number of fibres required to serve interconnection in the data centre campus. Only a few years ago, 864 fibres were standard in campus networks; today 1728 fibres are common, and even higher counts, such as 3456 fibres, are now available, all within standard duct systems. Extreme-density cables offer easier, faster installation and faster cable restoration in the event of cuts, reducing downtime.
Together, these technologies allow data centre operators to optimise connectivity density, adopt next-generation architectures and plan for future capacity now.
Data centre managers who fail to embrace available new technologies or to provision ahead will quickly find themselves behind the competition. Quite often, it’s less about installing the actual cabling and more about provisioning the space and ducting within your data centre.
When you’re in the planning and design stages of a data centre, it’s important to consider the facility’s intended lifespan and end-state capacity. Technology has repeatedly demonstrated its capacity for rapid change and growing demand. Telecommunications companies plan ahead, often forecasting “day 1, day 2, day N” demand and then doubling it to head off future congestion, and this discipline matters for data centre operators too.
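That forecast-then-double provisioning rule can be expressed as a minimal sketch. The staged demand figures below are invented for illustration; only the doubling heuristic comes from the article.

```python
# Sketch of the "forecast day 1, day 2, day N, then double it"
# provisioning heuristic. The fibre-demand forecast is hypothetical.

def provisioned_capacity(forecast_fibres: list[int]) -> int:
    """Provision pathways for double the largest forecast stage."""
    return 2 * max(forecast_fibres)

forecast = [864, 1728, 3456]           # hypothetical day-1 / day-2 / day-N demand
print(provisioned_capacity(forecast))  # 6912
```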
When migrating to higher speeds, consider the longevity of your cabling and ensure your infrastructure can support all network architectures and speeds up to 400G; doing so will keep your data centre comfortably ahead of future demands. If there’s no real reason to delay, then don’t delay.