Demand for bandwidth keeps rising, and the reasons are numerous: home offices, streaming services for games, music and movies, and data-intensive applications such as ML and AI in industry and the medical sector are just a few examples. These developments pose challenges for data center operators – hyperscale, enterprise and colocation alike – because in addition to growing capacity requirements, they must also ensure ever lower latencies while hitting climate targets.
One way to do this is to make more efficient use of existing switch architectures (high-radix ASICs). 32-port switches, for example, offer up to 12,800 Gb/s of bandwidth (32 x 400G), and 800G versions reach up to 25,600 Gb/s. These high-speed ports can easily be broken out into several lower-speed ports, which enables more energy-efficient operation while increasing port density (32 x 400G = 128 x 100G).
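To make the arithmetic concrete, here is a minimal Python sketch reproducing the figures above; the helper names are our own illustration, not a vendor API:

```python
# Minimal sketch of high-radix switch capacity and port breakout arithmetic.
# Helper names are illustrative, not a vendor API.

def aggregate_bandwidth_gbps(ports: int, port_speed_gbps: int) -> int:
    """Total switch bandwidth: port count times per-port speed."""
    return ports * port_speed_gbps

def breakout_port_count(ports: int, port_speed_gbps: int, lane_speed_gbps: int) -> int:
    """Logical port count when each physical port is split into lower-speed ports."""
    assert port_speed_gbps % lane_speed_gbps == 0, "port speed must divide evenly"
    return ports * (port_speed_gbps // lane_speed_gbps)

print(aggregate_bandwidth_gbps(32, 400))   # 12800 Gb/s (32 x 400G)
print(aggregate_bandwidth_gbps(32, 800))   # 25600 Gb/s (32 x 800G)
print(breakout_port_count(32, 400, 100))   # 128 ports (32 x 400G = 128 x 100G)
```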
The need to support low-latency, high-availability and very-high-bandwidth applications will only continue to grow. The question is not whether data center operators need to upgrade to meet rising bandwidth demand, but when and how. Operators should prepare and adapt their network design now. After all, with a flexible infrastructure it is possible to upgrade from 100G to 400G to 800G, for example, with surprisingly few changes.
Network design is becoming increasingly complex
However, higher data rates also increase the complexity of solutions and offerings. As mentioned at the beginning, the goal is not necessarily to fully utilize 800G on every port, but to match the bandwidth requirements of the attached devices. Examples are spine-leaf connections run as 4 x 200G, or leaf-server connections on 400G ports operated as 8 x 50G, which also makes the network more energy efficient. A variety of solutions, as well as new transceiver interfaces, exist to achieve this.
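The breakout combinations mentioned here can be expressed as a small lookup; this mode list is a plausible subset for illustration, not an exhaustive vendor specification:

```python
# Common breakout modes for 400G and 800G ports; illustrative subset only.
BREAKOUT_MODES = {
    400: [(4, 100), (8, 50)],             # e.g. leaf-server links run as 8 x 50G
    800: [(2, 400), (4, 200), (8, 100)],  # e.g. spine-leaf links run as 4 x 200G
}

for port_speed, modes in BREAKOUT_MODES.items():
    for count, lane_speed in modes:
        assert count * lane_speed == port_speed  # sanity check: lanes add up
        print(f"{port_speed}G port -> {count} x {lane_speed}G")
```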
LC duplex and MPO/MTP connectors (12/24 fibers) are the familiar interfaces for transmission speeds of 10, 40 and 100G. For higher data rates such as 400G, 800G and beyond, additional connector types such as MDC, SN and CS (very-small-form-factor connectors) have been introduced, along with MTP/MPO connectors carrying 16 fibers in a single row.
For network operators, keeping track of these options and choosing the right technology and network components is a challenge. Growing bandwidth requirements also frequently collide with a lack of space for expansion, or with the costs such expansion incurs. Network equipment suppliers are therefore constantly working on new solutions that pack more density into the same space while keeping the network design scalable and as simple as possible.
Port breakout applications for more sustainability
In addition to improving the utilization of high-speed ports and thus port density, port breakout applications can also reduce the power consumption of network components and transceivers.
A 100G duplex transceiver in QSFP-DD form factor consumes about 4.5 watts, while a 400G parallel-optic transceiver operated in breakout mode as four 100G ports consumes only about 3 watts per port. This equates to savings of up to 30 percent per port, before even counting the additional savings in cooling and switch chassis power consumption, as well as the associated space savings.
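The savings figure can be checked with a quick back-of-the-envelope calculation; the wattages are the ones quoted above, while the fleet size is an assumed example:

```python
# Per-port power comparison using the figures quoted above.
duplex_100g_watts = 4.5    # 100G duplex transceiver (QSFP-DD)
breakout_100g_watts = 3.0  # 400G parallel optic in 4 x 100G breakout, per port

savings = 1 - breakout_100g_watts / duplex_100g_watts
print(f"per-port savings: {savings:.0%}")  # ~33%, i.e. "up to 30 percent"

# Assumed example: one 32-port switch fully broken out to 128 x 100G.
ports = 128
delta_watts = (duplex_100g_watts - breakout_100g_watts) * ports
print(f"saved per switch: {delta_watts:.0f} W")  # 192 W, before cooling savings
```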
Effects on the network infrastructure
Backbone or trunk cabling scales well when its fiber count is based on the lowest common multiple of the applications it has to support. For duplex (two-fiber) applications, this classically corresponds to a factor of four, i.e. Base-8 cabling, onto which -R4 and -R8 transceiver types can likewise be mapped. Cabling of this kind therefore supports both current technologies and future developments.
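A short sketch shows why Base-8 groups map cleanly onto duplex, -R4 and -R8 links; the per-link fiber counts are standard (duplex = 2, -R4 = 8, -R8 = 16), while the loop itself is our own illustration:

```python
from math import ceil

# Fibers required per link type.
FIBERS_PER_LINK = {"duplex (LC)": 2, "-R4 (e.g. 400G-DR4)": 8, "-R8 (e.g. 800G-DR8)": 16}
GROUP = 8  # Base-8 trunk group size

for link, fibers in FIBERS_PER_LINK.items():
    if fibers <= GROUP:
        print(f"{link}: {GROUP // fibers} link(s) per {GROUP}-fiber group, no stranded fibers")
    else:
        print(f"{link}: spans {ceil(fibers / GROUP)} x {GROUP}-fiber groups exactly")
```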
In addition to selecting a granular, scalable backbone, it is also important to plan sufficient fiber reserves so that future upgrades and expansions can be carried out with as little change effort as possible. With enough spare fibers in place, network adjustments come down to swapping a few components: an upgrade from 10G to 40/100G or 400/800G, for example, only requires replacing MPO/MTP-to-LC modules and LC duplex patch cords with MTP adapter panels and MTP patch cords, with no changes to the backbone (fiber plant).
Modular fiber housings also allow different technologies to be mixed and new mating faces (very-small-form-factor connectors) to be integrated in a few simple steps; modules are available today in 8-, 12-, 24- and 36-fiber variants. The use of bend-insensitive fiber further helps make the cabling infrastructure durable, reliable and fail-safe.
Being prepared pays off
Data rates of 400 or 800G are still a long way off for most enterprise data center operators, but bandwidth demand is growing – and fast. Sales of 400G and 800G transceivers are already on the rise. It therefore pays to be prepared rather than having to upgrade later under time pressure. Data center operators can make their facilities ready for 400G and 800G now, with just a few changes, and be optimally positioned for the future. The same applies, of course, to Fibre Channel applications.