The need for low latency and quality of service is driving cloud traffic ever closer to the edge of the network. In response, cloud providers are moving toward a new distributed data center architecture of multiple edge data centers rather than a single mega-data center in a geographic market. This distributed data center model requires an orders-of-magnitude increase in optical connectivity among the edge data centers to ensure reliable and robust service quality for the end users.
As a result, the industry is clamoring for low-cost and high-bandwidth transceivers between network elements. The advent of pluggable 100G Ethernet DWDM modules in the QSFP28 form factor holds the promise of superior performance, tremendous cost savings, and scalability.
Moving data to the edge
With 5G on the horizon, bandwidth will continue to be a major challenge. Cisco predicts that although 5G will only be 0.2% of connections (25 million) by 2021, it will generate 4.7 times more traffic than the average 4G connection.
The exponential increase in point-to-point connections and the growing bandwidth demands of cloud service providers (CSPs) have driven demand for low-cost 100G optical communications. However, in contrast to a more traditional data center model (where all the data center facilities reside in a single campus), many CSPs have converged on distributed regional architectures to be able to scale sufficiently and provide cloud services with high availability and service quality. Pushing data center resources to the network's edge and thereby closer to the consumer and enterprise customers reduces latency, improves application responsiveness, and enhances the overall end-user experience.
In addition to these performance enhancements, the deployment of multiple metro-distributed data centers increases the network's resiliency and redundancy in the event of a catastrophic site failure. Cloud service providers may also find it easier to overcome geographic constraints on infrastructure availability by acquiring multiple smaller parcels of land rather than one much larger one. Power can likewise be easier to obtain from utilities when smaller, distributed power needs are spread across multiple data center locations instead of concentrated in a single mega data center with truly mega power demands.
Data center virtualization
Applications and virtualization are driving low-latency network requirements, further propelling the need for data to be stored closer to the user. For example, with the increased popularity of software-as-a-service (SaaS) applications such as Microsoft 365 and Salesforce.com, enterprises are replacing proprietary, on-site applications and workloads with third-party alternatives hosted in public cloud data centers. This shift requires optical connections between private enterprise buildings and the data centers where those external workloads and applications are processed, effectively creating a virtual enterprise campus. This migration of application workloads is increasing the demand for a fast, reliable, and cost-effective optical connectivity approach.
Overcoming the fiber bottleneck: 100G DWDM
The recent bandwidth surge often leads to available fiber pairs becoming fully consumed. The result is fiber exhaustion, a condition that can be a particular issue in dense urban areas where the data centers tend to be smaller and segmented over several discrete sites.
Adding more fiber may be prohibited by conduit size, permit requirements (right of way), service startup time, or, most importantly, construction cost, which can add up to millions of dollars depending on location and distance. Any one or a combination of these factors can prevent operators from scaling their network quickly and efficiently to meet their users' growing demands.
Over the past several years, 100G DWDM has been the key technological innovation driving performance in optical transport networks. Deploying DWDM technologies can help alleviate fiber exhaustion and avoid this bottleneck.
Figure 1. A segmented four-building campus network scenario where, for example, fiber is widely available between DC 1 and 2, so parallel connections are established. But between DC 2 and 3 fiber is scarce, so DWDM is preferred to minimize fiber usage.
DWDM uses multiple wavelengths to provide separate, parallel connections within a single duplex singlemode fiber (SMF) pair. Incoming optical signals are assigned to specific frequencies of light (wavelengths or lambdas) within a certain frequency band, typically the C-Band as defined by the ITU. In a DWDM system, each wavelength or channel is launched and combined into a single fiber via a multiplexer, and the signals are demultiplexed at the receiving end (see Figure 2).
Figure 2. A typical point-to-point DWDM link configuration.
Wavelengths in the C-band range from approximately 1530 nm to 1565 nm, and typical low-cost multiplexers/demultiplexers operate on a 100-GHz grid supporting up to 48 independent channels in a single fiber. The 1550-nm C-band window also leverages the maturity and cost of erbium-doped fiber amplifiers (EDFAs) to compensate for optical system losses. Although an investment in a DWDM line system is required, the payback can be on the order of months depending on bandwidth and fiber availability.
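To make the grid concrete, the sketch below computes channel center frequencies and vacuum wavelengths on the ITU-T G.694.1 100-GHz grid. The channel-numbering convention (channel N centered at 190.0 THz + N × 100 GHz) is widely used but assumed here, and the 48-channel range shown is illustrative rather than tied to any particular mux product:

```python
# Sketch of the ITU-T G.694.1 100-GHz DWDM grid. Assumes the common
# channel-numbering convention where channel N is centered at
# 190.0 THz + N * 100 GHz (so channel 21 = 192.1 THz, ~1560.61 nm).

C_VACUUM_M_PER_S = 299_792_458  # speed of light in vacuum

def channel_frequency_thz(ch: int) -> float:
    """Center frequency (THz) of a 100-GHz-grid channel."""
    return 190.0 + ch * 0.1

def channel_wavelength_nm(ch: int) -> float:
    """Vacuum wavelength (nm) at the channel's center frequency."""
    return C_VACUUM_M_PER_S / (channel_frequency_thz(ch) * 1e12) * 1e9

# A hypothetical 48-channel C-band mux covering channels 16-63
# spans roughly 1527-1565 nm:
for ch in (16, 21, 63):
    print(f"Ch {ch}: {channel_frequency_thz(ch):.1f} THz, "
          f"{channel_wavelength_nm(ch):.2f} nm")
```

Since frequencies, not wavelengths, are evenly spaced on the grid, the wavelength spacing shrinks slightly toward the short-wavelength end of the band.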
Traditionally, 100G DWDM technology has been optimized for transport applications connecting data centers over hundreds to thousands of kilometers. These 100G DWDM offerings consume up to 25 W per 100G and are available in large-chassis transport boxes or telecom CFP/CFP2 modules, rather than in the data center industry's standard 100G QSFP28 form factor.
Recently, a new breed of DWDM QSFP28 module, based upon silicon photonics and PAM4 modulated transmission, has been introduced in the market. This transceiver enables IP over DWDM (IPoDWDM), a paradigm for cost-effective, scalable DWDM interconnect for distributed data center architecture. Use of such pluggable modules enables convergence of the optical layer inside as well as between edge data centers, enabling switch-to-switch connectivity up to 80 km without the need for a dedicated transport layer.
These modules can also support up to 40 DWDM channels on a single fiber, giving network operators a 40x increase in fiber utilization or spectral efficiency (4 Tbps versus 100 Gbps in a single fiber pair).
100G DWDM as enabler for campus connectivity
Traditionally, operators have not considered DWDM a suitable technology for data centers located within a campus environment due to the higher cost of DWDM versus grey optics. However, when evaluating optical connectivity alternatives, it is critical to consider both capex and opex to calculate the total cost of ownership.
When comparing DWDM with 100G-LR4 optics, which are inherently lower cost than DWDM optics, opex also has to be carefully considered in the calculations. As an example, if the monthly fiber lease cost is $500 per fiber pair, five parallel LR4 links consume five pairs ($2,500 per month) while a DWDM architecture needs only one ($500 per month); after just six months, the $2,000 monthly savings can offset the higher cost of the DWDM optics, making the DWDM architecture more cost-effective.
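The break-even arithmetic can be sketched as follows; the $12,000 DWDM capex premium is a hypothetical figure chosen only to make the six-month payback concrete, not a quoted price:

```python
# Sketch: break-even point for DWDM vs. parallel grey-optics links,
# using the article's assumed $500/month fiber-pair lease and five
# 100G links. The DWDM capex premium is a hypothetical illustration.

def breakeven_months(links: int, fiber_cost_per_pair: float,
                     dwdm_capex_premium: float) -> float:
    """Months until one DWDM fiber pair beats N parallel link pairs."""
    # DWDM collapses N fiber pairs onto one, saving (N - 1) leases/month.
    monthly_savings = (links - 1) * fiber_cost_per_pair
    return dwdm_capex_premium / monthly_savings

# Five LR4 links at $500/pair/month save $2,000/month on one DWDM pair;
# a $12,000 optics premium pays back in 6 months.
print(breakeven_months(5, 500, 12_000))  # → 6.0
```

Beyond the break-even point, the lease savings accrue for the life of the link, which is why opex dominates the total-cost-of-ownership comparison on leased fiber.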
The combination of high-bandwidth connectivity and a small, cost-optimized, low-power form factor can expand metro/edge data center options, and meet the growing needs for distributed applications and workloads. Therefore, a DWDM 100G QSFP28 pluggable module approach potentially could offer the lowest-cost alternative to connect edge data centers located within an 80-km distance or even in a campus environment.