The previous posts in this series outlined how coherent optics stretch the capacity of existing 10 Gbps DWDM systems to 100 Gbps per channel without major surgery on the fiber network. But that is probably as much as commonly deployed 50 GHz channel DWDM systems will carry, at least over any meaningful distance on existing fiber. So how can exponential bandwidth growth continue at reasonable cost?
The new standard 100 Gbps PM-QPSK technology exploits the phase and polarization dimensions to quadruple the number of bits transmitted per symbol interval compared to standard 10 Gbps OOK encoding: two bits per symbol from quadrature phase encoding, doubled again by the two polarizations. Coherent receivers using sophisticated DSP algorithms provide the additional performance improvements needed for a 10x increase in throughput on a DWDM channel engineered to carry 10 Gbps. But that brings us close enough to the theoretical channel capacity of existing systems to make further dramatic improvements untenable.
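The arithmetic behind that 10x jump can be sketched in a few lines. This is a back-of-the-envelope illustration only; the 25 Gbaud figure is an approximate pre-FEC assumption, not a number from the post, and real systems run a few percent faster to carry FEC overhead.

```python
# Back-of-the-envelope check of the 10x throughput jump.
# Illustrative numbers; exact baud rates vary with FEC overhead.

def line_rate_gbps(baud_rate_gbaud, bits_per_symbol):
    """Raw line rate: symbols per second times bits per symbol."""
    return baud_rate_gbaud * bits_per_symbol

# Legacy 10 Gbps OOK: 1 bit per symbol at 10 Gbaud.
ook = line_rate_gbps(10, 1)

# 100 Gbps PM-QPSK: 2 bits per symbol from quadrature phase encoding,
# doubled again by the two polarizations -> 4 bits per symbol,
# carried at roughly 25 Gbaud (pre-FEC, an assumed round figure).
pm_qpsk = line_rate_gbps(25, 2 * 2)

print(ook)            # 10
print(pm_qpsk)        # 100
print(pm_qpsk / ook)  # 10.0
```

So only a 2.5x increase in symbol rate is needed; the other factor of four comes from packing more bits into each symbol.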
Stepping back in time, recall that DWDM was originally a disruptive technology that dramatically increased the capacity of each fiber (or more specifically, the optical amplifiers needed to offset fiber attenuation). Channel spacing of 200 GHz initially provided enough wiggle room for drift in the early lasers. As laser stability improved, the window size was reduced to 100 and then to 50 GHz, which is now the most common format. A further reduction to 25 GHz was never really fully realized, at least in part because it became obvious that channel capacity and not laser stability would become the limiting factor.
To increase DWDM capacity beyond 100 Gbps per 50 GHz channel, what are the options?
- 400 Gbps waves may never be widely deployable in 50 GHz channels because of the OSNR they would require.
- 200 Gbps in 50 GHz may be possible with a lot of work, but cost/benefit is iffy.
- 400 Gbps in 100 GHz is a better bet, but only for older installed systems.
- Deploying 100 GHz now would just take us back in time, leading to a dead-end.
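The OSNR concern behind the first two options follows directly from the Shannon bound, C/B = log2(1 + SNR): every doubling of spectral efficiency demands a sharply higher signal-to-noise ratio. A rough sketch of the ideal bound (implementation penalties, which add several dB on top, are ignored here):

```python
import math

def required_snr_db(spectral_efficiency_bps_per_hz):
    """Minimum SNR (dB) for a given spectral efficiency, from the
    Shannon limit C/B = log2(1 + SNR). Real systems need several dB
    of margin beyond this ideal bound."""
    snr_linear = 2 ** spectral_efficiency_bps_per_hz - 1
    return 10 * math.log10(snr_linear)

# Spectral efficiency of 100/200/400 Gbps in a 50 GHz channel.
for gbps in (100, 200, 400):
    se = gbps / 50  # b/s/Hz
    print(gbps, "Gbps ->", round(required_snr_db(se), 1), "dB minimum SNR")
```

Even at the unreachable Shannon limit, 400 Gbps in 50 GHz (8 b/s/Hz) needs roughly 19 dB more SNR than today's 100 Gbps (2 b/s/Hz), which is why it is hard to see it ever working over meaningful distances on installed fiber.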
One problem with increasing throughput within the DWDM channel grid is the unusable dead-bands the optical filters impose between channels, which can waste about 30% of the available bandwidth. Merging four adjacent 50 GHz channels into a single 200 GHz superchannel reclaims those guard bands and could support one terabit per second (Tbps). This maps well onto installed systems, which typically incorporate band splits or channel groups of four 50 GHz channels. So 1 Tbps in less than 200 GHz of bandwidth is a logical next step that would provide a further 2.5x improvement in overall DWDM spectral efficiency. But that really is the end of the line for existing DWDM systems.
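The 2.5x figure is simple division, worked through here as an illustration (the capacities and channel widths are the ones quoted in the text):

```python
# Spectral-efficiency comparison for the 1 Tbps superchannel step.

def spectral_efficiency(gbps, ghz):
    """Capacity per unit of optical spectrum, in b/s/Hz."""
    return gbps / ghz

# Today: 100 Gbps in a single 50 GHz channel.
today = spectral_efficiency(100, 50)

# Proposed: 1 Tbps filling four merged 50 GHz slots (200 GHz),
# with the guard bands between the old channels reclaimed.
superchannel = spectral_efficiency(1000, 200)

print(today)                 # 2.0 b/s/Hz
print(superchannel)          # 5.0 b/s/Hz
print(superchannel / today)  # 2.5
```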
The DWDM channel grid was established to standardize components that had to be factory tuned to specific wavelengths. The standard grid allowed these components to be mass-produced, reducing costs. This paradigm has enabled tremendous expansion in optical networking for over a decade. But in the future we will move to grid-less multi-terabit transmission.
Tunable lasers in the transmitters have since alleviated the problems associated with producing, distributing, and sparing a multitude of fixed-wavelength laser modules. Now coherent detection can use the same tunable lasers to create a tunable receiver, so there is no longer any need to maintain the fixed DWDM grid. Once we drop that framework, we can move to a more flexible network architecture: we will soon be able to eliminate fixed-channel optical filters and move to dynamic optical multiplexing.
DWDM, which gave us two orders of magnitude improvement in fiber capacity in the past, will become a hindrance to expanding system capacity in the future. Instead of being an enabling technology moving us forward, conventional DWDM will become a legacy technology. It will continue to be a workhorse enabling bandwidth expansion in the near term, but its long-term prospects are limited. Instead of being deployed on the network side of transmission systems operating above 1 Tbps, fixed-grid DWDM will only be seen on the client-facing sub-rate interfaces.
Ed. note: this is the third post in a series. The previous post is here.