Enterprise demand for higher bandwidth, increased throughput and ultra-low latency is unrelenting. Cloud-based applications, work-from-home and IoT were early culprits. Now, intense cloud gaming requirements, emerging industrial connectivity needs and a new wave of applications powered by AI and ML are setting new expectations.
Hyperscalers like Amazon, Google and Microsoft are counting on 800G high-speed Ethernet (800G) to double the workloads their data centers can support.
Over the last year, we’ve seen 800G move from the drawing board to reality in record time, even as its precursor, 400G, was still moving from production to deployment.
The pursuit has been rapid: new 800G optical transceivers and switches have been demonstrated, along with displays of critical multivendor interoperability.
Of course, any move to a new generation of network is challenging. But the fact that stakeholders are still sorting 400G complexities means the entire industry is working overtime to keep 800G moving at a quick pace.
It will need all the help it can get.
That’s because while 800G is based on 400G technology, doubling capacity has required major optical and electrical innovations. It has also introduced new issues to be overcome, such as power consumption and heat. Meanwhile, standards, which are key to successful interoperability, can’t be released fast enough.
Testing is keeping 800G on track
Early, comprehensive testing has become a critical factor in accelerating 800G toward production and deployment to meet market demand.
Chipset, transceiver and cable vendors are testing and validating individual solutions and technologies. As each layer in the protocol stack is validated, the next can be tested, while the performance of the layers below is reverified.
Here, multivendor interoperability testing is key, especially when standards are fragmented, in flux or immature. Any one of these scenarios can lead to varying interpretations and interoperability failures.
As network equipment manufacturers incorporate 800G, they have to verify components while also focusing on application performance and latency. Importantly, they’ve got to do it under real-world conditions, requiring insight into how these near-term application loads will tax the network.
Hyperscalers and service providers need to validate and qualify vendor solutions, ensure system interoperability and assess behavior of the network and applications when faced with emulated real-world traffic.
The success of 800G depends on the industry’s ability to efficiently validate and deploy every aspect of 800G. State-of-the-art testing and validation are the best tools vendors, network equipment manufacturers and hyperscalers have to demonstrate that new innovations and solutions are ready to enable 800G and introduce newfound speed and performance to the marketplace.
You can learn more about the race to 800G in these articles, where we explore the trends driving 800G, swirling standards issues, how 800G differs from 400G and some of the technical hurdles that must be overcome to bring 800G to market.
Want to go even deeper on the technical challenges? The Spirent white paper explains why the jump from 400G to 800G is happening so quickly, how testing is speeding 800G deployment, and the technical challenges that are driving new testing requirements.