Edge hype is all around us. Edge reality? That’s harder to spot.
The market is eager to deploy multi-access edge computing (MEC) networks to bring visions of remote surgery, autonomous cars, and immersive gaming to life.
In fact, our latest findings show 56% of enterprises would be willing to pay for a service-level agreement (SLA) with guaranteed latency.
Delivering on those visions will require running applications closer to the customer to reduce latency and congestion, and thereby boost application performance.
But how close is close enough? How much performance will actually be needed to support near-term use cases?
We’ve long suspected a disconnect between how much performance enterprise users think they need from edge and what operators are planning to deliver.
We sent our engineers into the field to test the performance of edge hotspots in North America and Asia. And we teamed with STL Partners to interview more than 150 end users on app deployment plans and anticipated network requirements.
We were searching for answers to questions that have persisted for years:
What latency rates are actually required to support near-term and long-term use cases?
How consistent and deterministic does latency need to be?
What is the actual demand from end users? How will the demand be monetized and assured?
How will operators go to market based on the number of edge locations needed to serve demand?
What latencies can be delivered by various existing MEC network architectures and how do they compare to what is actually needed?
What latency-driven services can be delivered now with the existing infrastructure?
Much of what we learned was eye-opening. And it’s all captured in our new report, available for download now.
Real data, gathered far and wide
Our benchmarking analysis provides critical insights into the MEC performance measurements that matter most, the performance today’s networks can already deliver, and the use cases with the greatest potential for near-term monetization.
After crunching the performance numbers and mapping them against the responses STL gathered in its industry interviews, here are some of the top takeaways to help guide collaboration between operators and edge customers:
Supply and demand sides need alignment. There is a disconnect between the performance end-user customers believe their applications will require and the performance levels network operators are preparing to deliver. Enterprises sometimes overestimate how stringent their latency requirements really are, so operators and customers need to collaborate closely to pin down precise needs.
Consistent latency is required. While there’s been lots of discussion about latencies required by different use cases, it turns out latency consistency (jitter) is the most important measurement. 56% of enterprises would be willing to pay for an SLA with guaranteed latency that is never outside a predefined window, not simply “low latency.” They also require reliability and consistency of uplink and downlink latency for many of 5G’s most promising use cases.
Uplinks and downlinks are not symmetrical or consistent. Benchmarking results revealed that real-world edge network uplink and downlink performance are not symmetrical. They also fluctuate from one region to another. Geographical differences in latency, globally and within nations, will impact the performance of applications intended for broad use.
Edge can provide value now. The study and benchmarking data show that 90% of the early use cases the market demands can be supported by today’s network capabilities. This could open the door to early monetization opportunities, but only if latency SLAs can be met consistently. Longer-term use cases will require further network optimization and upgrades arriving in forthcoming 3GPP standards releases.
Latency must be managed holistically. While MEC can reduce latency, operators must also make their networks more efficient across the RAN, transport and new 5G Core architectures if they are to deliver consistent and deterministic latency. Numerous factors beyond the network, such as application processing overheads, can also impact latency. So, latency must be managed holistically, end-to-end, to achieve desired customer experiences and to meet SLAs.
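The jitter-focused SLA described in the takeaways above can be made concrete with a short sketch: rather than reporting only average latency, a provider would track what fraction of samples stay inside an agreed window. Everything here is illustrative, not from the report; the window bounds, sample values, and function name are hypothetical.

```python
from statistics import mean, pstdev

def sla_compliance(latencies_ms, lo_ms, hi_ms):
    """Fraction of latency samples inside the agreed SLA window [lo_ms, hi_ms]."""
    in_window = [x for x in latencies_ms if lo_ms <= x <= hi_ms]
    return len(in_window) / len(latencies_ms)

# Hypothetical round-trip samples (ms) from one edge test run.
# One outlier (31.0 ms) is enough to break a "never outside the window" SLA.
samples = [12.1, 11.8, 14.9, 12.4, 31.0, 12.0, 13.3, 12.7]

compliance = sla_compliance(samples, lo_ms=10.0, hi_ms=20.0)
jitter = pstdev(samples)  # spread of the samples: latency consistency, not just the mean

print(f"mean={mean(samples):.1f} ms, jitter(stdev)={jitter:.2f} ms, "
      f"within SLA window: {compliance:.0%}")
```

Note that the mean alone looks healthy here; only the window-compliance and jitter figures expose the outlier that would violate a guaranteed-latency SLA.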
Our report dives into the rationale and raw data behind all these takeaways.
Establishing the right test regimen
As with any new service, improvement starts with measurement. This means measuring from the end user’s or application’s perspective and modeling the testing after the data footprint of the MEC applications.
Testing programs should prioritize latency consistency, not just latency means and medians, for both uplinks and downlinks. Performance consistency should be tested in all target customer markets.
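A test harness along these lines might summarize each direction separately and report consistency metrics (jitter, high percentiles) alongside the mean and median. This is a minimal sketch assuming latency samples have already been collected per direction; the sample values and function name are hypothetical.

```python
import statistics

def summarize(label, samples_ms):
    """Report central tendency plus consistency (jitter, p95) for one direction."""
    q = statistics.quantiles(samples_ms, n=20)  # 5% steps; q[18] is the 95th percentile
    return {
        "direction": label,
        "mean_ms": statistics.mean(samples_ms),
        "median_ms": statistics.median(samples_ms),
        "p95_ms": q[18],
        "jitter_ms": statistics.pstdev(samples_ms),  # spread around the mean
    }

# Hypothetical one-way latency samples (ms): note the asymmetry between directions.
uplink = [18.2, 19.0, 17.5, 24.8, 18.7, 18.1]
downlink = [9.4, 9.1, 9.8, 9.3, 9.6, 9.2]

for row in (summarize("uplink", uplink), summarize("downlink", downlink)):
    print(row)
```

Running the same summary across each target customer market would surface the regional and directional differences the benchmarking found, instead of averaging them away.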
Dive into the details of our analysis and benchmarking data in our new report.