Cloud & Virtualization

Why Timing and Synchronization in 5G Is Harder than Ever


Synchronization testing of servers in a lab environment will not necessarily reflect the actual results in the field. Read why verifying synchronization in the live, working network is the only way to assure timing accuracy and performance.

Obstacles to achieving 5G’s promised performance lurk around every corner. Let’s face it, distributed and disaggregated architectures that lean heavily on open core and radio networks are just not going to play nice with each other out of the gate.

In previous posts, we’ve covered the advanced test and validation work required to get ahead of many of these obstacles. Now, deployments have progressed to the point where we’re starting to dig well below the surface issues, to where the especially complex challenges arise.

Lately, we’ve spent more time helping customers understand the synchronization requirements of these distributed networks. In previous mobile generations, millisecond-level accuracy was good enough. But now? The currency is microseconds and even nanoseconds. Even one skipped beat causes interference and inaccurate timestamps that wreak havoc on performance.

Datacenter timing and synchronization at microsecond accuracy

Datacenters are being challenged by 5G and the accelerating growth of data generated by video streaming, social media, online gaming and a plethora of new applications. That’s to say nothing of the resource-intensive apps we know are on the way.

Maintaining precision timing between servers in a datacenter and across datacenters is important because:

  • Timestamp accuracy is required by regulators

  • Root Cause Analysis needs precise event timestamps

  • AI analytics require accurate synchronization and order of events

  • Better accuracy means fewer bulk data transfers to recover from synchronization errors

Maintaining sync is becoming a constant battle

To prepare for burgeoning markets while providing lower latency and higher throughput, service providers are moving datacenters closer to the customer, at the network edge. Higher data volumes and the decentralization of databases mean that data is being generated and stored in multiple locations at high speeds.

Datacenter traffic used to be primarily north-south, flowing in and out of the datacenter. Now there is just as much traffic east-west, between servers within large datacenters. As a result, a transaction is likely to be processed by multiple devices, with a timestamp stored every time the data is transferred.

At the same time, industry standards and government regulations have raised the bar on precise and accurate timestamps. While the Network Time Protocol (NTP) synchronized clocks between datacenters on the internet with millisecond-level accuracy, datacenter synchronization is now required at the microsecond (a thousand times more accurate) or even nanosecond (a million times) level.

The need for more precise time synchronization between physically separated devices requires datacenters to migrate to the Precision Time Protocol (PTP), which can provide microsecond accuracy under ideal conditions.
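Both NTP and PTP rest on the same two-way time-transfer exchange: a request and a response are timestamped at each end, and the clock offset is computed under the assumption that the network delay is symmetric. As a minimal sketch (the function name and example timestamps are illustrative, not from the post):

```python
def offset_and_delay(t1, t2, t3, t4):
    """Two-way time-transfer math shared by NTP and PTP.

    t1: request sent (local clock)      t2: request received (reference clock)
    t3: response sent (reference clock) t4: response received (local clock)
    Assumes the forward and reverse path delays are equal; any asymmetry
    shows up directly as offset error, which is why network conditions matter.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2   # local clock error vs. reference
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Hypothetical exchange (seconds): local clock runs 500 µs fast,
# one-way path delay is 100 µs in each direction.
t1 = 10.000500   # sent at true time 10.000000
t2 = 10.000100   # received at true time 10.000100
t3 = 10.000200   # reply sent at true time 10.000200
t4 = 10.000800   # received at true time 10.000300
offset, delay = offset_and_delay(t1, t2, t3, t4)
# offset ≈ -500 µs (local clock is fast), delay ≈ 200 µs round trip
```

PTP improves on this mainly by timestamping packets in hardware at the network interface, removing the software-stack jitter that limits NTP.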

The pressure is on to dramatically increase datacenter synchronization accuracy. To maintain accurate timestamps, each datacenter needs to be synchronized to UTC and then that time distributed through the datacenter within a few microseconds.
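One way to reason about the “few microseconds” target is as a time-error budget: each element in the distribution chain, from the UTC source down to the server, contributes a worst-case error, and the contributions must sum below the target. The figures below are illustrative assumptions, not values from this post:

```python
# Illustrative (assumed) worst-case time-error budget, in nanoseconds,
# for distributing UTC from a GNSS-disciplined grandmaster to a server.
budget_ns = {
    "GNSS receiver vs. UTC": 100,
    "grandmaster clock": 50,
    "switch hop 1 (boundary clock)": 50,
    "switch hop 2 (boundary clock)": 50,
    "server NIC timestamping": 50,
}

total_ns = sum(budget_ns.values())
target_ns = 2_000  # example end-to-end target: 2 µs

print(f"worst-case time error: {total_ns} ns ({total_ns / 1000} µs)")
print("within budget" if total_ns <= target_ns else "over budget")
```

The point of such a budget is exactly why field verification matters: each line item is a paper estimate, and real networks routinely exceed them.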

If synchronization is not working well, the data is not properly correlated across multiple locations. Operators typically attempt to resolve this by consolidating all data in one location with bulk data transfers. This requires large numbers of additional servers. All those extra servers increase operating costs and undo the efficiencies gained from the distributed datacenter architecture.

Assuring accurate datacenter synchronization

Planned datacenter timing and synchronization approaches may look great on paper and in the lab, but there is no room for error once they are deployed.

As more datacenters are moved closer to users and into more remote locations with less-than-ideal conditions, synchronization testing of servers in a lab environment will not necessarily reflect the actual results in the field. Verifying synchronization in the live, working network is the only way to assure timing accuracy and performance.

In our next post, we’ll talk about how datacenter timing and synchronization testing is accomplished at a more technical level. In the meantime, learn more about the synchronization challenges of 5G and best practices for meeting 5G enhanced time requirements in the eBook The Need for Timing and Synchronization in a 5G World.

Guest contributor: Bryan Hovey, Product Manager, Calnex Solutions


Malathi Malla

Malathi Malla leads the Cloud, Data Center and Virtualization segment for Spirent. Responsible for Product Marketing, Technical Marketing, and Product Management, she drives go-to-market strategy across Cloud and IP solutions. She has over 14 years of hi-tech experience at both Silicon Valley start-ups and large companies including Citrix, IBM, Sterling Commerce (software division of AT&T), and Comergent Technologies. Malathi also represents Spirent as Marketing prime in various open source communities such as the Open Networking Foundation and OpenDaylight. Join the conversation and connect with Malathi on LinkedIn or follow her on Twitter at @malathimalla.