In a previous post, we covered some of the bumps in the road Open RAN must overcome to succeed: everything from achieving true interoperability and meeting high performance demands to delivering bulletproof robustness and genuine cost efficiencies. What we didn’t really hit on? That much of the effort required to get past these obstacles and bring Open RAN over the finish line will fall to operators.
In traditional engagements, operators bought fully-integrated RAN stacks from hyper-scrutinized vendors. When there was a problem, they knew where to point the finger. The network tech was tightly controlled, fine-tuned with custom algorithms, validated, and still, problems were bound to arise. Open RAN will see operators leave this comfortable relationship dynamic to become full-fledged solutions architects themselves. They will try to make perfect decisions in a new environment that is anything but. And they’ll be leaning more heavily than ever on the testing partners that support them to bring much-needed visibility to a murky new equation.
Initial operator Open RAN engagements will be defined by deep due diligence and cautious planning at every stage of the buying process:
A solid TCO case will need to be made, one that accounts for the OpEx reality: every aspect of a rollout will need to be hyper-optimized and then hyper-validated, with steps taken to ensure continuously optimized performance.
Vendor selection begins, based on cost considerations and on vendor strengths in the functions and capabilities to be deployed. At this stage, operators are evaluating options based on promises made on paper. Do those promises stand up to real-world network conditions? Will it make sense to buy the RU, DU, and CU from best-of-breed vendors, or to diversify slightly? What regional considerations exist for the products chosen? Here, operators will determine how much mixing and matching is needed to achieve performance and TCO goals.
Isolation testing kicks off to validate the performance of individual components. Are these components delivering what was promised? What performance changes become evident when validation tools introduce real-world traffic patterns? At this stage, issues typically begin to surface that must be resolved before moving to the next phase of testing.
Multi-vendor environment testing kicks off. When components are deemed “plug-and-play,” that only means they should be technically compatible based on a common standard; “plug-and-play” does not necessarily mean “plug-and-perform.” This is where more performance issues are discovered, as hundreds of different scenarios are tested systematically. Can a subscriber successfully register when a UE connects? Are they able to establish a data session or voice call? How many subscribers can attach, and at what rate? How many data and voice sessions can be sustained without quality deterioration? Can subscribers move seamlessly between 4G and 5G? Between two different Open RAN systems? Between Open RAN and traditional environments?
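To see why the scenario count climbs into the hundreds so quickly, the multi-vendor sweep described above can be sketched as a simple test matrix. This is a minimal illustration only: the vendor names and scenario labels are hypothetical, not tied to any real product or test framework.

```python
from itertools import product

# Hypothetical vendor options for each Open RAN component (illustrative names only).
RU_VENDORS = ["RU-A", "RU-B"]
DU_VENDORS = ["DU-A", "DU-B"]
CU_VENDORS = ["CU-A"]

# A sample of the scenario categories described above.
SCENARIOS = [
    "ue_registration",         # can a subscriber register when a UE connects?
    "data_session_setup",      # can a data session be established?
    "voice_call_setup",        # can a voice call be established?
    "mass_attach_rate",        # how many subscribers can attach, and how fast?
    "sustained_session_load",  # sessions sustained without quality deterioration
    "handover_4g_5g",          # seamless movement between 4G and 5G
    "handover_oran_oran",      # between two different Open RAN systems
    "handover_oran_legacy",    # between Open RAN and a traditional environment
]

def build_test_matrix():
    """Enumerate every (RU, DU, CU, scenario) combination to be validated."""
    return [
        {"ru": ru, "du": du, "cu": cu, "scenario": s}
        for ru, du, cu, s in product(RU_VENDORS, DU_VENDORS, CU_VENDORS, SCENARIOS)
    ]

matrix = build_test_matrix()
print(f"{len(matrix)} test cases to run")  # 2 RUs x 2 DUs x 1 CU x 8 scenarios = 32
```

Even this toy example, with just two RU and DU options and eight scenario types, yields 32 distinct test cases; real engagements multiply further across software versions, traffic profiles, and regional configurations, which is where the systematic testing burden on operators comes from.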
The process is certainly manageable, but success will depend on operators having unprecedented clarity into expected performance at each stage. Of course, cloud considerations, unaligned software release schedules and dynamic network realities will add further complexity. But that’s for another post.
In the meantime, it’s critical for operators to start getting their arms around the testing and validation methodologies that they will use to determine if Open RAN can be successful in their networks. For a closer look, we dive into detail on how key trends like Open RAN are shaping 5G’s next chapter in our latest 5G report.