Performance testing for devices is critical, whether for Wi-Fi 6/6E or for Wi-Fi 7, which is on the horizon. As a result, new test plan approaches are being developed to address the challenges these advancing technologies introduce.
Wi-Fi Alliance members, including Spirent, have collaborated to produce a Wi-Fi performance test plan describing a relatively simple testbed that provides all the means necessary to perform a wide variety of Wi-Fi tests. The testbed provides a foundation to ensure repeatable and reproducible RF testing and will likely be the cornerstone of further revisions, enhancements, and expansion of the test plan.
While small-chamber over-the-air (OTA) testing is becoming the de facto method for testing, the Wi-Fi Device Metrics test plan introduces approaches that we believe are distinct from other test plans in the industry.
The testing methodology limits the number of test parameters and avoids needless nested parameter loops by fixing “external” parameters such as topology and number of devices, allowing the tester to save time and money by focusing on “internal” parameters like security, NSS, and others, as appropriate for their organization. Since more and more consumer devices are pre- or self-configuring, Wi-Fi Device Metrics encourages out-of-box testing to emulate a customer’s experience.
Real-world customer environments produce results with wide variation. Wi-Fi Device Metrics recognizes this and recommends presenting results statistically rather than as a single number. Each test produces a wealth of data, and Wi-Fi Device Metrics details how that data can be efficiently analyzed to yield statistics that more accurately illustrate the customer’s experience.
In the beginning
In the early days of Wi-Fi, RF performance was characterized in the lab using RF-cabled setups, usually housed in shielded RF rooms. Testing tended to focus narrowly on the PHY, concentrating on TX power, RX sensitivity, spectral mask, EVM, and so on. MAC/PHY testing was covered by Wi-Fi Alliance interoperability testing, but testing of radio performance in the real world was limited.
OTA RF performance testing is complicated. Often, it was performed in shielded RF rooms, but the results were generally unreliable because standing waves and movement of people within the chamber caused signal variations of as much as 70 dB. Open-air testing was often performed in a dedicated test home, which gave some degree of real-world assessment, but again, reflections and interference almost guaranteed Never The Same Result Twice (NTSRT).
Nearly all of these legacy test methodologies were aimed at point-to-point connections, which is in line with the goals of all Wi-Fi generations up to Wi-Fi 5.
Wi-Fi 6 was designed to give better user performance to a community of concurrent users in the presence of overlapping basic service set (OBSS) interference. This broke the legacy mold for testing.
State of the art testing today
Now, a testbed needs to contain multiple devices, up to 37 in the case of OFDMA. The testbed must reproduce spatial diversity so that MU-MIMO can be tested, there must be a facility to generate OBSS traffic, and multiple sniffer devices are needed. Cabling such a setup is impractical, and all the same NTSRT problems remain in an OTA environment.
The testbed is built using relatively small anechoic chambers and uses directional antennas to “couple” RF energy into, and out of, the chamber. This approach limits signal variability due to standing waves, avoids the need to disassemble the device to make a physical connection, and provides good spatial diversity to support MU-MIMO operation. The chambers are interconnected with RF cables through variable attenuators to simulate distance.
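An attenuator setting can be related to an equivalent over-the-air distance using the standard free-space path loss formula. The sketch below is illustrative only (the function name and example values are ours, not part of the test plan):

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a given distance and carrier frequency.

    FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
    """
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

# An attenuator set to roughly 66 dB approximates 10 m of free space at 5 GHz.
loss = fspl_db(10, 5000)
```

A handy rule of thumb when stepping the attenuators: each doubling of the simulated distance adds about 6 dB of loss.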
This type of testbed is relatively new in the industry and has been shown to give results that are repeatable (on the same system) and reproducible (on a remote system). It is rapidly becoming accepted as the de facto way to test Wi-Fi, as evidenced by its adoption in other standards bodies such as the Broadband Forum and ETSI.
Along with the testbed, the test measurement methodology also needs to change. Methodologies like RFC 2544, which rely on the system being in equilibrium when a measurement is performed, are no longer useful because the measured parameters vary continuously as the other devices contend for the channel.
Better performance indicators
Performance indicators of interest need to be more sophisticated. Traditional raw throughput of a single device is less useful in the context of real-world testing, where we need to assess the combined experience of a community of users. Instead, we see aggregated throughput, individual device latency, and roaming performance becoming more important.
Wi-Fi Device Metrics uses modern traffic generation tools such as multiPerf that provide second-by-second key performance indicator data, so the variability of results can be captured and analyzed. MultiPerf also provides more sophisticated analysis capabilities, for example, measuring the packet-by-packet distribution of one-way delay (OWD).
MultiPerf also has more sophisticated traffic generation modes that help the tester more accurately mimic real-life applications. For example, multiPerf allows one to generate traffic with a specified data rate mean and variance to mimic a user. Indeed, it goes further to produce isochronous traffic at a specified frame rate to mimic voice or video.
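As an illustration of what isochronous, variable-bit-rate traffic looks like, the sketch below generates frames at a fixed rate with sizes drawn from a normal distribution around a specified mean. This is a hypothetical Python model of the traffic pattern, not multiPerf's actual implementation or API:

```python
import random

def isochronous_vbr(frame_rate_hz: float, mean_bytes: float,
                    std_bytes: float, duration_s: float, seed: int = 1):
    """Return (timestamp_s, frame_size_bytes) pairs: frames emitted at a
    fixed rate, with sizes drawn from a normal distribution (clamped > 0)."""
    rng = random.Random(seed)
    interval = 1.0 / frame_rate_hz
    n = int(duration_s * frame_rate_hz)
    return [(i * interval, max(1, int(rng.gauss(mean_bytes, std_bytes))))
            for i in range(n)]

# Mimic ~30 fps video for 2 seconds: 60 frames, 8000-byte mean frame size.
frames = isochronous_vbr(30, 8000, 2000, 2.0)
```

Fixing the timestamps while varying the sizes is what distinguishes this pattern from a constant-bit-rate stream: the frame cadence matches a codec, while the per-frame payload fluctuates like real compressed video.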
Comprehensive presentation of results
As mentioned earlier, the presentation of the metrics needs to be revised. A single throughput number, or a single delay number, is not that useful because, in practice, there is always a distribution of results, and it is often the spread of those results that affects users most adversely.
Wi-Fi Device Metrics recognizes this fact and proposes a much more statistical analysis of the results, which are presented in various levels of detail.
The first level will typically be in tabular form presenting the mean, standard deviation, coefficient of variance, and performance at certain percentiles. This is useful for simple reports, perhaps for regression testing, where the numbers can be compared, or matched up with the testers’ own criteria for pass/fail.
The next level will be to generate Probability Distribution Functions (PDF) and Cumulative Distribution Functions (CDF) to visualize the spread of results so that anomalies can be illustrated. Depending upon the test case, other detailed analysis and visualization is performed, as appropriate.
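The two levels of analysis above can be sketched in a few lines of standard-library Python. The function names are illustrative, not taken from the test plan:

```python
import statistics

def summarize(samples):
    """First-level tabular stats: mean, standard deviation, coefficient
    of variance, and selected percentiles of per-second measurements."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    ordered = sorted(samples)
    def pct(p):
        # Nearest-rank percentile over the sorted samples.
        return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]
    return {"mean": mean, "stdev": stdev,
            "cov": stdev / mean if mean else float("nan"),
            "p50": pct(50), "p95": pct(95), "p99": pct(99)}

def cdf(samples):
    """Second-level view: empirical CDF as (value, cumulative prob) points,
    ready to plot so the spread and any anomalies become visible."""
    ordered = sorted(samples)
    n = len(ordered)
    return [(v, (i + 1) / n) for i, v in enumerate(ordered)]
```

The tabular summary suits regression reports and pass/fail criteria; the CDF (and a histogram-based PDF) reveal the shape of the distribution that a single number hides.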
An example of packet delay results
An example of this analysis is shown in the image below where the latency test produces a PDF of packet delays on a packet-by-packet basis. Here we mimic multimedia by generating isochronous, variable bit rate traffic.
Generally, real time video or voice is fairly tolerant to a reasonable mean delay, but the tails of the delay spread are the parts that irritate the customer.
Explicit examination of the spread is important to gauge customer experience. This graph shows a mean delay of about 8 ms. Delays around 20 ms are unlikely to cause problems, but there is a finite probability of delays at 50 ms, 55 ms, and beyond, which might disrupt a voice or video stream occasionally and would be irritating to the user.
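To see why the tails matter more than the mean, consider a hypothetical set of delay samples (the numbers below are invented for illustration, not measurements from the graph):

```python
def tail_fraction(delays_ms, threshold_ms):
    """Fraction of packets whose one-way delay exceeds a threshold: the
    tail that disrupts real-time traffic even when the mean looks fine."""
    return sum(1 for d in delays_ms if d > threshold_ms) / len(delays_ms)

# Hypothetical samples: mostly ~8 ms, with occasional 52 ms outliers.
delays = [8.0] * 990 + [52.0] * 10
mean_ms = sum(delays) / len(delays)   # mean stays low at ~8.4 ms
tail = tail_fraction(delays, 50)      # yet 1% of packets exceed 50 ms
```

The mean barely moves, but a 1% tail beyond 50 ms is enough to glitch a voice or video stream every few seconds, which is exactly the kind of impairment a single average would conceal.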
Spirent Wi-Fi testing
As a leader in Wi-Fi performance testing for devices and access points, Spirent is a key member of the Wi-Fi Alliance task group and regularly contributes to its evolving standards. In doing so, we continually gain insights for developing our automated testbeds to meet the demands of the ever-changing Wi-Fi technology industry.