In order to bring as much testing into the lab as possible, a GNSS test environment should be able to reflect the infinite circumstances of the real world as closely as possible.
To achieve that realism, test teams rely on radio frequency (RF) simulators to generate faithful replicas of real-world GNSS signal environments. But how “real” is the environment created with a simulator? And is every simulator equally capable of faithfully representing the real world?
In this blog, we expand on a subset of the topics covered in the Spirent eBook by outlining six ways in which the design and build of an RF simulator can affect the realism of the test environment.
If you’re currently evaluating GNSS simulators, or if you’re curious about anomalies in your test results that don’t seem to have been caused by the device under test, it’s definitely worth asking the manufacturer about how these issues are handled in the simulator.
1. Quality of signal modelling
RF signal generators, like our Spirent simulators, use both digital and analogue technologies to produce the replica signal. The characteristics of the signal are modelled in software in the digital domain, before being converted into analogue RF signals that are emitted on the correct frequency by the system hardware.
The digital modelling of the signal is fundamental to the accuracy of the signals emitted – and thus to the accuracy of the test results. Each constellation’s interface control document (ICD) describes the way the signal should be seen by the receiver, taking into account real-world factors like atmospheric interference, clock bias and ephemeris errors. A good simulator will accurately implement the parameters set out in the ICD, and make any updates to the ICD available in the simulator as they are published.
A very good simulator, though, will take into account the fact that the parameters set out in the ICD may not always be “real” enough for the test at hand. For example, orbital navigation data is typically populated in the simulator directly from the satellite motion definition in the ICD – but some complex test scenarios can expose limitations in this approach.
In particular, problems can arise when motion is defined using parameters not present in a particular navigation message, causing the satellite position computed by the user equipment to differ from that generated by the simulator. In this case, a curve-fitting approach produces more ‘realistic’ results, allowing users to see ephemeris and clock errors that are highly representative of those observed under live-sky conditions.
Curve-fitting of this sort is extremely computationally intensive, creating trade-offs in terms of simulator cost and power consumption. But for tests that require it, not having it may impact the realism of the test conditions and the value of the test results.
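To illustrate the idea behind curve-fitting, the sketch below fits a broadcast-style clock polynomial (af0, af1, af2) to a short arc of simulated satellite clock offsets using least squares, then inspects the residuals a receiver would observe. All values are invented for illustration, not taken from any real constellation or from Spirent's implementation.

```python
import numpy as np

# Simulated "truth" clock offset over one hour, with measurement noise.
# Coefficients are illustrative orders of magnitude only.
t = np.linspace(0.0, 3600.0, 61)                  # seconds into the fit interval
true_offset = 1e-4 + 2e-9 * t + 3e-14 * t**2      # clock offset in seconds
noise = np.random.default_rng(0).normal(0.0, 1e-10, t.size)
measured = true_offset + noise

# Least-squares fit of offset(t) = af0 + af1*t + af2*t^2,
# mirroring the broadcast clock polynomial form.
A = np.vstack([np.ones_like(t), t, t**2]).T
af0, af1, af2 = np.linalg.lstsq(A, measured, rcond=None)[0]

# The residuals represent the small clock error left after the fit --
# the kind of live-sky-like error a curve-fitting simulator can expose.
residual = measured - (af0 + af1 * t + af2 * t**2)
print(f"fitted af0={af0:.3e} s, af1={af1:.3e} s/s, af2={af2:.3e} s/s^2")
print(f"RMS fit residual: {np.sqrt(np.mean(residual**2)):.2e} s")
```

The same least-squares idea extends to fitting Keplerian ephemeris parameters to an arbitrary simulated trajectory, which is where the heavy computational cost mentioned above comes from.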
2. Temperature sensitivity
Another potential issue affecting test realism is frequency instability. While this can be caused by faulty signal modelling in the software, it’s more likely to be an issue with the way the signal is generated in the analogue domain.
The digital signal is converted to analogue RF in a component known as the upconverter, and the appropriate frequency is applied to it using an oscillator. However, this delicate time-keeping device can be sensitive to environmental factors, especially temperature. With some simulators, even small fluctuations in room temperature can affect the stability of the signal frequency.
If the oscillator is affected by temperature, the result will be a progressive drift in frequency accuracy. Impacts on the receiver may include loss of signal tracking, inability to find the signal at low power levels, and potentially a complete inability to decode the data, resulting in no fix at all.
The type of oscillator in the simulator has a significant bearing on its temperature sensitivity. Cheaper temperature-compensated crystal oscillators (TCXO) are more sensitive than larger and more costly oven-controlled crystal oscillators (OCXO), which are capable of maintaining temperature within a specific range. If you suspect that temperature-related frequency instability may be affecting your test results, it’s worth checking which type you have.
Note that the oscillators used in navigation satellites are rubidium atomic frequency standards rather than the quartz crystals used in most simulators, so both TCXO and OCXO represent compromises in terms of realism. However, OCXOs more closely mirror the performance of satellite clocks, as long as they are operated within the temperature range specified by the manufacturer.
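The practical difference between oscillator types is easy to quantify. The sketch below converts typical datasheet stability figures (illustrative orders of magnitude, not specifications for any particular simulator) into carrier frequency error at GPS L1:

```python
# Carrier frequency error caused by fractional oscillator instability.
# A fractional offset of x ppb shifts the emitted carrier by f * x * 1e-9 Hz.
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz

def freq_error_hz(stability_ppb: float) -> float:
    """Frequency error at L1 for a given fractional stability in ppb."""
    return F_L1 * stability_ppb * 1e-9

tcxo_err = freq_error_hz(500.0)  # ~0.5 ppm, typical of a cheap TCXO
ocxo_err = freq_error_hz(5.0)    # ~5 ppb, typical of an OCXO

print(f"TCXO: ~{tcxo_err:.0f} Hz carrier error at L1")
print(f"OCXO: ~{ocxo_err:.1f} Hz carrier error at L1")
```

A drift of hundreds of hertz can push the carrier outside a receiver's tracking loop bandwidth, which is exactly the loss-of-lock behaviour described above.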
3. Phase noise
Phase noise may be the result of thermal noise and flicker noise in the simulator, caused by poor-quality system design or poorly isolated components. Rather than the gradual drift of frequency instability, phase noise manifests as short, random jumps in frequency.
As it introduces artefacts not present in the real-world signal, phase noise reduces the realism of test results, while also diminishing user control over factors that could impact performance. The impacts on the receiver under test are similar to those outlined in point #2 above – ranging from temporary loss of lock to an inability to lock on to the signal at all.
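The effect of random phase jitter on a receiver can be sketched numerically. For zero-mean Gaussian phase noise with standard deviation σ, the coherently integrated carrier amplitude is attenuated by a factor of exp(-σ²/2), so correlation power drops by exp(-σ²). The Monte Carlo check below (with an illustrative 15° RMS jitter) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = np.deg2rad(15.0)                  # 15 degrees RMS phase jitter
phi = rng.normal(0.0, sigma, 200_000)     # random phase samples

# Coherent sum of a unit carrier with phase jitter: power loss vs. theory.
power = np.abs(np.mean(np.exp(1j * phi))) ** 2
loss_db = -10 * np.log10(power)
pred_db = 10 * np.log10(np.exp(sigma**2))
print(f"simulated correlation loss: {loss_db:.3f} dB (theory {pred_db:.3f} dB)")
```

Even modest phase noise therefore eats directly into the effective signal power available to the tracking loops, on top of any short frequency jumps.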
4. Carrier phase alignment
Some simulators – including our own GSS7000 and GSS9000 series – offer multiple channel banks, enabling signals to be generated coherently on different frequencies. In these models, each channel bank will have its own local oscillator (LO) to generate the frequency for that channel bank.
If the LOs are not all operating at the same temperature (for example, if one is warmer as a result of being located closer to the processor) the combined output from all of the channel banks may include some phase noise – even if, individually, all of the LOs are operating within the range specified by the manufacturer. If not addressed, this can cause a carrier phase misalignment when the signal is output, which risks producing misleading test results.
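Why carrier phase misalignment matters becomes clear when it is converted into an equivalent range error. The carrier wavelength at GPS L1 is c/f ≈ 19.03 cm, so each degree of phase offset corresponds to roughly half a millimetre of carrier-range error – significant for high-precision (RTK/PPP-style) testing:

```python
# Convert a carrier phase misalignment between channel banks into an
# equivalent carrier-range error at GPS L1.
C = 299_792_458.0        # speed of light, m/s
F_L1 = 1575.42e6         # GPS L1 carrier frequency, Hz
WAVELENGTH = C / F_L1    # ~0.1903 m

def phase_to_range_error_mm(phase_deg: float) -> float:
    """Carrier-range error (mm) caused by a phase offset in degrees."""
    return (phase_deg / 360.0) * WAVELENGTH * 1000.0

for deg in (1.0, 10.0, 90.0):
    print(f"{deg:5.1f} deg misalignment -> {phase_to_range_error_mm(deg):.2f} mm")
```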
5. Spurious emissions, harmonics and aliasing
Spurious emissions, harmonics and aliasing are all terms relating to unwanted frequencies issuing from components within the simulator that interfere with the wanted received signal. A poorly designed or poorly assembled simulator can produce such emissions from components such as the system reference clock or the local oscillator.
Spurious emissions, harmonics and aliasing can reduce the carrier-to-noise density ratio (C/N0) of the emitted signals, potentially preventing the receiver under test from tracking or locking on to them. It’s possible that the receiver may lock on to this noise instead. At the very least, attempting to distinguish between real and spurious emissions will consume receiver computing resources, diminishing performance.
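The C/N0 reduction can be estimated with the standard effective-noise-density relationship: broadband interference of density I0 added to thermal noise density N0 gives (C/N0)_eff = C / (N0 + I0), i.e. a drop of 10·log10(1 + I0/N0) dB. The figures below are illustrative:

```python
import math

def effective_cn0_db(cn0_db: float, i0_over_n0_db: float) -> float:
    """Effective C/N0 (dB-Hz) given the interference-to-noise density in dB."""
    return cn0_db - 10 * math.log10(1 + 10 ** (i0_over_n0_db / 10))

cn0 = 45.0  # dB-Hz, a healthy open-sky tracking level
for jnr in (0.0, 3.0, 10.0):
    eff = effective_cn0_db(cn0, jnr)
    print(f"I0/N0 = {jnr:4.1f} dB -> effective C/N0 = {eff:.1f} dB-Hz")
```

An unwanted in-band emission only as strong as the noise floor (I0/N0 = 0 dB) already costs about 3 dB of C/N0, which can be the difference between tracking and losing the signal at low power levels.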
Similarly to phase noise, spurious emissions damage test realism by introducing artefacts into the test environment that the receiver would not encounter in the real world. Reputable manufacturers will ensure spurious emissions are suppressed through careful selection of filters and sample rates, and careful placement of components within the simulator.
6. Amplitude calibration
In the real world, a receiver may experience fluctuations in signal power levels – if it’s subject to a spoofing attack, for example – so the ability to control power levels is a critical element of creating a realistic test environment. It’s important that the power level specified by the simulator user in the control software is exactly the power level at which the signal is emitted. Any difference can undermine the accuracy of the test and produce misleading results.
This makes it incumbent on the manufacturer to precisely calibrate the passive RF combiners in the simulator and the connection cables supplied with the simulator, and to make the resulting calibration files available so that the user can recalibrate the simulator if needed. A good simulator will auto-recalibrate between test runs, so check that yours has this function.
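A calibration file of this sort can be thought of as a table of measured power offsets at a few frequencies, interpolated so that the commanded level can be pre-corrected. The sketch below is a hypothetical illustration – the frequencies are the GPS L5/L2/L1 centres, but the offset values and the function are invented, not any real simulator's calibration format:

```python
import numpy as np

# Hypothetical calibration table: measured output minus commanded level (dB)
# at three carrier frequencies, as might be captured during factory calibration.
cal_freqs_mhz = np.array([1176.45, 1227.60, 1575.42])   # L5, L2, L1
cal_offset_db = np.array([0.40, 0.15, -0.25])           # invented values

def corrected_level_dbm(commanded_dbm: float, freq_mhz: float) -> float:
    """Pre-correct the commanded level so the emitted power matches it."""
    offset = np.interp(freq_mhz, cal_freqs_mhz, cal_offset_db)
    return commanded_dbm - offset

# To emit exactly -130 dBm at L1, the corrected command is slightly higher:
print(f"{corrected_level_dbm(-130.0, 1575.42):.2f} dBm")
```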
Two Spirent eBooks to help with test realism
If you’d like to learn more about realism in GNSS testing, we recommend the first of these eBooks. And if you’re currently evaluating simulators and would like to compare their handling of some of these factors, the second provides some good guidance.