How to Test a Cognitive EW System

Reliably operating electromagnetic (EM) systems including radar, communications, and navigation, while deceiving or disrupting the adversary, is critical to success on the battlefield. As threats evolve, electronic warfare (EW) systems must remain flexible and adaptable, with performance upgrades driven by the constant game of cat and mouse between opposing systems. This drives EW researchers and systems engineers to develop novel techniques and capabilities, based on new waveforms and algorithms, multifunction RF systems, and cognitive and adaptive modes of operation.

Why is Cognition Important to EW Systems?

Cognition in EW systems is an important element in achieving dominance of the electromagnetic spectrum. Systems that can observe, decide, act, learn, and apply experience have an advantage over less adaptable systems on the battlefield. Non-cognitive systems can still apply elements of this – observing, deciding, and acting – but the real innovation is in giving systems the ability to learn and apply experience, making them more effective against the threats of tomorrow.

EW applications where cognition can be applied include cognitive signal classification and cognitive jamming. Taking jamming as an example, the traditional approach would be to detect a threat and measure its parameters, identify that threat by cross-checking a database or library, select the appropriate jamming technique to counteract the threat, and then execute the electronic attack. This approach relies heavily on curating an extensive library of known threats (and the specific countermeasures that work against them). Adding cognition to this process allows the jammer to adapt to environmental changes more quickly, applying experience to improve threat detection, identification, and suppression.
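The contrast between the two approaches can be sketched in a few lines of Python. This is a minimal illustration only: the threat library, technique names, and effectiveness feedback are hypothetical placeholders, and the learning step is a simple moving-average update rather than the algorithm of any fielded system.

```python
# Minimal sketch contrasting library-lookup jamming with a cognitive loop.
# All names, parameters, and the "effectiveness" feedback are hypothetical.
import random

THREAT_LIBRARY = {
    # (RF band, pulse-repetition-interval bucket) -> pre-planned technique
    ("X-band", "short_pri"): "range_gate_pull_off",
    ("S-band", "long_pri"): "noise_barrage",
}

def legacy_jammer(observed_threat):
    """Classic flow: measure, look up in the library, execute the stored response."""
    key = (observed_threat["band"], observed_threat["pri_bucket"])
    return THREAT_LIBRARY.get(key, "noise_barrage")  # fall back to a default

class CognitiveJammer:
    """Adds a learning step: track how well each technique worked and prefer winners."""
    def __init__(self, techniques):
        self.scores = {t: 1.0 for t in techniques}   # running effectiveness estimates

    def choose(self):
        # Mostly exploit the best-known technique, occasionally explore another.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def learn(self, technique, effectiveness):
        # Exponential moving average of observed effectiveness (0.0 to 1.0).
        self.scores[technique] = 0.8 * self.scores[technique] + 0.2 * effectiveness

jammer = CognitiveJammer(["range_gate_pull_off", "noise_barrage", "cover_pulse"])
for _ in range(50):                       # simulated engagements
    technique = jammer.choose()
    feedback = random.random()            # stand-in for measured jamming effectiveness
    jammer.learn(technique, feedback)
print(jammer.scores)
```

The lookup jammer can only ever be as good as its library; the learning loop keeps improving its technique selection from the outcomes it observes.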

New Test Challenges for Cognitive EW

Cognitive radar and EW systems clearly contribute to dominance of the EM spectrum, but they present challenges when it comes to validating their capability. The traditional approach to testing has been to define several test cases or scenarios, simulate threats in a system integration lab or open-air range, and assess how the EW system dealt with those threats.

This can work for a legacy system with a fixed set of capabilities. It is pre-programmed to deal with a specific set of threats or engagement scenarios, which bounds the number of test scenarios required. A novel, cognitive EW system, by contrast, learns and adapts its behavior as it encounters new scenarios: the space of possible test cases is effectively unbounded, and the system must handle as-yet-unknown threats, making it very difficult to assess its performance.

Figure 1. ADAS systems are run through thousands of hours of simulated tests before they make their way to market.

EW can resemble a game of chess – with move and countermove, countermeasure and counter-countermeasure. How do you test whether a child is good at playing chess? One method could be to quiz them on the best move for every board configuration. This would provide maximum test coverage; however, with an estimated 10^45 unique board configurations, it would clearly take far too long. Additionally, not every configuration has a single best next move – many moves could be almost equally effective towards achieving victory – so it is very difficult to gauge performance based on this criterion.

More feasible would be to have the player compete and see how they perform. This achieves a dual purpose, acting both as a way of rating the child’s performance and as a way of giving them experience and training, which they can use to learn and improve. Do they need to be capable of beating World Chess Champion Magnus Carlsen? Probably not. They need to be tested against adversaries they could reasonably encounter – their peers. If they can consistently defeat their peers, then they could be considered to have passed their “testing” phase.

Translating this back to EW, how do we test systems against their peers? We don’t necessarily have access to the adversary systems that will be faced in the field. Field-deployed electronic support measures may have detected and stored the waveforms from red-team systems, logging them to a database for future identification or for playback in a test scenario. A cognitive system will, however, need to face threats that it has never seen before. The best we can do is to test against systems that represent what could be encountered in theater.

Learning from ADAS Test in Automotive

Thankfully there are lessons we can learn from another industry, automotive, which has faced many similar challenges in testing advanced driver-assistance systems (ADAS). Automotive suppliers have focused on moving autonomous vehicle test as early as possible in the design cycle, reducing reliance on road testing and moving test cases into modelling and simulation.

In ADAS test (Figure 1), a software simulator generates synthetic inputs for the vehicle’s cameras, making the ECU think it is driving and forcing it to process the inputs, decide how the vehicle should react, feed its commands back onto the vehicle bus, and control the vehicle in simulation. ADAS systems must go through thousands of hours of simulated driving before they reach the road. One limitation to consider, however, is that when real-world, recorded drive data is played back into the simulator, the fidelity is exceptionally high, but the ability to run closed-loop is lost because the data is pre-recorded.
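A toy version of that closed loop, assuming a single synthetic range sensor and a one-dimensional vehicle model, might look like the following; every function and constant here is a stand-in for the real simulator, ECU logic, and vehicle dynamics.

```python
# Toy closed-loop ADAS-style simulation: synthetic sensor -> ECU decision ->
# vehicle dynamics -> back to the sensor model. Purely illustrative.
def sensor_model(ego_position, obstacle_position):
    """Synthetic 'camera/radar' measurement: range to the obstacle ahead."""
    return obstacle_position - ego_position

def ecu_logic(measured_range, speed):
    """Stand-in decision logic: brake hard once time-to-collision gets short."""
    time_to_collision = measured_range / max(speed, 0.1)
    return 1.0 if time_to_collision < 2.0 else 0.0    # brake command (0..1)

def vehicle_dynamics(position, speed, brake, dt=0.1):
    """Very simple longitudinal model: braking removes speed, speed moves the car."""
    speed = max(speed - 8.0 * brake * dt, 0.0)         # 8 m/s^2 max deceleration
    return position + speed * dt, speed

position, speed, obstacle = 0.0, 25.0, 120.0           # metres, m/s, metres
for step in range(200):
    rng = sensor_model(position, obstacle)             # 1. synthetic sensor input
    brake = ecu_logic(rng, speed)                      # 2. ECU decides
    position, speed = vehicle_dynamics(position, speed, brake)  # 3. vehicle reacts
    if speed == 0.0:
        print(f"Stopped {obstacle - position:.1f} m short of the obstacle")
        break
```

The point of the loop is that the ECU’s braking decision changes the next sensor reading; replaying pre-recorded data breaks exactly that feedback path.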

Enabling the Virtual Battlefield

Figure 2. Cognitive EW system tests can resemble ADAS tests, with a virtual battlefield replacing the test drive environment, and I/Q data directly injected into the system under test.

Cognitive EW systems (Figure 2) will require similar treatment before they are deployed. Rather than a virtual test drive environment, a virtual battlefield is needed, with the automotive sensors-in-the-loop replaced by RF receive and transmit functionality. In principle this is a simple substitution, but there are challenges to overcome:

  • Computing and data movement requirements are high, since all parameters must be simulated in a very short amount of time, including:

    • Emulation of the behavior of advanced artificial intelligence/machine learning-based threats across gigahertz of RF bandwidth

    • Modelling high-fidelity RF propagation for these threats – more than just point targets with Doppler plus clutter

    • Running the model under test in real time, reacting with the latency required for tactical operation

  • Signal representation – pulse descriptor words (PDWs) may not capture the full action space of waveforms that a cognitive threat may use. Constraining signal representations to a lossy format limits the information that can be supplied to a cognitive system, even though that system may have been trained against lossless signals. It is important to maintain the integrity of signals through I/Q representation, as the sketch following this list illustrates.
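The loss can be shown numerically. In the hedged sketch below, an unmodulated pulse and a linear FM chirp at the same average frequency reduce to near-identical descriptor fields, even though their I/Q samples are entirely different; the PDW fields chosen here are illustrative and do not follow any standard format.

```python
# Sketch of why PDWs are lossy relative to I/Q: an unmodulated pulse and an LFM
# chirp at the same average frequency yield the same (illustrative) PDW fields.
import numpy as np

fs = 100e6                                # sample rate, Hz
t = np.arange(int(10e-6 * fs)) / fs       # 10 microsecond pulse

plain_pulse = np.exp(2j * np.pi * 10e6 * t)                 # constant 10 MHz tone
chirp = np.exp(2j * np.pi * (5e6 * t + 0.5e12 * t**2))      # 5 -> 15 MHz linear FM

def to_pdw(iq, fs, toa=0.0):
    """Reduce an I/Q pulse to a few descriptor fields (illustrative, lossy)."""
    inst_freq = np.diff(np.unwrap(np.angle(iq))) * fs / (2 * np.pi)
    return {
        "toa_s": toa,
        "pulse_width_s": len(iq) / fs,
        "amplitude": float(np.mean(np.abs(iq))),
        "centre_freq_hz": float(np.mean(inst_freq)),
    }

print(to_pdw(plain_pulse, fs))
print(to_pdw(chirp, fs))                  # near-identical descriptor fields...
print(np.allclose(plain_pulse, chirp))    # ...but the I/Q samples are not the same
```

Any downstream classifier that only sees the descriptor fields cannot distinguish the two emissions; one that sees the I/Q samples can.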

Threat Emulation

The most reasonable approach to solving these challenges is threat emulation. Threat emulators sit between the “virtual battlefield” implemented in modelling and simulation tools and the EW system under test. The threat emulator would need to simulate known red-team threats, as well as potential future red threats based on artificial intelligence and machine learning. Since friendly “blue” systems may also unintentionally interfere with allied systems, threat emulators must also replicate the presence of blue systems on the battlefield. On top of that, there may be myriad “grey” emitters such as 5G, broadcast signals, and even cosmic radiation. It is not easy to develop threat emulators that are cost-effective to procure, maintain, and operate, while keeping them scalable enough to evolve with the threat landscape.
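Whatever hardware implements it, the emulator ultimately has to deliver all of those emitter classes as a single coherent RF scene. The sketch below simply sums baseband I/Q from a notional red swept jammer, a blue pulsed radar, and a grey multicarrier commercial signal; every waveform parameter is an arbitrary placeholder.

```python
# Toy composite scene: sum of red (adversary), blue (allied), and grey (commercial)
# emitters at baseband. Parameters are arbitrary placeholders.
import numpy as np

fs = 200e6
t = np.arange(int(1e-3 * fs)) / fs                         # 1 ms of the scene

# "Red": swept-frequency jammer, 0 -> 40 MHz over the millisecond
red = 0.5 * np.exp(2j * np.pi * (0.5 * 4e10 * t**2))

# "Blue": allied pulsed radar, 1 us pulses every 100 us, offset to -10 MHz
blue = np.where((t % 100e-6) < 1e-6, 1.0, 0.0) * np.exp(-2j * np.pi * 10e6 * t)

# "Grey": crude multicarrier (OFDM-like) commercial signal around +30 MHz
carriers = np.arange(-8, 8) * 0.5e6 + 30e6
grey = 0.05 * sum(np.exp(2j * np.pi * (f * t + np.random.rand())) for f in carriers)

scene = red + blue + grey                                  # what the emulator would synthesise
print(scene.shape, float(np.max(np.abs(scene))))
```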

Figure 3. Threat emulators must cover the breadth of EM emitters in the surrounding environment, including adversarial jammers and communications systems, allied systems, and commercial communications and broadcast signals.

There are some technical advantages we can realize, however. Since the threat emulator need only be used in a lab, it is not beholden to the same size, weight, and power (SWaP) or environmental constraints as tactical hardware. The threat emulator can therefore be overengineered, with extreme modularity and the latest data converters and processing technologies, to maximize performance (Figure 3).

Channel Emulation

Once the emulation of the threat stack is covered, the next consideration is how to emulate the environment around it. Channel emulation is not uncommon in other areas of EM spectrum operations, including communications, navigation, and radar. For cognitive EW, the fidelity of the emulator will limit how well the environment can be represented. Most existing hardware-in-the-loop (HIL) solutions simulate range, Doppler, and a simplified radar cross section (RCS). When simulating cognitive threats, we need to do better. Say we want to test a system that uses automatic target recognition (ATR) based on machine learning or neural networks: how can you assess whether ATR is working correctly with only point targets and basic Swerling models? Accurate modelling of skin returns from targets is necessary.
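For reference, the sketch below approximates what a point-target HIL emulator typically does: apply a round-trip delay, a Doppler shift, and a Swerling I amplitude fluctuation to the transmitted I/Q. A single complex gain per pulse is exactly the simplification the preceding paragraph argues is insufficient for ATR-class testing; all values are illustrative.

```python
# Point-target channel emulation: delay + Doppler + Swerling I amplitude fluctuation.
# This is the simplification the text argues is NOT enough for ATR-class testing.
import numpy as np

rng = np.random.default_rng(0)
fs = 50e6
c = 3e8
tx = np.exp(2j * np.pi * 5e6 * np.arange(2000) / fs)       # transmitted pulse (I/Q)

def point_target_return(tx, target_range_m, radial_velocity_ms, fc=10e9):
    """Apply round-trip delay, Doppler shift, and a Swerling I gain to one pulse."""
    delay_samples = int(round(2 * target_range_m / c * fs))
    doppler_hz = 2 * radial_velocity_ms * fc / c
    n = np.arange(len(tx))
    doppler = np.exp(2j * np.pi * doppler_hz * n / fs)
    # Swerling I: RCS (power) is exponentially distributed pulse to pulse,
    # i.e. the complex amplitude is a single Rayleigh-faded gain per pulse.
    gain = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    rx = np.zeros(len(tx) + delay_samples, dtype=complex)
    rx[delay_samples:] = gain * tx * doppler
    return rx

echo = point_target_return(tx, target_range_m=15e3, radial_velocity_ms=200.0)
print(len(echo), float(np.abs(echo).max()))
```

An extended target would instead need a delay-spread impulse response, with scatterers distributed across the target body, for the skin return to carry any information an ATR algorithm could use.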

Looking Ahead: Digital Transformation for EW Test

As mentioned earlier, the automotive industry has pulled a greater proportion of testing into the modelling and simulation phase, rather than relying predominantly on drive tests. There is a similar opportunity for more EW test cases to be covered in simulation before reaching the hardware integration lab or open-air range – allowing for greater test coverage at lower cost. We need to be able to trust mission-critical systems on the battlefield, so knowing which test cases are adequately covered in simulation is important.

Figure 4. Vector signal transceivers are ideal instruments for cognitive EW system test, with RF I/O for I/Q data generation and analysis, and an onboard FPGA for real-time, inline signal processing.

The automotive industry doesn’t necessarily have a perfect blueprint to follow on where to cover specific test cases – there is an element of learning through experience of running tests in simulation versus HIL versus drive tests. For EW, maximum reliability could be achieved by reserving hundreds of tests for the open-air range, but that is not feasible from a cost and pace perspective. Test cases will need to be ranked by risk, in terms of severity and probability, to determine the stringency of testing required.
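One way to operationalise that ranking is a simple severity-times-probability score that maps each test case to a test tier; the cases, scores, and thresholds below are purely illustrative.

```python
# Illustrative risk-based triage of test cases: risk = severity x probability,
# with the highest-risk cases earning HIL or open-air-range treatment.
test_cases = [
    # (name, severity 1-5, probability 1-5)
    ("missed detection of novel jammer", 5, 3),
    ("slow technique selection under dense pulse load", 4, 4),
    ("degraded direction finding at low SNR", 3, 2),
    ("operator display latency", 1, 3),
]

ranked = sorted(test_cases, key=lambda case: case[1] * case[2], reverse=True)
for name, severity, probability in ranked:
    risk = severity * probability
    tier = "open-air range" if risk >= 15 else "HIL" if risk >= 8 else "simulation"
    print(f"{risk:2d}  {tier:14s} {name}")
```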

Another factor in the decision will be which elements of the EW system a specific test is stressing. For example, a test that stresses the system’s ability to perform direction finding on an emitter would likely require sensor-in-the-loop treatment, because RF parametrics will affect the ability to measure the angle of arrival of the signal. On the other hand, a test that stresses a system’s capacity for making decisions based on adversarial action may be sufficiently covered by running scenarios in modelling and simulation.
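The direction-finding example shows why the distinction matters. With a two-element phase interferometer, the angle of arrival is derived directly from the inter-element phase difference, so an uncalibrated phase error in the RF front end maps straight into angle error; that coupling is hard to represent without the sensor in the loop. The geometry and error values in the sketch below are arbitrary.

```python
# Two-element phase-interferometer AoA: a small uncalibrated phase error between
# channels translates directly into an angle-of-arrival error. Values are arbitrary.
import numpy as np

fc = 10e9                       # carrier frequency, Hz
lam = 3e8 / fc                  # wavelength, m
d = lam / 2                     # element spacing (half wavelength)
true_aoa_deg = 20.0

# Ideal inter-element phase difference for a plane wave at the true angle
phi = 2 * np.pi * d / lam * np.sin(np.radians(true_aoa_deg))

for phase_error_deg in (0.0, 5.0, 10.0):
    measured_phi = phi + np.radians(phase_error_deg)    # front-end phase mismatch
    est = np.degrees(np.arcsin(measured_phi * lam / (2 * np.pi * d)))
    print(f"{phase_error_deg:4.1f} deg channel error -> AoA estimate {est:5.2f} deg")
```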

Moving forward, we need to establish a set of rules for EW that helps to determine which test cases to assign to modelling, and which deserve the increased rigor of HIL or open-air-range testing. Digital transformation initiatives drive towards a continuous test construct, where data and models transition between modelling and simulation, HIL, and open-air ranges. Test can no longer be an afterthought for EW systems.

This article was written by Jeremy Twaits, Solutions Marketing Manager for Aerospace, Defence and Government, NI (Austin, TX).