
Designers of virtually all electronic warfare systems exploit both CPUs and FPGAs, each offering unique strengths for handling different classes of tasks. This diversity arises from fundamental differences between the devices. An FPGA consists of hardware logic, registers, memories, adders, multipliers, and interfaces that the user connects to perform a specific function. A CPU consists of ALUs, instruction execution engines, cache memory, and dedicated I/O and memory ports, all connected in a fixed architecture whose resources are sequenced by program execution.

Figure 1. Typical electronic warfare system task allocation for CPUs and FPGAs.
Electronic warfare systems impose some of the toughest latency constraints in military electronics. For example, a system designed to defeat RCIEDs (radio-controlled improvised explosive devices) must identify a signal that could detonate the device and then immediately disable that communication link with countermeasures. Every task in the chain, from receiving and analyzing the signal to selecting a countermeasure and transmitting the jamming signal at the correct frequency and bandwidth, must fit within an extremely tight time budget. Orchestrating the necessary FPGA and CPU resources therefore becomes a critical design effort.

High-level decisions and complex data analysis are usually easier to implement on a CPU. If a complex signal processing task can be handled by the CPU, it is usually simpler to write a C program for it than to develop IP for an FPGA.

An FPGA is typically much better at handling compute-intensive signal processing or data-crunching tasks because of its DSP blocks. Tasks like FFTs, matrix processing, and digital filtering can exploit the benefits of thousands of DSP blocks operating in parallel. Furthermore, the FPGA hardware surrounding these blocks can be tailored for each application, including local data buffers, specialized FIFOs, and optimized interfaces to and from external sensors, storage devices, networks, and system components.
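As a rough illustration, consider the multiply-accumulate kernel of a simple FIR filter, sketched below in generic C. The tap count and data types are arbitrary examples; the point is that a CPU steps through the loop largely sequentially, while an FPGA can dedicate one DSP slice to each tap and compute every product in the same clock cycle.

/* FIR filter kernel sketch (illustrative only).
 * Each output sample requires NUM_TAPS multiply-accumulate
 * operations. A CPU executes the loop a few operations at a
 * time; FPGA logic can assign one DSP slice per tap so all
 * multiplies occur in parallel. */
#include <stddef.h>

#define NUM_TAPS 64   /* arbitrary example tap count */

float fir_sample(const float coeff[NUM_TAPS],
                 const float delay_line[NUM_TAPS])
{
    float acc = 0.0f;
    for (size_t k = 0; k < NUM_TAPS; k++) {
        acc += coeff[k] * delay_line[k];   /* one MAC per tap */
    }
    return acc;
}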

For some tasks, the choice between an FPGA and a CPU is obvious from the nature of the work; for others, it could go either way. When it could, the decision often favors the CPU because a C program is easier to develop, maintain, and upgrade. And from a staffing perspective, it is often easier to hire a C programmer than an FPGA designer.

Next Generation SoC FPGAs

Over the years, CPUs and FPGAs have proven their effectiveness as team players in electronic warfare systems. Because of this complementary relationship, many FPGA vendors now offer SoC (system-on-chip) devices combining CPU and FPGA resources within a single monolithic silicon device. The industry leaders in this market are Xilinx and Altera.

Xilinx offers its Zynq family of SoCs, which combine ARM processors with Xilinx FPGA resources. Its most recent offering is the Zynq UltraScale+ series, whose CPU resources include a Quad-Core ARM Cortex-A53 application processor, a Dual-Core ARM Cortex-R5 real-time processor, and a Mali GPU (graphics processing unit). The FPGA section includes a different mix of 16 nm resources in each of the eleven members of the series to cover a wide range of complexity. The largest member offers significant computational power with nearly a million logic cells and over 3,500 DSP slices.

A competing family of SoC devices from Altera is the Stratix 10 series, which also uses the Quad-Core ARM Cortex-A53 CPU. Based on advanced 14 nm FPGA technology, the Stratix 10 offers ten different resource-balanced versions, with over 5 million logic cells and 5,760 DSP blocks in the largest device. As a major benefit over Xilinx's SoCs, the Stratix 10 offers DSP blocks that handle not only single- and double-precision fixed-point operations but also single-precision IEEE 754 floating-point functions. Floating-point math eliminates the often tedious scaling optimization needed to avoid the saturation and underflow conditions that plague fixed-point hardware, so designers can more easily extend dynamic range for sensitive signal processing applications.
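A short sketch makes that scaling burden concrete. In the hypothetical Q15 fixed-point format below, every multiply must be rescaled and saturated by hand, whereas the floating-point version needs no such analysis because the format itself carries the dynamic range; the code is generic C, not tied to any vendor's DSP block.

/* Fixed-point vs. floating-point multiply (illustrative only). */
#include <stdint.h>

#define Q15_MAX  32767
#define Q15_MIN -32768

/* Q15 multiply: the Q30 product must be shifted back to Q15 and
 * clamped by hand, and the designer must track signal levels at
 * every stage to avoid overflow or loss of precision. */
int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = ((int32_t)a * (int32_t)b) >> 15;
    if (p > Q15_MAX) p = Q15_MAX;   /* saturate high */
    if (p < Q15_MIN) p = Q15_MIN;   /* saturate low  */
    return (int16_t)p;
}

/* Floating-point multiply: no explicit scaling or saturation needed. */
float fp_mul(float a, float b)
{
    return a * b;
}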

Figure 2. Different types of possible AXI connections for a software radio transceiver.
FPGAs excel at processing and delivering high-rate, continuous streaming data because of parallel hardware structures connected through FIFO buffers to sensors or dedicated links. CPUs are much more effective when processing blocks of data located in system memory. Because the two must be interconnected within an SoC, this disparity in how CPUs and FPGAs prefer to accept and deliver data poses a fundamental design challenge.

Making the AMBA Connection

To mitigate this problem, ARM Ltd. developed the Advanced Microcontroller Bus Architecture (AMBA) nearly two decades ago. AMBA is now widely adopted as an open, well-documented, license-free interface standard between CPUs and peripherals, including FPGAs.

A popular derivative of AMBA is the AXI4 (Advanced eXtensible Interface Rev 4) specification. It defines a comprehensive standard for transferring data between master and slave devices, supporting data widths from 32 to 1024 bits in bursts of up to 256 data transfers. A master and a slave device, both with AXI4-compliant interfaces, can be connected and communicate regardless of the nature or function of the devices.

Because simple devices may not need the interface overhead required to meet the full AXI4 specification, the AXI4-Lite specification restricts data widths to 32 or 64 bits and limits every transaction to a single transfer. This is ideal for reading and writing memory-mapped status and control registers, and it often fully satisfies the needs of small peripheral devices. Still another derivative, the AXI4-Stream specification, eliminates the addressing overhead of AXI4 and AXI4-Lite altogether; it supports only unidirectional transfers from a master device to a slave device.
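From the software side, an AXI4-Lite slave simply appears as a block of memory-mapped registers. The sketch below assumes a Linux system running on the SoC's ARM cores and a hypothetical FPGA peripheral; the base address and register offsets are invented for illustration.

/* Accessing hypothetical FPGA control/status registers exposed
 * through an AXI4-Lite slave, via /dev/mem on a Linux-based SoC. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PERIPH_BASE 0x43C00000u  /* hypothetical AXI4-Lite base address */
#define REG_CONTROL 0x00         /* hypothetical control register offset */
#define REG_STATUS  0x04         /* hypothetical status register offset  */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PERIPH_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_CONTROL / 4] = 0x1;            /* single 32-bit register write */
    printf("status = 0x%08x\n",
           (unsigned)regs[REG_STATUS / 4]); /* single 32-bit register read  */

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}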
