
Significant advances in digital and RF/microwave technologies are leading to more diverse radar applications as well as greater commercialization. This article discusses some of the fundamental research and development challenges in both the digital and RF/millimeter-wave domains, as well as current and future directions in design, system integration, and test.

Radar is used to detect and/or track target objects and obtain their attributes, such as range and speed, through signals at RF and microwave frequencies. The broad classes of radar systems are active and passive (Figure 1). Passive radar systems use non-cooperative source(s) of illumination, such as a target’s emitted signals, broadcast signals, or cellular communication signals, to obtain information about the target. Since passive radar performance relies on the sensing capabilities of the receiver, significant innovations have been made in areas such as phased array antennas, digital beamforming, detection algorithms, and source separation algorithms. Active radar uses a cooperative source of illumination by generating its own signal(s) to illuminate the target. Within the class of active radar, there is monostatic radar, where the signal source is collocated with the receiver, and multistatic radar, where there are two or more receiver locations.

Figure 1. Passive radar (left) and active radar (right).

Among active radar systems, there are several common signal types. The most basic is continuous wave (CW) radar, where a constant-frequency sinusoidal signal is transmitted. The CW signal allows the receiver to detect phase/frequency variations (Doppler shift) in the target reflection. Unless a special provision for an absolute time marker is used, however, range detection is not possible. A modified CW waveform, the stepped frequency modulated (SFM) signal, obtains a range estimate by hopping over multiple discrete frequencies. A further modification of the CW signal, which linearly ramps up and down over a range of frequencies, is called linear frequency modulation (LFM) or frequency modulated CW (FMCW), as shown in Figure 2. An LFM radar allows detection of Doppler as well as range by observing the frequency difference between the time-delayed received signal and the transmitted signal. If a stationary object is detected, a constant beat frequency (the transmit-to-receive frequency difference) is observed.
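To make the range relationship concrete, the short Python sketch below converts an observed beat frequency and Doppler shift into range and radial velocity for a linear chirp. The chirp bandwidth, chirp duration, and carrier frequency are hypothetical values chosen for illustration, not parameters from this article.

```python
import numpy as np

# Minimal FMCW range/Doppler sketch; all parameters are illustrative only.
# For a linear chirp of bandwidth B swept over T_chirp, a target at range R
# produces a beat frequency f_beat = 2 * R * B / (c * T_chirp).

c = 3e8          # speed of light (m/s)
B = 150e6        # chirp bandwidth (Hz)
T_chirp = 1e-3   # chirp duration (s)

def range_from_beat(f_beat_hz):
    """Convert an observed beat frequency to target range for a stationary target."""
    return c * f_beat_hz * T_chirp / (2.0 * B)

def velocity_from_doppler(f_doppler_hz, f_carrier_hz=10e9):
    """Convert a Doppler shift to radial velocity (positive = closing target)."""
    return c * f_doppler_hz / (2.0 * f_carrier_hz)

print(range_from_beat(100e3))        # ~100 m for a 100 kHz beat frequency
print(velocity_from_doppler(2.0e3))  # ~30 m/s for a 2 kHz Doppler shift at X-band
```

For a moving target, the up- and down-ramp beat frequencies differ, which is what allows the range and Doppler contributions to be separated.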

Without loss of generality, today’s active radar system design can be broken into two major components: the baseband signal processing and the RF/microwave front end (including the antenna). Figure 3 shows a high-level block diagram of an active radar system.

Digital Signal Processing

Figure 2. Example LFM waveform where a pulse ramps from high frequency to low, and back again.
Thanks to widely available commercial processors, embedded processors, field-programmable gate arrays (FPGA), digital signal processors (DSP), and, more recently, graphics processing units (GPU), radar signal processing engineers now have a breadth of platforms from which to choose. The choice largely depends on the type of signal processing that is needed and the cost of implementation. General-purpose, personal computer-type hardware may be sufficient for radar systems with relatively low throughput and simple signal processing requirements. An FPGA or GPU-based processor may be required if massively parallel processing is needed; in that case, however, the cost of the hardware increases substantially. In most platforms, pulse generation and receiver algorithms can be implemented in software, bringing benefits such as programmability and reuse of intellectual property. At the same time, radar signal processing engineers face the challenge of incorporating ever more sophisticated algorithms, which consume longer simulation times and exhaust the available computational resources.
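As one example of software-defined pulse generation, the sketch below synthesizes a complex baseband LFM pulse of the kind that could be handed to a DAC. The sample rate, pulse width, and bandwidth are assumed values for illustration only.

```python
import numpy as np

# Hypothetical baseband LFM (chirp) pulse generation; parameters are illustrative.
fs = 100e6        # sample rate (Hz)
T_pulse = 20e-6   # pulse width (s)
B = 40e6          # swept bandwidth (Hz)

t = np.arange(0, T_pulse, 1 / fs)
k = B / T_pulse   # chirp rate (Hz/s)

# Complex baseband chirp whose instantaneous frequency sweeps from -B/2 to +B/2.
lfm_pulse = np.exp(1j * 2 * np.pi * (-B / 2 * t + 0.5 * k * t**2))
```

On an FPGA the same waveform would more typically come from a numerically controlled oscillator or a lookup table, so a floating-point model like this serves mainly as a golden reference for verification.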

Figure 3. General radar block diagram.
While each of the processing blocks has a relatively simple function, it becomes a complex task to integrate these algorithms, partition them onto appropriate platforms, coordinate and communicate with the RF/microwave front end, and compensate for its non-idealities. Does it make sense to implement all of the processing in the FPGA at the cost of hardware, development time, and flexibility? Does it make sense to implement all of the processing on the CPU, perhaps at the cost of performance? Or does it make sense to partition the algorithms between the CPU and FPGA (and perhaps DSP) so that each algorithm runs in some optimal way? If so, what are the throughput, latency, and synchronization requirements between these processing units? These are some of the challenges faced by radar signal processing engineers in developing the next generation of radar systems.
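As a concrete example of the kind of kernel being partitioned, the sketch below performs FFT-based pulse compression (matched filtering) of a received record against the transmitted pulse. It is an illustrative reference implementation, not the article's design; the function name and interface are assumptions.

```python
import numpy as np

def pulse_compress(rx, ref_pulse):
    """FFT-based matched filtering of a received record against the transmitted pulse.

    A per-pulse kernel like this might run on a CPU at low data rates, or be moved
    to an FPGA or GPU when many channels must be processed in real time.
    """
    n = len(rx) + len(ref_pulse) - 1
    # Correlation via the FFT: multiply by the conjugate of the reference spectrum.
    RX = np.fft.fft(rx, n)
    REF = np.fft.fft(ref_pulse, n)
    return np.fft.ifft(RX * np.conj(REF))
```

With the LFM pulse from the earlier sketch, `pulse_compress(rx_record, lfm_pulse)` would collapse each echo into a narrow peak whose delay indicates range, which is exactly the sort of throughput-heavy, latency-sensitive stage that drives the CPU/FPGA partitioning decision.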

RF/Microwave Front End

The transmitter and receiver unit (Figure 4) plays a key role in acquiring the information for processing. Many radar systems today operate at S-Band (2 to 4 GHz), X-Band (8 to 12 GHz), and higher. Design choices for the transmitter upconverter and receiver downconverter depend on many factors, such as the target frequency range, available local oscillators, interfaces to the DAC and ADC, phase/amplitude control, and cost. Perhaps the most heavily researched areas are high power amplifier (HPA) and low noise amplifier (LNA) design.