High-speed recorders represent a major share of electronic instrumentation found in commercial, industrial, scientific research, government, and defense applications. Each system captures specific classes of analog and digital inputs from among an enormous range of sensors, networks, and interfaces.
High-speed analog IF and RF recorders for defense and signals-intelligence applications raise several critical design considerations: hardware components capable of performing the required functions at the required speeds and operating conditions; sufficiently fast interconnection paths between system components; software architectures that manage real-time operation with no data loss; and packaging techniques that assure continuous, reliable operation in the target environment.
Hardware Components and Interconnects
The essential hardware blocks of an RF/IF analog record/playback system are shown in Figure 1. Typical A/D converters in these systems generate 16-bit samples at a rate of 200 MSamples/sec, producing a data stream of 400 MB/sec. D/A converters in these systems usually require the same data rates in the reverse direction.
Digital interfaces for these data converters are either high-speed parallel LVDS or gigabit serial, and the configurable interfaces found in FPGAs represent an ideal connection solution. FPGAs are also a natural choice to handle most of the other tasks associated with recording system front ends, including clocking, synchronization, triggering, gating, data formatting, memory buffering, and system interface functions.
As the emerging industry standard for embedded systems, OpenVPX specifies products for extended environments, making it an ideal platform for rugged recorders. Three 200 MHz 16-bit A/D and two 800 MHz D/A converters can fit on a single 3U OpenVPX module, with room for an FPGA, memory buffers, and a PCIe system interface connector.
Most OpenVPX designs favor x4 and x8 ports on the backplane gigabit serial connectors for moving data between modules. Although any serial protocol is allowed on the backplane, PCIe (PCI Express) is an increasingly popular choice. For example, at 400 MB/sec per A/D channel, a three-channel module must deliver real-time streaming data at 1200 MB/sec to the rest of the system. A single PCIe Gen 2 x4 port provides a comfortable margin, delivering peak data rates of 2000 MB/sec.
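The bandwidth arithmetic above can be checked with a short sketch. The 500 MB/sec-per-lane figure is the standard PCIe Gen 2 peak rate after 8b/10b encoding; real sustained throughput is lower once packet overhead is counted, so this shows peak headroom only:

```python
# Aggregate A/D streaming rate vs. PCIe Gen 2 x4 capacity (figures from the text).
SAMPLE_RATE = 200e6      # samples/sec per A/D channel
SAMPLE_BYTES = 2         # 16-bit samples
CHANNELS = 3             # three A/D channels on the module

per_channel = SAMPLE_RATE * SAMPLE_BYTES   # 400 MB/sec per channel
aggregate = per_channel * CHANNELS         # 1200 MB/sec for the module

PCIE_GEN2_LANE = 500e6   # peak MB/sec per lane after 8b/10b encoding
link_peak = PCIE_GEN2_LANE * 4             # x4 port: 2000 MB/sec peak

print(f"aggregate A/D stream: {aggregate / 1e6:.0f} MB/sec")
print(f"PCIe Gen 2 x4 peak:   {link_peak / 1e6:.0f} MB/sec")
print(f"headroom:             {link_peak / aggregate:.2f}x")
```

The roughly 1.67x peak margin is what leaves room for protocol overhead while still guaranteeing loss-free streaming.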
The CPU, bridge, and memory blocks in Figure 1 can be satisfied with a 3U OpenVPX single board computer (SBC) with PCIe links to the backplane. A popular architecture uses the Intel Core i7 processor with the X58 chipset. It includes a fast DDR3 memory controller and four x4 PCIe ports, each connected across the backplane to a different OpenVPX slot. This allows the CPU to access the modules in each slot, and it also allows the modules to move data to and from SBC system memory.
The RAID controller manages multiple SATA disk drives through individual SATA 2 or SATA 3 links, each supporting up to 300 or 600 MB/sec using gigabit serial technology. First, the controller stripes data simultaneously across multiple disk drives to boost the system's read/write rates. Second, it aggregates the storage capacity of all the disks to handle longer recordings and larger files. Last, it supports several redundancy modes (RAID levels) to protect against data loss if one or more disk drives fail.
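The striping and aggregation trade-offs can be sketched with a simple sizing helper. This is an illustrative first-order model, not vendor software: it ignores controller overhead and the read/write asymmetry of parity RAID:

```python
# Rough RAID sizing model: peak striped throughput and usable capacity.
def raid_estimate(n_drives, drive_mb_s, drive_gb, level):
    """First-order estimate for common RAID levels.

    RAID 0 stripes across all drives; RAID 5 gives up one drive's
    worth of capacity (and roughly one drive's bandwidth) to parity.
    """
    if level == 0:
        return n_drives * drive_mb_s, n_drives * drive_gb
    if level == 5:
        return (n_drives - 1) * drive_mb_s, (n_drives - 1) * drive_gb
    raise ValueError("level not modeled in this sketch")

# Four SATA 3 drives, 600 MB/sec and 256 GB each:
print(raid_estimate(4, 600, 256, 0))   # (2400, 1024)
print(raid_estimate(4, 600, 256, 5))   # (1800, 768)
```

The model makes the design point concrete: adding drives scales both throughput and capacity, while redundancy modes trade some of each for protection.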
Typical RAID controllers use a PCIe Gen 2 interface with x4 or x8 ports to support peak data rates of 2000 or 4000 MB/sec to the system. This ensures that the overall throughput to the disk array is limited by the disks and not by the system interface. Ruggedized 3U OpenVPX RAID controllers are now available from several vendors.
Conventional rotating magnetic drives are extremely sensitive to shock and vibration and require elaborate shock mounting and protection against temperature extremes. Fortunately, SSD (solid state disk) technology has advanced rapidly on several fronts. The typical SSD form factor matches 2.5-inch notebook drives and uses SATA power and data interfaces. Storage capacities have now reached 1 TB and higher, and read/write rates are as high as 500 MB/sec using SATA 3. Because they use solid state memories instead of moving mechanical structures, SSDs are inherently immune to shock and vibration and also operate over an impressive range of temperatures. Although still several times more expensive than magnetic drives per GB of storage, SSDs offer significant savings in weight and space, eliminate the need for bulky shock mounting structures, and are 2 to 4 times faster. These drives are now available as ruggedized 3U OpenVPX modules.
Assembling these components to form a ruggedized 3U OpenVPX recorder requires a backplane with appropriate links between the modules. Also, the chassis must be matched to the application environment to provide mechanical support, power, thermal management, and I/O. In the most demanding thermal environments, such as a high-altitude, unpressurized equipment bay in an aircraft, the system cannot depend on airflow for cooling. Instead, heat is conducted from the modules into the solid metal card guide channels of the enclosure.
The chassis can then be bolted to a cold plate or surrounding water jacket to remove the heat. For very low temperatures, the chassis can be heated with electrical resistance heaters or circulating liquid. Because the conduction-cooled modules track the external chassis temperature so closely, system designers can ensure overall system operation by guaranteeing minimum and maximum chassis temperatures. Some 3U OpenVPX systems are rated for temperatures from -40 °C to +85 °C.
When the conduction-cooled modules are covered with thermal plates and then constrained along both edges within the metal channels, these products perform extremely well under shock and vibration; ratings of 40 g shock and 0.1 g²/Hz random vibration are quite achievable.
Figure 2 shows an OpenVPX implementation of the recorder block diagram shown in Figure 1. The SBC connects to the A/D and D/A converter module and the RAID Controller through two PCIe Gen2 x4 backplane links for initialization, status, and control. These same links also allow streaming data transfers between system memory and the data converters, and between system memory and the RAID controller. Each PCIe Gen2 x4 link can move data at peak rates of 2000 MB/sec.
Because OpenVPX supports multiple backplane protocols, four SATA links from the RAID Controller flow across the backplane to each of the four 256 GB SSD modules. Not only do these four SSDs provide over 1 TB of storage, they are also fast enough to support aggregate read/write speeds of 1000 MB/sec.
A conduction-cooled 3U OpenVPX chassis for airborne applications is often sealed with circular MIL connectors for digital I/O and bulkhead coaxial connectors for analog I/O. An internal 28 VDC switching power supply develops all of the required voltages for the system. Figure 3 shows an example of a ½ ATR 3U OpenVPX system.
Software Architecture and Strategies
Recording real-time data streams from high-speed A/D converters stretches the raw performance levels of the hardware components and system interconnects discussed above.
Even more difficult is working under the Windows operating system, which is inherently not well suited for real-time applications. Nevertheless, Windows supports a tremendous wealth of software, including analysis tools, system utilities, and network support.
The key strategy is to keep the CPU from touching the data and instead let DMA controllers perform all streaming transfers. These hardware engines are driven by programmed parameters such as block transfer length, and source and destination memory addresses. Once started, DMA controllers directly harness the PCIe interfaces for optimum speed, independent of CPU activity. They can be driven from a table of transfer parameters, also known as a “linked list,” so that when one transfer is complete, the next one starts automatically.
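The linked-list mechanism can be sketched as a chain of transfer descriptors. The field names and the simulation below are illustrative only, not any specific controller's register map; a real engine walks an equivalent table in hardware with no CPU involvement:

```python
# Illustrative DMA descriptor chain: each entry names a source address,
# a destination address, and a length; the engine walks the chain and
# starts the next transfer automatically when one completes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DmaDescriptor:
    src_addr: int                            # source memory address
    dst_addr: int                            # destination memory address
    length: int                              # bytes to move in this transfer
    next: Optional["DmaDescriptor"] = None   # next entry, or None to stop

def run_chain(head, memory_src, memory_dst):
    """Simulate the engine walking the linked list of transfers."""
    d = head
    while d is not None:
        memory_dst[d.dst_addr:d.dst_addr + d.length] = \
            memory_src[d.src_addr:d.src_addr + d.length]
        d = d.next

# Two chained 4-byte transfers copy the whole source buffer:
src = bytearray(b"ABCDEFGH")
dst = bytearray(8)
second = DmaDescriptor(src_addr=4, dst_addr=4, length=4)
first = DmaDescriptor(src_addr=0, dst_addr=0, length=4, next=second)
run_chain(first, src, dst)
print(dst)   # bytearray(b'ABCDEFGH')
```

Once the CPU has written such a chain, the engine needs no further instruction until the last descriptor finishes, which is exactly what keeps the CPU out of the data path.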
After setting up the linked list parameters, the CPU can sit back and follow the activity through hardware interrupts that report progress until the recording or playback session is complete. The data converter module houses one set of DMA controllers, while the RAID controller houses another set. This strategy is the basis for Pentek’s SystemFlow™ recording system software.
SystemFlow creates a number of buffers in system memory, arranged in a circular buffer ring designed to absorb worst-case latencies in the system. For recording, DMA controllers in the A/D converter module start filling the first buffer and then continue on to the next. After being notified by interrupt that the first buffer is full, the RAID DMA controllers start reading buffer data and writing it onto the disk array. The two sets of DMA controllers chase each other around the buffer ring in this fashion until the recording is finished. For each recording scenario, SystemFlow carefully selects the number and size of the buffers to ensure that new data from the A/D converter always finds an empty buffer next in the ring. Playback operates the same way by reversing the read/write roles of the DMA controllers.
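The sizing constraint behind the ring can be modeled with a short rule-of-thumb calculation. The stall time and buffer size below are placeholder values for illustration, not SystemFlow's actual parameters:

```python
import math

# Ring sizing rule of thumb: the buffers ahead of the disk writer must
# absorb the worst-case disk stall while the A/D keeps streaming.
def min_ring_buffers(data_rate_mb_s, worst_stall_ms, buffer_mb):
    """Smallest buffer count such that a worst-case stall on the disk
    side never overruns the producer, plus one buffer being filled."""
    backlog_mb = data_rate_mb_s * worst_stall_ms / 1000.0
    return math.ceil(backlog_mb / buffer_mb) + 1

# A 1200 MB/sec stream, a 50 ms worst-case stall, and 16 MB buffers:
print(min_ring_buffers(1200, 50, 16))   # 5
```

The same arithmetic applies to playback with the producer and consumer roles reversed; in either direction, making the ring any smaller risks the overrun the text warns about.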
Unlike most recording software, SystemFlow uses the NTFS file system instead of a proprietary format. This allows any Windows application to immediately access the recording as soon as it is complete, without waiting for a file conversion.
Successful design of rugged recorders demands a diverse range of engineering expertise and careful attention to each critical aspect. Not only must these systems withstand the required environmental operating conditions, they must also guarantee loss-free recording and playback across a wide range of operational modes. As higher-speed data converters and SSDs emerge, faster gigabit serial links and memory buffers are needed to support them. Finally, the recording software must manage these hardware resources with judicious control and minimal overhead.