Features

The days of proprietary embedded computing systems in military systems are numbered. Proprietary systems, with the attendant vendor lock-in, tend to be platform-specific, increasing development and long-term maintenance costs. New platform: new design. The military has realized that this is an untenable approach.

Electronics are increasingly sophisticated, prevalent, and mission critical. The ideal approach is to create standards for open-architecture systems that leverage commercial off-the-shelf (COTS) technology and provide flexible building blocks to meet a wide range of needs. Systems should be modular, allowing one module to be upgraded without requiring other modules to be replaced.

The U.S. Department of Defense’s commitment to Modular Open System Architecture (MOSA) forms the high-level approach seen in many current initiatives. The Army’s VICTORY (Vehicle Integration for C4ISR/EW Interoperability) is developing standards for interoperability between Line Replaceable Units (LRUs) on combat vehicles. By defining intra-vehicle networking, VICTORY drives interoperability and drastically reduces the component redundancy caused by “bolt-on” subsystems.

The concepts developed in VICTORY have been expanded in NAVAIR’s HOST (Hardware Open Systems Technologies), SOSA (Sensors Open Systems Architecture), and SpaceVPX (VITA 78). All share the goals of MOSA in achieving flexible modularity, interoperability, and scalability. HOST, for example, defines a three-tiered system: Tier 1 provides the overall conceptual framework; Tier 2 defines the core hardware and software; Tier 3 is the component level, where various suppliers can apply the secret sauce to differentiate their products while maintaining compatibility at the module level. SOSA is similar to HOST, but focuses on the special needs of high-capacity imaging systems.

OpenVPX

The open-system architecture chosen for HOST, SOSA, and SpaceVPX is based on OpenVPX. VPX was originally defined in VITA 46 as a high-performance switched-fabric backplane. VPX primarily defines a backplane/daughter-card architecture for high-speed digital signals and supports protocols such as VME, Serial RapidIO, PCI Express, Ethernet, and InfiniBand. The backplane system is based on the MULTIGIG RT 2 connector from TE Connectivity (TE).

To ensure interoperability at the architectural level, OpenVPX (VITA 65) has been established as the governing standard, defining profiles for various configurations at the chassis, backplane, slot, and module levels. To enhance application flexibility, OpenVPX also recognizes the need to support optical and RF signals and power. A series of related standards has evolved, shown notionally in Figure 1, including VITA 42 (XMC mezzanine), 62 (power), 66 (optical), and 67 (RF). The ultimate goal is compatibility between products from different vendors, enabling open architectures as well as two-level maintenance and system upgrades in which users swap out line-replaceable modules (LRMs) in the field.

Figure 1. OpenVPX provides an open-architecture, COTS-based solution for high-performance embedded systems. (TE Connectivity)

MOSA standards basically define what’s inside the box. They usually stop at the input/output (I/O) connector and do not consider box-to-box or box-to-sensor interconnections. These interconnections, however, are critical to achieving reliable system-level performance. At issue is the link budget, which ensures that the output of one system is delivered with sufficient signal integrity and power at the input of the receiving system. Too often, designers concentrate on the boxes and give scant attention to the I/O interconnections in series between them until late in the design cycle. When the boxes are finally cabled together, an inadequate interconnect can degrade the signal so severely that its eye pattern is nearly closed.

Interconnections

As I/O speeds or bandwidth increase, so do the challenges of the interconnection cables. Both cables and connectors must be carefully evaluated. Many connectors and cables have both military and commercial counterparts, and some COTS connectors have been standardized in military specs. TE’s CeeLok FAS-X connector, which was designed for 10-Gb/s Ethernet (Figure 2), arrived on the market as a COTS connector; the new MIL-DTL-32546 will give the soon-to-be-qualified connectors mil-spec standing. This example shows how the line between COTS and military is not clear cut.

To avoid late glitches in system operation, always consider the cabling as part of the system, not an afterthought. While embedded computers can be designed and tested by themselves, the interconnection design must also consider real-world application needs. Are there production breaks? How many? Each production break adds loss to the budget, which in turn shortens the box-to-box cabling distance.

Bandwidth is an important issue with copper cables. Attenuation increases with signal frequency, so a low-frequency control signal can travel farther than a high-speed signal. High-speed signals may require a controlled-impedance interconnection to prevent reflections, reduce signal distortion, and maintain signal integrity. Highly concatenated cable runs can degrade performance if connectors and cables are not designed properly. A cable assembly that can handle 1- to 10-Gb/s signals in a homerun (no breaks) may not be up to the task if the run must be divided into three or four shorter cables to accommodate production breaks.
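The effect of production breaks on the link budget can be sketched with simple arithmetic: each break inserts an extra mated connector pair, whose loss subtracts from the margin available for cable length. The function and all numbers below are illustrative placeholders, not vendor or standard figures.

```python
# Illustrative link-budget check for a copper cable run.
# All loss figures are hypothetical placeholders, not vendor specs.

def link_margin_db(tx_level_db, rx_sensitivity_db,
                   cable_loss_db_per_m, length_m,
                   mated_pair_loss_db, n_breaks):
    """Return the remaining margin (dB) after cable and connector losses.

    Each production break adds one extra mated connector pair, so even
    a homerun (n_breaks == 0) still has two end connectors.
    """
    connector_pairs = 2 + n_breaks            # two ends plus one pair per break
    total_loss = (cable_loss_db_per_m * length_m
                  + mated_pair_loss_db * connector_pairs)
    received = tx_level_db - total_loss
    return received - rx_sensitivity_db       # positive => budget is met

# Same 10 m run, with and without production breaks:
homerun = link_margin_db(0.0, -12.0, 0.7, 10.0, 1.0, n_breaks=0)
broken  = link_margin_db(0.0, -12.0, 0.7, 10.0, 1.0, n_breaks=3)
print(f"homerun margin: {homerun:.1f} dB")   # 3.0 dB of margin remains
print(f"3-break margin: {broken:.1f} dB")    # breaks consume the entire margin
```

With these made-up numbers, three production breaks erase the 3 dB of margin the homerun had, which is exactly why extra breaks shorten the usable cabling distance.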

Figure 2. The choice of cable and connectors can determine whether the link budget is met. (TE Connectivity)

The surrounding electromagnetic environment must also be considered. Are there nearby noise sources that can interfere? Electromagnetic interference (EMI) is combated with differential signaling, cable shielding, and controlled impedance.

Additionally, the physical environment can degrade performance and affect the link budget. Extreme temperatures, high levels of vibration, and mechanical stresses are important factors here. Such considerations affect the choice of both cables and connectors, and the choice of cable connector affects the I/O connector at the box. Don’t expect to connect a shielded, controlled-impedance cable assembly to an unshielded, uncontrolled-impedance I/O connector and achieve optimum performance.

Figure 3 shows in simplified form the concept of link budgets. The choice of cable and connectors and the number of breaks in the link will determine whether the power at the receiver’s end is within the acceptable range. The blue line represents a link that works well. Lower-performance cable and connectors, as represented by the red line, may not deliver enough signal strength. As shown by the green line, it is also possible to overpower the receiver.
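The three outcomes in Figure 3 amount to checking whether the received level lands inside the receiver's acceptance window. A minimal sketch, with hypothetical thresholds standing in for a real receiver's sensitivity floor and overload ceiling:

```python
# Hypothetical receiver acceptance window, mirroring Figure 3.
# Thresholds are illustrative only, not from any standard or datasheet.

RX_MIN_DB = -15.0   # below this floor: link fails (red line)
RX_MAX_DB = -3.0    # above this ceiling: receiver is overdriven (green line)

def classify_link(received_db):
    """Classify a received signal level against the acceptance window."""
    if received_db < RX_MIN_DB:
        return "insufficient signal"
    if received_db > RX_MAX_DB:
        return "receiver overdriven"
    return "within budget"      # the well-behaved blue-line case

print(classify_link(-9.0))
print(classify_link(-18.0))
print(classify_link(-1.0))
```

The point of the sketch is that the budget is bounded on both sides: choosing cable and connectors is a matter of landing between the floor and the ceiling, not simply maximizing delivered power.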