Home

The OpenVPX Industry Working Group, a 28-company team founded by Mercury Computer Systems, collaborated toward a common goal: accelerating the completion of a system architecture specification that enables open-system COTS suppliers and integrators to specify, design, and build multi-vendor interoperable solutions.

Timeline:

  • January 2009 — OpenVPX Specification effort begun by Mercury Computer Systems, based on the VPX embedded community’s need to accelerate multi-vendor interoperable solutions.
  • March 2009 — First open-membership face-to-face meeting and call for membership.
  • Spring/Summer 2009 — Ongoing meetings, conference calls, and discussions.
  • October 2009 — OpenVPX specification V1.0 completed and transitioned to the VITA 65 working group.
  • January 2010 — The VITA 65 working group completed comment resolution, balloting, and ratification of the specification.
  • June 2010 — ANSI VITA 65-2010 ratification of the OpenVPX System Architecture Specification.
Figure 1. Cascading Challenges

Countless hours were spent with embedded-community technical and business leaders (suppliers and integrators) to develop a system-level architecture specification dedicated to creating well-defined interoperability points for multi-vendor 3U and 6U VPX integrated solutions. The inter-company marathon was a testament to what can be accomplished when experts are dedicated to solving a significant industry issue for the good of the ultimate customer: the warfighter.

So What?

If the following is important to your company or your customer, then OpenVPX should be important to you.

  • Reduced TCO across the integrated-system life cycle;
  • Use of a common language for simplified RFP generation;
  • Choice of ecosystems to lower costs and obtain best-of-breed capabilities;
  • Technology-refresh possibilities with reduced obsolescence hurdles;
  • Highly interoperable, multi-vendor integrated-solution development;
  • Open standards, performance migration, and proliferation;
  • Reduced deployment risk for QRC programs;
  • 1 GigE, 10 GigE, sRIO, and PCIe Gen 2.0 fabrics;
  • SWaP-optimized smart processing via open architectures.

VPX vs. OpenVPX

Figure 2. Determine the backplane profile and chassis topology required for the development chassis, and then select a standard OpenVPX reference chassis or create a custom configuration development chassis for design and integration.

A board-level specification approach was suitable for VME bus technology because of its architectural simplicity, and it was logically carried forward as the approach for the VPX specifications. While significant VITA standards work was in progress, many technology users felt that a focus on board-level specifications was not suitable for creating interoperable solutions in the complex application space that VPX technology is designed to serve: a next generation of complex, rugged, integrated assets built on high-speed backplane fabrics and new processor technologies such as multi-core x86 and GPGPUs.

In late 2008, VITA’s Executive Director, Ray Alderman, and industry research by Mercury Computer Systems identified the need for a new systems-level approach to specifying VPX. In January 2009 the OpenVPX Industry Working Group was formed as the first step on the path to adding system-level clarity to the specifications and accelerating the commercial benefits of VPX technology for integrated, multi-vendor systems. Available today, the ANSI-approved OpenVPX systems architecture specification builds upon VPX technology (VITA 46 and its dot specs), but does so from a top-down, systems-engineering approach that specifies interoperability points at the slot, module, and backplane levels.

OpenVPX Taxonomy

The group created a common building-block language to convey the key attributes of the OpenVPX specification. The definitions of Planes, Pipes, and Profiles are the key elements of that taxonomy, allowing the user to specify a wide range of “building blocks” with a common set of intersections.

Planes: Segregated architecture boundaries for backplane and module connectivity

  • Control Plane — dedicated to application software control traffic (e.g., a 1 GigE pipe).
  • Data Plane — dedicated to application and external data traffic (e.g., a 10 GigE switch fabric).
  • Expansion Plane — dedicated to communication between a logical controlling system element and a separate, but logically adjunct, system resource (e.g., PCIe lanes between multi-core and GPGPU modules).
  • Management Plane — dedicated to supervision and management of hardware resources (e.g., I2C).
  • Utility Plane — dedicated to common system services or utilities (e.g., SYSRESET, power and ground distribution, reference clocks).
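
To make the separation concrete, the short Python sketch below models the five planes and a hypothetical payload-slot port assignment. It is purely illustrative; the names, descriptions, and port map are assumptions for this example, not definitions from the specification.

```python
from enum import Enum

class Plane(Enum):
    """The five OpenVPX planes summarized above (descriptions abbreviated)."""
    CONTROL = "application software control traffic"
    DATA = "application and external data traffic"
    EXPANSION = "links between a controlling element and an adjunct resource"
    MANAGEMENT = "supervision and management of hardware resources"
    UTILITY = "common system services (power, ground, reference clocks, reset)"

# Hypothetical port assignment for a payload slot, keyed by plane.
payload_ports = {
    Plane.DATA: "FP",       # e.g., a Fat Pipe carrying the data-plane fabric
    Plane.CONTROL: "UTP",   # e.g., an Ultra Thin Pipe for 1 GigE control
    Plane.EXPANSION: "FP",  # e.g., PCIe lanes out to a GPGPU module
}

for plane, pipe in payload_ports.items():
    print(f"{plane.name}: {pipe} ({plane.value})")
```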

Pipes: A collection of differential pairs assigned to a plane and used by slot profiles. Pipes are protocol-agnostic.

  • Ultra Thin Pipe (UTP): 2 differential pairs (e.g., 1000BASE-KX Ethernet or 1x Serial RapidIO)
  • Thin Pipe (TP): 4 differential pairs (e.g., 2x PCIe interfaces)
  • Fat Pipe (FP): 8 differential pairs (e.g., 10GBASE-KX4 or 4x PCIe)
  • Double Fat Pipe (DFP): 16 differential pairs (e.g., 8x PCIe interfaces)
  • Quad Fat Pipe (QFP): 32 differential pairs (e.g., 16x PCIe interfaces)
  • Octal Fat Pipe (OFP): 64 differential pairs (e.g., 32x PCIe interfaces)

Profiles: The OpenVPX specification uses profiles for structure and hierarchy. Three Profile types exist: Slot, Module and Backplane.

  • Slot Profile: Physical mapping of ports to a slot’s backplane connectors, using planes and pipes.
  • Module Profile: Extends a slot profile by adding protocols, as well as thermal, power, and mechanical requirements.
  • Backplane Profile: Physical mapping of slot profiles onto the backplane, defining the number of slots and the topology of the slot interconnects.
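
One way to picture this hierarchy is as nested data structures: a module profile wraps a slot profile, and a backplane profile arranges slot profiles into a topology. The Python sketch below is a simplified illustration of that relationship; the class and field names, and the example values, are assumptions rather than profile definitions from the specification.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SlotProfile:
    """Physical mapping of ports (planes and pipes) onto a slot's backplane connectors."""
    name: str
    ports: Dict[str, str]      # plane -> pipe type, e.g. {"data": "FP", "control": "UTP"}

@dataclass
class ModuleProfile:
    """Extends a slot profile with protocols plus thermal, power, and mechanical requirements."""
    slot_profile: SlotProfile
    protocols: Dict[str, str]  # plane -> protocol, e.g. {"data": "10GBASE-KX4"}
    max_power_watts: float

@dataclass
class BackplaneProfile:
    """Maps slot profiles into physical slots and defines the slot-interconnect topology."""
    slots: List[SlotProfile]
    topology: str              # e.g. "centralized switching (dual star)"

# Example: a hypothetical 4-payload-slot backplane built from one slot profile.
payload = SlotProfile("example-payload", {"data": "FP", "control": "UTP"})
module = ModuleProfile(payload, {"data": "10GBASE-KX4", "control": "1000BASE-KX"}, max_power_watts=75.0)
backplane = BackplaneProfile([payload] * 4, topology="centralized switching (dual star)")
```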

Backplane Topologies: Different applications require different backplane topologies. OpenVPX supports Centralized Switching, Distributed Switching, and Master/Slave Topologies.

  • Centralized Switching: Uses dedicated switch modules in multiple types of switched configurations (e.g. Dual Star).
  • Distributed Switching: Full or partial mesh switching; may require switch logic on each card for larger slot-count chassis (e.g., a 5-slot sRIO mesh).
  • Master/Slave: Generally a master host SBC with slave I/O cards (e.g., an SBC root complex connected to I/O cards via a PCIe fabric).
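
The practical difference between these topologies is which slot-to-slot links the backplane must provide. A minimal sketch, assuming slots are simply numbered, of how the link sets differ between centralized and distributed switching (a master/slave PCIe layout is structurally similar to the star, with the SBC root complex in the central role):

```python
from itertools import combinations

def star_links(payload_slots, switch_slot):
    """Centralized switching: each payload slot connects to a central switch slot.
    A dual-star backplane repeats this pattern for a second switch slot."""
    return [(switch_slot, s) for s in payload_slots]

def full_mesh_links(slots):
    """Distributed switching (full mesh): every slot connects to every other slot."""
    return list(combinations(slots, 2))

print(star_links([1, 2, 3, 4], switch_slot=0))  # 4 links to a single switch slot
print(full_mesh_links([1, 2, 3, 4, 5]))         # a 5-slot mesh needs 10 point-to-point links
```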

Using the OpenVPX specification taxonomy and its architectural building-block language of connectivity, a system engineer or architect can now leverage the specification’s content and rules for their unique application.