Inside Story: The Role of the SFFs in Rugged HPEC Computing

For more than 30 years, Aitech has worked with its customers to develop high-reliability embedded systems for rugged commercial, military, defense and space applications. Recent company initiatives have focused on the integration of Intel-based rugged PCs and GPGPUs into small form factors, meeting the growing need for inherent security features within hardware platforms as well as advancements in power, processing and communications in mission-critical systems. Here we talk with Doug Patterson, VP of Global Marketing for Aitech, about how shrinking system size, coupled with increasing computing density, is impacting embedded computing design principles.

Doug Patterson

SFFs are bringing embedded technologies to a wider range of industries, enabling computing intelligence to be placed on mobile systems and in remote locations. What are some important things for design engineers to consider when integrating these systems into their applications?

Doug Patterson: Testing, qualification, company/product heritage and pedigree, product flexibility and technical support are vital to every mission. At the system level, the application always dictates the hardware and software options available. Is there a processor or real-time operating system preference? Is the application Earth terrain-based or space-based? If space, is the application Earth orbit (LEO, MEO, GEO) or deep space? Are there specific radiation flux or radiation type concerns, or particular mission endurance issues, that must be considered as well?

For Earth- or space-based systems, what is the available system power? Is the system mission- or flight-critical? What is the mission altitude? This can affect the needed radiation tolerance and, ultimately, system reliability. Are there particular or unique environmental considerations in the application?

The caveat here is that everyone’s definition of a “real-time” system is different and completely dependent on the application. For example, in a GPGPU-based system, is a 2-second delay between processing and identifying an image and performing some dependent action upon it acceptable, or does the system need to react and respond in 10 ms, 100 ms or 1 second? With over 1 TeraFLOP of processing performance, today’s GPGPU-based systems can process, track and react to literally hundreds of images at once, again depending on the level of image processing and identification needed.
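
To make that concrete, here is a minimal sketch, not tied to any Aitech product, of how such a latency budget might be checked: a toy CUDA image kernel is timed with CUDA events against an assumed 100 ms deadline (the kernel, frame size and deadline are all illustrative assumptions).

```
// Minimal sketch (not Aitech code): checking one GPU image-processing pass
// against an application-defined latency budget using CUDA events.
// The kernel, frame size and 100 ms deadline are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

// Toy "identification" step: threshold every pixel of a grayscale frame.
__global__ void thresholdFrame(const unsigned char* in, unsigned char* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (in[i] > 128) ? 255 : 0;
}

int main()
{
    const int width = 1920, height = 1080;
    const int n = width * height;
    const float deadlineMs = 100.0f;            // assumed application deadline

    unsigned char *dIn = nullptr, *dOut = nullptr;
    cudaMalloc((void**)&dIn, n);
    cudaMalloc((void**)&dOut, n);
    cudaMemset(dIn, 200, n);                    // stand-in for a captured frame

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    thresholdFrame<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float elapsedMs = 0.0f;
    cudaEventElapsedTime(&elapsedMs, start, stop);
    printf("Frame processed in %.3f ms; deadline %s\n",
           elapsedMs, elapsedMs <= deadlineMs ? "met" : "missed");

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```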

All of these considerations, and more, will help select the right SFF system for the widest range of applications.

Can you give us a comparison of HPC computing a few years ago, and what it looks like today in terms of size-to-performance?

Doug Patterson: By comparison, three to five years ago, a modular, high-performance Intel Core i7, 3U VPX-based HPEC system would take up approximately 400 in³, dissipate close to 75W, weigh over 13 lbs (6.0 kg) and provide 60-70 GFLOPS of processing performance, at best. Today’s high-performance, NVIDIA Jetson GPGPU-based systems take up a measly 50 in³, dissipate no more than 17W, weigh less than 2.2 lbs (1 kg) and pack over 1 TFLOP (that’s 1,000 GFLOPS) of performance. That’s over 14 times the performance of the HPEC system at less than one-third the price. Anyone can do the math here; the cost/performance benefits of GPGPU-based systems performing applications similar to those of the HPEC system are huge.
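
Working through those numbers, and taking the upper 70 GFLOPS figure as the baseline for the older system, the ratios come out roughly as follows:

\[
\frac{1000\ \text{GFLOPS}}{70\ \text{GFLOPS}} \approx 14.3\times \text{ the throughput}, \qquad
\frac{400\ \text{in}^3}{50\ \text{in}^3} = 8\times \text{ less volume}, \qquad
\frac{75\ \text{W}}{17\ \text{W}} \approx 4.4\times \text{ less power},
\]
\[
\text{so efficiency improves from } \tfrac{70}{75} \approx 0.93 \text{ to } \tfrac{1000}{17} \approx 59\ \text{GFLOPS per watt, roughly a } 63\times \text{ gain.}
\]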

Describe the increasingly important role software plays in SFF computing, and the need to integrate it with the hardware side of the equation. Why has this become an area to consider?

Doug Patterson: Hands down, software is the real cost driver of any of today’s modern embedded computing systems. And maintaining that software investment is of paramount importance to our customers, as the software costs can easily dwarf the hardware development and recurring costs by 10x, and usually more. Software portability across multiple generations of system hardware can also be a heavy cost driver if hardware platform upgrades are incompatible with previous generations.

For today’s systems engineers, developing your application program in a high-level programming language is no longer a luxury; it’s mandatory if you want to maintain some semblance of sanity in today’s ever-changing market of hardware component obsolescence, driven mainly by Harvard Business School graduates who only look at their component company’s bottom line. As long as component longevity remains a constant, ever-changing variable, picking the right embedded computing hardware platform, with roadmaps to software-compatible, next-generation CPUs, will save you many headaches and ulcers in the following decades as technology marches on to the beat of its own drum. Aitech’s Intel-based HPEC and SFF computer platforms, like the A172 for example, along with GPGPU products using NVIDIA’s Jetson with hundreds of CUDA-core parallel processors, provide this compatibility and portability across future GPGPU platforms, thus protecting your CUDA-based software through multiple hardware upgrades.
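
As a minimal, generic illustration of that kind of CUDA-level portability (a sketch under illustrative assumptions, not Aitech’s A172 software), the launch configuration below is derived from the device and the data at run time rather than hard-coded for one GPU generation, so the same source recompiles and runs unchanged as the underlying hardware is upgraded:

```
// Generic sketch (not Aitech software): CUDA source stays portable across
// GPU generations when the launch configuration is derived from the device
// and the data at run time instead of being hard-coded for one board.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("Running on %s (%d SMs, compute capability %d.%d)\n",
           prop.name, prop.multiProcessorCount, prop.major, prop.minor);

    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMallocManaged((void**)&x, n * sizeof(float));  // unified memory keeps the
    cudaMallocManaged((void**)&y, n * sizeof(float));  // host code generation-agnostic
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    const int block = 256;
    const int grid  = (n + block - 1) / block;  // sized by the data, not the GPU model
    saxpy<<<grid, block>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```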

Software is therefore a critical piece that must be considered carefully. Is a commercially available RTOS preferred, or Linux, or some other option? What programming languages will be used: C/C++, Ada, native CPU assembly language, or something else? Does the system need to support Built-In Test (BIT), and if so, to what level of system maintainability? Real-time Linux software from various vendors can often provide solutions for hard real-time Linux applications.

Are there any cybersecurity concerns? What options are available to meet the most demanding cyber requirements?

Aitech can provide answers to all these questions in detail, and more, guiding and advising our customers toward the right hardware/software solution for their application. Or we can advise on the option of using other companies’ solutions if the products we have, or can tailor from existing products, don’t fit all the requirements. We believe in supporting our customers, even if they choose a different hardware/software partner.

What has SFF done for modularity in embedded computing architectures?

Doug Patterson: SFFs have allowed systems engineers to finally realize the dream of true distributed processing: putting the computing resources out at the edge, closer to the sensors where the hard work needs to be done, and passing pre-processed data back over a star or mesh network to a smaller central mission computer that only needs to decide the action to take. The large, centralized, modular, standards-based main mission computers used in the larger defense platforms can easily dissipate hundreds of watts and weigh 15, 20, sometimes as much as 70 lbs. These system architectures are slowly transitioning to multiple, smaller, lighter-weight, modular, SFF-based, stand-alone systems weighing only a few pounds, dissipating less than 20W and, in the end, costing much less.

Have these new SFF systems impacted the use of industry standards? Are we headed toward the same problems the industry faced in terms of interoperability across systems, proprietary architectures, etc.?

Doug Patterson: The ANSI/VITA standards organization is opening working groups to study moving toward an SFF standard. Work had already started under VITA 73, 74 and 75; although those efforts are unfinished, starting from them should help speed up the process. Like the military’s ubiquitous ARINC-404 ATR standard, developed back in WWII to define easily installed and removable VHF radios, these SFF chassis types and sizes are being defined to match the application’s needs and are not concerned with the internal system functionality. These new SFF standards are intended for both Earth- and space-based applications, with the ‘standard’ defining physical form and function rather than necessarily standardizing the internal electronics, although board-level physical size standards are also being taken into consideration.

SFF subsystems are getting so small, it’s almost mind-boggling. Where do you think the next leap will come from: an even smaller enclosure size, a consolidation of components and assemblies within, or maybe a combination?

Doug Patterson: All systems and their physical sizes are ruled by component size and functionality, which in turn are guided by Moore’s Law. Therefore, more “stuff” can be packed into smaller and smaller chassis, enclosures and physical envelopes.

I don’t yet see any revolutionary new component physics or manufacturing lithography technologies on the horizon for the general industry, but we all know folks are working on them. Current line widths and 3D stacking are quickly taking us toward sub-nanometer geometries, which means smaller components with even greater levels of added functionality, but with lower operating voltages and higher heat/power dissipation.

At some point, lowering the component Vcc to well below 1V also lowers signal-to-noise ratio (SNR) margins, making signal integrity, layout and enclosure size even more paramount to tomorrow’s designs. Getting rid of the added heat and dealing with increased power is a hurdle, but it is solvable with some grit, determination and just a smidgen of innovation.
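
As a rough worked example of why those margins shrink (the logic thresholds here are generic illustrative values, not tied to any particular device or process), the static noise margins of a digital interface are

\[
NM_H = V_{OH(\min)} - V_{IH(\min)}, \qquad NM_L = V_{IL(\max)} - V_{OL(\max)}.
\]

At a 3.3V LVTTL interface, with V_OH(min) = 2.4V and V_IH(min) = 2.0V, the high-side margin is about 0.4V; scale that same budget proportionally down to a 0.8V supply and only on the order of 100 mV remains, so the same absolute crosstalk, ground bounce or supply ripple consumes a much larger share of the margin.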

The next issue to crack in miniaturization is always the system interface connectors. In most instances today, the size of the chassis is not dictated by the electronics functionality inside the box; the chassis size is driven only by the physical size of the power and I/O connectors! While miniaturized, military-grade, MIL-C-38999 circular connectors are now available to help solve this issue, on existing platforms desperately needing a technology-insertion system upgrade, the internal harnesses all have the larger, bulky 38999 connectors, making the upgrade process even more challenging. This issue, too, is being addressed and should be solved in a few short years, if not sooner, by companies like Aitech who have seen the problems our customers are facing, up close and personal.

As I’ve outlined, the larger Intel-, ARM- or PowerPC-based integrated, open-system and modular subsystems are being given less consideration today in favor of SFF, distributed computing architectures and methodologies. With 1 TFLOP of performance available today for SFF-based systems, and even more coming in the near future, true thinking neural networks are now becoming a reality in land, sea, air and space systems.

An excerpted, edited version of this interview originally appeared in the June 2019 issue of Aerospace and Defense Technology.