It’s no secret that bringing a novel idea for a safety-critical application from concept to market takes time, dedication and smarts, along with a whole lot of luck. Since it’s initially uncertain that the idea will even work, a proof of concept (PoC) seems a logical place to start, focusing specifically on the science while other considerations, such as the rigorous traceability required for standards compliance, are deferred until much later.

With budgets likely tight, and software considered more a means to an end, using an open-source software solution seems logical, even in the safety- and security-critical aerospace and defense industry. Open-source software (OSS) can be downloaded for free within minutes, and there’s an internet full of advice, support and guidance. Sounds like a foolproof plan, right?

Not So Fast

There are a few issues with OSS for critical applications, some easier to remedy than others. Take the operating system, for example. Standard Linux does not have real-time capabilities. Real-Time Linux, in the form of the PREEMPT_RT patch, represents the Linux Foundation’s answer to those concerns, but questions remain about its suitability for hard real-time applications, meaning alternatives may need to be sought. In addition, the patch has yet to be refined to the point where it can be merged into the mainline Linux kernel.

Here’s where things get even trickier. Many projects must adhere to functional safety standards to achieve compliance, and civilian standards continue to gain momentum in defense applications. For example, the de facto standard for software development in commercial avionics and other airborne systems, RTCA/DO-178C “Software Considerations in Airborne Systems and Equipment Certification,” is of increasing relevance for military applications, as is adherence to ISO 26262 “Road Vehicles – Functional Safety” in the automotive domain. Each lays down a development life cycle that is managed to ensure rigor in terms of quality assurance, configuration management, change management and verification/validation:

  • Quality Management: At the heart of any critical system is a robust quality management process. As a generalization, quality management within an open-source software development environment tends to be less rigorous. Even so, compliance with a standard such as ISO 9001 ensures only that the quality of the product is repeatable, not necessarily high.
  • Change Management: The downside of anyone being able to fix bugs as they are found in open-source developments is that it bypasses formal change control processes.
  • Configuration Management: Typically, both open-source projects and more traditional developments use a repository (e.g. Git, SVN) that provides a controlled environment for the latest development version as well as for builds, release candidates and actual releases.
  • Verification and Validation (V&V): A key problem area for mission-critical software is in the V&V activities. The safety case requires evidence to support the argument that the software is fit for purpose; that is, it meets the documented requirements.

In common with many sector-specific functional safety standards, ISO 26262 describes a V-model for automotive developments (Figure 1).

This requires traceability of the requirements through the full lifecycle, verification and validation of the design, and verification and validation of the implementation. These stages are all difficult to achieve when adopting open-source solutions.

In addition, functional safety standards typically recommend adopting coding standards such as the popular guidelines published by MISRA. Empirical evidence suggests that adoption of such guidelines is rare within the open-source community, perhaps because the guidelines themselves are not open source.

Figure 1. Traditional sequence for the application of an automated tool chain to the ISO 26262 process guidelines.

The net result is that you can develop application software of exemplary quality in line with the functional safety standard of choice. However, if your operating system doesn’t also achieve that level of quality, and just as importantly provide evidence of that quality, then your system cannot be compliant.

Introducing ELISA, a Future Solution

Long faced with this conundrum, the Linux Foundation in 2019 launched the Enabling Linux in Safety Applications (ELISA) open-source project to help companies “build and certify Linux-based safety-critical applications and systems whose failure could result in loss of human life, significant property damage or environmental damage.” While the project is backed by significant supporters such as Arm, BMW and Toyota, it’s still in its early stages, so it is unfortunately not immediately helpful for current development projects.

One-Shot Adoption

Right now, an open-source operating system cannot be used, uncontrolled and incrementally, within a mission-critical project. However, this does not mean that OSS cannot be used at all.

Linux is an open-source package no different from any other externally developed software package (e.g. a library or driver): it should be treated as Software of Unknown Pedigree (SOUP), with a full V&V process applied to it and reapplied in the event of any changes to the base package, changes that should be introduced only through a managed change-control process.

The traditional application of formal test tools is illustrated in the ISO 26262 ‘V’ model diagram (Figure 1). A team facing the task of building a standards-compliant application on an open-source operating system will need to follow a more pragmatic approach, not only because the code already exists, but because the code itself may well be the only available detailed “design document.”

In general, for a library or a driver, that might be a practical proposition. But for a safety-critical application, will such a development team really want to spend their time reverse engineering an operating system?

A Pragmatic Solution for Today

In a proof-of-concept development, the use of Linux in some form or another is almost inevitable. What happens after that depends primarily on timescales. How long will the development take? And when will ELISA be ready?

Figure 2. Performing unit test with the LDRA tool suite.

Portability is one of the major advantages of any development that complies with the Portable Operating System Interface (POSIX). That portability makes it practical for even an extended development team to continue to leverage Linux when application development moves from a sandbox “hack it and see” approach to a formally documented, compliant development lifecycle. This includes performing all the verification and validation activity demanded by the functional safety standard of choice, including requirements tracing, the application of coding standards, unit test (Figure 2), and structural coverage analysis.

Eventually the product has to be readied for market. If ELISA has delivered by then, the road ahead is clear. If not, there are several POSIX-conformant, commercially available RTOSes, such as those from QNX and Lynx, that are certified for use. In principle, it is then simple to recompile the application for the commercial RTOS of choice, re-run the dynamic analysis tests in the new environment, and go to market.

To make this practical, consider these two points early in the project:

POSIX compliance and conformance: Figure 3 highlights the subtle mismatches possible between “compliance” and “conformance,” capturing the blurring of boundaries between what is defined by the POSIX specification and what is implemented in practice.

Figure 3. The Open Group’s illustration of architecture compliance and conformance.

This raises questions about assumed portability. For example, if you’ve developed a system deploying an RTOS that includes non-conformant features, any change of RTOS is likely to involve at least a partial rewrite.

Now suppose that your original system used a fully POSIX-conformant RTOS and your selected replacement is compliant, but not fully conformant. How can you be sure the new OS implements all of the features leveraged in the code base?

Fully automated test and requirements traceability: “Re-run the dynamic analysis tests in the new environment” is something of a throwaway phrase, but if all of those tests have been performed manually, the implied overhead could be considerable even if things go smoothly.

Figure 4. Impact analysis of changing requirements with the LDRA tool suite.

Now suppose that the shift to the certified RTOS of choice has necessitated a partial rewrite. Keeping track of any implications for requirements, design and test could easily become a project management headache at exactly the time in the project when it is least welcome.

Ensuring a fully integrated, automated approach to test and requirements traceability can minimize that impact, making the identification of necessary retests easy and their execution a simple matter of rerunning them.

Right now, using open-source software like Linux for the most safety-critical applications is not an option, but that doesn’t mean it can’t be used while developing the application. This might all change with ELISA, but if not, the portability inherent in POSIX offers a path for the transition from PoC to a certifiable project.

Making that a practical proposition requires careful use of POSIX features, and a seamless mechanism for retest if and when the time comes to port the application from the OSS to a standards-compliant alternative.

This article was written by Mark Pitchford, Technical Specialist, LDRA (Wirral, UK).


This article first appeared in the June 2021 issue of Aerospace & Defense Technology Magazine.
