Features

Aerospace and defense projects are among the most challenging to complete: the need to meet the most stringent requirements for reliability, safety, and security results in lengthy software development schedules. In response to the increasing size and complexity of software used in airborne systems, the guidance document for certifying such systems has gone through numerous revisions, the latest being DO-178C.

Table 2-1 of the document dictates that for a system to be compliant, it must be assigned one of five failure condition categories, proportionate to the hazard associated with failure of that system. Severity ranges from “Catastrophic”, which could involve multiple fatalities and loss of the aircraft, to “No Effect” on safety. Each category is mapped to an associated Design Assurance Level (DAL) from A (Catastrophic) to E (No Effect), so that the DAL assigned to a system reflects the level of quality assurance required in its production. Objectives are detailed for each DAL throughout the development process, including requirements specification, design, coding, life cycle traceability, and verification. Thorough, robust testing is a must, and a comprehensive suite of analysis, test, and traceability tools is similarly essential.

DO-178C requires that source code be written in accordance with a set of rules (a “coding standard”) that can be analyzed, tested, and verified for compliance. It does not mandate a particular standard, but it does require a programming language with unambiguous syntax, clear control of data, definite naming conventions, and constraints on complexity. The most popular coding standards are MISRA C and MISRA C++, which now include guidelines for software security, but there are alternatives, including the JSF AV C++ coding standard (JSF++) used on the F-35 Joint Strike Fighter and beyond. The Ada language has its own equivalents, such as the SPARK language subset and the Ravenscar tasking profile, both designed for safety-critical, hard real-time computing.
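To illustrate the kinds of constructs such rules target, the hypothetical C fragment below contrasts a non-compliant function with a compliant rewrite; the style choices shown (fixed-width types, explicit casts, braced bodies) are typical of MISRA C guidance, though no specific rule numbers are claimed:

#include <stdint.h>

/* Non-compliant style: constructs of the kind typical MISRA C rules flag. */
uint8_t scale_bad(uint16_t raw)
{
    uint8_t out;
    if (raw > 255u)
        out = 255;          /* unbraced body; implicit int-to-uint8_t conversion */
    else
        out = raw;          /* implicit narrowing conversion */
    return out;
}

/* Compliant style: braced bodies, explicit casts, fixed-width types. */
uint8_t scale_good(uint16_t raw)
{
    uint8_t out;
    if (raw > 255u)
    {
        out = (uint8_t)255u;
    }
    else
    {
        out = (uint8_t)raw;
    }
    return out;
}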

The static analysis of source code may be thought of as an “automated inspection”, as opposed to the dynamic test of an executable binary derived from that source code. A static analysis tool will ensure that source code adheres to the specified coding standard, thereby meeting that DO-178C objective. It also provides the means to show that other objectives have been met, for example by measuring the complexity of the code and by deploying data flow analysis to detect uninitialized or unused variables and constants.
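As a hypothetical illustration of what data flow analysis detects without ever executing the code, consider the following fragment, in which one path reads a variable before it is written and another variable is assigned but never used:

#include <stdint.h>

int32_t clamp(int32_t value, int32_t mode)
{
    int32_t limit;          /* flagged: uninitialized on the mode != 1 path */
    int32_t spare = 0;      /* flagged: assigned but never used */

    if (mode == 1)
    {
        limit = 100;
    }
    /* no else branch: when mode != 1, 'limit' is still uninitialized here */

    return (value > limit) ? limit : value;
}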

Verification Beyond Coding Standards

Figure 1. A requirements traceability tool spanning requirements, code, and tests, and integrated with test management, code coverage, and data and control coupling, can provide transparency and reduce compliance risk.
Figure 2. The LDRA tool suite can visualize control flow and code coverage in both C source code and the associated assembly code, graphically illustrating changes in control flow.

As well as providing these important metrics on the nature of the source code, detailed and comprehensive static analysis also ensures that the code base is one on which other tools can build, making a similar contribution to the development of critical avionics systems that are certifiably reliable, safe, and secure. For example, DO-178C mandates a requirements-based development and verification process that incorporates bi-directional requirements traceability across the lifecycle. High-level requirements must be traceable to low-level requirements and ultimately to source code, and source code must be traceable back up to low- and high-level requirements (Figure 1). Source code written in accordance with coding standards will generally lend itself to such mapping.
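One way such mapping is supported in practice is to associate requirement identifiers with the code that implements them, for example through structured comments that a traceability tool can harvest. The tags, identifiers, and constant value in this sketch are purely illustrative, standing in for whatever convention a given tool recognizes:

#define MAX_RATE 5.0f   /* deg/s; illustrative value */

/* @req HLR-042   "The system shall limit commanded pitch rate."     */
/* @req LLR-042-3 "pitch_rate_cmd shall be clamped to +/- MAX_RATE." */
float clamp_pitch_rate(float pitch_rate_cmd)
{
    if (pitch_rate_cmd > MAX_RATE)
    {
        pitch_rate_cmd = MAX_RATE;
    }
    else if (pitch_rate_cmd < -MAX_RATE)
    {
        pitch_rate_cmd = -MAX_RATE;
    }
    return pitch_rate_cmd;
}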

Static analysis also establishes an understanding of the structure of the code and the data, which is not only useful information in itself but also an essential foundation for the dynamic analysis of the software system and its components. In accordance with DO-178C, the primary thrust of dynamic analysis is to prove the correctness of code relating to functional safety (“functional tests”) and thereby show that the functional safety requirements have been met. Given that compromised security can have safety implications, functional testing will also demonstrate robust security, perhaps by means of simulated attempts to gain control of a device, or by feeding it incorrect data that would alter its mission. Functional testing also provides evidence of robustness in the face of the unexpected, such as illegal inputs and anomalous conditions.
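A minimal sketch of the distinction, assuming a hypothetical set_heading interface: the functional case confirms that a legal input produces the commanded behavior, while the robustness cases confirm that illegal inputs are rejected rather than acted upon.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical unit under test: accepts a heading in [0, 359] degrees. */
static int32_t current_heading = 0;

static bool set_heading(int32_t degrees)
{
    if ((degrees < 0) || (degrees > 359))
    {
        return false;           /* robustness: reject illegal input */
    }
    current_heading = degrees;
    return true;
}

int main(void)
{
    /* Functional case: a legal input is accepted and applied. */
    assert(set_heading(270) && (current_heading == 270));

    /* Robustness cases: illegal inputs are rejected, state unchanged. */
    assert(!set_heading(-1) && (current_heading == 270));
    assert(!set_heading(720) && (current_heading == 270));
    return 0;
}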

Structural Coverage Analysis (SCA) is another key DO-178C objective, involving the collation of evidence to show which parts of the code base have been exercised during test. The verification of requirements through dynamic analysis provides a basis for the initial stages of structural coverage analysis. Once functional test cases have been executed and passed, the resulting code coverage can be reviewed, revealing which code structures and interfaces have not been exercised during requirements-based testing. If coverage is incomplete, additional requirements-based tests can be added to fully exercise the software structure. Additional low-level requirements-based tests are often added to supplement functional tests, filling in gaps in structural coverage whilst verifying software component behavior.
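As a hypothetical example of such a gap, suppose the functional tests derived from the requirements only ever supply in-range sensor readings; coverage data would then show the fault branch unexercised, prompting an additional low-level test:

#include <stdbool.h>
#include <stdint.h>

bool sensor_valid(int32_t reading)
{
    if ((reading >= -50) && (reading <= 150))
    {
        return true;    /* exercised by nominal functional tests */
    }
    return false;       /* uncovered until a low-level test supplies an
                           out-of-range reading, e.g. sensor_valid(200) */
}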

SCA also helps reveal “dead” code that cannot be executed regardless of the inputs provided by test cases. There are many reasons why such code may exist, such as errors in an algorithm or the remnants of a change of approach by the coder, but none is justifiable from the perspective of DO-178C, and the code must be addressed: inadvertent execution of untested code poses significant risk.
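A hypothetical example of the kind of dead code SCA exposes: the second check can never succeed because the first has already capped the value.

#include <stdint.h>

int32_t saturate(int32_t x)
{
    if (x > 1000)
    {
        x = 1000;
    }
    if (x > 2000)       /* dead: x cannot exceed 1000 at this point */
    {
        x = 2000;       /* never executed for any input */
    }
    return x;
}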

The structure and control flow of object code generated by compilers is often not directly traceable to source code statements. For the most demanding DAL A projects, DO-178C requires additional verification to ensure the correctness of the object code elements which do not directly map to the source code, and it is helpful to have tools that can highlight such elements. Because there is a direct one-to-one relationship between object code and assembly code, one way for a tool to represent this is to display a graphical representation of the source code alongside the equivalent representation of the assembly code (Figure 2).
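As an illustration of why the mapping is not one-to-one, a short-circuit expression occupying a single source statement typically compiles into multiple conditional branches. The assembly sketched in the comment below is indicative only, not the output of any particular compiler or target:

#include <stdbool.h>

bool in_envelope(int alt, int speed)
{
    /* One source statement, but a compiler typically emits two separate
     * conditional branches for the short-circuit '&&', e.g.:
     *
     *     cmp   alt, #10000
     *     bgt   .reject        ; branch 1
     *     cmp   speed, #250
     *     bgt   .reject        ; branch 2, with no distinct source statement
     */
    return (alt <= 10000) && (speed <= 250);
}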

On the surface, coverage analysis might seem as simple as assessing whether every statement has been exercised, but it is not quite so straightforward. Merely checking that every line of code has executed reveals nothing about the structure of execution and does not clearly relate back to the requirements. We must turn to branch/decision testing and its cousin, Modified Condition/Decision Coverage (MC/DC). Adequate MC/DC coverage requires invoking enough combinations of the conditions within a decision to demonstrate that each condition can independently affect the outcome.
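For example, consider a decision over three conditions, (a && b) || c. A minimal MC/DC test set needs only N + 1 = 4 vectors for N = 3 conditions; in the sketch below (not tied to any particular tool), each condition has a pair of vectors that differ only in that condition and flip the outcome:

#include <assert.h>
#include <stdbool.h>

/* Decision with three conditions. */
static bool deploy(bool a, bool b, bool c)
{
    return (a && b) || c;
}

int main(void)
{
    assert(deploy(true,  true,  false) == true);   /* baseline                        */
    assert(deploy(false, true,  false) == false);  /* flipping a alone flips outcome  */
    assert(deploy(true,  false, false) == false);  /* flipping b alone flips outcome  */
    assert(deploy(false, true,  true)  == true);   /* flipping c alone flips outcome  */
    return 0;
}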
