Current practice in certification of avionics systems (e.g. DO-178B [1]) is centered on testing. The testing process has been prescribed to incorporate a semi-formal notion of completeness to demonstrate that the software correctly implements all its requirements, and nothing further. But in the last analysis, our confidence in modern avionics is based primarily on the successful completion of a test suite that, although extensive, is incomplete.

We all know that from a formal point of view, testing can never give 100% confidence in software correctness since it is impossible for tests to cover all cases. In practice, the DO-178B approach has been highly successful, but we have had some close calls that remind us that the process is not infallible [2]. As avionics systems get more complex, and as the software is developed using advanced techniques such as object-oriented technology and modeling languages, it is unclear whether testing alone will be sufficient in the future.

But other potentially valuable approaches are available. Static analysis, in the form of sophisticated tools that analyze application programs to detect possible problems, can help, though in only a limited fashion if the analysis is not guaranteed to be complete. A tool that promises to detect 90% of buffer overruns merely leaves us worrying even more about the 10% it has not detected.

Another important approach is the application of formal methods, which allow us to demonstrate mathematically that certain properties of programs are satisfied. However, the notion of proving an entire program correct in such a mathematical sense is a mirage. Even if we prove that a program correctly implements some complete formal specification, we have no way of ensuring that the specification itself is correct. Nevertheless, formal methods can definitely play a significant role, as is shown by the SPARK language and toolset [3]. Importantly for safety- or security-critical systems, the SPARK tools are sound: if they do not detect an error (such as a buffer overrun), then the program does not contain any such error. SPARK is currently being used in building the iFACTS [4] air traffic control system in England, applying formal methods to prove freedom from such defects as buffer overruns and arithmetic overflow.
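
To make this concrete, here is a minimal sketch in plain Ada, in the spirit of what the SPARK tools verify; the type and subprogram names are illustrative and not taken from iFACTS. Because Index is declared with the same range as the array's index type, a sound tool can prove that the assignment can never overrun the buffer:

    --  Illustrative only: the names and bounds are hypothetical.
    type Index_Range is range 1 .. 100;
    type Buffer is array (Index_Range) of Integer;

    procedure Clear_Element (B : in out Buffer; Index : in Index_Range) is
    begin
       --  Index is known to lie within Index_Range, so this assignment
       --  is provably free of buffer overrun.
       B (Index) := 0;
    end Clear_Element;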

The Hi-Lite Project

Individually, the approaches embodied in testing, static analysis, and formal methods are useful but do not provide a complete solution. In combination, however, these techniques can complement each other and thus advance the state of the practice in developing modern avionics systems and other critical software. An effort to build a toolset supporting such a combined approach, and in particular to ease the transition to and adoption of formal methods for software verification, is currently in progress. Named "Hi-Lite" [5], this project is being conducted by a consortium of partners consisting of AdaCore, Altran Praxis, CEA LIST, Astrium Space Transportation, INRIA ProVal, and Thales Communications. Hi-Lite fits in well with the upcoming DO-178C [6] avionics safety standard, a revision to DO-178B, which, among other things, accounts for technologies such as formal methods. Static verification through formal methods may reduce the need for later testing (e.g. robustness tests).

[Figure: Annotations may be derived from several sources (User Input, Inferred by Static Analysis, Generated with Code from Model), shown at the top level of the diagram.]
A key part of the Hi-Lite project is to create a common annotation language that can be used in an integrated fashion to coordinate the different approaches. The idea behind annotations is to augment a program with information about what the program is supposed to do, and how it is supposed to behave. Of course, we have always written natural language comments for these sorts of things, but natural language comments cannot be processed by tools. Instead, we need to formalize these comments so they can be read and interpreted by the toolset.

Annotations state properties such as expected ranges of values at certain locations in a program, conditions that must always hold at some point, or conditions that must hold as invariants throughout the execution of a program. Some languages have such capabilities built in. For example, in Ada you can explicitly specify a variable’s range of values, and a runtime error will be generated if the variable is assigned a value outside its range. Another useful annotation, found in Ada and some other languages, is the assert statement. This provides an optionally executed test that is not part of the fundamental logic, but is evaluated to check that the program is behaving as expected. Annotations may be used to facilitate formal proofs of program properties; this is illustrated by SPARK, which is a subset of Ada augmented by an extensive annotation language.
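
As a minimal sketch of these built-in Ada facilities (the subtype and variable names here are purely illustrative):

    procedure Range_And_Assert_Demo is
       --  A constrained subtype: assigning a value outside 0 .. 100
       --  raises Constraint_Error at run time.
       subtype Percentage is Integer range 0 .. 100;
       Fuel_Level : Percentage := 50;
    begin
       Fuel_Level := Fuel_Level + 25;
       --  An assert: optionally executed, not part of the fundamental
       --  logic, checked only when assertions are enabled.
       pragma Assert (Fuel_Level = 75);
    end Range_And_Assert_Demo;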

Annotations may be derived from several sources, as shown in the diagram. User Input refers to annotations manually added by the programmer, who is certainly in a good position to know what the program is supposed to be doing. These are analogous to traditional comments, but are written in a formal notation instead of natural language.

Inferred by Static Analysis refers to annotations that can be automatically generated by static analysis tools. For instance, if a tool can determine that a function Day_In_Month will blow up if given an input argument outside the range 1 through 366, then it can generate a precondition specifying that callers must ensure this range condition is met.
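
One way such a generated precondition might be expressed, using the contract notation planned for the upcoming Ada standard (see below), is sketched here; only the name Day_In_Month and the 1 through 366 range come from the example above, the rest of the profile is hypothetical:

    --  Hypothetical profile; the precondition obliges every caller
    --  to supply a day-of-year value in 1 .. 366.
    function Day_In_Month (Day_Of_Year : Positive) return Positive
      with Pre => Day_Of_Year in 1 .. 366;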

Finally, Generated with Code from Model refers to the situation where modeling tools are used not only to produce executable code in some language such as C or Ada, but also to output additional annotations about the expected behavior of this generated code.

Having a common language for annotations is a primary goal of the Hi-Lite project. The idea is to encompass the requirements for a future version of SPARK, while remaining compatible with the annotation facility being added in the upcoming version of the Ada language standard.

Once the annotation mechanism is formalized, it can serve several different goals in an integrated manner. First, annotations facilitate testing, since they can be compiled into executable code. We can run the program and make sure that these executable tests do not fail. If we detect a failure, then the code disagrees with the annotations, and this discrepancy must be resolved. Second, annotations can be exploited by static analysis tools to increase the power of the deductions they can make. Such tools can automatically reason along the lines of "given that this annotation says X, then subsequent code must do Y, but it does Z, which is inconsistent." Third, these annotations can be used as the basis for formal methods approaches: instead of treating them as tests that must hold at runtime, we regard them as formal properties of the program to be proved.
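
A single contract can serve all three purposes. In the sketch below (the names and bounds are invented for illustration), the Pre and Post aspects can be compiled into run-time checks during testing, used by a static analyzer to reason about callers, or treated by a proof tool as properties to be verified once and for all:

    --  Illustrative only: names and bounds are hypothetical.
    function Scaled_Sum (A, B : Natural) return Natural
      with Pre  => A <= 1_000 and B <= 1_000,
           Post => Scaled_Sum'Result <= 2_000;

    function Scaled_Sum (A, B : Natural) return Natural is
    begin
       return A + B;  --  given Pre, the result cannot exceed 2_000
    end Scaled_Sum;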

All too often, the software world is confronted by suggestions of new techniques that imply that existing techniques must be discarded. It is as though someone suggests a marvelous new plumbing tool to a plumber, but then demands that the plumber abandon all their existing tools. Hi-Lite is a unified attempt to draw on the best of existing approaches, and to extend and combine them. Complex avionics systems of the future are going to require new approaches, and we believe the Hi-Lite approach can play an important part in shaping avionics technology in the 21st century.

This article was written by Robert B.K. Dewar, President and CEO of AdaCore, New York, NY.

References

  1. RTCA DO-178B, Software Considerations in Airborne Systems and Equipment Certification, December 1992.
  2. Flight Safety Foundation, Incident Description, Malaysia Airlines Flight 124, August 1, 2005. http://aviation-safety.net/database/record.php?id=20050801-1
  3. John Barnes, High Integrity Software: The SPARK Approach to Safety and Security, Addison-Wesley, June 2003.
  4. Neil White, The Use of Formal Methods on the iFACTS ATC Project, March 2010. http://www.open-do.org/2010/04/20/the-use-of-formal-methods-on-the-ifacts-air-traffic-control-project/
  5. Project Hi-Lite: Simplifying the Use of Formal Methods. http://www.open-do.org/projects/hi-lite/
  6. RTCA SC-205/EUROCAE WG-71, DO-178C. http://ultra.pr.erau.edu/SCAS/

Defense Tech Briefs Magazine

This article first appeared in the August, 2010 issue of Defense Tech Briefs Magazine.
