Making Sense Out of SOUP (Software of Unknown Pedigree)

Software test tools have traditionally been designed with the expectation that the code has been (or is being) designed and developed following a best-practice development process. Legacy code turns that ideal process on its head. Although such code is a valuable asset, it is likely to have been developed on an experimental, ad hoc basis by a series of “gurus”: experts who prided themselves on getting things done and on knowing the application inside out, but who were not necessarily versed in modern development thinking and had little patience for complete documentation. That doesn’t sit well with the requirements of standards such as DO-178B.

Frequently, this legacy software, known as software of unknown pedigree (SOUP), forms the basis of new developments that must meet modern coding standards and may deploy updated target hardware and development tool chains. The need to leverage the value of SOUP presents its own unique set of challenges.

The Dangers of SOUP

Many SOUP projects will initially have been subjected only to functional system testing, leaving many code paths unexercised and leading to costly in-service corrections. Even in the field, it is highly likely that the circumstances required to exercise much of the code have never occurred; in-field use has therefore amounted to little more than an extension of that functional system testing.

When there is a requirement for ongoing development of legacy code, previously unexercised code paths are likely to be called into use by combinations of data never previously encountered (Figure 1). Given that commercial pressures often rule out a rewrite, what options are available?

Static and Dynamic Analysis

Figure 1. Code exercised both on site and by functional testing is likely to include many unproven execution paths. Code enhancements are prone to call previously unexercised paths into service.

Some static analysis tools use mathematical techniques to try to verify all possible execution paths through a program. It is true that some problems in simpler code sections can be readily isolated and corrected in this way, giving some level of warm, fuzzy feeling that the software is reasonably robust. However, where source code is complex, such tools rely ever more heavily on data approximations and hence raise many warnings. Each of these warnings requires the user to confirm or deny the existence of a problem in the most complex parts of the application, which represents considerable overhead.
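
Consider, as a purely illustrative sketch, the kind of fragment that provokes such a warning; the function names and values here are hypothetical:

    #include <stdio.h>

    int scale(int reading, int divisor)
    {
        /* A path-based analyzer may warn of a possible division by
         * zero here, even though every caller in this program
         * guarantees divisor != 0. A human must confirm or deny it. */
        return reading / divisor;
    }

    int main(void)
    {
        int raw = 100;
        int gain = 4;               /* never zero in this program */
        if (gain != 0) {
            printf("%d\n", scale(raw, gain));
        }
        return 0;
    }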

Even if overhead weren’t an issue, these tools provide no evidence that the code is functionally correct! A better alternative involves the use of static and dynamic test tools, with their traditional sequence of use (Figure 2) modified to suit the task. Even where all source code remains identical, a new compiler or new target hardware can introduce unintentional changes in functionality with potentially disastrous results. The challenge is to identify the building blocks within the test tools that can be used in an appropriate sequence to aid the efficient enhancement of SOUP.

There are five major considerations:

1. Improving the Level of Understanding

The system visualization facilities provided by modern test tools are extremely powerful. Static call graphs provide a hierarchical illustration of the application and system entities, and static flow graphs show the control flow across program blocks.

Figure 2. Traditional sequence for the use of test tools as applied to a “V” model.

Such call graphs and flow graphs are just one product of a comprehensive analysis of all the parameters and data objects used in the code. When work begins on enhancing functionality, this information is vital for isolating and understanding the procedures and data structures that will be affected.
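
As a minimal sketch of what such visualization conveys, consider this illustrative program. Its static call graph would show main at the root with read_sensor, filter, and log_value beneath it, while flow graphs would trace the control flow within each procedure:

    #include <stdio.h>

    static int read_sensor(void) { return 42; }           /* leaf node */
    static int filter(int raw)   { return raw / 2; }      /* leaf node */
    static void log_value(int v) { printf("%d\n", v); }   /* leaf node */

    int main(void)
    {
        /* Root of the call graph: calls three leaf procedures. */
        int v = filter(read_sensor());
        log_value(v);
        return 0;
    }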

2. Enforcing New Standards

When new developments are based on existing SOUP, it is likely that coding standards will have been enhanced in the intervening period. Code review analysis can highlight contravening code. It may be that enforcing an internationally recognized coding standard on SOUP is too onerous, in which case a subset compromise is preferred. It is then possible to apply a user-defined set of rules that could simply be less demanding or that could, for instance, place particular focus on portability issues.

Where legacy code is subject to continuous development, a progressive transition to a higher ideal may then be made by periodically adding more rules with each new release, so that the impact on incremental functionality improvements is kept to a minimum. Test tools enable code to be corrected to adhere to such rules as efficiently as possible. Using a “drill-down” approach, they provide a link between the description of a violation in a report and an editor opened at the relevant line of code.
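
For example, a user-defined rule set focused on portability might flag reliance on implementation-defined integer widths. The rule and the identifiers below are hypothetical, but they illustrate the kind of contravention a code review analysis would report:

    #include <stdint.h>

    /* Contravention: the width of 'long' is implementation-defined,
     * so this legacy declaration behaves differently on 16-, 32-,
     * and 64-bit targets. */
    long crc_accumulator;

    /* Compliant: the fixed-width type is identical on every target. */
    uint32_t crc_accumulator_fixed;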

3. Ensuring Adequate Code Coverage

Figure 3. Adequate structural coverage throughout the code provides reassurance that changes in data or functionality will not expose untested code paths.

As discussed, code proven in service has effectively been subjected only to extensive “functional testing”. Structural coverage addresses this issue by testing equally across the sources, treating each path through them as equally likely to be called into use. Although not offering a complete solution, system-wide functional testing exercises many paths and provides a logical place to start.

The test tool takes a copy of the code under test and implants additional procedure calls (“instrumentation”) to identify the paths exercised during execution. Colored graphs complement reports to give an insight into the code tested and into the nature of data required to ensure additional coverage.
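
What follows is a sketch of what instrumented code might look like. The coverage_mark() probe and its numbering are hypothetical rather than any particular vendor's scheme, but the principle, recording which branches execute, is the same:

    void coverage_mark(int probe_id);   /* records that a point was reached */

    int clamp(int value, int limit)
    {
        coverage_mark(1);               /* function entry */
        if (value > limit) {
            coverage_mark(2);           /* true branch exercised */
            value = limit;
        } else {
            coverage_mark(3);           /* false branch exercised */
        }
        coverage_mark(4);               /* function exit */
        return value;
    }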

Manually constructed unit tests can be used to ensure that each part of the code functions correctly in isolation. However, the time and skill involved in constructing a harness to allow the code to compile can be considerable. Modern unit test tools minimize that overhead by automatically constructing the harness code within a GUI environment and providing details of the input and output data variables to which the user may assign values. The result can then be exercised on either the host or target machine.
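
A minimal hand-written equivalent of such a harness might look like the following, reusing the illustrative clamp() procedure from above; in practice the tool generates this scaffolding automatically and the user simply assigns the input values:

    #include <assert.h>
    #include <stdio.h>

    int clamp(int value, int limit);    /* the unit under test */

    int main(void)
    {
        assert(clamp( 5, 10) ==  5);    /* below the limit: unchanged */
        assert(clamp(15, 10) == 10);    /* above the limit: clamped */
        printf("all unit tests passed\n");
        return 0;
    }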

To complement system test, it is possible to apply code instrumentation to unit tests and hence exercise those parts of the code which have yet to be proven. This is equally true of code which is inaccessible under normal circumstances, such as exception handlers. Sequences of these test cases can be stored, and they can be automatically exercised regularly to ensure that ongoing development does not adversely affect proven functionality.
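
Defensive code is a case in point. In the illustrative function below, no well-behaved caller ever passes a null pointer, so system testing alone would leave the handler unexercised; a unit test can drive it directly:

    #include <assert.h>
    #include <stddef.h>

    #define ERR_NULL_INPUT (-1)         /* illustrative error code */

    int checksum(const unsigned char *buf, size_t len)
    {
        if (buf == NULL) {
            return ERR_NULL_INPUT;      /* unreachable in normal use */
        }
        int sum = 0;
        for (size_t i = 0; i < len; i++) {
            sum += buf[i];
        }
        return sum;
    }

    int main(void)
    {
        /* Exercise the defensive branch that in-service operation
         * may never reach. */
        assert(checksum(NULL, 0) == ERR_NULL_INPUT);
        return 0;
    }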

4. Dealing With Compromised Modularity

Figure 4. Color-coded graphical information clearly identifies unexercised code.

In some SOUP applications, structure and modularity may have suffered, challenging the notion of testing functional or structural subsections of that code. However, unit test tools can be very flexible, and the harness code constructed to drive test cases can include as much of the source code base as necessary. That ability alone may be sufficient for the task at hand. If there is a longer-term goal of improving overall software quality, then instrumented code can help to show which execution paths are taken when different input parameters are passed into a procedure, either in isolation or in the broader context of its calling tree.

5. Ensuring Correct Functionality

Perhaps the most important aspect of SOUP-based development is ensuring that all aspects of the software function as expected, despite changes to the code, the compiler, the target hardware, or the data handled by the application. Even with the aid of test tools, generating unit tests for the whole code base may involve more work than the budget will accommodate. However, the primary aim here is not to check that each procedure behaves in a particular way; it is to ensure that there have been no inadvertent changes to functionality.

By statically analyzing the code, test tools can automatically generate test cases to exercise a high percentage of the paths through it. Input and output data to exercise the procedures are generated automatically and then retained for future use.

These test cases can then be used to perform batch regression testing, ensuring that when the same tests are run on the code under development, there are no unexpected changes. The regression tests provide the cross-reference back to the functionality of the original source code.
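
A sketch of the principle, with hypothetical file names and the illustrative clamp() procedure standing in for automatically generated tests: inputs and baseline outputs captured from the original code are replayed against the code under development, and any mismatch flags a functionality change:

    #include <stdio.h>
    #include <stdlib.h>

    int clamp(int value, int limit);    /* unit under regression test */

    int main(void)
    {
        FILE *in  = fopen("generated_inputs.txt", "r");   /* value, limit */
        FILE *ref = fopen("baseline_outputs.txt", "r");   /* expected result */
        if (in == NULL || ref == NULL) {
            fprintf(stderr, "missing regression test data\n");
            return EXIT_FAILURE;
        }

        int value, limit, expected, failures = 0;
        while (fscanf(in, "%d %d", &value, &limit) == 2 &&
               fscanf(ref, "%d", &expected) == 1) {
            if (clamp(value, limit) != expected) {
                failures++;             /* functionality has drifted */
            }
        }
        fclose(in);
        fclose(ref);

        printf("%d regression failure(s)\n", failures);
        return failures == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }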

Conclusion

In an ideal world, test tools should be applied from the beginning of a structured and formal development process. However, sometimes commercial realities mean that legacy SOUP code is used as a basis for further development, perhaps to DO-178B standards. By using those same test tools in a pragmatic way in these circumstances, it is possible to develop legacy SOUP code into a sound set of sources that are proven to be fit for purpose, all in an efficient and cost-effective manner.

This article was written by Mark Pitchford, Field Applications Engineer, LDRA (San Bruno, CA). For more information, visit http://info.hotims.com/28051-400.