Security concerns currently dominate software thinking wherever sensitive or safety-critical information is potentially accessible. Embedded software is no exception. Security researcher Barnaby Jack demonstrated this in 2011 when he used a modified antenna and software to wirelessly attack and take control of Medtronic’s implantable insulin pumps, showing how such a pump could be commanded to release a fatal dose of insulin. Obviously, that vulnerability puts dependent diabetics at risk.

Table 1. ISO 15408 defines a range of Evaluation Assurance Levels (EALs), which determine the process rigor associated with each software component.
But what about military vehicles? Their vulnerability raises security concerns to a whole new level. The strategic advantage given to an enemy capable of interfering with or interrogating military vehicle positioning and tracking systems could jeopardize the safety of the driver and crew. Even in unmanned vehicles, the exposure of information about the vehicle’s intended course could compromise an entire military strategy.

MIL-STD-1180B is concerned with military vehicles and has been in existence since 1986, before any of this became a concern. Yet its demand for “full consideration… to military mission requirements” is perhaps more relevant now than ever before, especially in the context of the United States’ 2013 National Defense Authorization Act. While that act may be primarily concerned with large IT systems, the threat posed to embedded software can be every bit as real.

Safety considerations impact any discussion on secure software simply because safety and security are so interwoven. From a developer’s perspective, many unsafe software practices are also insecure, and vice versa. Similarly, a diabetic end user suffering the consequences of an incorrect dose from an insulin pump will consider it unsafe, whatever the technicalities of the software’s safety or security vulnerability.

It follows that security issues often add an extra level of complexity to be considered in tandem with adherence to existing safety-related software standards. It is, therefore, useful to consider how this confusing array of standards helps to address rising concerns over embedded software security.

What the Standards Suggest

Figure 1. Building secure code by eliminating known weaknesses.
There are two kinds of standards to consider:

  • Process standards describe the development processes to be followed to ensure that the finished product is written to behave in a safe manner (such as IEEE/EIA 12207, or ISO 26262 for road vehicles) or a secure manner (ISO 15408)
  • Coding standards describe a high-level programming language subset (MISRA C, secureC) that ensures the software is written as safely and securely as possible

There are many widely agreed principles when it comes to best practice for software development, whether that software is required to be high integrity or not. In the high-integrity markets, process standards originally designed for safety-critical work provide a sound basis for security-critical software too, provided that security risk considerations replace or supplement safety risk assessment.

This assertion is underlined by the Automation Standards Compliance Institute’s recommendations. These suggest the implementation of a Software Development Security Assessment by referring to existing process standards such as ISO 26262, and superimposing best practice security measures on them.

ISO 15408 (also known as the “Common Criteria” with reference to the merged documents from which it was derived) is an international process-oriented standard that defines IT security requirements. Reference to that standard underlines the similarity between security- and safety-related software development, with the seven Evaluation Assurance Levels (EALs) of ISO 15408 being highly analogous to the concept of Safety Integrity Levels adopted in such standards as ISO 26262.

The process standards present a path to building security into the whole development process. In turn they require the use of coding standards as typified by language subsets such as CERT C, secureC and MISRA C:2012. These coding standards consist primarily of lists of constructs and practices for developers to avoid in order to ensure high-integrity code.

Building Security In

Most software development focuses on building high-quality software, but high-quality software is not necessarily secure software. Testing is generally used to verify that the software meets each requirement, but security problems can persist even when the functional requirements are satisfied. Indeed, software weaknesses often arise from unintended functionality of the system.

Building secure software requires adding security concepts to the quality-focused software development lifecycles promoted by standards such as ISO 26262, so that security is considered a quality attribute of the software under development. Building secure code is all about eliminating known weaknesses (Figure 1), including defects, so by necessity secure software is high-quality software.

Security must be addressed at all phases of the software development lifecycle, and team members need a common understanding of the security goals for the project and the approach that will be taken to do the work.

Figure 2. Secure Coding in the Iterative Lifecycle.
The starting point is an understanding of the security risks associated with the domain of the software under development. This is determined by a security risk assessment, a process that ensures the nature and impact of a security breach are assessed prior to deployment in order to identify the security controls necessary to mitigate any identified impact. The identified security controls then become system requirements.

Adding a security perspective to software requirements ensures that security is included in the definition of system correctness that then permeates the development process. A specific security requirement might be to validate all user string inputs to ensure that they do not exceed a maximum string length. A more general one might be to withstand a denial-of-service attack. Wherever on that spectrum a requirement sits, it is crucial that its evaluation criteria are identified for the implementation.
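As a concrete illustration of the string-length requirement above, a validation check of this kind can be sketched in C. The 16-character limit and the function names here are illustrative assumptions, not drawn from any particular standard; the key point is that the length check never reads past the agreed bound, so even an unterminated buffer cannot trigger an out-of-bounds read.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_INPUT_LEN 16U  /* illustrative limit; a real requirement would fix this value */

/* Bounded string length: never examines more than 'max' characters,
 * so it is safe even if 's' is not NUL-terminated. */
static size_t bounded_len(const char *s, size_t max)
{
    size_t n = 0U;
    while ((n < max) && (s[n] != '\0')) {
        n++;
    }
    return n;
}

/* Returns true only if 'input' is non-NULL and NUL-terminated
 * within MAX_INPUT_LEN characters. */
bool input_length_ok(const char *input)
{
    if (input == NULL) {
        return false;
    }
    /* Scan one past the limit: a result greater than MAX_INPUT_LEN
     * means the terminator was not found within bounds. */
    return bounded_len(input, MAX_INPUT_LEN + 1U) <= MAX_INPUT_LEN;
}
```

A requirement phrased this way gives the tester an unambiguous pass/fail criterion: any input of more than 16 characters must be rejected before it reaches downstream processing.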

When translating requirements into design, it is prudent to consider security risk mitigation via architectural design. This can be in the choice of implementing technologies or by inclusion of security-oriented features, such as handling untrusted user interactions by validating inputs and/or the system responses by an independent process before they are passed on to the core processes.
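One minimal sketch of such an architectural control is a whitelist gatekeeper that maps untrusted input text onto a closed set of known commands before anything reaches the core process. The command names and types below are hypothetical, invented for illustration:

```c
#include <string.h>

/* Hypothetical command codes accepted by the core process. */
typedef enum { CMD_STATUS, CMD_RESET, CMD_INVALID } command_t;

/* Validation layer: untrusted text is mapped onto a closed set of
 * known commands. Anything outside the whitelist is rejected here,
 * before it can influence the core logic. */
command_t validate_command(const char *raw)
{
    if (raw == NULL) {
        return CMD_INVALID;
    }
    if (strcmp(raw, "STATUS") == 0) {
        return CMD_STATUS;
    }
    if (strcmp(raw, "RESET") == 0) {
        return CMD_RESET;
    }
    return CMD_INVALID;
}
```

Because the core process only ever sees values of the enumerated type, a malformed or malicious string cannot flow through to it; the attack surface is confined to the validation layer.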

The most significant impact on building secure code is the adoption of secure coding practices, including both static and dynamic assurance measures. The biggest bang for the buck stems from the enforcement of secure coding rules via static analysis tools. With the introduction of security concepts into the requirements process, dynamic assurance via security-focused testing is then used to verify that security features have been implemented correctly.

Secure Software Through Coding Standards

Figure 3. LDRA TBvision showing improper data type sign usage resulting in buffer overflow vulnerability.
In his book The CERT C Secure Coding Standard, Robert Seacord points out that there is currently no consensus on a definition for the term “software security”. For the purposes of this article, the definition of “secure software” will follow that provided by the US Department of Homeland Security (DHS) Software Assurance initiative in Enhancing the Development Life Cycle to Produce Secure Software: A Reference Guidebook on Software Assurance. They maintain that software, to be considered secure, must exhibit three properties:

  1. Dependability - Software that executes predictably and operates correctly under all conditions.
  2. Trustworthiness - Software that contains few, if any, exploitable vulnerabilities or weaknesses that can be used to subvert or sabotage the software’s dependability.
  3. Survivability (also referred to as “Resilience”) - Software that is resilient enough to withstand attack and to recover as quickly as possible, and with as little damage as possible from those attacks that it can neither resist nor tolerate.

There are many sources of software vulnerabilities, including coding errors, configuration errors, and architectural and design flaws. However, most vulnerabilities result from coding errors. In a 2004 review of the National Vulnerability Database for their paper “Can Source Code Auditing Software Identify Common Vulnerabilities and Be Used to Evaluate Software Security?”, presented at the 37th Hawaii International Conference on System Sciences, Jon Heffley and Pascal Meunier found that 64% of the vulnerabilities resulted from programming errors. Given this, it makes sense that the primary objective when writing secure software must be to build in security.

Fitting Tools Into the Process

Tools that ease and automate the path towards the development of secure applications exist for each step of the development process. The more integrated those tools are, the easier that process will be. For example, Figure 2 shows how such tools fit into an iterative development process.

Static analysis – automating static analysis and the enforcement of guidelines such as the CWE dictionary or the CERT C Secure Coding standard ensures that a higher percentage of errors is identified in less time. Static analysis tools assess the code under analysis without actually executing it (Figure 3).

They are particularly adept at identifying coding standard violations. In addition, they can provide a range of metrics that can be used to assess and improve the quality of the code under development, such as the cyclomatic complexity metric that identifies unnecessarily complex software that is difficult to test.

When using static analysis tools for building secure software, the primary objective is to identify potential vulnerabilities in code. Example errors that static analysis tools identify include:

  • The use of insecure functions
  • Array overflows
  • Array underflows
  • Incorrect use of signed and unsigned data types.
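The last item in that list is the class of error shown in Figure 3: a signed length value silently converted to an unsigned size. The sketch below illustrates the idea; the function and buffer names are hypothetical. The commented-out function shows the pattern an analyzer would flag, and the hardened version rejects negative or oversized lengths before copying.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 32U

/* Unsafe pattern (what a static analyzer flags): if 'len' is negative,
 * its implicit conversion to size_t in memcpy yields a huge value and
 * the copy overruns 'dst'.
 *
 * void copy_unsafe(char *dst, const char *src, int len)
 * {
 *     memcpy(dst, src, len);   // flagged: signed value used as a size
 * }
 */

/* Hardened version: validate sign and range before any conversion. */
bool copy_checked(char dst[BUF_SIZE], const char *src, int len)
{
    if ((dst == NULL) || (src == NULL) || (len < 0) || ((size_t)len > BUF_SIZE)) {
        return false;
    }
    memcpy(dst, src, (size_t)len);
    return true;
}
```

The defect in the unsafe form is invisible in normal testing, because test inputs rarely include a negative length; static analysis finds it by reasoning about the types rather than executing the code.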

Requirements traceability – a good requirements traceability tool is invaluable to the build-security-in process. Being able to trace requirements from their source through all of the development phases, down to the verification activities and artifacts, helps ensure the highest-quality, secure software.

Unit testing – the most effective and cheapest way of ensuring that the code under development meets its security requirements is via unit testing. Creating and maintaining the test cases required for this, however, can be an onerous task. Unit testing tools that assist in the test-case generation, execution, and maintenance streamline the unit testing process, easing the unit testing burden and reinforcing unit test accuracy and completeness.
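A minimal sketch of such a security-focused unit test is shown below, assuming a hypothetical dose-clamping function of the kind an infusion pump might use. The unit under test, its limit, and the test-case names are all invented for illustration; the pattern of checking normal, boundary, and out-of-range cases is the point.

```c
#include <assert.h>

/* Hypothetical unit under test: clamp a requested dose to a safe ceiling. */
#define MAX_DOSE_UNITS 10U

unsigned int clamp_dose(unsigned int requested)
{
    return (requested > MAX_DOSE_UNITS) ? MAX_DOSE_UNITS : requested;
}

/* Minimal unit tests: one case each for normal operation, the exact
 * boundary, and an out-of-range (fault or attack) request. */
static void test_clamp_dose(void)
{
    assert(clamp_dose(0U) == 0U);                          /* normal */
    assert(clamp_dose(MAX_DOSE_UNITS) == MAX_DOSE_UNITS);  /* boundary */
    assert(clamp_dose(50U) == MAX_DOSE_UNITS);             /* out of range */
}
```

A unit testing tool would generate and maintain cases like these automatically, and keep them in step with the requirement they verify.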

Dynamic analysis – analyses performed while the code is executing provide valuable insight into the code under analysis that goes beyond test case execution. Structural coverage analysis, one of the more popular dynamic analysis methods, has been proven to be invaluable for ensuring that the verification test cases execute all of the code under development. This helps ensure that there are no hidden vulnerabilities or defects in the code.


It is not surprising that the processes for building security into software echo the high-level processes required for building quality into software. Adding security considerations into the process from the requirements phase onwards is the best way of ensuring the development of secure code, as described in Figure 2. High-quality code is not necessarily secure code, but secure code is always high-quality code.

An increased dependence on Internet connectivity drives the demand for more secure software. With the bulk of vulnerabilities being attributable to coding errors, reducing or eliminating exploitable software security weaknesses in new products through the adoption of secure development practices should be achievable within our lifetime.

By leveraging the knowledge and experience encapsulated within the CERT C Secure Coding guidelines and the CWE dictionary, static analysis tools help make this objective both practical and cost effective. Combine this with the improved productivity and accuracy of requirements traceability, unit testing, and dynamic analysis, and the elimination of exploitable software weaknesses becomes inevitable.

This article was written by Mark Pitchford, Field Applications Engineer, LDRA (Wirral, UK).