
Embedded applications are increasingly going online. With the introduction of new embedded technologies that utilize a wide variety of communications options from Ethernet to Wi-Fi and ZigBee, there is a pressing need to secure these applications against the same problems that are inherent in any networked application.

The rub is that many of the products used in these applications have limited computing power and therefore struggle to run the same cryptographic algorithms as PCs. Even on embedded hardware that does have the resources (such as an embedded Pentium or a high-speed ARM), there is an additional security issue that every embedded systems designer must contend with: embedded devices are deployed in a wide variety of locations. A PC or server can run in a locked room with security guards, or at least in a relatively secure office building. Embedded hardware can be found virtually anywhere, from the bottom of the ocean to outer space. Securing industrial embedded applications therefore presents a number of challenges not found in personal computing, and many of the same challenges apply to consumer applications as well.

Defining Embedded Security


So what can we do to protect our applications? Well, we first need to understand what constitutes “security” in an embedded application. On a PC, security can be loosely defined as the protection of sensitive information from malicious individuals who may try to exploit it.

The definition also encompasses the idea of protecting against unwanted and unintended actions, such as sending copious amounts of email or erasing data (think viruses). In an embedded application, both definitions may apply, but often one or the other is more important.

For example, a device that monitors oil well activity probably cannot do much damage (assuming it only monitors and does not control anything), but it may be important to keep the information it collects secret. The opposite case is an automated assembly-line machine that can cause serious damage if it malfunctions, but whose collected information is not sensitive. The point here is that security for an embedded application depends on what that application does. With this in mind, it is possible to design security that is specifically tailored to the application. If we do not tailor the security to our application, the associated costs necessarily increase to meet the resource demands of a more general security solution.

For example, the OpenSSL implementation of the Secure Sockets Layer provides blanket security for TCP/IP applications, but it supports a vast array of cryptographic algorithms and protocols. If you were to include every algorithm OpenSSL supports, you could end up with megabytes of compiled code. Instead, you have to determine what your application actually needs to secure and leave out everything else. This, too, has its challenges: it is easy to omit something necessary, and it takes only one bug in the security for a hacker or virus to successfully attack your application.
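One common way to keep the footprint down is to select algorithms at compile time, in the spirit of OpenSSL's `no-<cipher>` build options. The sketch below is purely illustrative: the `CONFIG_USE_*` macros and stub functions are hypothetical, not part of any real library.

```c
/* Illustrative sketch of compile-time algorithm selection: code for
 * ciphers the device does not need never reaches the binary. */
#include <stddef.h>

#define CONFIG_USE_AES 1          /* the single cipher this device needs */
/* CONFIG_USE_3DES deliberately left undefined: its code, tables, and
 * key schedules are never compiled in at all. */

#ifdef CONFIG_USE_AES
/* Placeholder transform standing in for a real AES implementation. */
static void aes_encrypt_stub(const unsigned char *in, unsigned char *out,
                             size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ 0xAA;    /* NOT real cryptography */
}
#endif

#ifdef CONFIG_USE_3DES
static void des3_encrypt_stub(const unsigned char *in, unsigned char *out,
                              size_t len);   /* compiled out entirely */
#endif
```

The same idea scales to protocol features: if the device only ever acts as a TLS client with one cipher suite, everything else can be configured out before the build.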

Hardened Devices

Porting cryptographic algorithms can be difficult, and optimizing them is downright dangerous, but the resource issue is only part of the problem in securing embedded applications. Even if the hardware platform you are using has the memory and computing power to support the latest and greatest cryptographic algorithms, your device will likely sit somewhere with little or no physical security. Sure, it may be in a locked box on a telephone pole somewhere, but a physical lock is easy to compromise (a pair of bolt cutters will usually do), and then your device is hanging naked in the breeze. Once an attacker has access to the device, there is a whole host of new issues to worry about. In the world of webservers and PCs, physical access to the machines is restricted because those machines live in secure buildings or homes, so the physical security of the machine itself is usually ignored. If an attacker does gain physical access, a reboot or a run of debugging tools (to dump memory contents) may be all that is needed to compromise the machine.

Barring those options, a simple logic analyzer attached to the memory bus can extract information as it moves between the CPU and memory. It has even been shown that meaningful data can be extracted from the status LED on a CD-ROM drive! Given all the options available to an attacker who has physical access to the machine, how can we ever hope to secure a device that is out in the field somewhere? Fortunately, there are a few ways to harden a device against physical attacks. The easiest is to simply coat your entire device in tough epoxy so that any attempt to open it will inevitably result in its destruction.

This is probably not a practical solution for many applications (heat dissipation is the most immediate problem that comes to mind), but there are other options. One of the easiest ways to protect against logic analyzer attacks is to use an all-in-one microcontroller that incorporates program storage, RAM, and the CPU in a single package; if an external bus does not exist, there is no place to hook the analyzer. Even that solution can be thwarted by removing the packaging and getting down to the silicon itself, but doing so is much harder, and the attacker probably only gets one shot at it. If on-chip RAM and program storage can still be compromised, what is left that we can do? Well, some microcontrollers actually integrate security right into the packaging.

A fine grid of wires embedded in the chip packaging provides a fairly high level of protection against removing the packaging or drilling holes in it to insert probes. The grid works by providing a physical means of detecting a compromise of the packaging: if it is broken when the attacker drills a hole or inserts a probe, the chip erases a portion of its non-volatile memory on the next restart, preventing the attacker from recovering sensitive information such as encryption keys. A variant of this approach is a small integrated memory that is erased when a particular event occurs, such as plugging in a hardware debugger or accessing that memory from an external interface (e.g., through a serial port). Finally, another way to defeat prying eyes is what is essentially “write-only” memory: memory that can be written directly but never read back directly.
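The erase-on-tamper behavior can be sketched in C. This is a minimal illustration, assuming a hypothetical tamper flag that hardware would latch when the wire grid is broken; the volatile wipe loop keeps the compiler from optimizing the erase away as a dead store.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical key store and tamper flag; on a real part the flag
 * would be latched by hardware when the protective grid is broken. */
static volatile uint8_t tamper_detected;
static uint8_t device_key[16];

/* Zeroize through a volatile pointer so the compiler cannot remove
 * the wipe as an apparently useless store. */
static void zeroize(uint8_t *buf, size_t len)
{
    volatile uint8_t *p = buf;
    while (len--)
        *p++ = 0;
}

/* Called first thing in the boot path: if tampering was flagged,
 * destroy the key material before anything else runs. */
static void boot_tamper_check(void)
{
    if (tamper_detected)
        zeroize(device_key, sizeof device_key);
}
```

In practice the check runs before any code that could be subverted, and the key is gone before an attacker's probe sees a single bus cycle.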

This approach is used in cryptographic “helper” chips that provide cryptographic functionality to a host processor: keys are written into the write-only memory and then used only internally for cryptographic operations.

The security of embedded applications is of paramount importance as increasingly intelligent networked devices invade our homes and workplaces. Without an intelligent and conscious approach to embedded security, we leave our well-being exposed to numerous threats. We are only beginning to see the proliferation of these highly networked applications, and we must be ready to take on all the new security challenges presented by this new way of thinking.
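The write-only key storage used by such helper chips can be sketched as follows. The slot layout, function names, and XOR "cipher" are hypothetical stand-ins; the essential property is simply that the module exposes no function that reads the key back.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a write-only key slot as a hypothetical helper chip might
 * expose it: the host can load a key and request operations on data,
 * but there is deliberately no read-back interface. */
static uint8_t key_slot[16];   /* private to this module */

/* Write side: the host loads a key into the slot. */
void keyslot_load(const uint8_t *key, size_t len)
{
    for (size_t i = 0; i < len && i < sizeof key_slot; i++)
        key_slot[i] = key[i];
}

/* The key is used only internally; XOR stands in for a real cipher. */
void keyslot_encrypt(uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        data[i] ^= key_slot[i % sizeof key_slot];
}
```

Because `key_slot` has internal linkage and no accessor, host-side code (and host-side bugs) can never leak the key; only the results of operations performed with it ever leave the module.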

This article was written by Timothy Stapko, Senior Software Engineer at Rabbit Semiconductor (Davis, CA).

For more information, contact Mr. Stapko at rabbit@rabbitsemiconductor.com or visit http://info.hotims.com/10972-402.