Countering Cybersecurity Threats Against Unmanned Vehicle Systems

Cranfield University researchers have developed a monitoring system that oversees mission profile implementation at two levels, high-level mission execution and lower-level software code operation, to tackle the specific threats of malicious code and spurious commands received over a vehicle's data links.

As the use of UASs and UAVs increases on the modern battlefield, securing these weapon platforms grows in importance, since any flaw could have catastrophic results. To this end, much recent work has sought first to understand the avenues through which a cyber-attack could come, and then to determine how to react when the measures meant to protect against it have failed.

One expert breaks down the avenues of attack into ten categories, of which four stand out: GPS jamming, malware, erroneous data transmissions, and application code manipulation. These avenues are significant because they target the vehicle itself directly.

Others single out command/control messages, spoofing, and software-based threats as among the major risks faced by UAVs in the field. Still others propose a more challenging attack dubbed the “stealthy deception attack,” in which erroneous information is injected into the vehicle without detection, potentially affecting critical systems such as navigation.

Perhaps the most unnerving avenue of attack is the one that comes from within the vehicle itself. In this kind of attack, the vehicle's hardware is compromised from inception, either by hacking the software encryption keys present inside a chip or by compromising the CAD software suite used to design the chip in order to gain access to those keys. Another hardware attack pattern involves malicious, Trojan-style code inserted into an FPGA (field-programmable gate array) at the time of manufacture. This type of threat is difficult to detect and could lead to severe consequences depending on which system the tainted integrated circuits are installed in.

In response, several solutions have been proposed, the most common being a supervisory architecture. Supervisory logic compares ideal to actual system states in hopes of detecting unusual system behavior. Many agree that any solution for cyber-physical systems, such as UAVs, needs to take into account the physical medium in which they operate, and that there is no single solution, but rather a set of specific approaches protecting the systems' safety, security, and sustainability. It is in this context that Cranfield University's proposed solution fits.

Low-Level Monitoring

Figure 1. Cranfield’s system is conceptually split into two layers which oversee different aspects of the UAV's behavior, but are implemented separately due to their specific characteristics.
Cranfield’s system is conceptually split into two layers that oversee different aspects of the system’s behavior, but are implemented separately due to their specific characteristics. Figure 1 shows how the mission monitor layer oversees overall operation of the platform in terms of what it was ordered to do, while a lower layer monitors specific software features using a generic algorithm that can be applied to different software modules by only reconfiguring its parameters.

Conceptually, the lower-level algorithm works in a manner similar to a planning algorithm. It monitors the state of the overseen software module to determine which actions are legal, applicable, and appropriate to the present situation, per the following definitions:

  • Legal: actions within operational and physical limits for a given function;
  • Applicable: actions that can be physically or computationally carried out within the present context;
  • Appropriate: actions that are legal and applicable and not barred by bespoke rules set by the user.

Consider the example of a UAV data link. A legal action could be defined as the transmission of data within the electrical capabilities of the transmitter (i.e., transmission speed and bandwidth). An illegal action would be trying to transmit at higher bit rates than nominally allowed, which could increase power consumption or even damage the transmitter. Continuing the theme, the action of transmitting video data back to base would be applicable only if the video camera is turned on, but not otherwise. Finally, during a mission segment in which data transmission could be used to track and shoot down the UAV, the user might want to block the use of the data link by declaring data transmission inappropriate under those circumstances. The importance of this last category is that data transmission could be legal and applicable, yet the user can render it inappropriate, thereby blocking its execution. This scheme provides flexibility and a greater level of detail when determining what actions can be performed at a given time, which in turn is useful for detecting intrusion-caused or other anomalous system behavior.
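As a rough illustration of the three-tier check, the following Python sketch classifies the hypothetical data-link action described above. All names, thresholds, and the rule layout are assumptions made for this example, not details of the Cranfield system.

```python
# Minimal sketch of LAA classification for the data-link example.
# Names and thresholds are illustrative, not from the source system.

MAX_BIT_RATE_BPS = 2_000_000  # assumed nominal transmitter limit

def classify_action(action, state):
    """Return 'illegal', 'inapplicable', 'inappropriate', or 'allowed'."""
    if action["name"] == "transmit_video":
        # Legal: within the transmitter's electrical limits.
        if action["bit_rate_bps"] > MAX_BIT_RATE_BPS:
            return "illegal"
        # Applicable: the camera must actually be producing data.
        if not state["camera_on"]:
            return "inapplicable"
        # Appropriate: user rules may forbid emissions in covert segments.
        if state["mission_segment"] in state["radio_silent_segments"]:
            return "inappropriate"
    return "allowed"

# Example: legal and applicable, but blocked by a bespoke user rule.
state = {"camera_on": True, "mission_segment": "ingress",
         "radio_silent_segments": {"ingress"}}
action = {"name": "transmit_video", "bit_rate_bps": 1_500_000}
print(classify_action(action, state))  # -> "inappropriate"
```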

To correctly monitor each software function (also known as a feature) under its authority, the low-level monitoring algorithm is structured to be as generic as possible, since it may be interacting with completely different software features from one moment to the next.

Each feature object under the monitor's authority must be classifiable as “primitive”; in other words, it must directly control a physical task, such as a software function in charge of aileron control or camera data capture.

To achieve its goal, the monitor exploits the concept of an execution domain. An execution domain is the set of data describing the group of actions that each feature can carry out. It also includes the rules for deciding the legality, applicability, and appropriateness (the LAA criteria) of each individual task, which can be abstract or based on physical data, along with information on the set of input and output data flowing to and from the feature being evaluated.
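The sketch below shows one plausible shape for such an execution-domain record in Python; the field names are assumptions made for illustration, not the paper's actual data layout.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class LAARule:
    description: str
    # Each predicate inspects an I/O snapshot and returns True when satisfied.
    legal: Callable[[dict], bool]
    applicable: Callable[[dict], bool]
    appropriate: Callable[[dict], bool]

@dataclass
class ExecutionDomain:
    feature_name: str              # e.g., "aileron_control" (hypothetical)
    inputs: List[str]              # names of the monitored input signals
    outputs: List[str]             # names of the monitored output signals
    rules: List[LAARule]           # LAA criteria for the feature's tasks
    safe_state: Dict[str, object]  # fallback configuration on detection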

Finally, “safe state” information for each software feature is provided in case of intrusion detection. In this case the lower-level algorithm would pass the data up to the higher level advising what actions should be taken to isolate the suspect feature in order to reduce the risk exposure of the platform.

The execution of the monitoring system conforms to the following steps:

Step 1: Load execution domain data.

Step 2: Read functional I/O.

Step 3: Evaluate LAA criteria.

Step 4: Determine if I/O or any LAA rules are broken; if so, flag software feature as suspect of tampering.

Step 5a: If Step 4 proved positive, recommend that the higher-level algorithm take the vehicle to the safe state defined for the given feature.

Step 5b: If Step 4 is negative, carry on to the next software feature.
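A minimal skeleton of these five steps might look like the following, assuming a simple dictionary-based execution context database and illustrative hooks for I/O snapshots and safe-state recommendations; none of these names come from the source.

```python
def monitor_pass(feature_names, execution_context_db):
    """One sweep of the low-level monitor, following Steps 1-5."""
    suspects = []
    for name in feature_names:
        domain = execution_context_db[name]        # Step 1: load execution domain data
        snapshot = domain["read_io"]()             # Step 2: read functional I/O
        ok = all(rule(snapshot)                    # Step 3: evaluate LAA criteria
                 for rule in domain["laa_rules"])
        if not ok:                                 # Step 4: a rule is broken,
            suspects.append(name)                  # flag the feature as suspect
            print(f"{name}: recommend safe state "
                  f"{domain['safe_state']}")       # Step 5a: advise higher level
        # Step 5b: negative result -> carry on to the next software feature
    return suspects
```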

Figure 2. Graphic depiction of low-level monitor interactions.
Figure 2 illustrates the steps in graphical form. The Evaluator function takes as its input argument the name of the feature to monitor. It then loads the matching entry from the Execution Context Database and takes a snapshot of the feature's I/O. This information is passed on to a helper function that examines the snapshot data and returns NULL if the I/O combination does not match any LAA rule, or a structure detailing the matched rule packaged together with its recognized I/O.

This low-level monitor works like a planner function. Outputs are treated as states in a “state space” and inputs as the conditions that are prerequisites for such a state to be reached. The helper function therefore performs a simple search for applicable states using the output-value snapshot together with the LAA rule set. If there is a match, a second search determines whether the inputs (i.e., the prerequisites) are valid for that state to exist. If either search proves unsuccessful, the function returns NULL. By monitoring at the primitive software feature level, the complexity of the state search is dramatically reduced and can be handled in a computationally efficient manner while still detecting potential violations of intended software operation.
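A hedged sketch of that two-phase search follows; the rule structure and snapshot layout are assumptions made for illustration only.

```python
# Two-phase search: first match the output snapshot against known
# states, then check the inputs against that state's preconditions.

def find_matching_rule(snapshot, laa_rules):
    for rule in laa_rules:
        # Phase 1: do the observed outputs correspond to a known state?
        if rule["state_outputs"] == snapshot["outputs"]:
            # Phase 2: are the inputs valid prerequisites for that state?
            if rule["preconditions"](snapshot["inputs"]):
                return {"rule": rule["name"], "io": snapshot}
    return None  # no state/precondition pair matches: feature is suspect

# Example: an output that appears without its prerequisite input.
rules = [{"name": "lights_on",
          "state_outputs": {"lamp_current_a": 2.0},
          "preconditions": lambda i: i["switch"] == "on"}]
snap = {"outputs": {"lamp_current_a": 2.0}, "inputs": {"switch": "off"}}
print(find_matching_rule(snap, rules))  # -> None
```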

Navigation Analyzer

The first step to understanding the operation of the navigation analyzer is the concept of “vehicle stance.” As a mission progresses, the UAV can pass through any number of operating conditions: flying to target, loitering, or even engaging an enemy. To capture this fluctuating environment, a decision-making tree is used to decide which stance to adopt in the present situation.

Figure 3. To make sure the overall operation of the vehicle is in conformance with set mission goals, an abstract high-level mission monitor is implemented that verifies energy consumption figures from subsystems are in line with expected parameters; vehicle navigation and control are supervised to ensure maneuvers are coherent with actual vehicle stance and mission objectives; and there is coordination with the lower-level monitor to secure the vehicle in a safe operational state should an intrusion or anomalous behavior be detected.
For example, if there are no threats in the path of the vehicle as it moves toward its target, the stance could be set to “passive-navigation.” If a threat appears that can be avoided, “avoidance-navigation” is used. Should the vehicle come under attack, then “abort-defensive” or “avoid-defensive” applies: the former if the threat cannot be escaped while still continuing the mission, the latter if evasion is probable and the mission can be continued. The inner workings of the stance decision engine are outside the scope of this work; suffice it to say that each stance comes with certain conditions and guidelines that must be met and observed.
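Although the decision engine itself is out of scope in the source, the narrative above implies a decision logic along these purely illustrative lines:

```python
# Purely illustrative stance selection matching the examples above;
# the actual decision engine is not described in the source paper.

def select_stance(threat_present, under_attack, evadable, mission_continuable):
    if under_attack:
        if evadable and mission_continuable:
            return "avoid-defensive"    # evade and carry on with the mission
        return "abort-defensive"        # cannot escape while continuing
    if threat_present:
        return "avoidance-navigation"   # threat can be avoided en route
    return "passive-navigation"         # clear path to target
```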

Now, in the course of a given assignment a vehicle can adopt a myriad of trajectories depending on factors like weather conditions, existing threats and their positions, fuel consumption, distance to base, etc. Because of the infinite number of possible combinations, it would be inefficient to use a state-space search engine to determine whether the vehicle is heading in the correct general direction for the existing stance. To solve this problem, two systems are used in tandem to verify that the vehicle is not navigating in an unexpected manner.

The first element is an Inertial Navigation System (INS). Although pricier than GPS for the same level of accuracy, INS devices are stealthy in that they neither emit nor require electromagnetic signals for data exchange. The system that manages the INS can be hardwired with fixed firmware to prevent tampering or intrusion from outside, and should the main navigation systems be compromised, the INS can serve as a backup.

The second element is a cost function, which weighs factors such as projected fuel consumption, distance to base, and distance to objective to score a probable trajectory against mission parameters. For example, if the vehicle is taken over by a Trojan that changes the existing navigation waypoint to one farther from base, the navigation analyzer will compare the new route against the previous one and raise a flag when the computed score falls outside a preset threshold.
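A minimal sketch of such a cost check might look like the following; the weights, units, and threshold are assumptions invented for this example.

```python
# Hypothetical trajectory cost for the navigation analyzer; weights
# and threshold are assumptions, not values from the source.

def trajectory_cost(fuel_projection_kg, dist_to_base_km, dist_to_objective_km,
                    w_fuel=1.0, w_base=0.5, w_obj=0.8):
    return (w_fuel * fuel_projection_kg
            + w_base * dist_to_base_km
            + w_obj * dist_to_objective_km)

def check_route_change(old_route, new_route, threshold=0.15):
    """Flag the new route if its cost deviates beyond the preset threshold."""
    old_cost = trajectory_cost(*old_route)
    new_cost = trajectory_cost(*new_route)
    deviation = abs(new_cost - old_cost) / old_cost
    return deviation > threshold  # True -> raise the navigation flag

# A Trojan-style waypoint change that pushes the vehicle away from base:
print(check_route_change((40, 120, 30), (55, 210, 30)))  # -> True
```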

Behavior Coherence

Figure 4. The behavior analyzer receives guidelines and conditions from the stance decision engine and extracts from the guidelines a list of subsystems that should be associated with the set profile.
The last stage of Cranfield’s system works with the decision-making tree to determine what stance—or what basic behavioral rules and guidelines—will govern the actions of the vehicle itself. This is important as many intrusion methods seek to alter the way in which a UAV operates, from the low-level basic functions to the higher level and more complex systems.

By setting a behavior profile, or stance, complexity is reduced: the system must follow what the stance allows, rendering other combinations of actions illegal and non-applicable, which makes monitoring quicker and more efficient. Figure 4 is the graphical representation of this behavior-coherence stage. The behavior analyzer receives guidelines and conditions from the stance decision engine and extracts from the guidelines a list of subsystems that should be associated with the set profile. It then verifies that none of the associated subsystems has been flagged by the lower-level monitor as suspect of intrusion, and that all operating conditions for the desired profile can be met. If not, the incoherency flag is raised. Otherwise, vehicle system information is monitored for possible behavioral deviations; if one is detected, the incoherency flag is raised so that the intrusion detection response takes the vehicle to a known safe state.
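The following sketch captures that sequence of checks, assuming an illustrative profile layout; none of the field names come from the source.

```python
# Sketch of the behavior-coherence check. The profile structure,
# condition test, and telemetry format are assumptions for illustration.

def check_coherence(profile, flagged_subsystems, condition_ok, telemetry):
    subsystems = profile["guidelines"]["subsystems"]
    # Any associated subsystem already flagged by the low-level monitor?
    if any(s in flagged_subsystems for s in subsystems):
        return "incoherent"
    # Can all operating conditions for the desired profile be met?
    if not all(condition_ok(c) for c in profile["conditions"]):
        return "incoherent"
    # Otherwise watch telemetry for behavioral deviations.
    for subsystem in subsystems:
        lo, hi = profile["tolerances"][subsystem]
        if not (lo <= telemetry[subsystem] <= hi):
            return "incoherent"  # deviation -> trigger safe-state response
    return "coherent"
```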

Table. Landing-Navigation Stance Extract
The Table and Figure 5 provide an example of guideline and condition parsing and evaluation. From parsing the contents of the Table, the behavior analyzer knows it should expect a specific response from the monitor's physical layer, the low-level monitor, and the lighting system itself.

The top plot of Figure 5 shows the power consumption of the landing lights. Per the information in the Table, the trace should stay within tolerance, but it can be seen to start oscillating after 200 seconds of operation. The behavior analyzer flags this as a deviation, as shown in the middle plot of Figure 5, yet curiously the low-level monitor raises no flags. This actually highlights a strong point of the proposed system.

Figure 5. An atypical response for the landing light behavior described in the Table.
Remember that the low-level monitor raises a flag when the output of a primitive function does not match what was expected for a given set of inputs; but what happens when the primitive function is behaving as expected yet the overall subsystem is not?

This is where the behavior monitor comes into play. Even though the low-level monitor cleared the primitive aspect of the lighting system, the subsystem still behaved erroneously. These deviations by the non-primitive features were then picked up by the higher-level behavior monitor, showing how the different system blocks together provide complete coverage.

Should the need arise for a platform to be reconfigured midway through a given assignment, the following chain of events would take place for the system to accept such changes without flagging them as possible intrusions (a minimal sketch of the exchange follows the list):

  • Prior to departure, a set of decoding frames is shared between controller and vehicle; these frames should provide complete secrecy in cryptographic terms, for example a set of one-time pads (OTPs). Each frame is paired with a unique identifier character string, which in turn is passed through a SHA-like (secure hash algorithm) encoder and uploaded into both the control and vehicle systems.
  • When the vehicle needs to be reconfigured for a change of mission, the controller sends the new profile information encoded using one of the preset OTPs together with the pad's encoded identifier. Each pad is selected using a uniformly distributed random selector.
  • The vehicle matches the encoded identifier with its OTP index and recovers the pad.
  • The new profile is decoded and the vehicle is then reconfigured. Since the mission information is now loaded any modifications in behavior are not flagged and the vehicle can proceed.
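Under stated assumptions (pad sizes, message framing, and identifiers are invented for illustration), the exchange might be sketched as follows:

```python
import hashlib, secrets

# Minimal sketch of the OTP-based reconfiguration exchange. Pad sizes,
# framing, and names are assumptions for illustration only.

def make_pads(n, size):
    """Pre-departure: shared pads keyed by SHA-256 of a unique identifier."""
    pads = {}
    for i in range(n):
        ident = f"pad-{i:04d}"
        digest = hashlib.sha256(ident.encode()).hexdigest()
        pads[digest] = secrets.token_bytes(size)
    return pads

def xor(data, pad):
    return bytes(a ^ b for a, b in zip(data, pad))

# Controller side: pick a pad uniformly at random, encode the profile.
shared = make_pads(16, 64)
digest = secrets.choice(list(shared))          # uniform random pad selector
profile = b"stance=landing-navigation"
message = (digest, xor(profile, shared[digest]))

# Vehicle side: match the encoded identifier, recover the pad, decode.
digest_rx, ciphertext = message
recovered = xor(ciphertext, shared[digest_rx])
assert recovered == profile  # vehicle reconfigures without raising flags
```

In a real deployment each pad would, of course, be discarded after a single use to preserve the one-time-pad secrecy guarantee.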

Further work will pair the system with an artificial intelligence planner, which will be in charge of scheduling the execution of all actions necessary to achieve the mission goals. After analyzing the risks and possible avenues for cyber-attack on UAS platforms, it should be clear that this new functionality could not be considered without first implementing the monitoring system that is the object of this work.

This article is based on SAE Technical Paper 2014-01-2131 by Rodrigo Felix, John Economou, and Kevin Knowles, Cranfield University, doi: 10.4271/2014-01-2131.