The US Army has announced plans to increase the autonomy of its Unmanned Ground Vehicles (UGVs). “We are moving along that spectrum from tele-operating to semi-autonomy where you can send a robot from point A to point B without any intervention,” said U.S. Marine Corps Lt. Col. David Thompson, project manager with the Robotic Systems Joint Program Office.

The Armored Combat Engineer Robot (ACER) from Mesa Technologies uses the 5D Behavior Engine, which provides autonomous and semi-autonomous operating capabilities to facilitate simple, intuitive control. (Photo by Chris Kiker)
This should not come as a surprise to anybody. Conventional thinking has held that increased autonomy for all unmanned systems, not just UGVs, will reduce operator workload and allow robots to fulfill their promise of force multiplication.

Situational Awareness

Tele-operation, the most widespread method currently employed for control of unmanned systems, relies solely on a human operator’s cognitive abilities to navigate in extremely dynamic and complex environments. Increased autonomy requires that unmanned systems adopt similar cognitive capacities, including greater situational awareness. Situational awareness can be regarded as involving three stages:

  1. Sensory input – information is collected from the environment;
  2. Perception – assignment of significance to the perceived information;
  3. Response – using the acquired information to make a decision or formulate a plan.

In Stage One, a UGV receives the visual image of an object. During Stage Two, the UGV decides what the object is. Is it a rock, a bomb, or a child? In this example, let’s say it’s a rock. In Stage Three, the UGV plans whether or not to go over the rock or around it as well as what mechanical operations are necessary to complete this task.
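The three-stage loop above can be sketched in code. The following is a minimal illustration only, not any fielded system: the class names, the stubbed sensor and classifier, and the clearance threshold are all invented for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ObjectType(Enum):
    ROCK = auto()
    BOMB = auto()
    PERSON = auto()


@dataclass
class Percept:
    object_type: ObjectType
    height_m: float  # estimated obstacle height in meters


def sense(raw_image) -> dict:
    """Stage One: collect information from the environment (stubbed)."""
    return {"image": raw_image}


def perceive(sensor_data: dict) -> Percept:
    """Stage Two: assign significance -- is it a rock, a bomb, or a child?"""
    # A real classifier would run here; we hard-code a rock for the example.
    return Percept(object_type=ObjectType.ROCK, height_m=0.2)


def respond(percept: Percept, clearance_m: float = 0.3) -> str:
    """Stage Three: use the acquired information to formulate a plan."""
    if percept.object_type in (ObjectType.BOMB, ObjectType.PERSON):
        return "stop and alert operator"
    if percept.height_m <= clearance_m:
        return "drive over"
    return "plan path around"


plan = respond(perceive(sense(raw_image=None)))
print(plan)  # a 0.2 m rock fits under a 0.3 m clearance, so: "drive over"
```

Note that the hard decisions live entirely in Stage Two: once the object is classified, the response logic is simple. That is why perception, not planning, is usually the bottleneck for UGV autonomy.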

The U.S. Army’s Autonomous Platform Demonstrator, or APD, is a 9.6-ton, six-wheeled, hybrid-electric robotic vehicle currently being developed by the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC). When equipped with its autonomous navigation system, the APD is configured with GPS waypoint technology, an inertial measurement unit and computer algorithms that enable it to move autonomously at speeds of up to 50 mph while avoiding obstacles in its path. (Photo: U.S. Army)
In general, tele-operation relies on the situational awareness of the human operator, and assumes zero situational awareness and autonomy for the unmanned system. At the other end of the spectrum, a fully autonomous robot will have highly developed situational awareness, while the human operator, in theory, will require correspondingly less.

In practice this relationship is more complex. The situational awareness of the human operator and unmanned system can be interdependent, while the level of a robot’s autonomy may fluctuate during any given mission. Furthermore, the nature and level of human interaction will be dictated by the operator’s confidence in the unmanned system’s situational awareness, as well as the reliability of the information it provides.

Canine Autonomy

The Segway RMP 400 uses the 5D Behavior Engine to give it high-speed navigation capabilities, including obstacle avoidance, waypoint navigation, and follow. (Photo by Chris Kiker)
To use a commonly employed metaphor, think of unmanned systems as dogs. Dogs are useful for hunting because of their keen smell and hearing, i.e. good situational awareness. Clearly, it is more advantageous not to use a leash (tele-operation) and to let the dogs run freely (autonomy), because the hunter expends less effort physically controlling the dogs and a greater area can be covered (improved mission success). However, the hunter must have faith in the dogs’ abilities, monitor them carefully (both visually and audibly), and from time to time rein them in (reduce their autonomy). For example, if he hears a dog barking, he must interpret the sound to know whether prey has been located, the dog is in trouble, or the dog is just yapping for no reason at all. Similarly, the operator of an unmanned system must evaluate incoming data as well as the appropriateness of the system’s autonomous behaviors.

The above scenario contradicts the notion that autonomy will necessarily reduce the operator’s need for situational awareness and system understanding. Unleashing the dogs will diminish the hunter’s physical workload, but it may increase his need to monitor sensory cues (maintain greater operator sensory awareness).

Increased Autonomy Equals Increased Data

Another possible issue with increased autonomy is the management of data. Increasing autonomy may not necessarily lessen the operator’s workload. If autonomy is used to support force multiplication, it could lead to a significant increase in data. In fact, as autonomous systems gather more and more information, the workload demands on the individual human who must analyze that data may increase dramatically. Granted, most of the increased data will be filtered before it reaches the operator, but even a small increase could have negative consequences.

As reported in a recent New York Times article titled “In New Military, Data Overload Can Be Deadly” (Shanker, Thom & Richtel, Matt; Jan. 16, 2011), the military is already suffering from information overload. The article described the inability of humans to track and process the tsunami of data gathered by UAVs and other means of surveillance. One memorable statistic stated that since 9/11, new technologies have increased the amount of intelligence by “1,600 percent.” The New York Times described an incident in which a crucial piece of data was overlooked and, as a result, 23 civilians were inadvertently killed.

Will the possible increase in data with autonomous systems make the overload problem worse? Is there a possibility that increased autonomy of unmanned systems will actually make their operation more difficult? Could it be that human, not technological, limitations will be the greatest barrier to autonomy?

Do You Trust Your Robot?

Tele-operation, the most widespread method currently used to control unmanned systems, relies on a human operator’s cognitive abilities to navigate in extremely dynamic and complex environments. Increased autonomy requires that unmanned robots adopt similar cognitive capacities, including greater situational awareness. (Photo: Maj. Deanna Bague)
I voiced some of my concerns to a recognized expert on situational awareness and autonomy, David Bruemmer, Vice President of Research and Development at 5D Robotics, a software company that designs user-friendly, intuitive functionality for unmanned systems. He confirmed my suspicions that increased autonomy doesn’t necessarily simplify the operation of unmanned vehicles. He also explained that the human operators’ inability to manage increased data input wasn’t the only stumbling block.

He explained that if appropriate interfaces are not designed, human operators may experience greater frustration with, and less trust in, vehicles as they become more autonomous. The lack of transparency in the robot’s motivations can be confusing. For example, telling a user that the robot will follow them doesn’t necessarily explain how the user can expect the robot to deal with dynamic obstacles. Will the robot plan its motion in a map (which works well in static environments) or will it emphasize reactive obstacle avoidance (which works well in dynamic environments)? The human does not need to understand how the robot will reason, but the human does need to predict robot behavior at some level.
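One way to make this concrete is a behavior that declares its obstacle strategy up front, so the operator knows what to expect before the robot ever meets an obstacle. The sketch below is purely illustrative; the class names and the wording of the descriptions are invented for this example and do not represent any 5D Robotics interface.

```python
from enum import Enum, auto


class AvoidanceMode(Enum):
    PLAN_IN_MAP = auto()  # replan a route in the stored map (static worlds)
    REACTIVE = auto()     # swerve based on live sensor data (dynamic worlds)


class FollowBehavior:
    """A 'follow the user' behavior that states its obstacle strategy,
    so the operator can predict what the robot will do when something
    blocks its path -- without needing to understand how it reasons."""

    def __init__(self, mode: AvoidanceMode):
        self.mode = mode

    def describe(self) -> str:
        if self.mode is AvoidanceMode.PLAN_IN_MAP:
            return ("If blocked, I will replan a route in my map: "
                    "good for static obstacles, slow for moving ones.")
        return ("If blocked, I will swerve reactively: "
                "good for moving obstacles, may wander in clutter.")


robot = FollowBehavior(AvoidanceMode.REACTIVE)
print(robot.describe())
```

The point is not the avoidance algorithm itself but the `describe()` call: stating the limitation plainly, rather than hiding it, is exactly the design stance Bruemmer advocates below.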

If we want humans and unmanned systems to work as a team, we need a way for them to develop shared understanding. Bruemmer explained that if we do this successfully, we provide a way for the human and robot to understand and support each other’s limitations. Roboticists often try to hide robot limitations. Bruemmer thinks they would do better to state them clearly. These views are also held by Dr. Curtis Nielsen, who works with Bruemmer as the Chief Engineer at 5D. His doctoral dissertation studied the benefit of providing a 3D visual representation that supports shared understanding. Through a collaboration that involved Scott Hartley (another researcher at 5D) and colleagues at the Idaho National Laboratory, a series of experiments indicated that people are more likely to use a low-performing robot which they can predict and understand than a high-performing robot which is complex and unpredictable. Research continues at 5D to ensure that the benefits of autonomous robot behavior can be achieved without the drawbacks of increased complexity, confusion and distrust.

Bruemmer described an incident in which a lack of trust hindered the utilization of unmanned vehicles operating in a radioactive environment. A vehicle refused to go through a door which, to the human operators observing by video, looked totally navigable. The human operators repeatedly attempted to manually steer the unmanned vehicle through the door, an action stubbornly resisted by the robot. This kind of “fight for control” characterizes Human-Robot Interaction (HRI) when the human is not given an appropriate window into the “mind” of an autonomous robot.

The Autonomy Revolution Will Not Be Televised

Soldiers prepare to deploy a small unmanned ground vehicle (UGV) during training. Giving UGVs greater autonomy will allow their human handlers to focus more attention on other aspects of completing critical missions. (Photo: U.S. Army)
Bruemmer argues that as autonomy increases, we need to change the nature of situational awareness accordingly. In the above example, the “fight for control” ended when the human operators switched from observing videos to using a 3-D abstract rendering of the walls. The abstract graphical display revealed that contrary to the view provided by the video, the door was too small for the robot to enter.

One characteristic of unmanned systems has been the explosion of video capabilities. UAVs have evolved from a single-image capability to 65 independent video feeds. However, as demonstrated by the above story, video is not always the best way of observing. “The more, the better” is not an iron-clad axiom for situational awareness. Abstract interfaces presenting only minimal data are often necessary for efficient human-machine interaction. The assumption that an information-rich environment with copious data is inherently superior must be abandoned.

Too Much Information

In addition to the above incident, Bruemmer cited numerous examples to support his advocacy of reduced-complexity user interfaces. The original radar display was simply a blip on the screen. When aircraft silhouettes and additional information were displayed on the screen, researchers discovered that the performance of radar operators actually declined. In another experiment, tele-operators of unmanned vehicles drove better using only abstract map displays than they did using both video images and abstract map displays.

These apparently counter-intuitive results can be illustrated by the familiar GPS navigation systems found in many cars. Think of how confusing it would be if video feeds were added to the current mix of maps and verbal prompts. What users really want is less data and more information. This requires careful, artful craftsmanship in the design of both behaviors and interfaces.

Bruemmer has put his theory into practice with his “Real Time Occupancy Change Analysis” solution, which enables robots to detect motion changes of 10 cm in their environment. This is valuable for determining if a new occupant has entered a previously searched house. When a change happens, the robot sends pixels of “footprints on a map” to a nearby patrol, so they can investigate. Bruemmer points out that real-time occupancy change analysis removes the need for a user to monitor a 24-hour real-time video feed.
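The core idea behind occupancy-change analysis can be sketched simply: compare a stored occupancy grid from the original sweep against a fresh scan, and report only the cells that changed. The toy below is an illustration of that idea, not 5D’s actual algorithm; the grid representation, threshold, and function name are all assumptions made for this example.

```python
def changed_cells(baseline, current, threshold=0.5):
    """Compare two occupancy grids (lists of rows of occupancy
    probabilities: 0.0 = free, 1.0 = occupied) and return the (row, col)
    cells whose occupancy changed by more than `threshold` -- the
    'footprints on a map' sent to a nearby patrol."""
    changes = []
    for r, (base_row, cur_row) in enumerate(zip(baseline, current)):
        for c, (b, cur) in enumerate(zip(base_row, cur_row)):
            if abs(cur - b) > threshold:
                changes.append((r, c))
    return changes


# A previously cleared room (all free) re-scanned after someone entered:
baseline = [[0.0, 0.0, 0.0],
            [0.0, 0.0, 0.0]]
current  = [[0.0, 0.9, 0.0],  # new occupant detected in one cell
            [0.0, 0.0, 0.0]]
print(changed_cells(baseline, current))  # -> [(0, 1)]
```

Only the changed cells leave the robot, which is the whole point: the patrol receives a handful of coordinates instead of a continuous video stream.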

An approach that emphasizes abstract graphic displays over video has several important implications. It would consume less bandwidth and require a lower update rate. Bruemmer estimates that 3-D abstract renderings require 50,000 times less information than live video. Considering how the growth of video feeds has overwhelmed military networks, this would be a welcome change.

One Size Does Not Fit All

If video is no longer the default user interface, displays for unmanned systems will have to become more application specific. This would require a change on the part of some robot manufacturers, who are trying to produce solutions that are everything to everybody.

“The unmanned systems community tries admirably to develop general purpose robots. The problem is that no one robot can do everything. At 5D we are focusing on portable mission-centric capabilities that can be used across multiple robots and sensors using an interoperable framework,” says Bruemmer.

If we do not use an interoperable framework, mission-centric capabilities could have the unfortunate side-effect of increasing dedicated hardware. If we are to avoid a proliferation of computers, each one committed to a specific application, defense vendors will have to approach interoperability with the seriousness it deserves. Autonomous systems will need computers that are modular, open-platform, and flexible.

Just as one falling domino can unleash a cascade effect, one change in a defense system can trigger a host of others. Increased autonomy creates a demand for increased situational awareness, which increases the data stream, which mandates a refinement in human-robot interaction, which favors abstract mapping displays over video, which initiates a demand for mission-centric capabilities, which strengthens the need for interoperability. So, it would seem, autonomy for unmanned systems is not the final solution; it’s the beginning of the next challenge.

This article was written by William Finn, Senior Editor and Writer, American Reliance, Inc. (AMREL) (El Monte, CA). For more information, visit the AMREL website.