Soldier-Robot Team Communication: An Investigation of Exogenous Orienting Visual Display Cues and Robot Reporting Preferences

The effective use of robots to conduct dangerous missions depends on accurate man-machine communications.

The advancement of robot capabilities and functionality has changed the way in which soldiers perform many of their operational tasks. The various unmanned air, ground, and submersible vehicles currently deployed have significantly impacted present-day warfare.

Example of a robot's report indicating that its immediate environment is unsafe. In the image format (left), four threats are bounded by yellow boxes, while the text report (right) gives the total number of threats.
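As a concrete illustration of the report format described in the caption above, the sketch below shows one way such a report could be represented in software, with a single underlying structure backing both the annotated image (bounding boxes) and the text summary (threat count). The class names, fields, and values are hypothetical assumptions for illustration only and are not taken from the ARL system.

```python
# Hypothetical sketch of a robot threat report that can back both the
# annotated-image view (bounding boxes) and the text summary (threat count).
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ThreatDetection:
    """A single detected threat with its bounding box in image coordinates."""
    label: str
    box: Tuple[int, int, int, int]  # (x, y, width, height) in pixels
    confidence: float


@dataclass
class RobotReport:
    """One report about the robot's immediate environment."""
    robot_id: str
    environment_safe: bool
    threats: List[ThreatDetection] = field(default_factory=list)

    def text_summary(self) -> str:
        """Render the report as a short text message (the right-hand format)."""
        if self.environment_safe:
            return f"{self.robot_id}: area clear, no threats detected."
        return f"{self.robot_id}: area unsafe, {len(self.threats)} threat(s) detected."

    def image_annotations(self) -> List[Tuple[int, int, int, int]]:
        """Bounding boxes to overlay on the camera frame (the left-hand format)."""
        return [t.box for t in self.threats]


# Example: a report analogous to the figure, with four bounded threats.
report = RobotReport(
    robot_id="UGV-1",
    environment_safe=False,
    threats=[ThreatDetection("person", (40, 60, 80, 160), 0.91),
             ThreatDetection("person", (150, 70, 75, 150), 0.88),
             ThreatDetection("vehicle", (300, 120, 140, 90), 0.84),
             ThreatDetection("person", (480, 65, 70, 155), 0.79)],
)
print(report.text_summary())
```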

Although many of these systems have proven beneficial and effective for mission success, they are traditionally controlled through teleoperation. While teleoperation may be necessary and appropriate for situations that would otherwise expose soldiers to hazardous or life-threatening conditions, it is not recommended for dismounted operations. Autonomous robots therefore offer a solution that takes advantage of current robot sensing and intelligence while reducing the cognitive demands on the soldier, allowing robots to maintain awareness of the operational environment. However, the integration of autonomous robots into human teams raises concerns regarding human-robot interaction (HRI) and, more specifically, human-robot communication.

Moving beyond teleoperation, military HRI has focused on integrating multimodal communication (MMC) methods that leverage the natural ways in which human-human interaction takes place and the functionality commonly employed in human-computer interaction. In a general sense, MMC is the sending and/or receiving of information through multiple sensory systems (e.g., seeing text information that is also presented in an audible format).

In terms of benefits for signal-communication processing, MMC systems are robust, flexible, efficient, intuitive, and redundant. While many robot systems are equipped with multimodal interaction capabilities, the impact of each communication type on the soldier's ability to perform task-critical operations is not well understood. Therefore, systematic evaluation of the components that comprise the transactions between humans and robots, and the way in which information is conveyed, is critical prior to the deployment of any system to the field.
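To make the redundancy property concrete, the sketch below shows a minimal way a single robot report could be fanned out to more than one sensory channel (on-screen text plus a stand-in for synthesized speech). The dispatcher, channel names, and print-based channel functions are assumptions for illustration, not the interface evaluated in this experiment.

```python
# Minimal sketch of redundant multimodal delivery: the same report content is
# dispatched to every registered presentation channel. Channel names and
# interfaces are assumptions for illustration only.
from typing import Callable, Dict


def display_text(message: str) -> None:
    """Visual channel: show the report on the operator's display."""
    print(f"[DISPLAY] {message}")


def speak_text(message: str) -> None:
    """Auditory channel: stand-in for a text-to-speech call."""
    print(f"[SPEECH]  {message}")


class MultimodalDispatcher:
    """Fan one message out to multiple sensory channels for redundancy."""

    def __init__(self) -> None:
        self.channels: Dict[str, Callable[[str], None]] = {}

    def register(self, name: str, channel: Callable[[str], None]) -> None:
        self.channels[name] = channel

    def send(self, message: str) -> None:
        # Redundant presentation: every channel receives the same content.
        for channel in self.channels.values():
            channel(message)


dispatcher = MultimodalDispatcher()
dispatcher.register("visual", display_text)
dispatcher.register("auditory", speak_text)
dispatcher.send("UGV-1: area unsafe, 4 threat(s) detected.")
```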

There were two major goals for this experiment. The first was to investigate the effects of various exogenous orienting design cues, presented on a visual display within a multimodal interface, on performance and operator perception during squad-level communication within a dismounted soldier-robot team. In particular, this goal focused on determining whether the elements of visually displayed robot reports provided adequate information about the situational context so the soldier could quickly determine the best course of action for the robot to take without becoming cognitively overloaded. The second goal was to investigate soldiers' preferences regarding status updates from a robot teammate (e.g., reporting frequency and format). Specifically, this aspect of the experiment focused on understanding the relationship between robot-reporting preferences, task performance, and situation awareness (SA) with a soldier population.

This work was done by Daniel J. Barber, Julian Abich IV, Andrew B. Talone, Elizabeth Phillips, and Florian Jentsch of the University of Central Florida; and Rodger Pettitt and Linda R. Elliott for the Army Research Laboratory. ARL-0211



This Brief includes a Technical Support Package (TSP).
Soldier-Robot Team Communication: An Investigation of Exogenous Orienting Visual Display Cues and Robot Reporting Preferences (reference ARL-0211) is currently available for download from the TSP library.