To address the issue of poor georegistration performance for small unmanned aerial vehicles (UAVs), navigation and target-location accuracy improvements achievable by tightly integrating an image-based feature-tracking algorithm with the Global Positioning System (GPS) and a consumer-grade inertial navigation system (INS) sensor are being investigated. The image-aiding algorithms contribute measurements across a wide variety of terrain types, providing additional estimates of camera position and orientation in the dynamic adjustment.

After integrating GPS with the image-aided inertial architecture, the system is tested using a combination of Monte Carlo simulation and flight-test data. The flight-test data were collected over Edwards Air Force Base using representative hardware. The effects of variations in sensor quality and integration method were investigated, and the results show that the tightly coupled image-aided system substantially outperforms the reference GPS/INS system.

The method is based on the following assumptions:

  • A strapdown inertial measurement unit (IMU) and GPS antenna are rigidly attached to one or more calibrated cameras. Synchronized raw measurements are available from all sensors;
  • The inertial, GPS, and optical sensors’ relative position and orientation are known;
  • The camera views areas of the environment that contain at least some stationary objects; and
  • A statistical terrain model is available that provides an initial indication of range to objects in the environment.

The system parameters consist of the navigation parameters (position, velocity, and attitude), inertial sensor biases, GPS clock bias and drift, and vectors describing the locations of landmarks of interest (t^n) in the navigation frame. The navigation parameters are calculated using body-frame velocity increment (Δv^b) and angular increment (Δθ^b_ib) measurements from the inertial navigation sensor, which have been corrected for bias errors using the current filter-computed bias estimates. These measurements are integrated from an initial state in the navigation (local-level) frame using mechanization algorithms.
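
The following sketch illustrates one way the state layout and mechanization step described above could look in code. It is a simplified illustration, not the authors' implementation; it omits Earth-rate and transport-rate terms, and all names (mechanize_step, C_nb, and so on) are placeholders.

```python
# Filter state (per the description above): position (3), velocity (3), attitude (3),
# accelerometer biases (3), gyro biases (3), GPS clock bias and drift (2), plus a
# 3-element navigation-frame position t^n for each tracked landmark.
import numpy as np

GRAVITY_N = np.array([0.0, 0.0, 9.80665])  # gravity in a local-level NED frame (z down)

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b equals np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def mechanize_step(p_n, v_n, C_nb, dv_b, dtheta_b, dt):
    """Propagate position, velocity, and attitude (body-to-nav DCM) over one IMU interval,
    assuming dv_b and dtheta_b have already been corrected with the current bias estimates."""
    # Attitude: first-order DCM update from the body-frame angular increment
    C_nb_new = C_nb @ (np.eye(3) + skew(dtheta_b))
    # Velocity: rotate the specific-force increment into the nav frame and add gravity
    v_n_new = v_n + C_nb @ dv_b + GRAVITY_N * dt
    # Position: trapezoidal integration of velocity
    p_n_new = p_n + 0.5 * (v_n + v_n_new) * dt
    return p_n_new, v_n_new, C_nb_new
```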

The position, velocity, and attitude errors were modeled as a stochastic process based on the Pinson navigation error model. The accelerometer and gyroscope bias errors were each modeled as a first-order Gauss-Markov process, based on the specification for the IMU. The GPS clock drift is modeled as a random bias. The landmarks are modeled as stationary with respect to the Earth. A small amount of process noise is added to the state dynamics to promote filter stability.
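
As an illustration of the bias model, the discrete-time form of a first-order Gauss-Markov state can be computed as sketched below. The correlation time and sigma values are placeholders standing in for numbers taken from an IMU specification sheet, not values from the reference system.

```python
import numpy as np

def gauss_markov_discrete(tau, sigma, dt):
    """Discretize a first-order Gauss-Markov process x_{k+1} = phi * x_k + w_k,
    returning the transition coefficient phi and the process-noise variance q of w_k."""
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - np.exp(-2.0 * dt / tau))
    return phi, q

# Example (illustrative numbers only): accelerometer bias with a 300 s correlation
# time and a 1 mg (about 0.0098 m/s^2) steady-state sigma, at a 100 Hz filter rate.
phi_a, q_a = gauss_markov_discrete(tau=300.0, sigma=9.8e-3, dt=0.01)
```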

Because the system dynamics and measurement models are nonlinear and stochastic, an extended Kalman filter algorithm is employed. The extended Kalman filter uses an error-state formulation with feedback, estimating the errors about the nominal trajectory produced by the nonlinear filter dynamics model. This nominal trajectory also serves as the operating point about which the nonlinear dynamics and measurement models are linearized. Finally, the feedback mechanism constrains the inevitable departure of the nominal trajectory by periodically feeding the estimated errors back as corrections. This feedback process improves the performance of the extended Kalman filter by reducing linearization errors caused by errors in the nominal trajectory.
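
A minimal sketch of the error-state measurement update and feedback cycle described above appears below. It assumes the linearized measurement matrix H, measurement noise covariance R, and error covariance P have already been formed about the current nominal trajectory; the function name is hypothetical.

```python
import numpy as np

def error_state_update(P, H, R, innovation):
    """Kalman measurement update on the error state (whose a priori mean is zero)."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    dx = K @ innovation                       # estimated error in the nominal trajectory
    I_KH = np.eye(P.shape[0]) - K @ H
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T   # Joseph form for numerical stability
    return dx, P_new

# Feedback: dx is removed from the nominal trajectory (position and velocity corrected
# directly, attitude corrected with a small-angle rotation, bias estimates adjusted),
# and the error state is reset to zero before the next propagation cycle.
```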

A Monte Carlo simulation was used to evaluate the performance and stability of the GPS-aided inertial navigation algorithm both with and without image aiding. The simulations used a standard flight profile matched as closely as possible to the experimental flight data: a semicircular trajectory with no overlapping portions, flown at approximately 1,000 meters above relatively flat terrain.
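
A bare-bones Monte Carlo driver of the kind described above might look like the following. The trajectory/measurement generator and the filter are passed in as callables (simulate_run and run_filter), both of which are placeholders rather than code from this Brief.

```python
import numpy as np

def monte_carlo(n_runs, simulate_run, run_filter, seed=0):
    """Run the filter against n_runs independent noisy realizations of the flight
    profile and return ensemble error statistics across the runs."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_runs):
        truth, measurements = simulate_run(rng)  # noisy IMU/GPS/image data along the reference trajectory
        estimate = run_filter(measurements)      # GPS-aided inertial filter, with or without image aiding
        errors.append(estimate - truth)          # navigation error history for this run
    errors = np.asarray(errors)
    # Ensemble mean error and 1-sigma spread, used to judge accuracy and filter consistency
    return errors.mean(axis=0), errors.std(axis=0)
```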

The investigation showed that the algorithm improves the target-location accuracy of a UAV equipped with a low-cost GPS/IMU and imaging system. The method is implemented recursively for online operation and requires only a terrain database; no a priori imagery is required over the area of operations. The system automatically selects and tracks stationary features in the field of view and uses these tracks to update the navigation state. Across both simulation and flight-test data, the attitude accuracy improves by an order of magnitude, with a corresponding improvement in target-location accuracy.
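
The image measurement implied by this feature tracking can be illustrated with a simple pinhole projection of a navigation-frame landmark into the calibrated camera; the residual between the tracked feature location and this prediction drives the filter update. The camera model and all names below are generic assumptions rather than the published formulation.

```python
import numpy as np

def project_landmark(t_n, p_n, C_nb, C_cb, lever_arm_b, fx, fy, cx, cy):
    """Project a stationary nav-frame landmark t_n into pixel coordinates.

    p_n: vehicle position in the nav frame; C_nb: body-to-nav DCM; C_cb: body-to-camera DCM;
    lever_arm_b: camera location relative to the IMU, expressed in the body frame;
    fx, fy, cx, cy: pinhole focal lengths and principal point (pixels).
    """
    cam_pos_n = p_n + C_nb @ lever_arm_b          # camera position in the nav frame
    los_c = C_cb @ (C_nb.T @ (t_n - cam_pos_n))   # line of sight expressed in the camera frame
    u = fx * los_c[0] / los_c[2] + cx             # pinhole projection (camera z-axis forward)
    v = fy * los_c[1] / los_c[2] + cy
    return np.array([u, v])

# The EKF innovation is the tracked feature's measured pixel location minus this prediction.
```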

This work was done by R. Anderson of the National Geospatial-Intelligence Agency, M. Nielsen of the Air Force Test Pilot School, and M. Veth and F. Webber of the Air Force Institute of Technology. AFRL-0121


This Brief includes a Technical Support Package (TSP). The TSP, "Tightly Coupled INS, GPS, and Imaging Sensors for Precision Geolocation" (reference AFRL-0121), is currently available for download from the TSP library.





This article first appeared in the June 2009 issue of Defense Tech Briefs Magazine.
