A method of real-time fusion of readout data from electronic inertial and image sensors for passive navigation has been developed. "Passive navigation" here means navigation without the help of radar signals, lidar signals, Global Positioning System (GPS) signals, or any other signals generated by on-board or external equipment. The concept of fusing image- and inertial-sensor data for passive navigation is inspired by biological examples, including bees, migratory birds, and humans, all of which use inertial and imaging sensory modalities to pick out landmarks and navigate from landmark to landmark with relative ease. The present method is suitable for use in a variety of environments, including urban canyons and the interiors of buildings, where GPS signals and other navigation signals are often unavailable or corrupted.

When used separately, imaging and inertial sensors have drawbacks that can result in poor navigation performance. A navigation system that uses inertial sensors alone relies on dead reckoning, which drifts over time. A system that uses image sensors alone can have difficulty identifying and matching good landmarks for navigation. The reason for fusing inertial- and image-sensor data is simply that the two modalities complement each other, making it possible to partly overcome the drawbacks of each type of sensor and obtain better navigation results than either type can provide alone.
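As a minimal illustration of this complementarity, the following Python sketch simulates one-dimensional dead reckoning whose error grows without bound, corrected by an occasional image-based landmark fix via a scalar Kalman-style update. The variable names, noise levels, and fix interval are assumptions made for illustration only; they are not taken from the method described in this article.

```python
# Minimal 1-D sketch of why inertial and image sensors complement each other:
# dead reckoning drifts without bound, while an occasional landmark fix from
# an image sensor bounds the error.  All noise levels and intervals below are
# illustrative assumptions, not values from the method described here.
import numpy as np

rng = np.random.default_rng(0)

true_pos = 0.0          # true position along one axis (m)
est_pos = 0.0           # fused position estimate (m)
est_var = 0.01          # variance of the fused estimate (m^2)

ACCEL_NOISE_VAR = 0.02  # per-step dead-reckoning error growth (assumed)
LANDMARK_VAR = 0.25     # variance of an image-based position fix (assumed)
STEPS = 200
FIX_EVERY = 25          # landmark fix available every N inertial steps

for k in range(STEPS):
    # True motion and inertial dead-reckoning prediction (error accumulates).
    step = 0.1
    true_pos += step
    est_pos += step + rng.normal(0.0, np.sqrt(ACCEL_NOISE_VAR))
    est_var += ACCEL_NOISE_VAR

    # Occasional image-based landmark fix corrects the accumulated drift
    # (a scalar Kalman-style measurement update).
    if k % FIX_EVERY == 0:
        fix = true_pos + rng.normal(0.0, np.sqrt(LANDMARK_VAR))
        gain = est_var / (est_var + LANDMARK_VAR)
        est_pos += gain * (fix - est_pos)
        est_var *= (1.0 - gain)

print(f"final error with fusion: {abs(est_pos - true_pos):.2f} m")
```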

Figure: Feature-extraction times as functions of image resolution, measured in tests, were found to be much shorter for a GPU-accelerated algorithm according to the present method than for an older CPU-based algorithm.
Prior designs of image-aided inertial navigation systems have represented compromises between computing-power demand and performance. Some have used simplified image-processing algorithms or relied on a priori navigation information, while others have simply post-processed navigation data. Such designs are not robust enough for use in autonomous navigation systems or as viable alternatives to GPS-based designs. In contrast, a navigation system based on the present method can achieve real-time performance using a complex image-processing algorithm that works in a wide variety of environments.

The present method is a successor to a prior method based on a rigorous theory of fusion of image- and inertial-sensor data for precise navigation. The theory involves using inertial-sensor data in dead-reckoning calculations to predict the locations, in subsequent images, of features identified in previous images to within a given level of statistical uncertainty. Such prediction reduces the computational burden by limiting, to a size reflecting the statistical uncertainty, the feature space that must be searched to match features in successive images. When this prior method was implemented in a navigation system operating in an indoor environment, the performance of the system was comparable to that of GPS-aided systems.
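The following Python sketch illustrates that prediction step under assumed values: a previously identified landmark is projected into the next image using the dead-reckoned camera pose, and the pose uncertainty defines a small search window so the matcher examines only that region rather than the whole image. The camera intrinsics, pose, pixel uncertainty, and three-sigma window size are illustrative assumptions, not parameters of the system described here.

```python
# Hedged sketch of the prediction step described above: project a previously
# identified landmark into the next image using the dead-reckoned camera pose,
# and derive a bounded search window from the propagated uncertainty.  The
# intrinsics, pose, and uncertainty values are illustrative assumptions.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def predict_feature(landmark_xyz, R_cam, t_cam):
    """Project a 3-D landmark (world frame) into pixel coordinates using the
    dead-reckoned camera rotation R_cam and position t_cam."""
    p_cam = R_cam @ (landmark_xyz - t_cam)   # world frame -> camera frame
    uvw = K @ p_cam                          # perspective projection
    return uvw[:2] / uvw[2]

def search_half_width(pixel_sigma, n_sigmas=3.0):
    """Half-width (pixels) of the square search region, chosen so the true
    feature location lies inside with high probability."""
    return n_sigmas * pixel_sigma

# Example: a landmark 10 m ahead of an (assumed) identity camera pose.
landmark = np.array([0.5, -0.2, 10.0])
u, v = predict_feature(landmark, np.eye(3), np.zeros(3))
half = search_half_width(pixel_sigma=4.0)    # assumed projected 1-sigma error
print(f"search for the feature near ({u:.1f}, {v:.1f}) "
      f"within +/- {half:.0f} pixels instead of the whole image")
```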