Both the Air Force and the search and rescue (SAR) community need to detect and characterize persons. Existing methods use red-green-blue (RGB) imagery but produce high false alarm rates. New multispectral skin-detection technology outperforms the existing RGB methods, but it lacks a control and processing architecture to make it efficient for real-time problems. A number of applications, human measurement and signature intelligence (H-MASINT) and SAR in particular, require accurate detection and characterization of persons. H-MASINT requires the detection of persons in images so that further processing can be performed. For the SAR community, it offers a way to find persons who are partly obscured, in remote regions, and either living or deceased.
Standard color cameras take images at three different wavelengths of light, corresponding to red, green, and blue; the bandwidth of each of these channels is generally on the order of 100 nm. Hyperspectral cameras also take images, but often at several hundred different wavelengths, ranging from the visible through the near infrared, typically 400-2500 nm, with a bandwidth on the order of 10 nm at each wavelength. This additional information allows analysts to examine the reflectance properties of the imaged materials.
Hyperspectral imagery carries detailed information about the materials being imaged. Different materials respond differently at various wavelengths, and analyzing a single pixel through the bands of a hyperspectral cube yields a reflectance curve that characterizes the material at that pixel. These curves can be used to discriminate between multiple classes of materials in images. An important feature is the difference between the reflectance in the red (660 nm) and green (540 nm) portions of the spectrum. Skin is significantly more red than it is green, whereas most common “skin confusers” are typically either more green than red (e.g., vegetation) or approximately equal in reflectance (e.g., snow).
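The red-versus-green discrimination described above can be sketched as a simple per-pixel rule. This is an illustrative sketch only: the normalized-difference form, the band names, and the threshold value are assumptions for demonstration, not the brief's published algorithm.

```python
import numpy as np

def red_green_skin_mask(red, green, threshold=0.1):
    """Flag pixels whose red reflectance clearly exceeds green reflectance.

    Hypothetical illustration: `threshold` and the normalized-difference
    form are assumed, not taken from the brief.
    """
    red = np.asarray(red, dtype=float)
    green = np.asarray(green, dtype=float)
    # Normalized difference: positive where red > green (skin-like),
    # strongly negative for vegetation, near zero for snow.
    nd = (red - green) / (red + green + 1e-9)
    return nd > threshold

# Three example pixels: skin-like, vegetation, snow (reflectances assumed)
red = np.array([0.45, 0.10, 0.80])
green = np.array([0.30, 0.30, 0.80])
mask = red_green_skin_mask(red, green)  # → [True, False, False]
```

Only the first pixel, markedly more red than green, survives the threshold; the vegetation pixel is rejected as more green than red, and the snow pixel as approximately equal in both bands.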
The system is designed with three distinct programs. The first is a versatile acquisition program that can acquire images and view, save, and run algorithms on them in real time. The second, the processing program, can run algorithms and generate new videos from video streams saved by the acquisition program. The final program is a playback utility for the video files.
The architecture is flexible, as one can easily add functionality to meet growing demands. All programs were organized using a basic Model-View-Controller design, using Unified Modeling Language (UML) principles, and coded using a bottom-up approach. Based on the results presented, image acquisition, processing, skin detection, viewing, and saving can be performed in real time, at nearly 10 fps. Not only does this support the SAR community, but the Air Force now has a new capability to help address its H-MASINT mission.
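The Model-View-Controller organization above can be sketched minimally as follows. Every class and the stub detector here are hypothetical stand-ins, not the brief's actual implementation; the point is only how the controller drives the acquire-process-view-save cycle each frame.

```python
from collections import deque

class FrameModel:
    """Stores raw and processed frames (the Model)."""
    def __init__(self):
        self.frames = deque()
    def push(self, raw, processed):
        self.frames.append((raw, processed))

class View:
    """Receives each processed frame (the View); a real GUI would render it."""
    def __init__(self):
        self.rendered = 0
    def render(self, processed):
        self.rendered += 1

class Controller:
    """Drives acquire -> process -> save -> view on every cycle (the Controller)."""
    def __init__(self, model, view, detector):
        self.model, self.view, self.detector = model, view, detector
    def step(self, raw_frame):
        processed = self.detector(raw_frame)   # e.g., a skin-detection algorithm
        self.model.push(raw_frame, processed)  # save raw and processed data
        self.view.render(processed)            # display the result

# Hypothetical detector stub standing in for the skin-detection algorithm.
detector = lambda frame: [px for px in frame if px > 0.5]

model, view = FrameModel(), View()
ctrl = Controller(model, view, detector)
for frame in ([0.1, 0.7], [0.9, 0.2]):  # two simulated acquisition cycles
    ctrl.step(frame)
```

Separating the three roles this way is what makes the architecture extensible: a new algorithm replaces only the detector, and a new display replaces only the View, without touching the acquisition loop.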
The architecture designed here allows even a novice user to quickly and easily acquire data, view raw and processed data, and save detections in real time. For the SAR community, this provides a unique mechanism for running skin detection algorithms in real time. The program correctly implements all common algorithms required for skin detection, for use both in real time and in post-processing.
This work was done by Matthew Paul Hornung of the Air Force Institute of Technology. For more information, download the Technical Support Package (free white paper) at www.defensetechbriefs.com/tsp under the Information Sciences category. AFRL-0165
This Brief includes a Technical Support Package (TSP). "Flexible Computing Architecture for Real-Time Skin Detection" (reference AFRL-0165) is currently available for download from the TSP library.