Using unmanned surface vessels (USVs) for "dull, dirty and dangerous" missions has gained traction in recent years because it removes humans from potentially life-threatening environments in missions such as mine hunting and maritime interdiction. Current USVs rely on human operators at remote control stations to monitor the vessel's surroundings and perform collision detection and avoidance. This reliance on a human operator constrains the USV's operating envelope, since safe operation requires a high-bandwidth, low-latency communication link, especially in waters with heavy traffic.
An autonomous navigation capability needs to be incorporated into future USVs to fully exploit their advantages. To achieve this, the USV must have situational awareness of its surroundings. This research adopts a systems engineering approach to identify the capability gap in today's USVs and the factors that drive the need for autonomous navigation. A functional decomposition is completed to identify the functions the USV requires to navigate autonomously, and a computer vision-based technique is used to implement one of the functions identified through that decomposition.
The algorithm, developed in MATLAB, converts the video into individual frames and enhances them for further processing. The frames then undergo edge detection and morphological processing with structuring elements before object information is extracted from them. The algorithm was tested on frames from both color and infrared (IR) video sources.
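The brief does not include the MATLAB source, so the following is only an illustrative sketch of the pipeline it describes (enhance a frame, detect edges, apply a morphological structuring element, then derive object information). It is written in Python with NumPy only; the function names, thresholds, and the synthetic test frame are our own assumptions, not the original implementation.

```python
import numpy as np

def convolve2d(img, kernel):
    """Zero-padded 'same' convolution, adequate for small kernels."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_edges(frame, rel_threshold=0.5):
    """Sobel gradient magnitude, thresholded to a binary edge mask."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d(frame, kx)
    gy = convolve2d(frame, kx.T)
    mag = np.hypot(gx, gy)
    return mag > rel_threshold * mag.max()

def dilate(mask, selem):
    """Binary dilation: OR of the mask shifted under the structuring element."""
    sh, sw = selem.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros(mask.shape, dtype=bool)
    for i in range(sh):
        for j in range(sw):
            if selem[i, j]:
                out |= padded[i:i + mask.shape[0], j:j + mask.shape[1]]
    return out

# Synthetic frame: a bright object (rows 10-19, cols 15-24) on dark water.
frame = np.zeros((40, 40))
frame[10:20, 15:25] = 1.0

# Enhance (min-max contrast normalization), detect edges, then dilate with a
# 3x3 structuring element to close small gaps in the object outline.
frame = (frame - frame.min()) / (frame.max() - frame.min())
edges = sobel_edges(frame)
closed = dilate(edges, np.ones((3, 3), dtype=bool))

# Derive information from the processed image: the object's bounding box.
ys, xs = np.nonzero(closed)
bbox = (int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max()))
```

A real implementation would run this per frame of the decoded video and feed the derived object positions to the collision-avoidance logic; IR frames would follow the same path after conversion to a single-channel intensity image.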
This work was done by Ying Jie Benjemin Toh for the Naval Postgraduate School. NPS-0004
This Brief includes a Technical Support Package (TSP). "Development of a Vision-Based Situational Awareness Capability for Unmanned Surface Vessels" (reference NPS-0004) is currently available for download from the TSP library.