
Automated retrieval systems such as the one described here could be useful in other applications where the presence of humans may be hazardous or impractical.

Automating large-scale material handling systems that pick up or retrieve items with cranes or robot arms can yield significant benefits, particularly in harsh or hard-to-access environments. The Automatic Launch & Recovery System (AutoLARS) is being developed for the U.S. Navy by Allied Systems and Concept Systems. The Navy required a system for retrieving unmanned vessels and placing them aboard Navy ships. The challenge is to guide a ship-mounted robot so that it can attach a line or fixture to an unmanned floating vehicle while both are tossing at sea.

Figure 1. A CAD Model of the End Effector is shown attached to a cylinder, picking up a weight that simulates an autonomous submarine. The image depicts the test system, which was initially constructed for the Navy as a 1/4-scale model.
The solution was to provide a robotic crane with stereoscopic vision, so that the system can see exactly where the unmanned vehicle's attachment point is, and a brain to predict where that point will be in the future as the vehicle is moved by the action of the sea. The robot crane is guided to the place where the hook or latch is predicted to be, and contact is made as the two come together.

Other solutions to the problem were tried, such as driving the unmanned boat onto a conveyor on the back of the mother ship or capturing the boat in a cargo net, but the Navy deemed none of them satisfactory.

Figure 2. The Vision System is composed of two video cameras, one located on each side of the crane. The cameras are moved by the computer so that each one centers on the target.
The solution determines the position of the target object from the parallax angle obtained by aiming the two cameras at the target. The first challenge was to make each camera point at the target object, which was done by controlling the pan-tilt units on which the cameras are mounted; pattern-matching software isolates the target from the rest of the image so that the cameras can be aimed directly at it. Because each camera reports its pointing angles in its own coordinate system, one camera's coordinates must be transformed into the other's in order to calculate the exact parallax angle between the two. From the parallax angle, the distance to the target can be calculated with high precision (approximately 0.1" accuracy with repeatability to within 2").
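As a rough illustration of the geometry only, the sketch below triangulates a target position from the pan angles of two cameras separated by a known baseline. The function name, the baseline value, and the angle convention (both angles measured from the baseline, with the right camera's reading already transformed into the left camera's coordinate system) are assumptions for the example, not the AutoLARS interfaces.

```python
import math

def triangulate(baseline_in, theta_left_deg, theta_right_deg):
    """Locate a target in the plane of the two cameras.

    Both angles are measured counterclockwise from the baseline
    (the line from the left camera to the right camera), after the
    right camera's angle has been transformed into the left
    camera's coordinate system.
    """
    tl = math.radians(theta_left_deg)
    tr = math.radians(theta_right_deg)
    parallax = tr - tl                      # angle between the two sight lines
    if abs(math.sin(parallax)) < 1e-9:
        raise ValueError("sight lines are parallel; no position fix possible")
    # Law of sines in the camera-camera-target triangle gives the
    # range from the left camera to the target.
    range_left = baseline_in * math.sin(tr) / math.sin(parallax)
    x = range_left * math.cos(tl)           # along the baseline (inches)
    y = range_left * math.sin(tl)           # out from the baseline (inches)
    return x, y

# Example: cameras 60 in. apart, target nearly straight ahead.
print(triangulate(60.0, 80.0, 100.0))       # -> (~30.0, ~170.1)
```

Note how the achievable precision depends on the parallax angle: as the target moves farther away, the sight lines approach parallel and small angular errors translate into larger range errors.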

The next challenge was to develop the computer algorithms to predict the path the target will take during the time required to move the robot crane to intercept it. Engineers ultimately arrived at an algorithm that, from the parallax-angle data supplied by the camera subsystem, estimates optimal target positions both forward and backward in time. Only when the actual past positions of the target match the model's predicted past positions does the system declare the model's predicted future position valid for hookup. This validation step maximizes the chance of success by filtering out random target motions for which the model would not generate accurate predictions.
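The actual AutoLARS motion model is not published, but the validate-by-hindcast idea can be sketched with a simple stand-in: fit a model to the newest samples, check that it also explains the oldest (withheld) samples, and only then trust its forward prediction. The quadratic fit, hold-back count, and tolerance below are all assumptions for illustration.

```python
import numpy as np

def predict_if_consistent(t, z, t_future, holdback=5, tol_in=1.0):
    """Predict the target position at t_future, but only if the
    motion model also explains the recent past.

    t, z     -- sample times (s) and measured positions (in.), oldest first
    holdback -- number of oldest samples withheld for validation
    tol_in   -- allowed RMS hindcast error before the prediction
                is declared invalid (hypothetical tolerance)
    """
    t, z = np.asarray(t, float), np.asarray(z, float)
    # Fit a quadratic motion model to the newest samples only ...
    coeffs = np.polyfit(t[holdback:], z[holdback:], deg=2)
    # ... then hindcast the withheld oldest samples.
    hindcast = np.polyval(coeffs, t[:holdback])
    rms = np.sqrt(np.mean((hindcast - z[:holdback]) ** 2))
    if rms > tol_in:
        return None                          # random motion: no valid hookup
    return np.polyval(coeffs, t_future)      # model validated: predict ahead

# Example: 2 s of 15 Hz samples of a slow wave-like heave plus noise.
ts = np.arange(0.0, 2.0, 1.0 / 15.0)
zs = 12.0 * np.sin(0.8 * ts) + 0.1 * np.random.randn(len(ts))
print(predict_if_consistent(ts, zs, t_future=2.5))
```

Returning None rather than a poor prediction mirrors the article's point: the system would rather wait for a motion sequence the model can explain than attempt a hookup it cannot trust.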

The robot itself is hydraulically operated and capable of moving very quickly: with a 2G acceleration rating, it can accelerate the end of the robot arm at twice the rate of a free-falling object. High-speed operation is required to minimize the length of the moves the system must predict, and thereby to maximize the accuracy of the target-acquiring motion. The vision system updates the image 15 times per second and feeds targeting information to a high-speed PC with a motion control card, which issues commands to a high-speed servo motion system that in turn drives the three axes of the robot arm.
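One possible shape for that 15 Hz guidance cycle is sketched below. The three callables stand in for the camera, prediction, and motion-control-card interfaces, none of which are published; the 0.5 s look-ahead is likewise an assumption, not a system specification.

```python
import time

DT = 1.0 / 15.0          # vision update period: 15 frames per second
LEAD_TIME = 0.5          # seconds of look-ahead (assumed, not published)

def control_loop(get_target_fix, predict, command_axes):
    """Hypothetical 15 Hz guidance cycle.

    get_target_fix() -> (t, x, y, z): timestamped stereo fix on the target
    predict(t)       -> predicted (x, y, z) at time t, or None when the
                        motion model failed its hindcast validation
    command_axes(p)  -> send three-axis setpoints to the servo system
    """
    while True:
        start = time.monotonic()
        t, *_ = get_target_fix()            # stereo fix from the cameras
        setpoint = predict(t + LEAD_TIME)   # where the hook should be soon
        if setpoint is not None:            # move only on a validated model
            command_axes(setpoint)
        # Sleep out the remainder of the frame to hold the 15 Hz rate.
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```

The loop makes the article's timing argument concrete: the faster the arm, the smaller LEAD_TIME can be, and the less the prediction model is asked to extrapolate.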