
Robotics Industry Insights

Robotic Vision Advances Center Around Software Technology Leaps

by James F. Manji, Contributing Editor
Robotic Industries Association

Advances in robotic vision software are opening up new vistas for applications in both automotive and non-automotive industries. These advances center on geometric and pattern-recognition analysis that eliminates some of the previous drawbacks in shape recognition.

"The methods that were available before were blob analysis, which required objects to be separated from their background  by lighting variations and template matching, but which really couldn't handle object rotation and size and shading variations," explains Bill Silver, chief technology officer for Cognex Corp. (Natick, MA). "There were many applications where users wanted to guide robots in certain applications, but they were limited because current techniques couldn't find objects. What we did at Cognex was to teach vision systems any pattern just by drawing a box around the object. This method of object location would teach any pattern regardless of size, shading, or angle variations. That method is called PatMax."

PatMax can accurately locate objects in instances where they vary in size or orientation, when their appearance is degraded, and even when they are partially hidden from view. To locate objects, traditional machine vision systems have relied on analyzing pixel-grid values, a process known as correlation. This method locates objects by comparing a grey-level model, or reference image, of the object to the image. The X-Y position at which the model best matches the image is calculated and, from this, the object's location is determined. When faced with real-world manufacturing problems such as inconsistent lighting, process variations, and occlusion, correlation methods can fail altogether.
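
The correlation idea can be sketched in a few lines of Python with OpenCV's template-matching routine. This is a generic illustration rather than any vendor's implementation, and the file names are placeholders.

```python
# Minimal correlation-based object location: slide a grey-level model over the
# image and take the best-scoring X-Y position as the object's location.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical image
model = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # grey-level model

result = cv2.matchTemplate(scene, model, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# Note there is no allowance for rotation or scale change -- the weakness that
# geometric methods were developed to overcome.
print(f"best match at {max_loc} with score {max_val:.2f}")
```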

PatMax uses geometric information in place of pixel grid-based correlation. For example, it interprets a square as four line segments and a football as two arcs. It does this by applying a three-step geometric measurement process to an object. PatMax first identifies and isolates the key individual features within an object image and measures characteristics such as shape, dimensions, angle, arcs, and shading. It then compares the spatial relationships between the key features of the trained image and those of the runtime image, encompassing both distance and relative angle. By analyzing the geometric information from both the features and the spatial relationships, PatMax is able to precisely and repeatably determine the object's position, and it does so without regard to the object's angle, size, or appearance.
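
The geometric idea can be illustrated in generic form by matching a part's extracted contour rather than its raw pixels. The Python/OpenCV sketch below is only a conceptual stand-in for PatMax: it uses Hu-moment shape comparison, which, like the technique described above, is insensitive to position, rotation, and scale.

```python
# Conceptual sketch only -- NOT PatMax -- of comparing geometry (contours)
# instead of pixel-grid values. File names are placeholders.
import cv2

trained = cv2.imread("trained_part.png", cv2.IMREAD_GRAYSCALE)
runtime = cv2.imread("runtime_image.png", cv2.IMREAD_GRAYSCALE)

def largest_contour(img):
    # Extract edges and keep the biggest outer contour as the part outline.
    edges = cv2.Canny(img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

part_model = largest_contour(trained)
candidate = largest_contour(runtime)

# Hu-moment comparison is invariant to translation, rotation, and scale.
score = cv2.matchShapes(part_model, candidate, cv2.CONTOURS_MATCH_I1, 0.0)
print(f"shape difference (0 = identical): {score:.4f}")
```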

By reliably locating objects under conditions that defeat correlation methods, PatMax withstands changes in appearance caused by process variations, inconsistent lighting, and other manufacturing problems. PatMax can even locate objects when a significant portion of the object is missing or blocked, a common problem in robotic pick-and-place applications where overlapping parts can confuse a robot and hinder its ability to locate an object.

In application, PatMax has been able to reduce fixturing costs by half at Wisconsin-based Ganton Technologies. The company manufactures die-cast engine parts for the Big Three automakers. Its robot-based machine vision system runs PatMax object location software and finds engine parts moving at random on a conveyor and, despite wide variances in lighting, angle, or scale, sends their coordinates to a robot for fast, accurate handling. The Cognex vision system with PatMax has increased product throughput and machine efficiency while reducing equipment costs, fixturing, programming, and labor. "With the machine vision, we no longer have a need for pallet tooling or staging fixture tooling," says Curt Pape, manager of manufacturing engineering at Ganton. "This has reduced our conveyance fixturing costs by approximately half."

FANUC Robotics North America Inc. (Rochester Hills, MI) reports the same type of advancement in robotic vision software. "We're also using geometric-based tools for the location of parts regardless of lighting, glare, size change, and part orientation," says Ed Roney, senior product manager for vision at FANUC Robotics. In fact, FANUC Robotics' VisLOC features a simplified, automatic circle-grid calibration process that allows the user to calibrate the cameras in a matter of minutes. Its vision tool is powered by Cognex's PatMax. With FANUC Robotics' VisLOC, its robots are now used for machine load/unload, material handling, packaging, and assembly applications.
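
Circle-grid calibration of the general kind Roney mentions can be sketched with OpenCV's built-in routines. The grid dimensions, spacing, and image below are assumptions for illustration; this is not the VisLOC procedure itself.

```python
# Hedged sketch of camera calibration from a single circle-grid view.
import cv2
import numpy as np

pattern_size = (7, 6)          # hypothetical grid: 7 x 6 circles
spacing_mm = 10.0              # hypothetical center-to-center spacing
image = cv2.imread("calibration_view.png", cv2.IMREAD_GRAYSCALE)

found, centers = cv2.findCirclesGrid(image, pattern_size,
                                     flags=cv2.CALIB_CB_SYMMETRIC_GRID)
if found:
    # Physical grid coordinates on the Z = 0 plane, in millimeters.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0],
                           0:pattern_size[1]].T.reshape(-1, 2) * spacing_mm

    # One view is enough for a rough sketch; real calibrations use several.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [objp], [centers], image.shape[::-1], None, None)
    print("camera matrix:\n", K)
```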

FANUC Robotics' VisTRAC goes a little further than VisLOC in that it includes the hardware and software necessary to address the complexity of locating and tracking loosely fixtured or non-fixtured moving parts. VisTRAC is designed for use in either linear or circular tracking modes. It uses a high-speed signal interface to the robot to synchronize an encoder count with the VisLOC image-acquisition process. VisTRAC also includes a progressive-scan camera that provides an image of a moving part for vision analysis. Available on FANUC Robotics' R-J3 robots, the programming for both vision and tracking is handled through standard teach pendant programming.
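
The linear-tracking arithmetic behind such systems is straightforward: latch the conveyor encoder count when the image is acquired, then offset the part's vision-measured position by the conveyor travel since that moment. The sketch below uses a made-up encoder resolution and is not FANUC Robotics' implementation.

```python
# Simplified conveyor-tracking offset (illustration only).
MM_PER_COUNT = 0.05  # hypothetical conveyor travel per encoder count

def predicted_part_position(x_at_snap_mm, count_at_snap, count_now):
    """Shift the vision-measured X by the conveyor travel since the snapshot."""
    travel_mm = (count_now - count_at_snap) * MM_PER_COUNT
    return x_at_snap_mm + travel_mm

# Example: part seen at X = 120.0 mm when the encoder read 5000; it reads 5800
# when the robot is ready to pick, so the part is now 40 mm farther downstream.
print(predicted_part_position(120.0, 5000, 5800))  # -> 160.0
```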

FANUC Robotics has also teamed up with Robotic System Integrators Inc. (RSI) (Rapid City, S.D.) to integrate its VisLOC in a first-of-its-kind robotic system for grinding and/or polishing of jewelry. The grinding and polishing system uses the LRMate 200I robot and VisLOC to create a flexible material-removal application. The application provides jewelry manufacturers with higher throughput, consistent part quality, and production flexibility. The robot cell also takes over monotonous and injury-prone jobs and should help reduce manufacturers' sensitivity to labor fluctuations dictated by market irregularities.

Geometric pattern matching to find product is also at the heart of the machine vision advances at Insight Integration Inc. (Lansing, MI), part of ISRA Vision Systems, L.L.C. "Machine vision is being used to pick up car body components, like door panels, hood panels, and fenders, that are typically laid on a rack or stacked vertically," explains Dave Dechow, president of Insight Integration. "One interesting application is windshield glass that needs to be positioned and placed in the car. We use 3D analysis to look at the opening of the car body and optimize that for the placement of the glass by a robot."

ISRA-BrainWARE uses only one "eye" to ascertain an object's spatial position, instead of the customary two. With a camera installed on the robot, an item can be located and recognized on a pallet. One advanced application of this software is the application of PVC to the seams of automotive car bodies. BrainWARE is able to spray the seam accurately, applying no more and no less than is needed. The cost-intensive precise positioning of each individual piece is eliminated, because the robot's vision system knows where to go, even with different pieces.
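
Determining spatial position from a single camera generally relies on knowing the geometry of the object being viewed. The sketch below illustrates the principle with OpenCV's solvePnP routine; the feature coordinates and camera intrinsics are invented for illustration, and this is not the BrainWARE algorithm.

```python
# Single-camera pose estimation from known object geometry (illustration only).
import cv2
import numpy as np

# Known 3D feature points on the part, in its own coordinate frame (mm).
object_points = np.array([[0, 0, 0], [100, 0, 0], [100, 60, 0], [0, 60, 0]],
                         dtype=np.float32)
# Where those features were found in the single camera image (pixels).
image_points = np.array([[320, 240], [540, 250], [535, 380], [315, 370]],
                        dtype=np.float32)
# Camera intrinsics from a prior calibration (hypothetical values).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print("rotation vector:", rvec.ravel())
print("translation (mm):", tvec.ravel())
```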

Insight Integration has also been involved in a major installation. The company has incorporated off-the-shelf machine vision hardware and software with its own image-processing software at Vauxhall Motors (Ellesmere Port, England). It has deployed a robotic imaging system for the depalletization of engine blocks.

During operation, individual engine blocks are first located using a 75XCE CCIR-based camera from Sony Electronics (Park Ridge, NJ). After the camera images are digitized with a PC-based PCEye frame-grabber board from American Eltec (Princeton, NJ), they are transferred across the PCI bus to a Pentium-based PC. To locate the individual engine blocks stored on a pallet, Insight Integration incorporated HexSight vision software from HexaVision Technologies (Sainte Foy, Quebec, Canada).

"Traditional vision systems use gray-scale correlation methods to locate such parts," says Jean-Noel Berube, HexaVision vice president, sales and marketing. "In contrast, HexSight uses the geometry of parts to locate them in a two-dimensional field of view. After finding the contours of parts in the image, the position, scale, and orientation of each part is identified and sent to a robot or machine controller to guide parts handling."

In the system deployed at Vauxhall, imaging information is sent over an RS-232 link to a robot controller from Kuka (Augsburg, Germany). This information directs the Kuka robot over a large pallet of engine blocks, locates a designated engine block, and instructs the robot to lift the engine block from the pallet and place the part onto a conveyor belt. After placement, the engine block is transferred to a workstation for installing automotive components.
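
The coordinate hand-off itself can be as simple as writing a formatted string to a serial port. The sketch below uses Python's pyserial package with an assumed port name, baud rate, and message format, not the protocol actually used at Vauxhall.

```python
# Hedged sketch of sending vision results to a robot controller over RS-232.
import serial  # pyserial package

with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1) as link:
    x_mm, y_mm, angle_deg = 412.5, 180.2, 27.0  # example vision result
    message = f"{x_mm:.1f},{y_mm:.1f},{angle_deg:.1f}\r\n"  # assumed format
    link.write(message.encode("ascii"))
    reply = link.readline()  # e.g. an acknowledgement from the controller
    print("controller replied:", reply)
```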

Like the other advances in machine vision, HexSight uses the geometry of the part to locate it in a 2D field. Its algorithms find the contours of parts in the image and identify the position, scale, and orientation of each part.

For semiconductor and electronics manufacturing applications, such as robot wafer handling and IC assembly, Imaging Technology (Bedford, MA) utilizes its SMART Search pattern location software. SMART employs geometric-based modeling to adapt to a wide variety of process variations.

Unlike traditional correlation-based software, SMART's GeoSearch engine creates geometric-based models that are an intelligent representation of the contours extracted from an image.  These models can be edited to delete unstable or unwanted features of the pattern that may impact Search results.  This can dramatically increase the accuracy and robustness of the vision application.

SMART also reports distinct information about the variations in images relative to trained patterns.  With traditional correlation-based vision algorithms, information about an object is based on a single measure:  conformance (the percentage of the pixels in the trained pattern found in the searched image).  SMART's geometric-based Search, on the other hand, provides additional information about the kind of changes that have occurred in the process, such as a part's position, angle, quality, score (similarity between the trained and searched object), and GeoQuality (similarity between contours of the trained and searched part).  In addition, SMART reports the size (scale) variation of a part.  This helps keep the manufacturing process running at top speeds, reduces the need for manufacturers to install expensive part feeding and handling equipment, and can eliminate costly lighting devices.
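
A hypothetical container for that kind of per-match information might look like the following; the field names simply mirror the quantities listed above and do not reflect Imaging Technology's actual interface.

```python
# Illustrative data structure for a geometric-search match report.
from dataclasses import dataclass

@dataclass
class MatchResult:
    x: float            # part position
    y: float
    angle: float        # rotation relative to the trained pattern (degrees)
    score: float        # similarity between trained and searched object
    geo_quality: float  # similarity between trained and searched contours
    scale: float        # size of the found part relative to the trained one

result = MatchResult(x=241.3, y=118.7, angle=-3.5, score=0.96,
                     geo_quality=0.91, scale=1.02)
print(result)
```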

Adept Technology (San Jose, CA) has the longest sustained track record and biggest installed base in vision-guided robots: 17 years and thousands of systems. "The first robot we shipped was equipped with vision, and we continue to install AdeptVision on 30-40% of our systems," notes Joe Campbell, vice president of marketing at Adept. Campbell says that using geometric features for recognition is nothing new for Adept. "Our ObjectFinder is actually the second-generation geometric guidance tool from Adept, as our original prototype finder tool dates from the mid-'80s."

Adept's ObjectFinder is a gray-scale tool that locates touching and overlapping parts in adverse and varied lighting conditions. It is optimized for robot guidance with features like one-click training, editable models, and full 360-degree location. It can locate multiple parts in the field of view.

Campbell says that unlike other vision software products, AdeptVision is part of Adept's robot controller and uses the same programming language and application software.


Originally published by RIA via www.robotics.org on 07/17/2000
