Robot Vision – Technical Issues
by Nello Zuech, President, Vision Systems International, Consultancy
Robotic Industries Association
Posted 08/03/2003
Much of the early work in the field of machine vision was driven by robot guidance applications. In the mid- to late-1970s, the National Science Foundation sponsored research at the Stanford Research Institute under Charles Rosen that led to pioneering work on global feature analysis algorithms. The goal was to recognize different parts coming down a common conveyor, determine each part's specific location on the conveyor, and deliver that location data to a robot so it could pick the part off the conveyor and sort it into the proper carton. At the same time, NSF sponsored Dr. John Birk's work at the University of Rhode Island on the classic "bin-picking" application.
In 2002, the North American market for machine vision-based robot guidance was about $30M, representing about 2.5% of the total North American machine vision market. Significantly, this is the "wholesale" value of the machine vision, as the robot supplier/integrator undoubtedly marks up his costs and adds system integration and application engineering value. In other words, the true market value of the machine vision portion of robot-guided applications is most likely $75-90M.
As with other articles, the material for this article was solicited from companies known to be suppliers of machine vision systems for robot guidance applications. A number of questions were developed and forwarded to representatives of these companies, and what follows are their responses to those questions in a "round table" discussion format.
Participants in this discussion included:
- Steve Annen, Dr. G. Sudhir, and Manish Shelat, Adept Technology
- John Keating, Cognex Corporation
- John Clark, Epson Robotics
- Walt Pastorius, LMI
- Ed Roney, Fanuc Robotics America, Inc.
- Adil Shafi, Shafi, Inc.
1. When it comes to a vision guided robot system, is it a hardware or a software issue?
Steve Annen, Dr. Sudhir & Manish Shelat: "True vision guidance in robotics is both a hardware and a software issue. Many robot companies are showing cameras mounted to robots and using the label 'vision-guided robot.' In reality, these are simply inspection cameras mounted to a robot, sending position offsets from a nominal location over to the robot controller via a slow communication protocol.
A 'real' vision guided robot system is one where one or more cameras are integrated with the robot controller, locate randomly oriented objects in the camera's field of view, and generate a robot transformation that identifies the location and orientation of each object. Achieving this capability requires both a robust robot-to-camera calibration utility and an accurate model, in the robot controller, of the robot it is controlling. In addition, the robot programming language must have transformation variables for developing the real location of the object by combining robot and vision location information."
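The transformation arithmetic described above can be sketched in a few lines. The following Python fragment is a minimal illustration, not any vendor's actual API: all poses and numeric values are hypothetical, and the planar (2D) case is used for brevity. It composes a fixed camera-to-robot calibration with a part location reported by the vision system to produce the pick pose in robot base coordinates:

```python
import numpy as np

def pose2d(x, y, theta):
    """Homogeneous 3x3 transform for a planar pose (translation + rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

# Assumed calibration result: the camera frame expressed in the robot base
# frame (hypothetical values, millimetres and radians).
T_base_cam = pose2d(400.0, 150.0, np.pi / 2)

# The vision system locates the part in camera coordinates.
T_cam_part = pose2d(12.5, -3.0, 0.35)

# Composing the two transforms yields the part pose in robot base
# coordinates -- the "robot transformation" the controller needs for a pick.
T_base_part = T_base_cam @ T_cam_part

x, y = T_base_part[0, 2], T_base_part[1, 2]
theta = np.arctan2(T_base_part[1, 0], T_base_part[0, 0])
print(f"pick at x={x:.1f} mm, y={y:.1f} mm, angle={np.degrees(theta):.1f} deg")
```

A full 3D system does the same composition with 4x4 transforms and the robot's kinematic model, but the principle is identical: the camera-to-robot calibration is just one more transform in the chain.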
John Clark: "Vision guided robots use software to determine the location and orientation of a part or parts in the camera's field of view. Once the part location and orientation coordinates are known, the robot re-orients the end-of-arm tooling so that it is aligned with the parts."
Walt Pastorius: "Both are important issues, depending on the application. A basic system, for example one using a 1D sensor to control robot standoff, can be relatively simple. A complex system using a 3D vision sensor operating in an absolute coordinate system may require a comprehensive suite of 3D vision algorithms, calibration and temperature compensation software, as well as data reporting and analysis software, all integrated into a common user interface."
John Keating: "It's mainly a software issue because the first step in any vision-guided robot application is finding the part accurately. That's why Cognex has invested heavily in PatMax and its derivative technologies, as well as robot calibration software. Without a good algorithm, even the best vision system hardware or most precise robot will have difficulty meeting application requirements."
Ed Roney: "Software. Calibration, coordinate frame mathematics and true 3D (6 degrees of freedom) calculations are extremely complicated, especially if accuracy is a requirement. A robotic vision system needs to include standard tools that provide these calculations without the integrator or end user investing in research and extensive custom programming."
2. What are critical hardware issues? What are the concerns associated with lighting, cameras, etc.?
Keating: "On the hardware side, the emphasis is on small form factors and lightweight designs: for example, compact cameras suitable for mounting on a robot arm, where low mass is important. Other hardware considerations include the need for continuous-flex cables, industrially hardened connectors, and the signal transmission limits of various data formats. With lighting, you have to think about shadows as the robot or other objects move in and out of the scene, and shrouding techniques can be impractical for larger work areas. Consequently, poor lighting can't be corrected in many cases, so the software algorithms must be powerful enough to compensate for it."
Clark: "In a vision application, lighting (both ambient and source) is a critical element of success. The type of light and the angle at which it hits the part can determine the vision system's success in accurately locating the part and sending back location coordinates." Walt Pastorius concurs on the lighting issue: "Lighting, whether laser, LED or any other type, should always be an integral part of the vision sensor so it is properly applied and controlled. Lighting is often a more critical issue in robotic-based systems than in fixed-sensor systems, where the sensor can be placed in a variety of locations and orientations. In a fixed-sensor application, if intense ambient light confuses the sensor, such as sunlight coming through a window, it can be mechanically blocked. For robotic applications, the sensor selected should be insensitive to ambient light in all positions.
If the sensor is mounted on the robot, it must be robust enough to perform reliably over the long term in the dynamic environment of the robot. Cables to the vision sensor should be of a high-flex type to survive constant motion."
Annen, Sudhir & Shelat: "Hardware encoder latching and asynchronous triggering of cameras and strobes are critical hardware issues. Integration of vision and motion hardware on the same backplane or on a high-speed bus like FireWire is important since it keeps communication overhead to a minimum. In order to perform on-the-fly imaging using a robot-mounted camera, the camera has to be triggered at high shutter speeds, or a strobe is required.
From the camera's viewpoint, appropriate hardware is required for several functions, such as:
- Providing external sync signals to genlock multiple cameras
- Providing trigger pulses to reset cameras asynchronously
- Providing pulses to strobe lights in sync with the camera's frame integration cycle
- Additionally, in the case of FireWire cameras, embedded software may be required for time-stamping the images."
Roney: "Reliability. Whichever hardware platform and devices are selected for the application, they must be appropriate for the environment that the robot system is working in. The reliability of the vision system should be no less than the reliability of the robot."
Adil Shafi: "Off-the-shelf ease of purchase, robustness and ease of supportability."
3. What are critical software issues?
Shafi: "Ease of use. Robustness. Content suited to factory-floor perspectives, i.e., the day-to-day viewpoint of the operator, the maintenance technician, and the plant engineer."
Pastorius: "From the user perspective, ease of use is critical. Programming the robot, setting up the vision sensor and managing the sensor/robot interaction need to be simple. Integration of the robot and sensor user interfaces should be maximized."
Annen, Sudhir & Shelat: "The key software issue is the ability of one or more cameras, integrated with the robot controller, to locate randomly oriented objects in the field of view of the camera and generate a robot transformation that identifies the location and orientation of the object. Achieving this capability requires both a robust robot-to-camera calibration utility and an accurate model, in the robot controller, of the robot it is controlling. In addition, the robot programming language must have transformation variables for developing the real location of the object by combining robot and vision location information. Robot controllers that have a well-developed utility for calibrating cameras in different orientations and a library of robot kinematic modules are best suited for true vision guided robotic applications."
Roney: "Most robotic system integrators are focused on the development of the robot application, not the vision system. They need a vision solution that is easy to use and that does not require custom programming. Integrators and end users want to deploy machine vision, not develop it."
Clark: "The software must be capable of accurately gathering part position data and relaying it back to the robot. From a typical programmer's perspective, the other key elements are how well integrated the vision software is with the robot's programming software and how easy it is to use. A tightly integrated, easy-to-use system cuts down on development time, thereby reducing overall costs."
Keating: "Even though lighting is a hardware issue, in a sense it really ends up being a software issue, where algorithms must compensate for non-linear lighting changes. For example, shrouds to block out sunlight or reflections can be impractical when lighting large areas or big parts with complex shapes. Consequently, the software algorithms must tolerate lighting variations as well as other potential scene variations such as part orientation, touching or overlapping parts, or even the scale changes that are common in stacking and un-stacking applications. Other critical software issues include ease of setup, integration, and training for changeovers."
4. What is the best way to calibrate a vision guided robot system? How and how often would you recommend such a calibration procedure be performed?
Keating: "Robot calibration is very application dependent. Some applications may require periodic calibration only at changeovers, for instance. Others may opt to monitor calibration status and notify the operator when it's time to re-calibrate. Still others, such as high accuracy or very precise applications, may require building continuous 'on-the-fly' calibration into the guidance operation itself."
Pastorius: "Calibration requirements can vary significantly with the application. A simple guidance application, where the sensor provides an offset to the robot to insert a part into an assembly, may require no calibration. A more complex application, which uses data from multiple sensors for guidance or which involves measuring objects with the vision robot, can require regular calibration, or at least verification of calibration. Calibration in some cases can be achieved with master artifacts of nominal dimension. For large flexible objects, for example a car body, creation of a master artifact is often impractical. In these cases, portable coordinate measuring systems that can be set up at the robotic cell provide calibration capabilities for the system."
Shafi: "A wizard-guided procedure. At best, there should be a self-check at a customer-selected frequency to make sure the system is capable and under control, and that this is logged to ensure a statistical record for the customer's customer."
Roney: "Calibration depends on the accuracy required and on whether the camera is fixed-mounted or traveling on the robot's end-of-arm tooling. The best calibration methods take into account the application's workspace coordinate frame as well as the robot's end-of-arm tool frame to reduce overall system errors. The frequency of calibration depends on the application and the potential for camera or tooling movement. Any time there is the potential for the relationship between the camera and the robot's tooling to change, calibration needs to be done on a frequent basis. Automatic calibration methods between the robot and vision system make calibration verification, and if needed recalibration, a simple task."
Clark: "The best way to calibrate a vision guided robot is determined by how well integrated the vision software is with the robot software. Normally, re-calibration is required when something in the system changes (lighting, focal length, etc.)."
Annen, Sudhir & Shelat: "The best way to calibrate a vision guided robot is to use a target and have the robot repeatedly take pictures, locate the object, acquire the object with a gripper, and place the object back in the field of view. This allows for an extremely robust camera calibration and generates adjustments for perspective errors. The transformation used by vision guidance software is sensitive to the camera location, either with respect to the robot world coordinate frame for fixed cameras, or with respect to the joint the camera is mounted on for robot-mounted cameras. Calibration should be performed if the size of the field of view changes. The following actions will change the size of the field of view:
- Adding or removing extension tubes, or changing the camera or lens
- Changing the lens focus setting or the lens aperture setting (f-stop)
- Changing the distance of the camera to the work surface
Once calibration is complete, secure the lens' focusing and aperture setting rings so they cannot turn due to tampering or vibration. This is especially true of robot-mounted cameras."
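The grip-and-replace loop described above ultimately produces a mapping from image coordinates to robot coordinates. As a minimal sketch, assuming an ideal lens and a planar work surface, a simple affine pixel-to-robot map can be fitted by least squares from a handful of target points. The point data below is idealized (an exact 0.1 mm-per-pixel scale) purely for illustration; real calibrations use many more points and model lens and perspective distortion as well:

```python
import numpy as np

# Pixel coordinates of dots on a calibration target (hypothetical picks),
# and the same points as touched off by the robot gripper, in millimetres.
pixels = np.array([[100.0, 100.0], [500.0, 120.0], [120.0, 420.0], [480.0, 400.0]])
robot_xy = 0.1 * pixels   # idealized: exactly 0.1 mm per pixel, no offset

# Fit a 2D affine map  world = A @ pixel + b  by least squares.  Four or more
# well-spread points over-determine the six unknowns and average out noise.
ones = np.ones((len(pixels), 1))
M = np.hstack([pixels, ones])                           # N x 3 design matrix
params, *_ = np.linalg.lstsq(M, robot_xy, rcond=None)   # 3 x 2 solution

def pixel_to_robot(u, v):
    """Map an image point into robot coordinates using the fitted calibration."""
    return np.array([u, v, 1.0]) @ params

print(pixel_to_robot(300, 260))   # a part found near the image centre
```

Re-fitting this map after any change to the field of view (lens, focus, standoff distance) is exactly why the actions listed above trigger recalibration.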
5. Are there some specific integration issues that a prospective buyer should know about?
Annen, Sudhir & Shelat: "A prospective buyer should first figure out the level of vision guidance required for his or her application. Make sure that the software can handle various robot and camera orientations. Communications overhead between the vision system and the robot system should be minimal. How does the vision system interface with the robot controller? Is it using slow communication protocols like RS-232, or is it using a high-speed bus like FireWire? Ideally, vision and motion hardware should be on the same backplane, such as VME, ISA, PCI, or PMC. A lot of people think that using a smart camera or a stand-alone vision system and generating offsets is guidance vision. In reality, a real vision guidance system uses transformations and performs complex operations like vision-guided conveyor tracking and odd-form electronic assembly. Hence, integrating the vision subsystem with the robot world coordinate system is important. Buyers should also look into camera and light control issues. Last but not least is the cycle time issue. Prospective buyers should make sure that the product meets their robot cycle time requirements."
Clark: "It's a lot less time consuming, and less expensive over the long run, to buy a vision system supported by the robot manufacturer. When looking at robot vision systems, buyers should seek out those systems that are seamlessly integrated into the robot software. In addition, look for a system that uses point-and-click teaching methods; this greatly reduces the amount of software code that the programmer will have to write."
Shafi: "Yes, the buyer should buy from companies that have successfully done these projects before. Trying new capabilities on a project's timeframe can degrade credibility."
Pastorius: "Unless the prospective buyer is capable of carrying out system integration, selecting a supplier experienced in supplying integrated systems is important."
Keating: "Various robot controller protocols mean that integrators must ensure that the vision system will accommodate a particular protocol, or that the robot controller has an open communications scheme. Training is critical in terms of ease of use. Integrators must plan in advance how operators will program the system on the factory floor during changeovers so that it is a very simple thing to do for the end user. Noise is another integration issue common to robotic applications. Analog camera signals, for example, may be susceptible to electromagnetic interference generated by the robot's motors and drives. Since many applications tend to grow in scope or expand over time, it's important to make sure that the system's vision tools aren't working at the limit of their capability in terms of speed, accuracy, or robustness."
Roney: "The vision system selected should have the tools needed for robotic applications. Most general purpose machine vision systems are good at finding and measuring an object in the camera's field of view, but lack the tools needed to make that information useful to the robot. Select a system intended for robotics."
6. What does it take to succeed with a vision guided robot system application from a supplier's perspective?
Shafi: "Ease of acceptance and ownership by the customer. Continued performance and repeat orders."
Pastorius: "The highest level of integration provides the simplest system to the user. Simplicity of programming both the robot and the vision sensor leads to success after installation."
Annen, Sudhir & Shelat:
- "Robust robot-to-camera calibration utility
- Camera calibration utility for calibrating cameras in different orientations like fixed downward looking, fixed upward looking, and robot mounted
- Ease of deployment and integration of product
- Seamless calibration of both vision and motion systems
- Minimum communications overhead between vision and motion system
- Products that cover a wide range of applications
- Customer training and support"
Roney: "Do not overestimate what the vision system can do. Test the application with real parts, and consider the environment that the system will need to perform in. Pay close attention to accuracy, as robots are very repeatable but require careful camera-to-tool calibrations to be accurate."
7. What does it take to succeed with a vision guided robot system application from a customer's perspective?
Annen, Sudhir & Shelat: "A good definition of both vision and motion requirements, and consultation with a reliable and proven vendor to evaluate the application scenario with a cost/benefit analysis."
Keating: "Choose a machine vision supplier that not only offers a full range of products, but also provides a direct line of contact for support."
Pastorius: "Specifications should be as comprehensive as possible, including all elements required for the application to be successful."
Shafi: "Ease of deployment. Consistent performance. Ease of supportability by operators. Minimal support by plant engineering. Reduction in tooling and mechanical costs with the flexibility gained. Ease of switching to new products or processes."
8. What advice would you give to someone planning to buy a vision guided robot system?
Keating: "Work with an integrator that is knowledgeable in all aspects of robots and machine vision. Don't let someone else make the vision choice for you. Make sure that the machine vision vendor has the breadth of product that can help you grow in scope and offers a direct line of contact for technical support."
Pastorius: "Unless the user is well skilled in vision sensing, robots and the integration of the two, it is recommended to obtain the system as an integrated package, with all elements included. Make sure that the vision sensor to be used is capable of providing all functions required, and does so in the required cycle times."
Annen, Sudhir & Shelat: "Prospective buyers should look for vendors that have the tools to perform 'real' vision guidance applications. Beware of vendors that consider vision guidance to be the ability to send position offsets from a nominal location over to the robot controller via a slow communication protocol. Buyers should also look for robust camera calibration utilities that can calibrate robots and cameras in any mounting orientation. Robot controllers should have accurate kinematic models, transformation variables, and vision commands that make it easy to combine robot and vision transformation information. Evaluate only experienced vendors that offer integrated motion and vision solutions."
Shafi: "Do your homework. Select the vendor carefully. Be prepared to support the vendor. Establish a clear scope of work, clear milestones, and clear commercial details."
Roney: "The robot supplier has the best knowledge of which vision systems have been successfully applied in robotic solutions. Look for examples of past installations where the requirements were similar to those needed. Consider support as well. Will support for the vision system come from a different company than the robot supplier? If there is an issue with the overall performance between the robot and vision system, which party will take the lead in resolving the concern?"
9. How does one recognize a good application for a vision guided robot system?
Annen, Sudhir & Shelat point to "a repetitive need for pick-and-place of moving objects, and applications with a short cycle time where the object cannot be stopped for inspection," as flags for good opportunities.
Pastorius: "'Good' applications are extremely varied. In some cases, vision guided robots offer the only practical solution to automating an operation. In other cases, vision guided robots can replace multiple mechanical tools with a single robot station, with dramatic overall cost savings. Another area where robots with vision provide value is 'virtual fixturing', where position data from the vision sensors is used to determine the actual position of an object that is only roughly located. The sensor data is used to offset the robot path to match the actual part location for further processing, assembly, measurement or other operations. This can eliminate the cost of expensive precision fixturing or allow automation of operations that previously required manual processing."
Shafi: "Vision feasibility," and applications where a "large mechanical cost structure" can be avoided.
Clark: "Robot vision applications fall into two general categories: finding part orientation for robotic handling, or inspection of the part prior to subsequent operations. Feeder bowls and hard-tooled fixtures are two common alternatives to vision orientation. Often it comes down to cost and flexibility. A vision system, because it is reprogrammable, has greater re-use flexibility than fixtures or feeder bowls. If there is a high mix of parts, then the cost of separate fixtures or feeders can be very expensive compared with a vision system. A vision system also makes sense if the product has a short life. Vision inspection is done to verify that the part is configured correctly. Many companies automate this operation because the cost of a mistake (i.e., a defective part being included with a shipment of good parts) is very high. No supplier wants to be responsible for causing an expensive product recall."
10. What might one expect to spend for a vision guided robot system? How does one justify a vision guided robot system?
Clark: "Ten thousand to twelve thousand dollars for a vision system that is integrated into the robot programming software."
Pastorius: "Costs for a vision robot can vary a great deal, depending on the costs of the robot selected, the sensor selected, and what other components are needed for the specific application, such as temperature compensation or data reporting. Basic justifications can be done by conventional comparisons to costs for other methods, but don't forget to factor in the value of quality improvements often achieved through automation."
Shafi: "Justify against manual labor. One will also gain additional bonuses in the areas of: a) throughput consistency, b) reduced injury costs, c) tooling cost reduction, and d) flexibility in working with new products in the future."
Annen, Sudhir & Shelat: "An unbiased cost/benefit analysis over the lifetime of an operation, plus the savings in cycle time since the part does not have to be stopped for inspection."
11. When is 3D needed?
Roney: "3D is an application issue. If a part is lying on a flat surface, often 3D is not needed; 2D (X, Y and Roll) will suffice. If, however, the part can be shifted in any or all of the 6 degrees of freedom (X, Y, Z, Yaw, Pitch and Roll), then a 3D solution of how the object is presented to the robot may be required."
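The distinction between 2D (X, Y, Roll) and full 6-degree-of-freedom presentation can be made concrete with homogeneous transforms. In this sketch (the Z-Y-X Euler convention and all numeric values are assumptions for illustration), a flat-lying part keeps its surface normal aligned with the camera axis, while a tipped part does not, which is what forces the move to 3D:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose6(x, y, z, yaw, pitch, roll):
    """4x4 homogeneous transform from 6 degrees of freedom (Z-Y-X convention)."""
    T = np.eye(4)
    T[:3, :3] = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
    T[:3, 3] = [x, y, z]
    return T

# A flat-lying part: only X, Y and the in-plane rotation (the "Roll" of the
# 2D case) vary, so the part's surface normal stays along the Z axis.
flat = pose6(120.0, 45.0, 0.0, 0.3, 0.0, 0.0)

# A part tipped in a rack: nonzero Z and pitch mean a 2D (X, Y, Roll)
# description can no longer represent its presentation to the robot.
tipped = pose6(120.0, 45.0, 80.0, 0.3, 0.25, 0.1)
```

Checking the third column of the rotation block (the part normal) is a quick way to tell whether an application has left the 2D regime.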
Annen, Sudhir & Shelat: "Reliable part fixtures are essential requirements for 2D inspection. In most cases these fixtures are expensive. The use of 3D inspection eliminates the need for part fixtures. Additionally, 3D vision is needed when objects are randomly placed for inspection and they have multiple poses. 3D-based vision guidance is also needed for picking up randomly placed moving parts that have multiple poses."
Pastorius: "Some applications require 3D vision, such as locating an object that has position variation in three dimensions. 3D sensors are also very useful in applications where large amounts of data are required. Robotic measuring systems are one example. While measuring systems can be constructed with a point sensor (like a traditional coordinate measuring machine with a touch probe), use of a 3D sensor dramatically improves cycle time, since multiple data points are obtained in a single camera frame, rather than requiring multiple robot movements."
Shafi: "When parts can land in multiple resting positions, or when the part position varies beyond a planar position with a roll angle, i.e., the variation extends to x, y, z, yaw, pitch and roll. This is true of parts in racks, parts presented without fixturing, and parts hanging from slowly moving conveyors, which may also sway slightly."
12. What do you anticipate will be the impact of emerging camera standards such as Firewire (IEEE1394), CameraLink, USB 2?
Shafi: "Too early to tell. Standards will develop when full solutions are successful, or when big companies push a standard."
Pastorius: "From the user point of view, particularly where the system is acquired as a turnkey package, emerging camera standards should be transparent. These standards do provide opportunities to simplify the effort for the integrator in establishing communications between the robot and the vision sensor. As emerging standards mature and become broadly adopted, the complexity of integration will be reduced."
Keating: "FireWire may have some impact, as it's currently used in the general robotics community for some motion control applications. CameraLink might be useful because of its capability to handle very large format images. The impact of USB 2 remains to be seen. However, we don't see the impact of any of these standards being any greater in vision guided robotics than in the machine vision industry as a whole."
Annen, Sudhir & Shelat: "Analog standards that are used to connect cameras to robot controllers are being replaced by digital standards. Real-time transfer of signals and images is a key requirement in vision-guided robotics. Digital standards that a