Robots Use 3D Vision to Improve System Costs and Product Quality

by Steve Prehn, Sr. Product Manager for Material Handling and Vision
FANUC America Corporation

3D vision could become standard equipment on robots in a decade or less, according to experts in the machine vision field.

[Image: Two FANUC material handling robots equipped with an integrated 3D-laser vision sensor pick tubes that are randomly oriented in a bin and assemble them into an exhaust muffler. The inset shows the camera image as seen by the 3D-vision system. Courtesy FANUC Robotics America Inc.]

“Vision is a very critical technology,” says Jeremy Pennington, a controls engineer at Guide Engineering in Ft. Wayne, Indiana, an integrator with customers in the auto industry. “It allows integrators to set up cells that do not require extensive fixturing to locate parts.” Part fixturing adds cost and reduces the flexibility of a robot cell tasked with handling multiple parts. Tooling is expensive and inflexible, and vision systems minimize the time required to change a line over to run different part styles. “With vision, you simply run a different robot program, and don’t need to switch out fixtures.” Millions of dollars can be saved on tooling and racks in auto plants alone.

The Added Benefit of Using 3D Vision

A vision system takes an image and uses algorithms to find the features an operator has trained it to recognize. An image is essentially data: a series of pixels, each with a gray-scale value. Image-processing algorithms recognize patterns or structures in that data. With vision cameras, operators can locate features, mathematically determine where the part is in space, and guide the robot to it.
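
As a rough illustration of that idea, the sketch below uses the open-source OpenCV library (not any particular robot vendor's toolchain) to score a trained gray-scale pattern against a camera frame. The file names and the 0.8 threshold are hypothetical placeholders.

```python
# Minimal sketch of gray-scale pattern matching with open-source OpenCV.
# File names and the trained template are hypothetical placeholders.
import cv2

scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)     # image = pixel data
template = cv2.imread("trained_part.png", cv2.IMREAD_GRAYSCALE)  # operator-trained pattern

# Slide the template over the scene and score the match at every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

if best_score > 0.8:  # confidence threshold, tuned per application
    print(f"Part found at pixel {best_xy} with score {best_score:.2f}")
```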

Robot operators know where an image was taken, so they can identify where an object is sitting and make judgments about its size, whether it’s part A or part B, and whether it has defects. Whatever task the robot is programmed to perform, it can adapt based on the images and algorithms. For example, if one part is larger than another, the robot might take a different path. Or, part A might have to be dropped off at a different place than part B. “Ultimately a robot with machine vision will be able to manipulate any part in any orientation,” says David Dechow, president of Aptura Machine Vision in Lansing, Michigan.

“When extracting part positions with a single 2D camera, certain assumptions must be made,” explains Edward Roney, national account manager of Intelligent Robotics and Vision systems at FANUC Robotics. “2D systems find a part in X, Y and rotation if you assume the Z (distance from the camera) does not change. If the part moves closer to the camera, changes size, or is tipped differently, traditional 2D systems may miscalculate where the part is in space. 3D systems are more robust, and allow the robot to know exactly where the part is, so the robot can pick up the part cleanly.”
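
Roney's point can be made concrete with simple pinhole-camera arithmetic. The sketch below (all values invented for illustration) back-projects a pixel measurement to a world position under an assumed Z, then shows how far off the answer is when the part actually sits closer to the camera.

```python
# Illustrative pinhole-camera arithmetic: a 2D system assumes a fixed
# part-to-camera distance Z; if Z changes, the computed X shifts.
# All numbers here are invented for illustration.
f = 1200.0        # focal length in pixels
u = 300.0         # measured pixel offset of a part feature from image center

assumed_z = 1.0   # meters: the distance the 2D system was calibrated for
actual_z = 0.8    # meters: the part actually sits closer to the camera

x_assumed = u * assumed_z / f  # where the 2D system thinks the feature is
x_actual = u * actual_z / f    # where it really is

print(f"2D estimate: X = {x_assumed:.3f} m")
print(f"Actual:      X = {x_actual:.3f} m  "
      f"(error {abs(x_assumed - x_actual) * 1000:.0f} mm)")
```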

“I believe that the technology is moving at a fast enough pace now,” says John Burg, president of Ellison Technologies Automation (ETA), a Council Bluffs, Iowa-based integrator of robotic systems. “Prices will go down, ease of use will go up, and customers will be demanding more and more of the flexibility that vision brings.”

Burg predicts that 3D vision and robotics will work hand-in-hand in material handling, welding, and loading machine tools, all the while increasing quality and minimizing manual labor. “The reason you do something like install new technology is that in the long run it costs less,” Burg says.

Pennington says, “What happens – either with an onboard camera system or a remotely mounted camera – is you can get a snapshot of an object and find out where that object is in space, relative to the robot’s position, and the robot can use that positional data. It makes it able to go out and find that object no matter where you move it.”

ETA’s Burg says his company installed a robotic system with 3D vision for the processing of 4 x 2 x 1/4-inch plates. The plates start as bar stock, are cut to size, dropped into a bucket and transferred into a room where robots apply a hard surface to the plates via a welding process to make them last longer.

[Image: A FANUC 3D-laser imaging system finds a wheel that is randomly located within a cardboard bin. The robot is then guided to the found position so the suction cups are square to the flat surface before the wheel is picked out of the box. Courtesy FANUC Robotics America Inc.]

“The welding process tends to contaminate the gripper. If the gripper encounters a different-sized part, the new-sized part can be contaminated. Before vision, adjustments needed to be done manually. Now, robot positions can be verified on each part,” Burg explains. “The 3D vision allows them to show the part to the 3D camera, and the robot can adjust all its points and now they run without manual intervention.”

The company had four robots doing this process for ten years, but at any given time, one robot was always in need of manual adjustment. Now the company does the same work with three robots and a lot less labor. “There’s a job that never could have been done with 2D because you needed the plane of the part, whether it’s tilted up or down two or three degrees or swung to one side a half a degree,” Burg says.

Integrated 3D Systems
FANUC Robotics has developed the industry’s first 3D vision system integrated directly into its robot controllers. Picking a part found with vision requires translating that found position into known robot coordinates. If a part can move with six degrees of freedom (X, Y, Z, roll, pitch and yaw), this translation into robot positions can quickly become a daunting calculation. 3D-vision processes are executed on the main robot CPU, eliminating additional hardware and communication delays while leveraging the robot’s knowledge of its relative relationships in space (kinematics).
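
That translation step amounts to composing rigid-body transforms. Below is a minimal numpy sketch, assuming a calibrated camera-to-robot-base transform; every pose value here is a placeholder, and this is not FANUC's implementation.

```python
# Sketch: translate a vision-found 6-DOF part pose into robot base
# coordinates via 4x4 homogeneous transforms. Values are placeholders;
# a real system obtains base_T_cam from camera-to-robot calibration.
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform from X, Y, Z, roll, pitch, yaw (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # Z-Y-X (yaw-pitch-roll) convention
    T[:3, 3] = [x, y, z]
    return T

# Camera pose in the robot base frame (from calibration), placeholder values.
base_T_cam = pose_to_matrix(0.5, 0.0, 1.2, 0.0, np.pi, 0.0)

# Part pose as found by the 3D vision system, in the camera frame.
cam_T_part = pose_to_matrix(0.1, -0.05, 0.9, 0.02, 0.0, 0.3)

# Compose: part pose in robot base coordinates, ready for a pick command.
base_T_part = base_T_cam @ cam_T_part
print(np.round(base_T_part, 3))
```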

“It’s a big difference having an integrated vision system,” Pennington says. “If you want to integrate a third-party camera and collect offset data, there’s a lot of manipulation of that data you have to do to get it back to the robot. Additional hardware and software are needed to accomplish that. And it takes an experienced person days to get that integrated.” Setting up FANUC’s integrated 3D system takes just hours, according to Pennington. Burg says the integrated 3D-vision system is a “huge step” in the evolution of the technology.

Best Applications for Vision
A pro-3D-vision case can be made for every robotic application. “Vision’s universal,” FANUC Robotics’ Roney says. “I wouldn’t say it fits in one place and not another. It can do so many things. It’s a data collector. You just have to decide if you need 2D data or 3D data. It’s an enabler. Almost any application can benefit from it.”

Burg adds, “We are seeing more and more applications for 3D. It gives us the ability to not only know where the part is in X and Y and rotation, but it gives us the ability to also understand the angle of a surface. Now a robot can come in directly perpendicular to that surface and grip the part more effectively and repeatedly.”

Yet some applications seem to present better opportunities than others – like welding, bin picking, packaging, and machine tool loading – and are leading the way in 3D-vision adoption.

Welding – In welding, robots that use vision can adapt to subtle changes in the presentation of the two components to be welded. Short of having a human do the welding, the only way to adapt automatically is to use vision to tell the robot that a part has a slightly different shape and requires a slightly different welding path. Even in a spot-welding application, vision can be useful for error proofing. In that sense vision could be useful on every welding robot.
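
As a hedged sketch of what that path adaptation looks like numerically: the snippet below applies a planar offset and rotation, such as a vision system might report for the actual part, to a taught weld path. This is pure illustration; real controllers apply such corrections through their own frame and offset mechanisms.

```python
# Sketch: adapt a taught weld path by the planar offset (dx, dy, dtheta)
# that vision reports for the actual part. Illustrative values only.
import math

taught_path = [(100.0, 50.0), (150.0, 50.0), (150.0, 80.0)]  # mm, nominal part

dx, dy, dtheta = 2.5, -1.0, math.radians(0.8)  # vision-measured part shift

def apply_offset(x, y):
    """Rotate a taught point by dtheta, then translate it by (dx, dy)."""
    xr = x * math.cos(dtheta) - y * math.sin(dtheta) + dx
    yr = x * math.sin(dtheta) + y * math.cos(dtheta) + dy
    return (round(xr, 2), round(yr, 2))

adapted_path = [apply_offset(x, y) for (x, y) in taught_path]
print(adapted_path)
```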

Bin Picking – In the robotic industry, the premier application has been the ability to bin pick randomly stacked parts. There are three key elements involved: vision, bin avoidance and collision detection. Vision is the most obvious – where is that part? But what about the constraints of the bin wall? As the robot gets farther into the bin, the parts become harder to pick out. A bin-picking package includes modeling that understands the picking tool, the sensor and the constraints of the arm itself, so that once the part is found, the robot calculates whether it can actually get that part out of the bin (see the sketch below). Even though the models are rather sophisticated, they are still generalized. Eventually the robot is going to hit the wall, and it has to sense whether it was a soft hit or a hard hit, since hard contact can damage the robot. “We’ve been very successful with both structured and random bin picking,” says FANUC Robotics’ Roney. “Structured is where everything is facing up, whereas with random, the parts come in a pile. The latter offers more challenges, but with the technologies listed previously, they become very doable.”
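
The toy check below captures only the flavor of that feasibility calculation, assuming a rectangular bin and a cylindrical gripper envelope; the dimensions are invented, and a commercial package models far more (tool shape, sensor shadow, full arm kinematics).

```python
# Toy bin-picking feasibility check: given a found part position, verify
# the gripper envelope clears the bin walls before commanding the pick.
# Dimensions are illustrative, not from any commercial package.

BIN_X, BIN_Y, BIN_DEPTH = 0.8, 0.6, 0.5  # interior bin size, meters
GRIPPER_RADIUS = 0.06                    # envelope of the picking tool, meters

def pick_is_feasible(part_x, part_y, part_z):
    """Part position is in bin coordinates, origin at one interior corner."""
    clears_walls = (GRIPPER_RADIUS <= part_x <= BIN_X - GRIPPER_RADIUS and
                    GRIPPER_RADIUS <= part_y <= BIN_Y - GRIPPER_RADIUS)
    within_reach = 0.0 <= part_z <= BIN_DEPTH
    return clears_walls and within_reach

print(pick_is_feasible(0.40, 0.30, 0.45))  # mid-bin part: True
print(pick_is_feasible(0.03, 0.30, 0.45))  # part against a wall: False
```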

Packaging – Packaging is an area where vision is critical. Food products often come down a conveyor or slide down a ramp into a pickup area. There’s no repeatable positioning. The products end up in different positions and need to be picked up, oriented and placed in the package. Vision allows the robots to find the product and make that happen. 

3D vision also offers performance advantages when stacking parts on a wooden pallet, or removing them.  Parts may not only shift side to side, but may also be at different heights, or at different angles.  Pallets are easily damaged by fork trucks, and this damage translates into inconsistencies in the presentation of the parts to the vision system.  Robots that handle parts or boxes that are stacked on pallets can’t blindly assume they are always in the exact same place. 
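
To make the "different angles" point concrete: from a surface normal measured by a 3D sensor, the robot can compute the tilt of a part's top face and approach along that normal so the gripper lands square. A small numpy sketch with invented values:

```python
# Sketch: orient the approach along the measured surface normal so suction
# cups land square on a tilted box top. Values are illustrative.
import numpy as np

normal = np.array([0.05, -0.03, 0.998])  # surface normal from 3D vision
normal = normal / np.linalg.norm(normal)

tilt_deg = np.degrees(np.arccos(normal @ np.array([0.0, 0.0, 1.0])))
print(f"Box top tilted {tilt_deg:.1f} degrees from horizontal")

# Stand off along the normal, then approach inward along -normal.
standoff = 0.10                            # meters above the surface
surface_point = np.array([1.2, 0.4, 0.9])  # found pick point, base frame
approach_point = surface_point + standoff * normal
print("Approach from:", np.round(approach_point, 3))
```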

Loading Machine Tools – “In many of our applications, we’re picking up a part and loading a part directly into a machine tool,” says ETA’s Burg, “and in many cases the fixturing in that machine tool does not have the forgiveness for an out-of-location robot placement. We have to put it in there pretty accurately so it clamps properly and the process happens correctly. If we don’t do that, you get a bad part. Worse yet, you could cause a crash in the machine tool and that can be very costly.”

3D Vision as Quality Control
While 3D vision makes robots more efficient and effective and therefore enhances throughput, integrators and their customers also benefit from vision’s quality control capabilities.  “A 3D system mounted on a robot provides us with a way to measure and verify several critical dimensions on cast parts,” says Bill Yeck, General Manager at Epoch Robotics, Holland, Michigan.  “The flexibility of this approach allows us to move the sensor so it is in the right position on the part, and switch to other part types easily.”

“Applications are all over the board,” says Guide Engineering’s Pennington. “The use of vision systems is getting more and more critical, because people who buy the final product want everything to be perfect. When you have a camera system, you can have 100-percent inspection and be confident that the part you’re sending to your customer is good. A human just doesn’t have the capability to check 100 percent over long periods of time.”

Aptura Machine Vision’s Dechow agrees. “The principal value of machine vision is to provide a go/no-go decision on product quality. This avoids adding more cost and operations to components that are already bad. It also provides protection against things like product returns.”

FANUC Robotics America, Inc. is an RIA Supplier Member. For additional information, please contact FANUC Robotics America, Inc., Rochester Hills, Michigan, at (800) iQ-ROBOT (477-6268).
