Robotics Industry Insights

Intelligent Robots: A Feast for the Senses

by Tanya M. Anandan, Contributing Editor
Robotic Industries Association

Dumb robots – it’s a phrase we’ve heard too often. We’re told robots can only perform tasks that they’re explicitly programmed to do. Every movement carefully plotted. Every target uniform and predictable. No variations, no deviations. Everything has to be very structured. Well, no more!

Advanced sensing technology is raising the robots’ IQ to new levels. Sophisticated software and clever end-of-arm tooling provide a head start. As we predicted in January, sensors, software and EOAT are the tools of the future.

Faculties many humans take for granted – sight, sound, touch, taste and smell – are no longer the exclusive territory of organic creatures. Artificial intelligence is gaining ground.

Advances in vision systems, force and tactile sensors, speech recognition, and even olfactory receptors are creating high-achieving robots able to do things and go places that their predecessors could only dream about.

Sensors are getting better, smaller, cheaper and easier to integrate. The computing power to crunch all the sensory data they churn out is getting faster and more robust, with processors shrinking in size while growing in capacity. In turn, our artful manipulators are graduating steps ahead of their sensor-envying peers. Where are your robots on the spectrum?

One of the fastest growing areas of sensor development in automation and robotics is perception. Machine vision technology, laser scanners, structured-light 3D scanners, and the imaging and mapping software to support them are making their way into more applications, which is opening doors to robots in more industries. The machine vision market is coming off two consecutive years of double-digit growth and is expected to continue that upward trend.

Henrik Christensen, the Executive Director of the Institute for Robotics and Intelligent Machines at Georgia Institute of Technology in Atlanta, Georgia, sees tremendous potential for robots coupled with sensor technology.

Researchers use uncalibrated visual servoing to intuitively locate and remotely manipulate objects with a robot arm (Courtesy of Gary Meek, Georgia Institute of Technology)

“About 90 percent of all robots we use today don’t use sensors,” says Christensen. “We basically put them inside an area. We have enough control of that area that we don’t have to have sensors on the robots and that’s allowed us to penetrate 10 percent of the manufacturing industry. The remaining percentage is where you need to understand more about the environment. We need to be able to know where the parts are in these environments, or we need to build very expensive fixtures.”

“If we look at sensor-based robotics, we have the technology in many cases to be able to engineer solutions for picking particular objects and to do it very well. I think it’s interesting that we can do this for things like the Amazon Picking Challenge,” he says, referring to the e-commerce giant’s recent competition to spur technological advances in automated picking for unstructured environments.

Christensen notes that perception was the dominant factor separating the winners from the rest of the field. The Robot Report agrees in this event summary.

“There’s no doubt we will see tremendous progress on using these kinds of techniques to open new areas where we can use robots,” he says, crediting lower costs and higher computing power as the primary drivers in the upsurge in sensor adoption.

“We’re getting much cheaper sensors than we had before. It’s coming out of cheap cameras for cell phones, where today you can buy a camera for a cell phone for $8 to $10. And we have enough computer power in our cell phones to be able to process it. The same thing is happening with laser ranging sensors. Ten years ago, a modest quality laser range sensor would be $10,000 or more. Now they’re $2,000.”

“The other thing that happened is we got the Kinect sensors (Microsoft), then PrimeSense (acquired by Apple), and more recently the Structure Sensor (Occipital). They all allow us to do 3D modeling of objects in our environment.” And he points out, they’re now more affordable.

Uncalibrated Visual Servoing
At Georgia Tech, researchers are working on uncalibrated visual servoing. Hoping to bypass the need for tedious camera calibration, they use visual servoing to control the motion of a robot manipulator with visual feedback signals from a vision system.

In this video the lead researcher explains how uncalibrated visual servoing works and its advantages. Without requiring calibration, a robot can be remotely and intuitively controlled to do precise tasks, even thread a needle.
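
To make the idea concrete, here is a simplified sketch of uncalibrated image-based visual servoing in Python. It is illustrative only, not the Georgia Tech code: the controller treats the camera as a black box, probes the image Jacobian with small exploratory joint moves, and keeps refining that estimate with a Broyden update while driving the image-space error to zero.

```python
# A minimal sketch of uncalibrated image-based visual servoing (illustrative only).
import numpy as np

def camera(q):
    # Stand-in for the real vision system: joint angles in, image feature (pixels) out.
    # The controller treats this mapping as a black box and never calibrates it.
    return np.array([120.0 * np.cos(q[0]) + 80.0 * q[1],
                     100.0 * np.sin(q[0]) - 60.0 * q[1]])

def probe_jacobian(q, eps=1e-3):
    # Estimate the image Jacobian by observing how tiny joint moves shift the feature.
    f0, J = camera(q), np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (camera(q + dq) - f0) / eps
    return J

def visual_servo(q, target, gain=0.3, tol=0.5, max_steps=200):
    J, f = probe_jacobian(q), camera(q)
    for _ in range(max_steps):
        error = target - f
        if np.linalg.norm(error) < tol:           # within half a pixel of the goal
            break
        dq = gain * np.linalg.solve(J, error)     # proportional control in image space
        q, f_prev = q + dq, f
        f = camera(q)
        # Broyden rank-one update keeps the Jacobian estimate current as the arm moves.
        J += np.outer((f - f_prev) - J @ dq, dq) / (dq @ dq)
    return q, f

q, f = visual_servo(np.array([0.3, -0.2]), target=np.array([50.0, 20.0]))
print("feature reached:", np.round(f, 2))
```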

“One of the things we’ve been working on extensively is the ability to take a vision system and a CAD model, and with those two things, basically build a vision system that will do reliable tracking,” explains Christensen. “We’re trying to make it as minimally calibrated as possible.”

“Right now, we have a project where we’re using the vision system, in this case calibrated, to get to an accuracy that’s better than 0.1 millimeter, even for very large robots, either for automotive or for aerospace. That’s at a level of maturity where we’re starting to transition this out of the laboratory and into commercial applications.”

He says they recently deployed similar technology in an automotive factory in Paris.

“Because we’re using a CAD model of the object (in this case a car door), we can actually get very reliable tracking. Even if a person walks in front of the door while it’s being manipulated on the assembly line, it does the tracking perfectly. No problem at all, even though there’s a person right in the middle covering part of the door. Because it has a CAD model it knows what the entire part is supposed to look like, and it can track to 0.1 millimeter accuracy.”

“For industry that opens up some very interesting opportunities,” he continues. “We’ve used this for all kinds of tracking. We’ve also used this to detect pieces in a bin and to do kitting applications. Getting those sorts of trays or kits built automatically rather than having it done using manual labor is something that has a lot of interest in electronics, automotive and aerospace.”

“The hard part in robotics for many years was bin picking,” says Christensen. “That’s now becoming a problem that’s very much approachable.”

He cites examples such as FANUC America Corporation’s 3D area sensor for random bin picking shown in this video and Universal Robotics’ 3D sensing and machine learning demonstrated in this video.

Both Mitsubishi Electric Automation (video) and Nachi Robotic Systems (video) recently introduced their own 3D random bin picking solutions.

Robot picks and stacks randomly piled parts in six degrees of freedom without calibration, CAD files, lasers or point clouds (Courtesy of Recognition Robotics Inc.)

Solving Random Picking
At Automate 2015 in March, another random picking solution was creating a lot of buzz. Dubbed Cortex Random Picking and developed by Recognition Robotics Inc. in Elyria, Ohio, the system retrieves randomly piled parts and stacks them one-by-one in a uniform orientation. It does this with an off-the-shelf camera by Baumer and without requiring calibration.

Shot on the show floor, this video shows the Cortex Random Picking system at work.

The reaction by booth visitors was one of partial disbelief and amazement, according to Recognition Robotics’ Vice President Joe Cyrek. “Nobody could believe that we can find objects in three-dimensional space in six degrees of freedom just based on an image or a picture of the object. Everybody wanted to know where the point cloud was, where the laser scanner was. There is none. There’s a camera taking a picture and we’re guiding the robot to find it.”

“And when we find the part, we’re finding it with submillimeter accuracy,” he adds. “Most part pickers, especially in the point cloud or 3D CAD world, find the blob and go pick it up. Then they usually drop it and use a 2D camera on a known plane to localize its final position. We’re picking up these parts out of the pile with arguably perfect accuracy every time.”

Mimicking Human Cognitive Recognition
The “magic” is in the Cortex Recognition software, a collection of patented visual recognition algorithms developed by the company’s founder and based on the human cognitive ability to recognize objects. Cyrek offers an analogy akin to the way a baby learns.

“When we’re born we have nothing in our memory. As we go through life, we unconsciously store all these images in our brains. It begins with our parents’ faces, then our favorite blanket, and so on. Then when we see something, we recognize it.”

Recognition Robotics’ software “learns” like a baby.

“The software resides on a recognition controller, basically an industrial computer,” explains Cyrek. “Attached to that computer is a single 2D color camera. The camera takes a picture and sends that image back in to the software, where the patented algorithms run. It determines what the object is and where it is in space, in six degrees of freedom (x, y, z offset, as well as the rotation about those axes, Rx, Ry and Rz).”

“All you’re doing is taking a picture,” says Cyrek. “There’s no calibration, or calibration plate. The only information the user has to enter is the focal length of the camera lens and the distance to the part. There’s no programming with our system. Just store all those recognized images in the baby’s brain.”
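
Cortex’s algorithms are proprietary, but the general idea of six-degree-of-freedom guidance can be illustrated with a short sketch. The code below (all names and numbers hypothetical) shows how a recognition result expressed as x, y, z, Rx, Ry, Rz can be turned into a rigid transform and used to shift a taught pick pose to wherever the part was actually found.

```python
# Hypothetical illustration of applying a 6-DOF recognition result to a taught pick pose.
import numpy as np

def pose_to_matrix(x, y, z, rx, ry, rz):
    """Build a 4x4 transform from a position (mm) and fixed-axis rotations (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Pose of the part as it was taught and as it is found in the current picture.
taught_part = pose_to_matrix(500.0, 120.0, 300.0, 0.0, 0.0, 0.0)
found_part  = pose_to_matrix(487.2, 131.9, 296.4, 0.02, -0.01, 0.15)
taught_pick = pose_to_matrix(500.0, 120.0, 310.0, 0.0, 0.0, 0.0)   # gripper pose used when taught

# The same rigid offset that moved the part moves the pick point with it.
part_shift = found_part @ np.linalg.inv(taught_part)
new_pick = part_shift @ taught_pick
print("corrected pick position (mm):", np.round(new_pick[:3, 3], 1))
```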

Sound too good to be true? Wait, there’s more.

“The human brain also doesn’t need to see the entire item to recognize it,” explains Cyrek. “Our guidance system works exactly the same way. We only need to see roughly 70 to 75 percent of an image to recognize it and still guide a robot to pick it up.”

As demonstrated in this video, this partial-visibility recognition makes the system very good at seeing deformable objects, such as chip bags. In this demo, the “baby” was taught the front and reverse sides of each bag for two different brands of corn chips. It sorts the bags into the correct boxes regardless of how they are oriented before the camera.

VGR in Your Palm
We first introduced you to Recognition Robotics and its googly-eyed robot two years ago in the article Robotics+Vision at a Glance: The Dos, Don’ts and Applications. Back then, we were talking about Robeye™. The new Cortex Random Picking is an advanced version of Robeye with software and algorithm enhancements to allow for the recognition of randomly placed parts.

“Previous to Automate 2015 we had Robeye, which was a three-dimensional guidance system for any robot,” explains Cyrek. “It consisted of a camera, some lights, and the associated cables that all get connected to a large control panel. The control panel then gets connected to the robot controller and that’s how they communicate. So it was a sizable amount of stuff.”

In addition to debuting its random picking system at the Automate show, Recognition Robotics launched RAIO, or Robeye All In One.

“For a long time, we’ve been trying to figure out how to shrink everything, how to get the power of our algorithms into basically just the camera,” explains Cyrek. “That’s RAIO. It’s a smart sensor (CMOS image sensor). Our software is embedded inside of it. Instead of having floor space and electricity consumed by a control panel, we took a shrink ray to everything.”

He says the effect was shock and awe on the show floor. “It’s everything you need to guide a robot in the palm of your hand and it’s deployable within hours,” says Cyrek. “There’s still no need for calibration and it has its own on-board lighting control.”

A 2D vision guided robot performs bolt shooting and tightening on automotive underbody (Courtesy of Recognition Robotics Inc.)

“Because we don’t have all the horsepower we need, today it is only used for 2D, 2-1/2D and what we’re calling 4 degrees of freedom guidance, so that’s x, y and z, plus Rz (not full 6 degrees of freedom like Robeye). In the future, once we refine our algorithm and figure out how to take advantage of the quad core processing power inside of RAIO, we feel we’ll be able to offer six degrees of freedom guidance inside that small package.”

“We still sell Robeye for when you need full six degrees of freedom. But now with RAIO we can compete much more efficiently in the 2D and 2-1/2D market.”

Applications include conveyor picking, palletizing, and depalletizing. In the pictured application, the RAIO system is guiding a screw feeder onto a nut on an automotive underbody. RAIO recognizes the nut and guides the robot-carried bolt installer to the proper location to install and tighten the bolt. Guidance is needed because the vehicle’s build tolerances allow the position of the nut to float by more than the bolt installer’s compliance can absorb.

Cyrek says that since the Automate show, their offices have been flooded with parts for prototype feasibility studies. Applications range from picking parts out of totes to assemble valves and airbag components, to a snack food company that wants to use a mobile robot to randomly pick bags of product from a bin and load them into distribution boxes.

Sensor-Enabled Mobility
Sensor fusion, which combines sensory data from different sensors, is enhancing robot mobility.
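
As a simple illustration of the principle, the sketch below blends a fast but drifting heading estimate (think gyro or wheel odometry) with an occasional absolute fix (think scan matching against a map) using a complementary filter. The sensors and noise figures are invented for the example.

```python
# A toy complementary filter as an illustration of sensor fusion (invented numbers).
import random

random.seed(0)

def fuse_heading(duration_s=10.0, dt=0.02, alpha=0.9):
    true_heading, fused = 0.0, 0.0
    gyro_bias = 0.05                       # rad/s drift the gyro alone cannot see
    for step in range(int(duration_s / dt)):
        rate = 0.3                         # robot turning at a constant 0.3 rad/s
        true_heading += rate * dt
        gyro_rate = rate + gyro_bias + random.gauss(0, 0.01)
        # The absolute fix is noisy but unbiased, and only arrives at 5 Hz.
        absolute = true_heading + random.gauss(0, 0.05) if step % 10 == 0 else None
        fused += gyro_rate * dt            # integrate the fast sensor
        if absolute is not None:
            fused = alpha * fused + (1 - alpha) * absolute   # pull back toward the fix
    return true_heading, fused

truth, estimate = fuse_heading()
print(f"true heading {truth:.2f} rad, fused estimate {estimate:.2f} rad")
```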

“We’re starting to use vision and laser ranging to allow us to build mobile platforms that we can use on the factory floor,” notes Georgia Tech’s Christensen. “The mapping technology is now getting good enough that we no longer need to necessarily bolt the robot to the floor. We can actually have them move around. There was a lot of that at Automate, and we’re seeing this for logistics applications.”

Automate showgoers were greeted by very polite white “boxes” cruising around Adept Technology’s booth. The Lynx® autonomous indoor vehicles said “pardon me” as they performed evasive maneuvers to navigate among booth visitors. They’re shown here in their natural habitat.

Meanwhile, KUKA Robotics Corporation’s LBR iiwa is turning heads as it appears to float across the factory floor on its new mobile platform.

Simultaneous Localization and Mapping, more commonly known as SLAM, is typically used in conjunction with two or more sensors. SLAM addresses the challenge of a mobile robot building a map of an unknown environment while simultaneously navigating that environment using the map. Researchers are busy developing and tweaking algorithms for SLAM applications.
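
The structure of SLAM can be shown with a deliberately tiny example: a one-dimensional EKF that estimates the robot’s position and a single landmark’s position at the same time from noisy odometry and noisy range readings. Real SLAM systems work in two or three dimensions with many features, but the estimate-the-pose-and-the-map-together loop is the same.

```python
# A deliberately tiny 1-D EKF-SLAM example (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_robot, true_landmark = 0.0, 10.0
x = np.array([0.0, 5.0])          # state: [robot position, landmark position]; landmark guess is poor
P = np.diag([0.01, 25.0])         # we trust the start pose, not the landmark guess
Q, R = 0.05, 0.1                  # motion and range-measurement noise (variances)

for _ in range(15):
    # --- motion step: the robot drives 0.5 m; odometry is noisy ---
    u = 0.5
    true_robot += u
    odom = u + rng.normal(0, np.sqrt(Q))
    x[0] += odom
    P[0, 0] += Q
    # --- measurement step: noisy range to the landmark ---
    z = (true_landmark - true_robot) + rng.normal(0, np.sqrt(R))
    H = np.array([-1.0, 1.0])                  # d(range)/d(state)
    y = z - (x[1] - x[0])                      # innovation
    S = H @ P @ H + R
    K = P @ H / S                              # Kalman gain
    x = x + K * y
    P = P - np.outer(K, H @ P)

print("estimated robot position:", round(x[0], 2), "true:", round(true_robot, 2))
print("estimated landmark position:", round(x[1], 2), "true:", true_landmark)
```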

This video demonstrates autonomous aerial navigation with a quadrotor using SLAM technology.

SLAM has emerged from the research labs and found practical use, most notably in self-driving cars and more recently, a line of robot vacuums by Neato Robotics.

The LIDAR Effect
LIDAR, short for Light Detection and Ranging and sometimes called laser radar, is making new inroads for mobile robotics.

Mobile robots equipped with LIDAR sensors are moving into warehouse logistics (Courtesy of Fetch Robotics, Inc.)

“I think LIDAR is exploding right now,” says Aaron Rothmeyer, National Product Manager for Ranging Products at SICK Inc. in Minneapolis, Minnesota. “The accuracy is improving. The cost is coming down. I think a lot of it is driven by the market as well. For the better part of the early 2000s, a lot of venture capital money was going into software development. Now a lot of the VC money is actually going into hardware. That’s producing these companies that get a flush of funding, and now all of the sudden, they need sensors. That’s when they come to us. That’s when they find that LIDAR has a lot of benefits.”

One of those companies is Fetch Robotics, which just secured $20 million in new funding to ramp up the launch and development of its Fetch and Freight system for warehouse logistics automation. Rothmeyer says SICK has been involved with the mobile system since the beginning.

“So if you look at the front of one of the Fetch robots, you will see a little black inverted cone near the bottom (above the gray-colored base). That’s one of our TiM laser scanners. As the robot drives around, it can map its way around a facility.”

This video shows the Fetch and Freight system in action with the SICK TiM laser scanner on board guiding each mobile robot independently.

According to this IEEE Spectrum article, the Fetch mobile manipulator is also using a PrimeSense 3D sensor in its head to locate product on shelves and place it in the bin of its faithful sidekick, Freight.

Rothmeyer says the TiM is SICK’s most recent laser scanner. “We’re taking a lot of the features of our larger scanners and packing them into a smaller housing and making them more efficient. As we make it smaller and smaller, we find that more people are more willing to adopt it.”

You may have noticed TiM’s heavyweight predecessors from the early days of the DARPA Urban Challenge when 90 percent of the self-driving cars, including the winning teams, had SICK laser scanners on board. Let’s just say that the spectators were brave and sensors have come a long way since the 2007 race.

“There are a lot of technologies like stereovision and structured-light cameras that do a good job in ideal conditions, in lab conditions, and I’m really looking forward to seeing what those types of technologies can do in the future,” says SICK’s Rothmeyer. “But as they are right now, as soon as you go away from ideal, into low light or even no light, or precipitation or fog, all of a sudden those sensors really start to drop off. So when a customer designs a robot and they need it to work in all conditions, they find that LIDAR or laser scanning doesn’t have the same issues with environment as some of these other technologies do.”

“In the case of machine vision or a camera, you’re depending on external light to be fed into the sensor,” explains Rothmeyer. “With LIDAR, we actually send out our own light and then it interacts with the object and returns to the sensor. So you can start to ignore a lot of the environmental effects like bright sunlight or no sunlight, or color. It’s very independent of external effects.”

Illustration of LIDAR sensor demonstrating the time of flight principle (Courtesy of SICK, Inc.)

Time of Flight
LIDAR works on the principle of time of flight. The sensor bounces a laser beam off a rotating mirror. As the mirror spins, the laser sweeps a viewing angle between 70 and 360 degrees, effectively creating a “fan” of laser light around the sensor. Any object that breaks this fan reflects laser light back to the sensor. The distance is calculated from how long the light takes to bounce back to the sensor.

“The sensor sends out a pulse of light, we wait for it to return,” says Rothmeyer. “We multiply the elapsed time by the speed of light and we get our distance.”
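
The arithmetic is simple enough to write down. Because the pulse makes a round trip out to the object and back, the one-way distance uses half the elapsed time:

```python
# The time-of-flight distance calculation, with the round trip made explicit.
SPEED_OF_LIGHT = 299_792_458.0          # m/s

def tof_distance(elapsed_s):
    """One-way distance from a round-trip time-of-flight measurement."""
    return SPEED_OF_LIGHT * elapsed_s / 2.0

# A return after 53.4 nanoseconds means the object is about 8 meters away.
print(f"{tof_distance(53.4e-9):.2f} m")
```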

He says three major criteria differentiate most laser scanners: range, angular resolution, and speed.

“For example, if you don’t need 8 meters of range, then there’s no point in investing in something that can do that. Angular resolution is the distance between the beams. The tighter we can make that distance, the more complete data we can send to the robot. So a lot of people are looking for tighter resolution. We’re getting down to where we can send out a beam every sixteenth of a degree and there are other products out there that can go higher.”

“Speed depends on what the customer is trying to do with the scanner,” he continues. “Some people are doing very slow applications that don’t care about speed at all. Others are trying to capture cars as they travel down a highway at 70 miles per hour or greater. Then, speed becomes very important.”

Illustration demonstrating the distance calculation for time of flight (Courtesy of SICK, Inc.)

He says the limiting factor on speed is almost always how fast the mirror spins. SICK laser scanners go up to 100 hertz, meaning the mirror makes 100 full revolutions per second. Every hundredth of a second, you get a complete picture of what the laser scanner sees.
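
Those three specs translate directly into data rates and point spacing. The quick calculation below assumes a 270-degree field of view as an example (the article cites viewing angles from 70 to 360 degrees):

```python
# Back-of-the-envelope scanner numbers; the 270-degree field of view is an assumed example.
import math

angular_resolution_deg = 1 / 16          # beam spacing quoted above
scan_rate_hz = 100                       # mirror revolutions per second
fov_deg = 270

beams_per_scan = fov_deg / angular_resolution_deg
points_per_second = beams_per_scan * scan_rate_hz
spacing_at_8m = 8.0 * math.radians(angular_resolution_deg)   # arc length between beams at 8 m

print(f"{beams_per_scan:.0f} beams per scan, {points_per_second:,.0f} points per second")
print(f"beam spacing at 8 m range: {spacing_at_8m * 1000:.1f} mm")
```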

Safety, Security, Volumetric Measurement
Rothmeyer says applications for LIDAR run the gamut. From autonomous mobile robots like the Fetch and Freight system, to other mobile platforms for collaborative robots such as Clearpath Robotics’ Ridgeback and Knightscope’s automated security guard, the laser scanners are used for path planning and anti-collision.

The role safety laser scanners are playing in the new era of human-robot interaction was covered in The Shrinking Footprint of Robot Safety. “We took LIDAR and made it redundant, so it’s approved to protect human operators,” says Rothmeyer.

“We also do volumetric measurement, so if you’re looking down at a conveyor belt, we can measure the volume of the product traveling underneath it. We do a lot of industrial security for nuclear plants and substations. A cruise ship company actually approached us about overboard protection. Basically, they want to make sure they’re aware if somebody falls overboard so they can take the appropriate action.”

“Bin-level management is one of our biggest sellers in plants. Say you have a bin of plastic pellets being filled for an extrusion molder, you can have a laser scanner above the bin to get a complete picture of how full the bin is. We’ve also seen them used on automotive painting lines, so right before the car goes into the paint booth they can make sure a door, hood, or trunk isn’t open.”
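
The idea behind bin-level measurement is straightforward: each downward-looking beam returns a range, the range converts to a material height, and the heights average into a fill level. The geometry below is assumed purely for illustration:

```python
# A simplified sketch of bin-level measurement from a scanner above the bin (assumed geometry).
def bin_fill_fraction(ranges_m, empty_range_m=3.0, bin_depth_m=2.0):
    """ranges_m: downward range readings across the bin; empty_range_m: reading over an empty bin."""
    heights = [max(0.0, empty_range_m - r) for r in ranges_m]   # material height above the bin floor
    return sum(heights) / len(heights) / bin_depth_m

# Readings across a bin that is roughly half full of pellets.
print(f"{bin_fill_fraction([2.1, 2.0, 1.9, 1.8, 2.0, 2.2]):.0%}")
```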

Smart Robots, Smart Manufacturing
With the advent of Smart Manufacturing or Industry 4.0 and the Industrial Internet of Things (IIoT), collecting and managing data for better decision-making is paramount. Sensors play a big role in that data generation.

“A lot of our customers aren’t even aware of all the data they could pull off of one of our sensors,” says Rothmeyer. “They don’t just provide ranging data.”

“They can also provide an internal temperature, so you can monitor the health of a system. They can provide a screen contamination or screen cleanliness readout. It’s an optical system, so if you have grease or dirt on the screen, that can create issues. In the past, you might have to troubleshoot to figure out what’s going on. Now we can actually send out a signal indicating that the screen needs to be cleaned.”

“We can also monitor our own power levels to make sure we’re not consuming more than we should be. We can monitor hours of operation, so we know when we’re reaching the sensor’s service life and can put out an alert.”
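
On the factory network, that health data might be consumed along these lines; the field names and thresholds below are hypothetical, not SICK’s actual output format:

```python
# A hedged illustration of scanner condition monitoring (hypothetical fields and thresholds).
from dataclasses import dataclass

@dataclass
class ScannerHealth:
    internal_temp_c: float
    window_contamination_pct: float      # optical window cleanliness
    power_draw_w: float
    operating_hours: float

def maintenance_alerts(h: ScannerHealth, service_life_hours=50_000):
    alerts = []
    if h.window_contamination_pct > 30:
        alerts.append("clean scanner window")
    if h.internal_temp_c > 60:
        alerts.append("check enclosure cooling")
    if h.operating_hours > 0.9 * service_life_hours:
        alerts.append("schedule sensor replacement")
    return alerts

print(maintenance_alerts(ScannerHealth(48.0, 42.0, 3.1, 46_500)))
```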

The Sense of Touch
The sense of touch is one of our most valuable sensory inputs. Equipped with sight, robots can pick and place objects in less-structured environments, but touch allows them to manipulate objects with greater precision and sensitivity. Force sensors give robots the ability to “feel” what they touch.

“Without a force sensor or vision, robots are totally dependent on everything being located somewhere in space in a predictable and repeatable manner,” says Milton Gore, Account Manager at ATI Industrial Automation in Apex, North Carolina. “They can’t adapt to unknown or unpredictable environments.”

Scientists around the world rely on force sensors to provide accurate data for their robot-assisted research. Robot manufacturers and system integrators covet high-performance force sensors for their material removal, part fitting and assembly applications. Force sensing, or tactile feedback, covers a wide spectrum of applications, even in space, where developments in six-axis force/torque sensing are helping robotic explorers on Mars.
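
As a generic illustration (not ATI’s actual interface), a six-axis reading is just three forces and three torques, and a common first use of it is simple contact detection:

```python
# A generic sketch of consuming six-axis force/torque data (not any vendor's API).
import math
from dataclasses import dataclass

@dataclass
class Wrench:
    fx: float        # forces, N
    fy: float
    fz: float
    tx: float        # torques, N*m
    ty: float
    tz: float

    def force_magnitude(self) -> float:
        return math.sqrt(self.fx**2 + self.fy**2 + self.fz**2)

def in_contact(w: Wrench, threshold_n: float = 2.0) -> bool:
    """Simple contact detection: stop or switch control modes once the tool feels the part."""
    return w.force_magnitude() > threshold_n

reading = Wrench(0.4, -0.1, 5.2, 0.01, 0.02, 0.00)
print("contact detected:", in_contact(reading))
```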

This video takes you inside the ATI Six-Axis Force/Torque Sensor where silicon strain gages, a proprietary bonding method, and low-noise electronics ensure reliability.

“We’re the only commercial sensor company making a six-axis sensor using silicon,” explains ATI’s Gore. “Silicon gages have a very high output. Every sensor has a point at which if you apply too much load, it will fail. Since our strain gages have an extremely strong signal, we can operate well below that failure point and still have a useable signal.”

He compares ATI’s silicon strain gages to metal foil, a common type used by other suppliers.

“A metal foil strain gage sensor normally can handle 50 percent overload. Our sensors are designed to have an overload capacity of typically between 5 and 20 times. If it’s designed to measure 100 pounds, it might survive up to 2,000 pounds overload.”

Operator uses haptic-controlled telemanipulator with force feedback to guide a robot in a foundry grinding application (Courtesy of Vulcan Engineering Co.)

Gore says it’s a common misconception that these highly precise sensors are fragile. The first time many people encounter a force sensor is in a university research environment. But Gore reminds us that force sensors are used in gritty, demanding industrial applications every day.

“Our sensors are on robots moving around hundreds of kilogram payloads and running around the clock seven days a week. They’re also very strong in order to survive the occasional crashes that robots experience.”

Force Feedback in the Foundry
In this video an operator uses a force-feedback haptic controller in conjunction with a force/torque sensor to operate a high-payload industrial robot as a telemanipulator. To date, this technology has been applied to cleaning castings in foundries, with all components designed to withstand this severe environment. Other markets are being explored where letting the operator run a robot manually in real time, while retaining the robot’s full functionality, can improve productivity and safety. The haptic interface, along with the robot’s ability to move in a plane, allows the operator in the cab to “feel” the forces being exerted on the workpiece and maintain the proper cutting pressure while removing risers from castings (as shown in the video).

The VTS™ Vulcan Tactile System was developed by Vulcan Engineering Co. in Helena, Alabama, and uses tactile and force feedback to allow manual control of industrial robots to cut and grind at virtually any angle, according to Chris Cooper, Vice President of Sales and Marketing. It’s available in floor, gantry, and carrier models.

“The harder the operator attempts to push the manipulator against an object, the more force they feel in the hand control,” says Cooper. “The level of feedback can be adjusted based on the application and operator preference.”
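
The adjustable feedback Cooper describes can be pictured as a scaling step between the tool and the hand control. The sketch below is hypothetical, not Vulcan’s implementation:

```python
# A hypothetical force-feedback scaling step (not Vulcan's implementation).
def haptic_feedback(tool_force_n, gain=0.05, max_handle_force_n=15.0):
    """Map the force at the workpiece to a safe, proportional force at the hand control."""
    handle_force = gain * tool_force_n
    return max(-max_handle_force_n, min(max_handle_force_n, handle_force))

# 400 N of grinding pressure at the casting becomes a clamped 15 N cue at the handle.
print(haptic_feedback(400.0))
```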

Automate 2015 showgoers got a feel for the “hands-on” experience when we commandeered the haptic controller in Vulcan’s booth demo. In case you missed it, check out this video.

You have to feel it to believe it. For more on haptics and other technologies helping robots tackle unstructured environments, refer to Our Autonomous Future with Service Robots.

Tactile Feedback
From huge foundry parts to the controls on your car’s dashboard, the applications for force sensors are varied. Force/torque sensors provide robots and their operators with valuable tactile and force feedback.

“Because the silicon strain gage has such a strong signal, we’re able to make a very strong structure for the transducer,” says ATI’s Gore. “That gives us the high overload capacity and also it allows us to make a sensor that is very stiff.”

He says other force sensors rely on significant movement of the transducer to perform a measurement, which means the sensor actually flexes. In industrial applications, that flexibility or compliance is not desirable. Gore says it can lead to oscillations or make it difficult for the robot to determine where the tool endpoint is if components are moving around.

“Our sensors are very stiff even when measuring,” says Gore. “They do move, but only on a microscopic level.”

The ATI Force/Torque Sensor features a monolithic design, which limits hysteresis.

“It’s primarily just one chunk of metal and we carve a sensor out of it. Hysteresis is essentially a measure of the repeatability. Our sensors have very good repeatability, very low hysteresis. We also have an extremely advanced calibration process that produces the highest accuracy and repeatability.”

Force Sensors for Product Testing
Accuracy and repeatability are critical in product testing applications. Gore says this is a big area for ATI, especially in the automotive industry. Every car owner can appreciate a vehicle that not only looks good, but feels good. Part of that sensory experience is how the various knobs, buttons and levers perform under your touch.

Robots are used to manipulate different components in a car, such as the turn signal lever or headlight control. Sensors test the relative force needed to actuate these different components.

“The automobile manufacturers have pretty tight specs for how much force that should take,” says Gore. “It should not be too difficult to operate, but it shouldn’t be all loose and sloppy either. Everything has to have a certain feel and it has to be consistent and uniform. There’s a company that uses our sensor to manipulate all of these things inside the cabin of the car and make sure they operate to the manufacturer’s specs.”

This video shows sensor-equipped robots testing the force needed to move the louvers on the vents for a car’s heat and air conditioning system.
Robot equipped with force sensor performs product testing of automotive seating (Courtesy of Battenberg Robotic)

The automobile industry also uses robots and force sensors for seat testing, where wear and tear is a major concern.

“Until they started using robots, they had a single-axis linear slide that would drive something into the seat over and over again,” explains ATI’s Gore. “It didn’t simulate the way people actually get in and out of cars. Now they use a robot that has the freedom to move in all axes just like a person, and with our sensor they have the ability to maintain programmed contact forces. There are robot seat testers all over the world now.”

Beyond Human Capability
With the aid of force sensors, robots can assemble complex items where even humans struggle. This video shows automated assembly of a nuclear fusion target at the Lawrence Livermore National Laboratory.

“This device is the size of a pencil eraser,” explains Gore, describing the nuclear fusion target, “and has several different components that are being put together in this automated system using two of our sensors. These parts have tolerances of micrometer level. That’s something a human would definitely struggle to do, especially when you have to maintain that level of precision day in and day out.”

Force sensors are also helping robots perform automated tasks that are simply beyond human capacity. In the aerospace industry, demands for greater fuel efficiency and extended service life are requiring ever-tighter component tolerances.

In the quest to reduce emissions and increase fuel savings for airlines, OEMs are racing to improve aerodynamics on gas turbine engine blades by requiring tighter tolerances on leading and trailing edges, making manual blade profiling a thing of the past. Increased labor costs, a shortage of skilled labor, and concerns about worker safety are also fueling the automation thrust.

In April, FANUC’s 2014 Innovative System of the Year Award went to a new robotic system for jet engine blade weld blending and profiling. Developed by AV&R Aerospace in Montreal, Canada, the automated system is geared to the Maintenance, Repair and Overhaul (MRO) market and uses a FANUC LR Mate robot coupled with an integrated force sensor to re-profile blades and vanes according to the original part design. The process achieves tolerances of plus or minus 37.5 microns, or about 1.5 thousandths of an inch, a feat only possible with robotic automation.

Robot equipped with integrated force sensor precisely blends and re-profiles the weld repair on a jet engine blade (Courtesy of AV&R Aerospace)

While in service, compressor blades wear over time, becoming both narrower and thinner. The blades are supplied to AV&R Aerospace’s robotic system with the weld repair already complete, building up the surfaces of the leading and trailing edges. The process then uses a belt sander to blend the weld smooth with the parent material and re-profile the blade edges within tolerances.

This video shows the AV&R Aerospace Blade Profiling and Blending System in action.

According to AV&R Aerospace’s Engineering Coordinator, Guillaume Couture Armand, Eng., it would be impossible to achieve the tight tolerances without the FANUC force sensor. They need the adaptability of the force sensor to be able to follow the complex geometry of the blade.

“The key difficulty of this application is the random shape of the parts, because they are used parts,” explains Armand. “They will deform all differently. We have a CAD of the blade, which would be the ideal part, but the actual reality is very different, so we need the force sensor to adapt the robot path to the actual blade. Not only is the shape random, but the curve gives it a complex geometry, so we need the force sensor to compensate in different directions depending on the wear and how we’re removing the material on the airfoil. The force sensor allows us to change the orientation and keep in contact with the material removal tool (belt sander).”

Armand notes that the system’s intelligence comes from its adaptive and closed-loop capabilities, which he attributes to software, designed by the company’s team of engineers, that combines data from the force sensor and the inspection system, in this case a Keyence laser scanner.

“Robots are repeatable, but not really accurate. An inspection process is done prior to material removal. The vision system takes measurements of the part to determine the geometry and the thickness of the weld, and to compute the parameters to get the blade from its original state to the desired finished result. This is to calculate the actual geometry, because it varies a lot from blade to blade and so we need to adjust the task. Even if we change the path of the robot to fit with the actual geometry of the blade, the robot doesn’t go exactly where it’s commanded. The force sensor adapts the path to follow the actual shape of the blade. That’s the adaptive part.”

Then the system validates its own work. He says that’s the closed-loop capability.

“When the material removal process is done, the blade is inspected again to make sure we are within tolerances. If that’s not the case, then we’ll go for a second run to remove additional material to bring it closer to the desired tolerances.”
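
Armand’s inspect, remove, re-inspect cycle can be caricatured in a few lines. The numbers and the material-removal model below are invented; only the closed-loop structure reflects the description above:

```python
# A simplified, hypothetical rendering of the inspect / remove / re-inspect loop.
def blend_blade(measured_weld_height_mm, target_mm=0.0, tolerance_mm=0.04,
                removal_per_pass_mm=0.15, max_passes=3):
    height = measured_weld_height_mm
    for blade_pass in range(1, max_passes + 1):
        # Plan the pass from the latest inspection, never from the nominal CAD alone.
        planned_removal = min(removal_per_pass_mm, height - target_mm)
        height -= 0.9 * planned_removal          # real removal falls a little short of the plan
        # Closed loop: re-inspect and decide whether another pass is needed.
        if abs(height - target_mm) <= tolerance_mm:
            return blade_pass, height
    return max_passes, height

passes, final = blend_blade(0.25)
print(f"within tolerance after {passes} pass(es), residual {final:.3f} mm")
```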

The process must allow for material removal without breaking or changing the microstructure of the weld. Armand says it must also stand up to the touch test.

Aerospace blade shown before and after the robotic blending and profiling process (Courtesy of AV&R Aerospace)

“Not only shouldn’t you see where the weld starts, but when you touch the blade, you shouldn’t feel it either. The weld and the parent material need to become one.”

He says the entire process takes about 15 minutes per blade. The system has been in production at an aircraft manufacturer’s facility for about six months now. Two more installations are in the works for different customers also looking to extend the service life of their jet engine components.

“This is a very complicated process that they weren’t even able to do manually,” says Armand. “Our system actually brought them new production, which would not have been possible before that.”

AV&R Aerospace first cut its teeth on new OEM blades and vanes, where the profiling process is slightly easier given they start with a known geometry and less variation. They also use a similar process in their polishing systems for industrial gas turbine blades used in the energy sector.

This video shows various applications, from processing small to very large engine blades.

“The MRO application is a step up from what we were doing for the new blades. We always knew it was going to be a big step to get from where we were with the new parts to the MRO sector, but it was definitely a worthwhile investment because the market for MRO is huge.”

The MRO market is a new target for AV&R Aerospace. With the experience the company’s engineers have amassed in the aerospace industry for over 20 years, the systems integrator is excited by the significant prospects of this new frontier for robotic automation.

“The market for this system is very large since it goes from the jet engine blade manufacturers – creating new blades – to anyone who repairs jet engine airfoils (OEMs, approved repair stations and third parties),” says Michael Muldoon, AV&R Aerospace’s Sales & Marketing Director. “Considering every engine has to be taken off wings and go through a refurbishing process, every blade that flew needs some sort of repair before it can be used again. Every airfoil is repaired 4 to 5 times during its service life.”

AV&R Aerospace sells its blade profiling and blending systems as standalone, integrated automation systems. Armand says all the software and algorithms for calculating the blade geometry and adjusting the robot path are also developed in house.

This is precision unobtainable by humans. Companies that want to participate in this space have no choice but to automate.

Haptics-controlled teleoperated mobile robot uses sensors to locate roadside bombs (Courtesy of Harris Corporation)

Sensors with Military Might
Sensors not only help robots make the impossible possible. They also help save human lives.

Force sensors are used on these teleoperated mobile bomb disposal robots for military deployments. Primarily designed for explosive ordnance disposal (EOD), the system enables operators to feel remote objects through end-of-arm force sensing and a haptic controller.

This video dramatization shows one of these bomb “sniffing” robots in action.

Mounted on the wrist of the robot arm, ATI’s Mini45 Multi-Axis Force/Torque Sensor delivers tactile feedback to the operator over the robot’s wireless communication system.

“By giving the robot a sense of touch, it allows robots to have much more autonomy,” says Gore. “Within the job they’re doing they can make decisions and change strategies based on the forces they are measuring and the direction and magnitude of those forces, just as humans can. It also allows them to interact with humans. You can have cobots with built-in force control that prevent the robot from injuring the human.”

Human-Robot Collaboration
Cobots and other robots used in human-robot collaboration and the sensors and technology that enable more autonomy were profiled in The Realm of Collaborative Robots – Empowering Us in Many Forms.

“We’ve gotten better force torque sensors,” says Georgia Tech’s Christensen, “which has allowed us to start to see the new generation of safe robots. Whether it’s a Baxter, Sawyer, Universal, KUKA, YuMi, they’re all equipped with one form or another of force or torque sensing that allows us to build robots that are safe in the presence of humans. There’s no doubt that’s going to be a big area.”

“Not too long ago, I visited the BMW factory in Spartanburg,” says Christensen. “They have 13 cells where they are assembling a car line and the door for a car. They have it set up so they can roll out a robot (in place of a human) and if they have a challenge with a robot, they can roll it out and put in a human. It’s very interchangeable.”

Universal Robots’ cobots have been hard at work on the production line in Spartanburg since 2013. Take a look.

“In December, I was in China and I was looking at one of their electronics assembly lines,” says Christensen. “There are certain areas down the line where they would like to put in robots, primarily to get higher accuracy and better repeatability. So they are slowly starting to put in robots. And the sensing is really helping us get to easy programming, less-structured environments, and lower costs.”

Speech recognition, spurred on by the consumer electronics industry, is bolstering advances in human-robot interaction and collaboration. The rise of social robots like Pepper, which sold out in the first minute of its rollout, is paving the way. Meanwhile, researchers are working hard to make robots easier to “train” and implement for a new generation of users.

“We have a robot here at Georgia Tech in Andrea Thomaz’s lab where they basically give it a recipe for something like spaghetti Bolognese and then a human and the robot collaborate on cooking that meal,” says Christensen. “So in pasta Bolognese, you have carrots, tomatoes and meat. If the human starts chopping the tomatoes, the robot is smart enough to realize that it should pick up the carrots and bring them to him, so when he’s done with the tomatoes, the robot can give him the carrots. It’s at this very high-level description. You tell the robot what you want to do and then it will actually execute on this.”

Researcher uses verbal commands to teach a humanoid robot with speech recognition capabilities and other high-level sensing how to perform various tasks (Courtesy of Harold Daniels Studio, Georgia Institute of Technology)

“We need to stop programming these very low-level description languages,” he continues. “Instead we need to be able to give the robot a very high-level description and then the robot will automatically figure it out. And that’s a combination of having planning capabilities, and then sensing that allows it to recognize the tomatoes. The robot will also then recognize my activities.”

“For Curi and Simon (Georgia Tech’s resident research robots), we primarily do learning by demonstration where we will drive the arm to different places and then we do speech dialogue. So we’ll tell the robot, ‘here are the tomatoes, here are the carrots, here are the noodles,’ and then it will actually understand.”
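
Stripped to its essentials, the tomatoes-and-carrots reasoning Christensen describes can look something like the toy planner below, which is purely illustrative:

```python
# A toy version of high-level task reasoning from an observed human activity (illustrative only).
RECIPE_STEPS = ["chop tomatoes", "chop carrots", "brown meat", "boil noodles"]
INGREDIENT_FOR_STEP = {
    "chop tomatoes": "tomatoes",
    "chop carrots": "carrots",
    "brown meat": "meat",
    "boil noodles": "noodles",
}

def next_item_to_fetch(observed_human_step):
    """If the human is busy with one step, stage the ingredient for the next one."""
    i = RECIPE_STEPS.index(observed_human_step)
    if i + 1 < len(RECIPE_STEPS):
        return INGREDIENT_FOR_STEP[RECIPE_STEPS[i + 1]]
    return None

print(next_item_to_fetch("chop tomatoes"))   # the robot goes and gets the carrots
```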

This video shows the Simon humanoid robot learning how to do a task via a combination of speech recognition and lead-through teaching, where a Georgia Tech researcher moves the arms to the desired positions and augments the instruction with verbal commands.

“I think we’re getting reasonably close to taking this technology out of the lab and into practical applications,” says Christensen.

Robots that can taste and smell? Yep, there’s even a robot for that. A researcher from Mexico is using artificial intelligence to mimic our olfactory senses. Her work in search-and-rescue robotics is profiled in this article. Watch out, man’s best friend. Electronic bloodhounds are in our future. 

Moley Robotics asks if you could trust a chef that can’t taste. This robotic chef sports two arms from Universal Robots, the movements of which were mapped from 3D camera recordings of a real chef preparing one of his acclaimed recipes. Frankly, who cares if it doesn’t have a palate, as long as it can cook!

But wait, this “robot” can taste, at least if you like Thai food.

Beyond Our Senses
Not unlike bats and dolphins, some robots are equipped with senses humans don’t possess. The new age of collaborative robots combines multiple sensing technologies for a smarter-than-average automated workforce.

Rethink Robotics’ Baxter robot has a crown of a dozen sonar sensors just above its “face” that detect movement 360 degrees around the robot. Next to each sonar sensor is a yellow LED that lights up when Baxter senses something, or someone, nearby. Baxter is also endowed with a camera in its head and in each wrist, and force/torque sensors in its joints.
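
A purely illustrative mapping of that behavior (not Rethink’s firmware) might look like this:

```python
# Illustrative only: twelve sonar ranges around the head, light the LED next to any close reading.
NUM_SONARS = 12
PROXIMITY_THRESHOLD_M = 1.5

def leds_to_light(sonar_ranges_m):
    """Return the indices (0-11, evenly spaced around 360 degrees) whose LEDs should glow."""
    return [i for i, r in enumerate(sonar_ranges_m) if r < PROXIMITY_THRESHOLD_M]

readings = [5.0] * NUM_SONARS
readings[3] = 0.8          # someone approaching from roughly the 90-degree direction
print(leds_to_light(readings))   # -> [3]
```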

Mobile robots equipped with night vision and infrared sensors are advancing search-and-rescue teams and security details. Robot-sensor combos to detect electrical and magnetic fields, pressure fluctuations, humidity, chemicals and other environmental conditions are forging new frontiers for robotics in mining, construction, agriculture, marine exploration and many other industries ripe for intelligent automation. This is just the beginning.

RIA Members featured in this article:
ATI Industrial Automation
Georgia Institute of Technology
Recognition Robotics Inc.
SICK, Inc.
Vulcan Engineering Co.


Originally published by RIA via www.robotics.org on 06/25/2015
