Manufacturers demand faster, more powerful collaborative robots (cobots) every year, but engineers must keep these machines safe for the employees working around them. Some may think simply adding fencing makes a cobot safer, but there’s an alternative that often works better.
Artificial Intelligence vs. Fencing
People are fast and clever, and they often find workarounds for physical barriers they wish to cross. If a work cell is fenced in and designed so the robot must shut down before a person can enter, a worker may try to bypass the safety mechanism rather than interrupt the cobot’s work.
Many fenced-in cobot systems still impose force and speed limitations as additional safety precautions. Artificial intelligence could remove the need for these limitations: separation monitoring and advanced vision technology could instead allow a collaborative robot’s capabilities to increase.
AI systems could make any robot collaborative. Humans could work even closer to cobots than they do now without a threat to their safety. And since cobots could work faster with more force, production cycles could improve as heavier loads are moved and manipulated more quickly. Most cobot payloads are currently limited to about 10 kg because of safety concerns, but that could increase with AI.
Vision Systems for Collaborative Robots
To give AI the information it needs about the work environment, the collaborative robot needs a way to “see.” Machine vision and motion-sensing technology must be integrated into automated systems, with multiple cameras whose overlapping fields of view monitor the entire work cell.
One solution is to perform this scanning with cameras and computer vision software. Infrared light flashes 30 times per second to map every object near the cobot. The system combines the cameras’ data to look for occlusions (obstructions). When an occlusion is detected, it is assumed a human has entered the work cell, and custom procedures can be followed so the human is not harmed.
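The detection logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the `Frame` class, the cell-index representation of the work volume, and the `KNOWN_OBJECTS` model are all assumptions made for the example.

```python
# Hypothetical sketch of occlusion-based intrusion detection: fuse the
# occupied regions each camera reports, subtract everything the cell
# model already explains, and treat any remainder as a possible human.
from dataclasses import dataclass


@dataclass
class Frame:
    """One camera's view: indices of work-cell regions it sees as occupied."""
    occupied_cells: set


# Regions known to be filled by the cobot and fixtures (from the cell model).
KNOWN_OBJECTS = {3, 7, 12}


def detect_intrusion(frames):
    """Return the set of occupied regions not explained by the cell model."""
    seen = set()
    for frame in frames:
        seen |= frame.occupied_cells
    return seen - KNOWN_OBJECTS


def safety_action(intruding):
    # Conservative policy: any unexplained occlusion triggers a protective stop.
    return "protective_stop" if intruding else "full_speed"
```

In use, an empty intrusion set lets the robot run at full speed, while any unexplained region immediately triggers the stop, regardless of what caused it.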
Artificial Intelligence, Not Machine Learning
Using AI rather than machine learning with these vision systems is an important distinction. Machine learning is probabilistic: its outputs are based on probability and subject to chance variation. Deterministic AI classification makes occlusion analysis more efficient and ensures human safety at all times.
Current safety standards make clear that statistical approaches to triggering safety measures are not allowed. A statistical approach would have a robot assess, “there is a 78% chance this human will be injured if action isn’t taken.” At what point should the system act? 50%? 75%? Humans should always be able to count on 100% safe working conditions, and AI is the route to that assurance.
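The contrast drawn above can be made concrete with a toy comparison. This is illustrative code only, not the wording of any safety standard: a probabilistic trigger forces the designer to pick an arbitrary threshold, while a deterministic trigger acts on any detection at all.

```python
# Hypothetical contrast between a statistical and a deterministic
# safety trigger (function names and threshold are illustrative).

def probabilistic_trigger(p_injury, threshold=0.5):
    # Where does the threshold go? 0.5? 0.75? Anyone whose estimated
    # risk falls below it gets no protection at all.
    return p_injury > threshold


def deterministic_trigger(occlusion_detected):
    # Any detected occlusion is treated as a human in the cell: always act.
    return occlusion_detected
```

With a 0.5 threshold, a worker assessed at a 40% injury risk would trigger no response; the deterministic rule has no such gap.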
Be sure to register for our free webinar, Robot Safety Update, on August 22nd. Carole Franklin, Director of Standards Development at RIA, will share key updates and provide a clear understanding of what defines a robot standard.