
LD4Robots - Learning from Demonstration for Collaborative Robots

HEIG-VD (Vaud) and HE-Arc Engineering (BE-JU-NE)

Learning from demonstration for collaborative robots: the user programs a collaborative robot by demonstrating the task to be performed. The demonstration is captured and analyzed using different motion sensors and then replayed by the robot.

Demonstration of a task to the ABB YuMi robot and recording of the data using the Leap Motion sensor

Technological advances in recent years have allowed robots to leave their cages and work side by side with human operators. With these collaborative robots, a new era of industrial robotics is opening: the market for collaborative robots is expected to grow from $400 million in 2017 to $7.5 billion in 2027.

Intended to work with and alongside humans, these robots will have to offer advanced skills. Classic robot programming (complex and time-consuming) is reaching its limits in the face of this new challenge. The imitation-learning paradigm is a promising way to teach robots to accomplish complex tasks: the aim is to equip the collaborative robot with cyber-physical systems that allow it to observe, analyze, reproduce, and refine manipulations carried out by a human operator.

The goal of this project is to design and prototype an imitation-learning system for a collaborative industrial robot. To this end, the following scientific objectives are pursued:

  • Analyze needs in close collaboration with industrial partners in order to define relevant scenarios that take industry requirements and constraints into account.
     
  • Develop a methodology for decomposing scenarios into hierarchical features that model manipulations. Starting from primitive features such as spatial position, the objective is to build more advanced compound features, such as "grasp object", capable of describing a manipulation (see the first sketch after this list).
     
  • Develop, based on motion-acquisition systems, a system capable of acquiring and analyzing the manipulations performed by an operator in order to extract their primitive and compound features. Several sensors are needed to capture gestures and movements at different scales. Classification algorithms based on machine learning are then designed to recognize gestures, and an expert system translates the recognized gestures into a robot program (see the second sketch after this list). The ABB YuMi robot installed at HEIG-VD is used in this project.
     
  • Apply this learning system to typical use cases, such as assembling a part made up of 4 to 5 elements taken from different containers (see the last sketch below).
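
As an illustration of the second objective, the following minimal Python sketch shows how primitive features sampled from a demonstration could be combined into a compound "grasp object" feature. All names, fields, and thresholds here are illustrative assumptions, not the project's actual model.

```python
"""Sketch: primitive per-frame features combined into a compound
'grasp object' feature. Fields and thresholds are assumptions."""
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    """Primitive features sampled at one instant of a demonstration."""
    t: float                       # timestamp [s]
    hand_pos: Tuple[float, ...]    # (x, y, z) hand position [m]
    pinch: float                   # 0.0 = open hand, 1.0 = fully pinched
    obj_pos: Tuple[float, ...]     # (x, y, z) of the nearest object [m]

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def detect_grasp(frames: List[Frame],
                 pinch_thr: float = 0.8,
                 near_thr: float = 0.05) -> List[float]:
    """Compound feature 'grasp object': the hand closes while within
    near_thr metres of an object. Returns the grasp timestamps."""
    grasps, was_pinching = [], False
    for f in frames:
        pinching = f.pinch > pinch_thr
        if pinching and not was_pinching and dist(f.hand_pos, f.obj_pos) < near_thr:
            grasps.append(f.t)
        was_pinching = pinching
    return grasps
```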
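For the third objective, the sketch below illustrates, under purely assumed feature encodings and gesture labels, how a machine-learning classifier (here a scikit-learn random forest, one possible choice) could label motion segments, and how a small rule table standing in for the expert system could translate the recognized gestures into robot instructions.

```python
"""Sketch: classify motion segments, then map gesture labels to robot
instructions. Features, labels, and commands are assumptions."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: hypothetical features of one motion segment, e.g.
# mean velocity, path length, final pinch value.
X_train = np.array([[0.02, 0.10, 0.9],   # slow, short, hand closes -> grasp
                    [0.30, 0.60, 0.9],   # fast, long, hand closed  -> move
                    [0.01, 0.05, 0.1]])  # slow, short, hand opens   -> release
y_train = np.array(["grasp", "move", "release"])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Expert-system stand-in: gesture label -> robot instruction template.
RULES = {
    "grasp":   "close_gripper()",
    "move":    "move_linear(target)",
    "release": "open_gripper()",
}

def to_program(segments: np.ndarray) -> list:
    """Translate classified segments into an ordered robot program."""
    return [RULES[label] for label in clf.predict(segments)]

demo = np.array([[0.02, 0.09, 0.9], [0.28, 0.55, 0.9], [0.01, 0.04, 0.1]])
print(to_program(demo))
# e.g. ['close_gripper()', 'move_linear(target)', 'open_gripper()']
```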
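Finally, the assembly use case can be pictured as the kind of program such a system would produce: one pick-and-place step per element. Element names, container names, and the pick/place primitives below are invented for illustration.

```python
"""Sketch: expand an assembly task of several elements, each taken
from its own container, into an ordered list of robot steps."""
ELEMENTS = [
    ("base",  "container_A"),
    ("axle",  "container_B"),
    ("gear",  "container_C"),
    ("cover", "container_D"),
]

def assembly_program(elements):
    """One pick step and one place step per (element, container) pair."""
    steps = []
    for part, container in elements:
        steps.append(f"pick('{part}', from='{container}')")
        steps.append(f"place('{part}', at='assembly_fixture')")
    return steps

for step in assembly_program(ELEMENTS):
    print(step)
```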