
Inverse reinforcement learning of a task performed by a human

Learning from demonstration involves an agent (e.g., a robot) learning a task by watching another agent (e.g., a human) perform the same task. It often relies on reinforcement-learning methods to improve the robot's ability to perform the task in new situations (i.e., to generalize). These methods provide a positive reinforcement (i.e., a reward) when the outputs of the algorithm help achieve the task, but they require a human-designed reward function. The more complex the task, the harder the reward function is to design; it can, however, be learned from a set of examples with methods known as inverse reinforcement learning. The use of these techniques, jointly or separately, has shown encouraging results, but these remain limited to toy examples and cannot be transferred as such to tasks more representative of an industrial environment.

During the thesis, the PhD student will analyze and test state-of-the-art prior work. They will then propose a method combining inverse reinforcement learning with other algorithms (e.g., generative adversarial networks, GANs) so that the robot understands the task performed by the operator (with as little explanation from the operator as possible) and generalizes well enough to remain robust in dynamic environments (obstacles, moving objects, etc.). The method should be suited to a "pick and place" task in an industrial environment and ensure a reasonably short learning period (a priori information, feedback from the operator) for tasks of medium complexity.
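To make the reward-learning idea concrete, here is a minimal sketch of feature-matching inverse reinforcement learning on a toy 5-state chain. The environment, the expert trajectories, and the greedy rollout policy are all illustrative assumptions for this sketch, not the method to be developed in the thesis; the reward weights are simply nudged until the greedy policy's state-visitation counts match the expert's.

```python
# Toy feature-matching IRL sketch: learn per-state reward weights so that
# a greedy policy reproduces the expert's state-visitation statistics.
# Everything here (environment, trajectories, update rule) is illustrative.
import numpy as np

n_states = 5                     # states 0..4; the (assumed) expert walks right
features = np.eye(n_states)      # one-hot state features

# Hypothetical expert demonstrations: trajectories heading to state 4.
expert_trajs = [[0, 1, 2, 3, 4], [1, 2, 3, 4, 4]]

def feature_expectations(trajs):
    # Average feature counts over trajectories (undiscounted, for brevity).
    mu = np.zeros(n_states)
    for traj in trajs:
        for s in traj:
            mu += features[s]
    return mu / len(trajs)

def greedy_trajs(w, starts, horizon=4):
    # Roll out a greedy policy under reward weights w: from each state,
    # move to the neighboring state (left / stay / right) with highest reward.
    trajs = []
    for s in starts:
        traj = [s]
        for _ in range(horizon):
            neighbors = [max(s - 1, 0), s, min(s + 1, n_states - 1)]
            s = max(neighbors, key=lambda n: w[n])
            traj.append(s)
        trajs.append(traj)
    return trajs

mu_expert = feature_expectations(expert_trajs)
w = np.zeros(n_states)           # learned reward weights, one per state
for _ in range(50):
    mu_policy = feature_expectations(greedy_trajs(w, starts=[0, 1]))
    w += 0.1 * (mu_expert - mu_policy)   # push weights toward expert counts

# The learned reward ends up highest at the state the expert prefers.
print(int(np.argmax(w)))  # -> 4
```

After a few updates the greedy policy reproduces the demonstrations exactly, so the gradient vanishes; realistic versions of this idea (maximum-entropy IRL, GAIL) replace the greedy rollout with a full policy-optimization step.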
