This topic arises in the context of the development of autonomous vehicles, drones, and robotics.
The environment of the vehicle is described by an occupancy grid, in which each cell contains the probability that it is occupied by an object. Bayesian fusion techniques allow information provided by several sensors to be fused into the grid. We therefore plan to leverage the knowledge available at CEA-LETI, on both the physics behind the sensors and the applications of occupancy grids, and to focus this thesis on the approximation of an occupancy model with Machine Learning techniques.
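Bayesian fusion of several sensor opinions into one grid cell is commonly done in log-odds form, where independent contributions simply add up. The sketch below is illustrative only; the prior of 0.5 and the two sensor probabilities are assumed values, not taken from any specific sensor.

```python
import math

def logodds(p):
    # Convert a probability to its log-odds representation.
    return math.log(p / (1.0 - p))

def fuse(prior, sensor_probs):
    # Bayesian fusion for a single cell: each independent sensor
    # contributes its log-odds relative to the prior; summing these
    # contributions and converting back yields the fused probability.
    l = logodds(prior) + sum(logodds(p) - logodds(prior) for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-l))

# Two sensors both report the cell as likely occupied:
# the fused belief is stronger than either individual opinion.
p = fuse(0.5, [0.7, 0.8])
```

With a non-informative prior of 0.5, two moderately confident detections (0.7 and 0.8) fuse to roughly 0.90, which is why grid cells sharpen as more measurements arrive.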
A key aspect of this technique is the shift from a "measurement" coming from a sensor to the "occupancy" information for each cell of the grid. Since each sensor has its own specifications in terms of radial or angular accuracy, misdetection rate, etc., each of them has a specific occupancy model. The exact computation of this model is intractable in practice in the general case, even though the formula is well known, due to a combinatorial explosion in the number of terms.
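The measurement-to-occupancy shift can be illustrated with a toy inverse sensor model for a 1D range sensor. Everything here is a hypothetical sketch: the piecewise shape, the free-space probability of 0.2, and the Gaussian width `sigma` standing in for the radial accuracy are all assumed parameters, not a model of any actual sensor.

```python
import math

def occupancy_model(z, cell_centers, sigma=0.1, p_prior=0.5):
    # Hypothetical 1D inverse sensor model: given one range measurement z,
    # assign an occupancy probability to every cell along the beam.
    probs = []
    for c in cell_centers:
        if c < z - 2 * sigma:
            # Cells well in front of the detected obstacle are likely free.
            probs.append(0.2)
        elif c <= z + 2 * sigma:
            # Cells around the measurement: a Gaussian bump centred on z
            # models the sensor's radial accuracy.
            probs.append(0.5 + 0.45 * math.exp(-0.5 * ((c - z) / sigma) ** 2))
        else:
            # Cells behind the obstacle: the beam carries no information,
            # so keep the prior.
            probs.append(p_prior)
    return probs

cells = [0.05 * i for i in range(40)]  # cells along the beam, 5 cm apart
probs = occupancy_model(z=1.0, cell_centers=cells)
```

Even in this simplified form, one measurement updates every cell of the beam; with several sensors, each having its own such model, the exact joint computation over all cells is what becomes combinatorially expensive, motivating the learned approximation studied in this thesis.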
At the same time, the recent successes of Deep and Reinforcement Learning (image classification, automatic language translation, strategy games) have brought to the fore the capability of neural networks to approximate virtually any function.