Scientific direction: Development of key enabling technologies / Transfer of knowledge to industry


Fault injection and integrity of edge neural networks: attacks, protections, evaluation

One of the major trends in Artificial Intelligence is the large-scale deployment of Machine Learning systems on a wide variety of embedded platforms. Many semiconductor vendors offer "AI-ready" products, mostly targeting neural network inference. The security of the embedded models is a major issue for the deployment of these systems. Several works have raised threats such as adversarial examples or membership inference attacks, with potentially disastrous impact. However, these works consider ML algorithms from a purely algorithmic point of view, without taking into account the specificities of their physical implementation. Moreover, further work is needed on physical attacks (i.e., side-channel and fault injection analysis). By considering an overall attack surface covering both the theoretical (i.e., algorithmic) and physical facets, this subject proposes to analyze Fault Injection Analysis (FIA) threats targeting the integrity of the model (fooling a prediction) of embedded machine learning systems, and to develop appropriate protections.

Several works have studied physical attacks on embedded neural networks, but usually with naive model architectures on 'simple' 8-bit microcontrollers or FPGAs, or at a pure simulation level. These works do not attempt to link the fault models or the leakages with well-known algorithmic threats. Building on experience with other critical systems (e.g., cryptographic primitives), the main idea of this PhD subject is to jointly analyze the algorithmic and physical worlds in order to better understand the complexity of the threats and to develop efficient defense schemes. The work will address the following scientific challenges:
(1) Characterization and exploitation of fault models: how to exploit fault injection mechanisms (laser, EM, glitching) to fool the prediction of a model with minimal perturbations.
(2) Evaluation of the relevance of classical countermeasures (such as redundancy-based techniques) for this kind of system and threat.
(3) Development of new protections suited to embedded neural networks.
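To make challenge (1) concrete, the sketch below simulates the effect of a single sign-bit flip in an 8-bit quantized weight of a toy linear classifier; all names and values are invented for illustration, and no real injection hardware (laser, EM, glitching) is modeled.

```python
import numpy as np

# Tiny "model": int8 weight matrix (2 classes x 2 features) and a quantized input.
W = np.array([[50, 2],
              [3, 4]], dtype=np.int8)
x = np.array([1, 1], dtype=np.int8)

def predict(weights, inputs):
    # Integer dot product, accumulated in int32 as integer inference engines do.
    logits = weights.astype(np.int32) @ inputs.astype(np.int32)
    return int(np.argmax(logits)), logits

clean_class, clean_logits = predict(W, x)     # logits [52, 7] -> class 0

# Fault model: flip the sign bit of one weight, mimicking a memory bit-set fault.
W_faulty = W.copy()
W_faulty[0, 0] ^= np.int8(-128)               # 50 (0x32) -> -78 (0xB2)

faulty_class, faulty_logits = predict(W_faulty, x)  # logits [-76, 7] -> class 1
```

A single flipped bit is enough to swap the predicted class here, which is why characterizing which weights and bits are most sensitive is central to this line of work.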

Department: Département Systèmes (LETI)
Laboratory: Laboratoire Sécurité des Objets et des Systèmes Physiques
Start date: 01-02-2021
CEA code: SL-DRT-21-0159
Contact:
