Scientific direction: Development of key enabling technologies
Transfer of knowledge to industry

PhD: selection by topics

Technological challenges >> Artificial intelligence & Data intelligence
8 positions.

Fault injection and integrity of edge neural networks: attacks, protections, evaluation

Département Systèmes (LETI)

Laboratoire Sécurité des Objets et des Systèmes Physiques

01-02-2021

SL-DRT-21-0159

pierre-alain.moellic@cea.fr

One of the major trends in Artificial Intelligence is the large-scale deployment of Machine Learning systems on a wide variety of embedded platforms. Many semiconductor vendors offer "AI-ready" products, mostly neural networks for inference. The security of the embedded models is a major issue for the deployment of these systems. Several works have raised threats such as adversarial examples or membership inference attacks, with potentially disastrous impact. These works consider ML algorithms from a purely algorithmic point of view, without taking into account the specificities of their physical implementation. Moreover, further work is needed on physical attacks, i.e., side-channel and fault injection analysis.

By considering an overall attack surface covering both the theoretical (i.e., algorithmic) and physical facets, this subject proposes to analyze Fault Injection Analysis (FIA) threats targeting the integrity of the model (fooling a prediction) in embedded machine learning systems, and to develop appropriate protections. Several works have studied physical attacks on embedded neural networks, but usually with naive model architectures on 'simple' 8-bit microcontrollers or FPGAs, or purely in simulation. These works do not attempt to link the fault models or the leakages with well-known algorithmic threats. Building on experience with other critical systems (e.g., cryptographic primitives), the main idea of this PhD is to analyze the algorithmic and physical worlds jointly, in order to better understand the complexity of the threats and to develop efficient defense schemes. The work will address the following scientific challenges: (1) characterization and exploitation of fault models: how to exploit fault injection mechanisms (laser, EM, glitching) to fool the prediction of a model with minimal perturbations; (2) evaluation of the relevance of classical countermeasures (such as redundancy-based techniques) for this kind of system and threat; (3) development of new protections suited to embedded neural networks.
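
As a toy illustration of challenge (1), the sketch below (all parameters illustrative; plain NumPy, no real hardware) flips a single bit of a stored 8-bit weight, as a laser or EM pulse might, and checks whether the prediction of a small linear classifier changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 10-class linear classifier with weights quantized to int8,
# the usual storage format for edge inference.
W = rng.normal(size=(10, 64))
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -128, 127).astype(np.int8)
x = rng.normal(size=64)

def predict(w_q):
    return int(np.argmax((w_q.astype(np.float32) * scale) @ x))

baseline = predict(W_q)

# Simulated fault: toggle the most significant bit of one stored weight,
# as a fault injection might corrupt the memory holding the parameters.
faulty = W_q.copy()
i, j = rng.integers(10), rng.integers(64)
faulty[i, j] ^= np.int8(-128)   # bit 7 is the sign bit of an int8

print("prediction unchanged" if predict(faulty) == baseline
      else f"fault flipped class {baseline} -> {predict(faulty)}")
```

A real campaign would sweep fault locations and injection parameters to map which bits are critical, which is exactly the characterization that challenge (1) refers to.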

Sensor network and low-power Edge AI for predictive maintenance

Département Systèmes (LETI)

Laboratoire Autonomie et Intégration des Capteurs

01-10-2021

SL-DRT-21-0312

vincent.heiries@cea.fr

Predictive maintenance is a major issue for the industry of the future (Industry 4.0): it makes it possible to maximize the use time of parts, extend the service life of machines, and reduce failures and equipment downtime, with economic and environmental gains for the company. Predictive maintenance relies on sensor networks placed on the equipment to be monitored and on learning mechanisms using artificial intelligence (Machine Learning). These sensors are today essentially wired, which makes their installation complex: cable routing, walls, rotating environments, etc. The ideal solution would be wireless communicating sensors; the question then arises of their energy autonomy, which is the subject of this PhD thesis. This topic, part of the "Cyber-Physical Systems" roadmap of the Systems Department of CEA-LETI (Grenoble), aims to develop a network of low-power wireless sensors to monitor industrial equipment and anticipate its failure. The PhD work will rely on advanced technological solutions using embedded artificial intelligence (edge AI), fusion of data from different sensors (audio, vibration) and low-power electronics (hardware and firmware), in particular for the signal processing and communication aspects.

Artificial intelligence is booming, with major challenges for health, transport, environmental protection and industry. At present, computations are mainly carried out on servers (commonly referred to as the cloud), which requires the complete transmission of the data measured by the sensors (e.g. an audio signal for a microphone, or vibrations for an accelerometer). This architecture is simple to deploy, but not very energy efficient, with mostly oversized computing servers, and not very resilient in case of data transmission failure. The trend is therefore to implement processing algorithms as close as possible to the sensors, in order to reduce the utilization of communication systems, offload the computing servers and reduce their energy consumption, and improve the resilience of these sensor networks. From this observation, it remains to be understood how a data processing task initially carried out by servers without power or computing constraints can be offloaded onto a local sensor network with limited available energy and reduced computing power (e.g. low-power microcontrollers). To this end, methods from the field of compressive sensing can be used, and machine learning algorithms can be applied in the compressed space (an illustrative sketch is given below).

The core of the thesis will thus focus on minimizing the hardware and firmware energy consumption of embedded electronic systems implementing artificial intelligence, targeting the application "predictive maintenance for industry". The following research questions and associated innovations will be targeted: (i) the development of low-power electronic architectures (wake-up functions, adjustment of the measurement frequency, etc.); (ii) the development and implementation on microcontrollers of Machine Learning algorithms for sensor functions (audio, vibration, temperature); and (iii) the development and implementation on microcontrollers of predictive Machine Learning algorithms for optimizing energy and autonomy. A complete electronic device (hardware + firmware) implementing these innovations and deployed in a real environment is expected by the end of the thesis.
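
A minimal sketch of the compressed-space idea mentioned above (synthetic data; all names illustrative): a fixed random projection stands in for compressive sensing, and a lightweight classifier operates directly on the compressed coefficients, the kind of workload that could fit a low-power microcontroller:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N, D, M = 400, 1024, 64                 # windows, raw samples, compressed dim

# Synthetic "healthy" vs "faulty" vibration windows (fault adds a 120 Hz tone)
t = np.arange(D) / 1000.0
healthy = rng.normal(size=(N // 2, D))
faulty = rng.normal(size=(N // 2, D)) + 2.0 * np.sin(2 * np.pi * 120 * t)
X = np.vstack([healthy, faulty])
y = np.array([0] * (N // 2) + [1] * (N // 2))

Phi = rng.normal(size=(M, D)) / np.sqrt(M)   # fixed random measurement matrix
Z = X @ Phi.T                                # compress: 1024 -> 64 values

clf = LogisticRegression(max_iter=1000).fit(Z[::2], y[::2])
print("held-out accuracy in compressed space:", clf.score(Z[1::2], y[1::2]))
```

Since the projection matrix is fixed, the on-sensor compression is a cheap matrix-vector product, and only 64 coefficients per window need to be processed or transmitted instead of 1024 raw samples.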

Lensless imaging and artificial intelligence for rapid diagnosis of infections

Département Microtechnologies pour la Biologie et la Santé (LETI)

Laboratoire Systèmes d'Imagerie pour le Vivant

01-10-2020

SL-DRT-21-0380

caroline.paulus@cea.fr

The objective of the thesis is to develop a portable technology for pathogen identification. In a context of expanding medical deserts and resurgent antibiotic-resistant infections, it is urgent to develop innovative techniques for the rapid diagnosis of infections in isolated regions. Among optical techniques for pathogen identification, lens-free imaging methods draw attention because they are currently the only ones able to offer simultaneous characterization of a large number of colonies, with a low-cost, portable and energy-efficient technology. The thesis will explore the potential of lensless imaging combined with artificial intelligence algorithms to identify bacterial colonies present in a biological fluid. It will aim to optimize the sizing of the imaging system (sources, sensors) and to study the image processing and machine learning algorithms necessary for colony identification. Two clinical application cases will be studied.
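
As background, a minimal sketch (all parameters illustrative) of the forward model behind lens-free in-line holography: a bare sensor records the intensity of the light diffracted by the sample, simulated here with the standard angular spectrum propagation method:

```python
import numpy as np

wavelength, z, dx, n = 532e-9, 1e-3, 1.12e-6, 512  # source, distance, pixel, grid

# Object plane: a partially absorbing disk mimicking a colony on a clear slide
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
field = np.ones((n, n), dtype=complex)
field[xx**2 + yy**2 < (20e-6) ** 2] = 0.1

# Angular spectrum transfer function for free-space propagation over z
fx = np.fft.fftfreq(n, d=dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
H = np.zeros((n, n), dtype=complex)
prop = arg > 0                                   # keep propagating waves only
H[prop] = np.exp(2j * np.pi / wavelength * z * np.sqrt(arg[prop]))

hologram = np.abs(np.fft.ifft2(np.fft.fft2(field) * H)) ** 2  # sensor intensity
print(hologram.shape, hologram.mean())
```

Inverting this model (holographic reconstruction), or feeding the raw hologram directly to a classifier, are two natural routes for the machine learning part of such a system.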

Oscillating neurons for computational optimization and associative memory

Département Composants Silicium (LETI)

Laboratoire d'Intégration des Composants pour la Logique

01-10-2021

SL-DRT-21-0393

louis.hutin@cea.fr

Hopfield networks are a type of recurrent neural network particularly well suited to content-addressable associative memory functions. By giving their elements the ability to fluctuate, they can be adapted to efficiently solve NP-hard combinatorial optimization problems. Such problems, for which finding exact solutions in polynomial time is out of reach for deterministic Turing machines, have many applications in fields as diverse as logistics, circuit design, medical diagnosis, Smart Grid management, etc. The proposed project is framed by the search for hardware accelerators for Artificial Intelligence. In particular, we consider the use of injection-locked oscillators (ILOs) as neurons. The goals will be the design, fabrication and demonstration of such networks, featuring binary phase-encoded neurons coupled by adjustable synaptic weights, to carry out associative memory tasks (e.g. pattern recognition) or combinatorial optimization tasks (e.g. max-cut, graph coloring, ...).
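
A minimal software analogue (not the hardware; graph and schedule illustrative) of how binary-phase oscillator networks address max-cut: each neuron is a spin s_i in {-1, +1} standing for its two locked phases, the graph defines the couplings, and annealed random flips minimize the Ising energy, whose minima correspond to maximum cuts:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)   # random graph
A = np.triu(A, 1); A = A + A.T                 # symmetric adjacency, no loops

s = rng.choice([-1.0, 1.0], size=n)            # initial binary phases
T = 2.0                                        # "fluctuation" temperature
for step in range(5000):
    i = rng.integers(n)
    dE = -2.0 * s[i] * (A[i] @ s)              # energy change if spin i flips
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                           # Metropolis acceptance
    T *= 0.999                                 # anneal the fluctuations away

cut = ((1 - np.outer(s, s)) * A).sum() / 4     # edges crossing the partition
print(f"cut size: {cut:.0f} of {A.sum() / 2:.0f} edges")
```

In the hardware version, the physics of the coupled ILOs performs this descent in parallel, with injection locking enforcing the binary phase encoding.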

Training and quantization of large-scale deep neural networks for transfer learning

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Intelligence Artificielle Embarquée

SL-DRT-21-0446

johannes.thiele@cea.fr

Transfer learning is today a common technique in Deep Learning: it uses the learned parameters of a generic network (a feature extractor) to accelerate the training of another network on a more specific task. This specialized network is subsequently optimized for the hardware constraints of the specific use case. However, given that the representations of the feature extractor are often rather generic, it might be possible to optimize the parameters before the transfer, so that each end user does not have to perform this optimization themselves. In this context, the thesis has the following scientific objectives:
- using several "unsupervised" learning methods (self-supervised, weakly supervised, semi-supervised) to train feature extractors on large datasets;
- studying how common optimization methods (in particular quantization) can be applied to these extractors in a "task-agnostic" fashion (see the sketch below);
- quantifying the influence of these optimizations on the transfer learning capacity, through benchmarking and theoretical analysis (e.g. information compression theory).

Required competences: Master's degree (or equivalent), machine learning (in particular Deep Learning), programming (Python, PyTorch, TensorFlow, C++), good English (French is not required, but helpful).
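
A minimal sketch of the pipeline studied here (model choice, bit-width and weight-loading call are illustrative; assumes a recent torchvision): quantize a generic feature extractor in a task-agnostic way, i.e. without any downstream data, then reuse it frozen for transfer, training only a small task head:

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # generic feature extractor
backbone.fc = nn.Identity()                          # expose 512-d features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False                          # frozen for transfer

def quantize_weights(model: nn.Module, n_bits: int = 8) -> None:
    """Task-agnostic post-training quantization: uniform, per-tensor."""
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max() / (2 ** (n_bits - 1) - 1)
            if scale > 0:
                p.copy_(torch.round(p / scale) * scale)

quantize_weights(backbone)           # optimized BEFORE the transfer

head = nn.Linear(512, 10)            # only this part is trained downstream
features = backbone(torch.randn(1, 3, 224, 224))
print(head(features).shape)          # torch.Size([1, 10])
```

The open question, per the objectives above, is how much such data-free quantization costs in transfer accuracy across different downstream tasks.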

Embedded autonomous incremental learning

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Intelligence Intégrée Multi-capteurs

01-09-2021

SL-DRT-21-0465

carolynn.bernier@cea.fr

The recent development of incremental learning algorithms for deep neural networks is an opportunity to imagine new intelligent sensor applications deployed in real environments. By being able to learn new tasks incrementally, the sensor will be able to personalize its behavior to its specific deployment environment, allowing it to adapt to slow variations in its target tasks (e.g. the detection of different types of anomalies) or to learn new tasks that were not initially anticipated. This would make the service rendered by the autonomous sensor more and more relevant. The objective of this thesis is to explore the means by which the intelligent sensor can become fully autonomous in its evolution, while taking into account the limited processing capability of the embedded system. Given the limited power budget of the platform, the idea is to associate two embedded systems: a first one, "Always-on", which executes the nominal task of the application (e.g. the detection of different classes of events or anomalies), and a second one, "On-demand", which would be executed from time to time in order to retrain the model of the "Always-on" part. For coherence, the power consumption of the two platforms should be in a ratio of approximately 1:100 to 1:1000.

The challenges facing the design of such a system are many. The first is the design of detection mechanisms able to find false negatives (slowly changing classes) as well as novel examples (new classes); these mechanisms must be executed on the "Always-on" platform, with the associated implementation constraints (a simple trigger mechanism is sketched below). A second difficulty concerns the retraining phase, which is executed on the "On-demand" platform. This phase must take into account the structure of the "Always-on" model in order to retrain it with new examples, both to slowly learn modifications of the existing detection task and to learn a new task without forgetting the old ones. Since this is a new application space, the PhD candidate must have a broad understanding of the subject and will necessarily have to address a wide range of domains, including incremental learning algorithms, deep learning training algorithms, and the hardware requirements for running these algorithms in an embedded context.
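
A minimal sketch (thresholds, buffer size and model are illustrative) of the trigger logic between the two platforms: the Always-on model scores every input with a softmax confidence, low-confidence inputs are buffered as suspected novelties, and a full buffer wakes the On-demand platform for retraining:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

CONF_THRESHOLD = 0.7      # below this, the input is a suspected novelty
BUFFER_SIZE = 32          # wake-up budget for the On-demand platform
buffer = []

def on_demand_retrain(samples):
    # Placeholder for the rare, power-hungry phase: incremental update of
    # the Always-on model without catastrophic forgetting (e.g. rehearsal).
    print(f"On-demand wake-up: retraining on {len(samples)} buffered samples")

def always_on_step(logits_fn, x):
    """One nominal inference step on the Always-on platform."""
    p = softmax(logits_fn(x))
    if p.max() < CONF_THRESHOLD:
        buffer.append(x)                      # candidate for retraining
        if len(buffer) >= BUFFER_SIZE:
            on_demand_retrain(list(buffer))
            buffer.clear()
    return int(np.argmax(p))

# Toy demo: a fixed linear "model" fed random inputs
W = rng.normal(size=(3, 8))
for _ in range(200):
    always_on_step(lambda x: W @ x, rng.normal(size=8))
```

The confidence test is deliberately cheap, since it must run on every inference within the Always-on power budget.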

Blending intuition with reasoning - Deep learning augmented with algorithmic logic and abstraction

Département Ingénierie Logiciels et Systèmes (LIST)

Labo.conception des systèmes embarqués et autonomes

01-03-2021

SL-DRT-21-0617

shuai.li@cea.fr

Within machine learning, deep learning, based on neural networks, is a subfield that has gained much traction through several high-profile success stories. The statistical method by which a neural network solves a problem can be seen as a very primitive form of intuition, as opposed to classical computer reasoning. However, so far the only real success of deep learning has been its ability to self-tune a geometric logic that transforms data represented as points in n dimensions into data represented as points in m dimensions, provided enough training data is available. Unlike a human being, a neural network does not have the ability to reason through algorithmic logic. Furthermore, although neural networks are tremendously powerful for a given task, they have no ability to generalize globally: any deviation in the input data may give unpredicted results, which limits their reusability. Considering the significant cost associated with neural network development, integrating such systems is not always economically viable. It is therefore necessary to abstract, encapsulate, reuse and compose neural networks. Although lacking in deep learning, algorithmic logic and abstraction are today innate to classical software engineering, through programming primitives, software architecture paradigms, and mature methodological patterns such as Model-Driven Engineering. In this thesis, we therefore propose to blend reusable algorithmic intelligence, providing the ability to reason, with reusable geometric intelligence, providing the ability of intuition. To achieve this, we can explore ideas such as integrating programming control primitives into neural networks, applying software architecture paradigms to neural network models, and assembling modular systems from libraries containing both algorithmic modules and geometric modules (a toy composition is sketched below). The results of this thesis will be a stepping stone towards helping companies assemble AI systems for their specific problems, by limiting the expertise, effort, time and data necessary to integrate neural networks.
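
A toy sketch (all names illustrative) of the kind of composition envisaged: a "geometric" module (a small neural classifier) wrapped by an "algorithmic" module (explicit, auditable control logic), assembled like ordinary software components:

```python
import torch
import torch.nn as nn

# Geometric module: learned, statistical, point-to-point transformation
perception = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))

def route(x: torch.Tensor) -> str:
    """Algorithmic module: explicit reasoning around the neural output."""
    scores = perception(x).softmax(dim=-1)
    if scores.max() < 0.5:          # explicit, auditable rule
        return "defer-to-human"
    return ["accept", "reject", "escalate"][int(scores.argmax())]

print(route(torch.randn(16)))
```

The neural and algorithmic parts remain separately replaceable components, which is the kind of reusability the thesis targets.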

Scalable and Precise Static Analysis of Memory for Low-Level Languages

Département Ingénierie Logiciels et Systèmes (LIST)

Laboratoire pour la Sûreté du Logiciel

01-10-2021

SL-DRT-21-0641

matthieu.lemerre@cea.fr

The goal of the thesis is to develop an automated static analysis (based on abstract interpretation) to verify, in large code bases written in low-level compiled languages (e.g. C, C++, assembly, Rust, Fortran), security properties related to memory, such as information-flow properties and the absence of memory corruption. This problem has many applications in cybersecurity, as most software-related cybersecurity issues, and those with the highest severity, come from memory safety errors (e.g. buffer overflows, use-after-free, null pointer dereferences, wrong type punning, wrong interfacing between several languages, etc.). The three main issues when designing such an automated static analysis are to keep the verification effort low, to handle large and complex systems, and to be precise enough that the analysis does not report a large number of false alarms. The privileged approach in this thesis will build on the success of a new method using abstract domains parameterized by type invariants, which found a sweet spot between precision (i.e. few false alarms), efficiency (in computing resources), and required user effort. This method made it possible, in particular, to prove fully automatically the absence of privilege escalation and memory corruption in an existing industrial microkernel, from its machine code, using only 58 lines of annotations. Many research questions remain, and we will explore how to extend the analyzer to improve scalability (using compositional analysis), how to improve its expressivity (to prove complex security properties such as non-interference), how to improve precision without degrading efficiency, and how to further reduce the amount of annotations (using automatic inference of more precise type invariants).
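
As background, a minimal sketch (toy domain, not CEA's analyzer) of abstract interpretation with an interval domain: a fixpoint loop with widening computes, in finite time, an over-approximation of the values a variable can take at a loop head:

```python
import math

class Interval:
    """Abstract value: the set of integers in [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def join(self, other):          # least upper bound of two intervals
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
    def widen(self, other):         # jump unstable bounds to infinity
        return Interval(self.lo if other.lo >= self.lo else -math.inf,
                        self.hi if other.hi <= self.hi else math.inf)
    def add(self, k):               # abstract transformer for "x + k"
        return Interval(self.lo + k, self.hi + k)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Analyze:   i = 0; while (i < 10) { i = i + 1; }
i = Interval(0, 0)
while True:
    guard = Interval(i.lo, min(i.hi, 9))   # refine by the loop test i < 10
    new = i.widen(i.join(guard.add(1)))    # one fixpoint iteration
    if (new.lo, new.hi) == (i.lo, i.hi):
        break
    i = new

print("invariant at loop head:", i)  # [0, inf]; narrowing would refine to [0, 10]
```

Roughly, the type-parameterized domains the thesis builds on can be seen as refining this basic recipe so that pointers and memory contents, not just integers, get precise invariants at low annotation cost.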
