Scientific direction: Development of key enabling technologies
Transfer of knowledge to industry

PhD: selection by topics

Technological challenges >> Artificial intelligence & Data intelligence
17 proposal(s).


Fault injection and integrity of edge neural networks: attacks, protections, evaluation

Département Systèmes (LETI)

Laboratoire Sécurité des Objets et des Systèmes Physiques

01-02-2021

SL-DRT-21-0159

pierre-alain.moellic@cea.fr

Artificial intelligence & Data intelligence (.pdf)

One of the major trends in Artificial Intelligence is the large-scale deployment of Machine Learning systems on a wide variety of embedded platforms. Many semiconductor vendors now offer "AI-ready" products, mostly running neural networks for inference. The security of the embedded models is a major issue for the deployment of these systems. Several works have raised threats such as adversarial examples or membership inference attacks, with potentially disastrous impact. These works consider ML algorithms from a purely algorithmic point of view, without taking into account the specificities of their physical implementation. Further work is therefore needed on physical attacks (i.e., side-channel and fault injection analysis). By considering an overall attack surface that gathers the theoretical (i.e., algorithmic) and physical facets, this subject proposes to analyze Fault Injection Analysis (FIA) threats targeting the integrity of the model (fooling a prediction) of embedded machine learning systems, and to develop appropriate protections. Several works have studied physical attacks on embedded neural networks, but usually with naive model architectures on 'simple' 8-bit microcontrollers or FPGAs, or purely in simulation. These works do not attempt to link the fault models or the leakages to well-known algorithmic threats. Building on experience with other critical systems (e.g., cryptographic primitives), the main idea of this PhD subject is to jointly analyze the algorithmic and physical worlds in order to better understand the complexity of the threats and to develop efficient defense schemes. The work will address the following scientific challenges: (1) characterization and exploitation of fault models: how to exploit fault injection mechanisms (laser, EM, glitching) to fool the prediction of a model with minimal perturbations; (2) evaluation of the relevance of classical countermeasures (such as redundancy-based techniques) for this kind of system and threat; (3) development of new protections suited to embedded neural networks.
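
To make the integrity threat concrete, here is a minimal, hedged sketch (plain NumPy, with a hypothetical 8-bit-quantized linear classifier; none of this code comes from the offer). It enumerates single bit-flip faults on the stored weights and counts how many change the predicted class, i.e., the minimal-perturbation fault model evoked in challenge (1):

```python
# Hypothetical int8-quantized linear classifier; exhaustively inject single bit-flip
# faults into the stored weights and count how many flips change the prediction.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 10
w_q = rng.integers(-128, 128, size=(n_out, n_in), dtype=np.int8)  # quantized weights
scale = 0.02                                                       # dequantization scale
x = rng.standard_normal(n_in).astype(np.float32)                   # one input sample

def predict(weights_int8):
    logits = (weights_int8.astype(np.float32) * scale) @ x
    return int(np.argmax(logits))

baseline = predict(w_q)
fooling_faults = []
for flat in range(w_q.size):                  # exhaustive single-bit fault campaign
    for bit in range(8):
        faulty = w_q.copy()
        faulty.reshape(-1).view(np.uint8)[flat] ^= np.uint8(1 << bit)  # one bit-flip
        if predict(faulty) != baseline:
            fooling_faults.append((flat, bit))
print(f"{len(fooling_faults)} of {w_q.size * 8} single-bit faults change the prediction")
```

A real campaign would of course target a full network on the actual hardware (laser, EM or clock glitching) rather than this purely algorithmic simulation.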

Download the offer (.zip)

Sensor network and low-power Edge AI for predictive maintenance

Département Systèmes (LETI)

Laboratoire Autonomie et Intégration des Capteurs

01-10-2021

SL-DRT-21-0312

vincent.heiries@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Predictive maintenance is a major issue for the industry of the future (Industry 4.0): it maximizes the useful life of parts, extends machine service life, and reduces failures and equipment downtime, with economic and environmental gains for the company. Predictive maintenance relies on sensor networks placed on the equipment to be monitored and on learning mechanisms using artificial intelligence (Machine Learning). These sensors are today essentially wired, which makes their installation complex: cable routing, walls, rotating environments, etc. The ideal solution would be wireless communicating sensors; the question of their energy autonomy then arises, which is the subject of this PhD thesis. This topic, which is part of the "Cyber-Physical Systems" roadmap of the Systems Department of CEA-LETI (Grenoble), will aim to develop a network of low-power wireless sensors to monitor industrial equipment and anticipate failures. The PhD work will be based on advanced technological solutions using embedded artificial intelligence (edge AI), fusion of data from different sensors (audio, vibration) and low-power electronics (hardware and firmware), in particular for the signal processing and communication aspects. Artificial intelligence is booming, with major challenges for health, transport, environmental protection and industry. At present, computations are mainly carried out on servers (commonly referred to as the cloud), which requires the complete transmission of the data measured by the sensors (e.g. an audio signal for a microphone, or vibrations for an accelerometer). This architecture is simple to deploy but not very energy efficient, with computing servers that are mostly oversized, and not very resilient in case of data transmission failure. The trend is therefore to implement processing algorithms as close as possible to the sensors in order to reduce the utilization of communication systems, offload the computing servers and reduce their energy consumption, and improve the resilience of these sensor networks. Based on this observation, it remains to be understood how a data processing task initially carried out by servers with no power or computing constraints can be offloaded onto a local sensor network with limited available energy and reduced computing power (e.g. low-power microcontrollers). To this end, methods from the field of compressive sensing can be used, with machine learning algorithms applied directly in the compressed space. The core of the thesis will thus focus on minimizing the hardware and firmware energy consumption of embedded electronic systems implementing artificial intelligence, targeting the "predictive maintenance for industry" application. The following research questions and associated innovations will be targeted: (i) the development of low-power electronic architectures (wake-up functions, adjustment of measurement frequency, ...), (ii) the development and implementation on microcontrollers of Machine Learning algorithms for sensor functions (audio, vibration, temperature) and (iii) the development and implementation on microcontrollers of predictive Machine Learning algorithms for the optimization of energy and autonomy. A complete electronic device (hardware + firmware) implementing these innovations and deployed in a real environment is expected by the end of the thesis.
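
As a hedged illustration of the "machine learning in the compressed space" idea (NumPy only, synthetic vibration data, invented fault signature; not taken from the offer), the sketch below compresses each signal window with a random measurement matrix and classifies directly on the few transmitted coefficients:

```python
# Compressive-sensing-flavoured pipeline: compress 1 s vibration windows with a random
# projection, then classify healthy vs faulty using only the compressed measurements.
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1000, 1000                                # 1 kHz sampling, 1 s windows
t = np.arange(n) / fs

def window(faulty):
    sig = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(n)   # healthy: 50 Hz tone
    if faulty:
        sig += 0.8 * np.sin(2 * np.pi * 120 * t)  # hypothetical fault adds a 120 Hz component
    return sig

X = np.stack([window(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

m = 32                                            # compressed dimension << n
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
Z = X @ Phi.T                                     # the sensor would only transmit these m values

centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])   # tiny, MCU-friendly model
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print("accuracy in the compressed space:", (pred == y).mean())
```

On a real node both the projection and the nearest-centroid test would fit comfortably in a low-power microcontroller, which is the point of moving computation to the edge.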

Download the offer (.zip)

Lensless imaging and artificial intelligence for rapid diagnosis of infections

Département Microtechnologies pour la Biologie et la Santé (LETI)

Laboratoire Systèmes d'Imagerie pour le Vivant

01-10-2020

SL-DRT-21-0380

caroline.paulus@cea.fr

Artificial intelligence & Data intelligence (.pdf)

The objective of the thesis is to develop a portable technology for pathogen identification. Indeed, in a context of spreading medical deserts and resurgent antibiotic-resistant infections, it is urgent to develop innovative techniques for the rapid diagnosis of infections in isolated regions. Among optical techniques for pathogen identification, lens-free imaging methods draw attention because they are the only ones currently able to offer simultaneous characterization of a large number of colonies, all with a low-cost, portable and energy-efficient technology. The thesis will explore the potential of lensless imaging combined with artificial intelligence algorithms to identify bacterial colonies present in a biological fluid. It will aim to optimize the sizing of the imaging system (sources, sensors) and to study the image processing and machine learning algorithms necessary for colony identification. Two clinical application cases will be studied.

Download the offer (.zip)

Oscillating neurons for computational optimization and associative memory

Département Composants Silicium (LETI)

Laboratoire d'Intégration des Composants pour la Logique

01-10-2021

SL-DRT-21-0393

louis.hutin@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Hopfield networks are a type of recurrent neural network particularly well suited to content-addressable associative memory functions. By giving their elements the ability to fluctuate, they can be adapted to efficiently solve NP-hard combinatorial optimization problems. Such problems, for which finding exact solutions in polynomial time is out of reach for deterministic Turing machines, find many applications in fields as diverse as logistics, circuit design, medical diagnosis, smart grid management, etc. The frame of the proposed project is the search for hardware accelerators for Artificial Intelligence. In particular, we consider the use of injection-locked oscillators (ILOs) as neurons. The goals will be the design, fabrication and demonstration of such networks, featuring binary phase-encoded neurons coupled by adjustable synaptic weights, to carry out associative memory tasks (e.g. pattern recognition) or combinatorial optimization tasks (e.g. max-cut, graph coloring, ...).
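
A software analogue of such a binary phase-encoded network can give a feel for the optimization side. The hedged sketch below (NumPy, invented graph and annealing schedule) solves max-cut with noisy asynchronous ±1 "phase" updates; the hardware would obtain similar dynamics from coupled injection-locked oscillators:

```python
# Ising/Hopfield-style max-cut: spins play the role of binary phase-encoded neurons,
# the adjacency matrix plays the role of the (here fixed) synaptic couplings.
import numpy as np

rng = np.random.default_rng(2)
n = 16
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                    # random undirected graph

s = rng.choice([-1.0, 1.0], size=n)               # binary "phases" (+1 / -1)
T = 2.0                                           # fluctuation (noise) level, annealed
for step in range(2000):
    i = rng.integers(n)
    h = A[i] @ s                                  # local field from coupled neighbours
    p_flip = 1.0 / (1.0 + np.exp(-2.0 * s[i] * h / max(T, 1e-3)))  # Glauber update
    if rng.random() < p_flip:                     # anti-aligning with the field cuts more edges
        s[i] = -s[i]
    T *= 0.997                                    # cool down

cut = sum(A[i, j] for i in range(n) for j in range(i + 1, n) if s[i] != s[j])
print("edges in graph:", int(A.sum() / 2), "| edges cut:", int(cut))
```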

Download the offer (.zip)

Training and quantization of large-scale deep neural networks for transfer learning

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Intelligence Artificielle Embarquée

SL-DRT-21-0446

johannes.thiele@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Transfer learning is today a common technique in Deep Learning that uses the learned parameters of a generic network (a feature extractor) to accelerate the training of another network on a more specific task. This specialized network is subsequently optimized for the hardware constraints of the specific use-case. However, given that the representations of the feature extractor are often rather generic, it might be possible to optimize the parameters before the transfer, so that each end-user does not have to perform this optimization themselves. In this context, the thesis has the following scientific objectives: (1) using several "unsupervised" learning methods (self-supervised, weakly supervised, semi-supervised) to train feature extractors on large datasets; (2) studying how common optimization methods (in particular quantization) can be applied to these extractors in a "task-agnostic" fashion; (3) quantifying the influence of these optimizations on the transfer learning capacity, through benchmarking and theoretical analysis (e.g. information compression theory). Required competences: Master degree (or equivalent), machine learning (in particular Deep Learning), programming (Python, PyTorch, TensorFlow, C++), good English (French is not required, but helpful).
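
The following hedged PyTorch sketch (synthetic data, random stand-in extractor; real work would start from a large self-/weakly-supervised backbone) shows the setting in miniature: quantize the extractor once, task-agnostically, then train only a small head downstream:

```python
# Task-agnostic post-training weight quantization of a frozen feature extractor,
# followed by training a linear probe on the quantized features.
import torch
import torch.nn as nn

torch.manual_seed(0)
extractor = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))

def quantize_weights_int8(module):
    # symmetric per-tensor quantization, applied once and without any task data
    for m in module.modules():
        if isinstance(m, nn.Linear):
            w = m.weight.data
            scale = w.abs().max() / 127.0
            m.weight.data = torch.round(w / scale).clamp(-127, 127) * scale

quantize_weights_int8(extractor)
for p in extractor.parameters():
    p.requires_grad_(False)                       # the extractor stays frozen after transfer

head = nn.Linear(64, 10)                          # only this part is trained by the end-user
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(512, 128), torch.randint(0, 10, (512,))
for _ in range(50):
    loss = nn.functional.cross_entropy(head(extractor(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final linear-probe loss:", float(loss))
```

Measuring how much such a quantization step degrades transfer to many downstream tasks is the kind of benchmarking that objective (3) refers to.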

Download the offer (.zip)

Embedded autonomous incremental learning

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Intelligence Intégrée Multi-capteurs

01-09-2021

SL-DRT-21-0465

carolynn.bernier@cea.fr

Artificial intelligence & Data intelligence (.pdf)

The recent development of incremental learning algorithms for deep neural networks is an opportunity to imagine new intelligent sensor applications deployed in real environments. By being able to incrementally learn new tasks, the sensor will be able to personalize its behavior to its specific deployment environment, allowing it to adapt to slow variations of its targeted tasks (e.g. the detection of different types of anomalies) or to learn new tasks that were not initially anticipated. This possibility would make the service rendered by the autonomous sensor more and more relevant. The objective of this thesis is to explore the means by which the intelligent sensor can become fully autonomous in its evolution while taking into account the limited processing capability of the embedded system. Given the limited power budget of the platform, the idea is to associate two embedded systems: a first one, "Always-on", which executes the nominal task of the application (e.g. the detection of different classes of events or anomalies), and a second one, "On-demand", which would be executed occasionally in order to retrain the model of the "Always-on" part. For coherence, the power consumption of the two platforms should be in a ratio of roughly 1:100 to 1:1000. The challenges facing the design of such a system are many. The first is the design of detection mechanisms able to find false-negative examples (slowly changing classes) as well as novel examples (new classes); these mechanisms must be executed on the "Always-on" platform, with the associated implementation constraints. A second difficulty concerns the retraining phase executed on the "On-demand" platform: it must take into account the structure of the "Always-on" model in order to retrain it with new examples, both to slowly learn modifications of the existing detection task and to learn a new task without forgetting the old ones. Since this is a new application space, the PhD candidate must develop a wide understanding of the subject and will necessarily have to address a number of domains, including incremental learning algorithms, deep learning training algorithms, and the hardware requirements for running these algorithms in the embedded context.
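
A hedged sketch of the "Always-on" side (NumPy, invented model and threshold) shows the simplest possible gate: buffer low-confidence samples so that the "On-demand" stage has material to retrain on:

```python
# Confidence-based novelty gate: the always-on classifier flags uncertain samples
# (possible new classes or drifted classes) and stores them for later retraining.
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((3, 16)) * 0.5            # tiny always-on linear classifier

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

buffer, THRESHOLD = [], 0.6
for _ in range(1000):                             # streaming sensor feature vectors
    x = rng.standard_normal(16)
    p = softmax(W @ x)
    if p.max() < THRESHOLD:                       # low confidence: candidate for retraining
        buffer.append(x)

print(f"{len(buffer)} samples buffered for the on-demand retraining stage")
```

In the thesis, this naive confidence test would be replaced by detection mechanisms that are both more reliable and cheap enough for the always-on power budget.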

Download the offer (.zip)

Real-world data and AI for an innovative analytical approach to intonation and speech intelligibility in children with Cerebral Palsy

Département Intelligence Ambiante et Systèmes Interactifs (LIST)

Laboratoire d'Interfaces Sensorielles & Ambiantes

01-10-2021

SL-DRT-21-0539

margarita.anastassova@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Cerebral palsy (CP) is a developmental motor disorder affecting an individual's ability to move and to maintain balance and posture. It affects 2 to 4 children in 1,000, making this lifelong, chronic condition the most common motor disability in childhood. In addition to motor problems, many children with CP have difficulties with speaking, which severely impacts their access to social and educational activities. The most frequent speech impairment in CP is developmental dysarthria, characterized by limited movement of the jaw, lips and tongue, imprecise articulation, slow speaking rate, and reduced intonation with limited variation in pitch, rhythm and volume, which results in impaired speech intelligibility. Despite the frequent occurrence of problems in intonation and speech intelligibility in children with CP, the heterogeneity of the profiles and the central role of intonation in communication, few studies have examined intonation patterns in this population in order to characterize and classify them. As a result, little is known about intonational difficulties and their relationship to intelligibility in children with CP. Knowledge about how intonational patterns and expressions relate to motor activities in real-world tasks is even scarcer. The aim of the research project is to fill these knowledge gaps, using a real-world, data-driven approach combined with innovative analytical approaches based on AI and machine learning.

Download the offer (.zip)

Transfer Learning and Optimal Transport applied to the adaptation of models learnt on synthetic data

Département Métrologie Instrumentation et Information (LIST)

Laboratoire Science des Données et de la Décision

01-10-2021

SL-DRT-21-0563

fred-maurice.ngole-mboula@cea.fr

Artificial intelligence & Data intelligence (.pdf)

This PhD thesis aims at exploring possible contributions of the optimal transport field to transfer learning along the following directions: (1) building a knowledge transferability criterion between a source and a target task based on the regularity of the transportation plan between the source and target data distributions; (2) integrating priors on task similarity through the transportation ground metric; (3) applying Wasserstein barycenters to multi-task learning problems. This work might find multiple use cases of interest in the lab, including the adaptation of models learnt on synthetic data to real-world systems. A more detailed presentation of this PhD subject can be found via the following link: https://drive.google.com/file/d/13RAQEi0PdnkllM-MHxQS50WWUNUtGS07/view?usp=sharing
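
As a hedged, minimal illustration of direction (1) (using the POT library, which is an assumption about tooling, and random stand-in features), one can compute the transport plan between source and target feature clouds and look at simple regularity statistics of that plan:

```python
# Optimal transport plan between source (synthetic-data) features and target (real-data)
# features; its cost and how much mass each source point splits are crude regularity cues.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(4)
xs = rng.standard_normal((100, 8))                # source-domain features
xt = rng.standard_normal((120, 8)) + 0.5          # shifted target-domain features

a, b = ot.unif(len(xs)), ot.unif(len(xt))         # uniform empirical weights
M = ot.dist(xs, xt)                               # squared-Euclidean ground cost
G = ot.emd(a, b, M)                               # exact transport plan

cost = float(np.sum(G * M))                       # transport cost (squared-W2 estimate)
splitting = float((G > 1e-8).sum(axis=1).mean())  # how many targets each source point feeds
print(f"transport cost: {cost:.3f} | mean targets per source point: {splitting:.1f}")
```

The thesis would replace these ad-hoc statistics with a principled transferability criterion and task-aware ground metrics.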

Download the offer (.zip)

Blending intuition with reasoning - Deep learning augmented with algorithmic logic and abstraction

Département Ingénierie Logiciels et Systèmes (LIST)

Labo.conception des systèmes embarqués et autonomes

01-03-2021

SL-DRT-21-0617

shuai.li@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Within machine learning, deep learning, based on neural networks, is a subfield that has gained much traction since several high-profile success stories. The statistical way in which a neural network solves a problem can be seen as a very primitive form of intuition, as opposed to classical computer reasoning. However, so far the real success of deep learning has been its ability to self-tune a geometric transformation that maps data represented as points in n dimensions to data represented as points in m dimensions, provided enough training data is available. Unlike a human being, a neural network does not have the ability to reason through algorithmic logic. Furthermore, although neural networks are tremendously powerful for a given task, they have no ability to generalize globally: any deviation in the input data may give unpredicted results, which limits their reusability. Considering the significant cost associated with neural network development, integrating such systems is not always economically viable. It is therefore necessary to abstract, encapsulate, reuse and compose neural networks. Although lacking in deep learning, algorithmic logic and abstraction are today innate to classical software engineering, through programming primitives, software architecture paradigms, and mature methodological patterns like Model-Driven Engineering. Therefore, in this thesis, we propose to blend reusable algorithmic intelligence, providing the ability to reason, with reusable geometric intelligence, providing the ability of intuition. To achieve such an objective, we can explore ideas such as integrating programming control primitives in neural networks, applying software architecture paradigms to neural network models, and assembling modular systems from libraries containing both algorithmic modules and geometric modules. The results of this thesis will be a stepping stone towards helping companies assemble AI systems for their specific problems, by limiting the costs in expertise, effort, time, and data necessary to integrate neural networks.
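
A hedged toy of the modular composition alluded to above (plain Python/NumPy, all names and the task invented): a "geometric" neural scoring module hidden behind an ordinary function interface and driven by explicit "algorithmic" control flow:

```python
# Geometric intelligence (a tiny learned scorer) composed with algorithmic intelligence
# (an explicit, inspectable greedy loop that calls it as a reusable module).
import numpy as np

rng = np.random.default_rng(5)
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # stand-in trained weights

def learned_score(features: np.ndarray) -> float:
    """Encapsulated neural module: could be swapped for any trained network."""
    return float(W2 @ np.tanh(W1 @ features))

def greedy_plan(items: list) -> list:
    """Classical algorithmic logic that reasons with the module's outputs."""
    remaining, order = list(range(len(items))), []
    while remaining:
        best = max(remaining, key=lambda i: learned_score(items[i]))
        order.append(best)
        remaining.remove(best)
    return order

items = [rng.standard_normal(4) for _ in range(5)]
print("greedy order chosen with the learned scorer:", greedy_plan(items))
```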

Download the offer (.zip)

Scalable and Precise Static Analysis of Memory for Low-Level Languages

Département Ingénierie Logiciels et Systèmes (LIST)

Laboratoire pour la Sûreté du Logiciel

01-10-2021

SL-DRT-21-0641

matthieu.lemerre@cea.fr

Artificial intelligence & Data intelligence (.pdf)

The goal of the thesis is to develop an automated static analysis (based on abstract interpretation) to verify, in large code bases written in low-level compiled languages (e.g. C, C++, assembly, Rust, Fortran), memory-related security properties such as information flow properties and the absence of memory corruption. This problem has many applications in cybersecurity, as most software-related security issues, and those with the highest severity, come from memory safety errors (e.g. buffer overflows, use-after-free, null pointer dereferences, wrong type punning, wrong interfacing between several languages, etc.). The three main issues when designing such an automated static analysis are to keep the verification effort low, to handle large and complex systems, and to be precise enough that the analysis does not report a large number of false alarms. The privileged approach in this thesis will draw on the success of a new method using abstract domains parameterized by type invariants, which found a sweet spot between precision (i.e. few false alarms), efficiency (in computing resources), and required user effort. This method made it possible, in particular, to fully automatically prove the absence of privilege escalation and memory corruption in an existing industrial microkernel from its machine code, using only 58 lines of annotations. Many research questions remain, and we will explore how to extend the analyzer to improve scalability (using compositional analysis), how to improve its expressivity (to establish complex security properties like non-interference), how to improve precision without degrading efficiency, and how to further reduce the amount of annotations (using automatic inference of more precise type invariants).
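
To make "abstract interpretation" concrete for readers new to it, here is a hedged, textbook-level sketch (Python) of the classical interval domain; the thesis targets far richer, type-parameterized domains over machine code, which this toy does not attempt to model:

```python
# Interval abstract domain: each variable is tracked as a sound over-approximation [lo, hi].
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def join(self, other):            # least upper bound, used at control-flow merges
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
    def add(self, other):             # abstract transfer function for '+'
        return Interval(self.lo + other.lo, self.hi + other.hi)

# x in [0, 10]; after 'if c: x = x + 1 else: x = x + 5', the two branches are joined
x = Interval(0, 10)
after = x.add(Interval(1, 1)).join(x.add(Interval(5, 5)))
print(after)                          # Interval(lo=1, hi=15): sound, slightly imprecise
```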

Download the offer (.zip)

Real Time Semantic extraction on sparse data for embedded perception

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Intelligence Artificielle Embarquée

01-10-2021

SL-DRT-21-0656

mehdi.darouich@cea.fr

Artificial intelligence & Data intelligence (.pdf)

The proposed thesis topic is in the field of embedded architectures for the real-time semantic analysis of sparse data. During the last decade, the analysis of image and video streams has boomed, following the significant improvement of neural networks and the increased specialization of the associated processing architectures. Research has led to more efficient networks, which are less memory-intensive and increasingly easy to integrate into embedded hardware. Several works are currently in progress in the laboratory around the N2D2 tool for the optimization and integration of neural networks on embedded hardware, as well as around the DNeuro embedded hardware architecture. Within embedded perception systems, the strong constraints on bandwidth and memory favor the use of sparse data (graphs, point clouds, etc.), which is smaller in volume and contains particularly rich information about the environment to be analyzed. However, the non-contiguous and unpredictable structure of this sparse data is very different from a traditional image stream, making current hardware architectures unsuitable for processing it; at the same time, these characteristics point to very interesting opportunities in terms of optimization and efficiency. This thesis work aims at exploring this class of algorithms and their capacity to be integrated, under constraints, into an embedded computing architecture. The scientific problems that arise are how to perform efficient data management in a context of highly scattered computation, the compatibility of sparse data analysis algorithms with execution on embedded targets, and the performance and accuracy achievable under these constraints.
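
The hedged sketch below (NumPy plus a plain Python hash map, entirely illustrative) shows why sparse point cloud / voxel data breaks the assumptions of image pipelines: neighbours must be looked up through data-dependent, non-contiguous accesses instead of being read from a predictable dense buffer:

```python
# Irregular neighbourhood gathering on sparse voxels, the access pattern that makes
# sparse data hard for image-oriented hardware.
import numpy as np

rng = np.random.default_rng(6)
coords = np.unique(rng.integers(0, 32, size=(500, 3)), axis=0)   # occupied voxels only
feats = rng.standard_normal((len(coords), 8))
table = {tuple(c): i for i, c in enumerate(coords)}              # coordinate hash map

offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]

def neighbourhood_mean(i):
    idx = []
    for o in offsets:                                            # data-dependent gather
        key = tuple(coords[i] + np.array(o))
        if key in table:
            idx.append(table[key])
    return feats[idx].mean(axis=0)

out = np.stack([neighbourhood_mean(i) for i in range(len(coords))])
print("aggregated features for", len(out), "occupied voxels; a dense grid would hold", 32 ** 3)
```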

Download the offer (.zip)

Gamification for empowering engineers collectives

Département Ingénierie Logiciels et Systèmes (LIST)

Lab.systèmes d'information de confiance, intelligents et auto-organisants

01-09-2021

SL-DRT-21-0692

sara.tucci@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Technologies, and in particular digital technologies, are part of the range of solutions proposed to address many societal and environmental challenges, such as the 17 sustainable development goals listed by the United Nations. Faced with the complexity of the systems to be implemented, collective intelligence is a major key to success. Multidisciplinary and holistic, systems engineering practices as formalized by INCOSE, and more particularly their model-based engineering version, are akin to collective intelligence practices; their performance therefore relies on the group's ability to communicate, and thus on the collaborative work tools shared by the team. However, users often consider these tools complex, and the resulting misuse becomes a hindrance to the performance of the collective instead of a stimulus. This thesis aims to explore the fields of serious games and game theory in order to reverse this trend and make software tools allies of development rather than enemies to be fought.

Download the offer (.zip)

Generative surrogates for tomographic problems based on stochastic simulation

Département Métrologie Instrumentation et Information (LIST)

Laboratoire Modélisation et Simulation des Systèmes

01-07-2021

SL-DRT-21-0744

thomas.dautremer@cea.fr

Artificial intelligence & Data intelligence (.pdf)

While the primary purpose of stochastic simulation is to allow the random generation of complex phenomena from a configuration of parameters (forward simulation), its interest may also lie in the inverse problem: determining a configuration of the model parameters that generates data sufficiently close to those observed experimentally. Tomographic reconstruction (CT / PET) problems are classic representatives of this in radiation physics. In the usual algorithms in X/gamma-ray imaging or in tomography (CT, PET, muons, etc.), the scattering phenomena or energy dependence occurring in the scene to be imaged cannot be taken into account in a pre-computed system matrix and require approximate post-hoc corrections. In this study, we propose to embed a stochastic particle-transport simulator in the reconstruction process in order to account for the artifacts of the imaging system in an integrated way. We therefore propose to follow a Bayesian framework to guarantee a rigorous treatment of the statistical uncertainties inherent to experiments and stochastic simulations. However, this Bayesian objective comes up against a major difficulty in Monte Carlo simulation: we do not have a likelihood function, only the capacity to generate observables. To overcome this problem, we propose to learn a local emulator of the simulator, i.e. a generative surrogate conditioned by the experimental observations [1]. Since controlling the computational burden is essential, we will focus on deep invertible architectures [2] allowing forward simulations with few primary particles. In addition, to limit the number of simulations, the learning database will be built adaptively by active learning [3]. Furthermore, this approach allows a reconstruction on a vectorized (meshed) space rather than on a fixed grid of voxels, the reconstruction prior then corresponding to a random mesh. Bayesian reconstruction on manifolds via a so-called "digital twin" would thus constitute a notable advance in particle imaging / tomography. [1] Cranmer, K.; Brehmer, J. and Louppe, G. The frontier of simulation-based inference. Proceedings of the National Academy of Sciences, 2020. [2] Radev, S.; Mertens, U.; Voss, A.; Ardizzone, L. and Köthe, U. BayesFlow: Learning complex stochastic models with invertible neural networks. arXiv:2003.06281, 2020. [3] Järvenpää, M.; Gutmann, M. U.; Pleska, A.; Vehtari, A. and Marttinen, P. Efficient Acquisition Rules for Model-Based Approximate Bayesian Computation. Bayesian Analysis, 2019, 14, 595-622.
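
To illustrate the likelihood-free setting in its crudest possible form, here is a hedged ABC rejection sketch (NumPy, toy Poisson "simulator"); the thesis targets far more efficient learned surrogates (invertible networks, active learning), but the underlying problem is the same: only simulation is available, never a likelihood:

```python
# Approximate Bayesian Computation by rejection: draw parameters from the prior,
# simulate, and keep the draws whose simulated summary matches the observed one.
import numpy as np

rng = np.random.default_rng(7)

def simulator(theta, n=200):
    # stand-in stochastic forward model (a particle-transport code would sit here)
    return rng.poisson(lam=theta, size=n)

observed = simulator(4.2)                           # pretend this came from the experiment
target_summary = observed.mean()

posterior = []
for _ in range(20000):
    theta = rng.uniform(0.1, 10.0)                  # prior draw
    if abs(simulator(theta).mean() - target_summary) < 0.1:
        posterior.append(theta)                     # accepted: simulation matches the data

print(f"kept {len(posterior)} draws; posterior mean ~ {np.mean(posterior):.2f}")
```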

Download the offer (.zip)

Tactile-based learning and classification methods for task planning and verification: applications to multi-digital and bimanual robotic manipulation

Département Systèmes (LETI)

Laboratoire Signaux et Systèmes de Capteurs

01-09-2021

SL-DRT-21-0803

saifeddine.aloui@cea.fr

Artificial intelligence & Data intelligence (.pdf)

The robotic manipulation of objects first requires grasp planning, which depends on the characteristic parameters of the hardware tools considered and on the task to be performed (such as the accessibility areas or the level and direction of the forces that may be involved in assembly, insertion, dexterous manipulation, etc.). In addition, during the execution of the task, it is necessary to ensure the nominal progress of the planned task, by detecting the occurrence of certain critical events necessary for its completion (such as the interaction of objects with each other, the loss of stability of the object, etc.) and then validating the actual completion of the planned task (via the classification of data that characterizes the success or failure of tasks such as insertion or assembly). These detection and verification steps, which are crucial when robotizing certain critical tasks requiring a high level of traceability, can be based in particular on the analysis and monitoring of data or signals specific to the handling system in question. The work will exploit an experimental setup consisting of a bimanual station equipped with two multi-digital grippers fitted with multimodal tactile sensors developed by the CEA. This thesis work is essentially divided into two parts. The first part consists in using learning methods, able to take into account the capabilities of the multi-digital manipulators and the requirements of the task, to plan the grasping of objects. The second part aims at exploiting methods based on the classification of tactile and proprioceptive signals of the system to validate the accomplishment of the task.
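
For the second part, here is a hedged sketch (NumPy + scikit-learn, entirely synthetic signals and features) of what success/failure verification from tactile data can look like at its simplest:

```python
# Classify windows of tactile force data as "task succeeded" vs "task failed"
# from a few hand-crafted features; real work would use the CEA multimodal sensors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)

def trial(success):
    t = np.linspace(0, 1, 256)
    force = np.clip(t * 5, 0, 2) + 0.1 * rng.standard_normal(256)   # grasp force profile
    if not success:
        force[128:] *= 0.2                       # hypothetical slip: force collapses mid-task
    return np.array([force.mean(), force.std(),
                     force[-32:].mean(), np.abs(np.diff(force)).max()])

X = np.stack([trial(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("held-out accuracy on synthetic trials:", clf.score(Xte, yte))
```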

Download the offer (.zip)

Attention guided dynamic inference in perception neural networks for autonomous mobile systems

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Intelligence Artificielle Embarquée

01-10-2021

SL-DRT-21-0816

karim.benchehida@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Autonomous mobile systems are becoming more and more present in a variety of domains such as delivery, inspection or agriculture, executing tasks of growing complexity. These systems need precise positioning (localization, pose estimation) and environment perception (detection, classification, object tracking, etc.) to take relevant navigation decisions. To reach high accuracy, recent state-of-the-art neural network approaches for these perception tasks tend towards wider (in the number of channels and modalities) and/or deeper (in the number of convolution layers) networks, with a direct impact on computational complexity and decision-making latency. The research proposal concentrates on enhancing the computational efficiency of perception neural networks (complexity vs. accuracy). For that purpose, we intend to dynamically reduce the computation (via dynamic inference techniques) and focus it (via attention mechanisms), so as to allow the use of networks with large representation capability that usually cannot be embedded. An important part of this project will be the implementation of the developed techniques on a real embedded mobile platform to demonstrate the effectiveness of the approach. Indeed, the Embedded Artificial Intelligence Laboratory of the CEA has several mobile robotic platforms and a fully automated, autonomy-ready electric vehicle with multiple integrated sensors. The research results will enrich the perception modules of these systems.
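
One simple member of the dynamic-inference family is the early-exit network, sketched below in a hedged way (PyTorch, random weights, invented threshold): easy inputs leave at a cheap auxiliary classifier, and only uncertain ones pay for the deeper stage:

```python
# Two-stage early-exit network: confidence at the first exit decides whether the
# expensive tail is executed at all.
import torch
import torch.nn as nn

torch.manual_seed(0)
stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
exit1 = nn.Linear(64, 10)                                  # cheap auxiliary classifier
stage2 = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))  # expensive tail

def dynamic_forward(x, conf_threshold=0.7):
    h = stage1(x)
    p1 = torch.softmax(exit1(h), dim=-1)
    if p1.max() >= conf_threshold:                         # confident: stop and save compute
        return p1, "early exit"
    return torch.softmax(stage2(h), dim=-1), "full network"

with torch.no_grad():
    for i in range(5):
        _, path = dynamic_forward(torch.randn(32))
        print(f"sample {i}: {path}")
```

Attention maps can play a similar gating role spatially, spending computation only on the regions a coarse pass deems relevant.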

Download the offer (.zip)

Exploring learning techniques for "Edge AI" taking advantage of Resistive RAM

Département Systèmes et Circuits Intégrés Numériques

Laboratoire Systèmes-sur-puce et Technologies Avancées

01-09-2021

SL-DRT-21-0825

Francois.RUMMENS@cea.fr

Artificial intelligence & Data intelligence (.pdf)

Today's computer architectures are inefficient at simulating artificial neural networks, hindering their application in power-constrained environments such as edge computing and the Internet of Things. Dedicated hardware implementations of neural networks that combine the advantages of mixed-signal neuromorphic circuits with those of emerging memory technologies have the potential to enable ultra-low-power processing suitable for edge computing. These new circuits and technologies could also endow the system with the ability to learn at the edge. This breakthrough, which is unattainable with conventional approaches, has many advantages: it enables adaptation to changing input statistics, reduced network congestion, and increased privacy. However, current approaches often focus on learning algorithms that cannot be reconciled with the non-ideal physical behaviour of resistive memories. This thesis aims at exploring various algorithmic solutions for inference and learning in order to propose neural network architectures better adapted to the reality of the resistive memory technologies developed at LETI.
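
A hedged toy of what "learning with non-ideal devices" means in practice (NumPy, invented noise and quantization model, nothing specific to LETI's technologies): every weight update is passed through a coarse, noisy "device write", and the learning algorithm has to cope:

```python
# Training a single logistic neuron while every weight write is quantized to a few
# conductance levels and perturbed by programming noise.
import numpy as np

rng = np.random.default_rng(9)
levels = 16                                        # few programmable conductance levels

def device_write(w, w_max=1.0):
    q = np.round(np.clip(w, -w_max, w_max) / w_max * (levels // 2)) / (levels // 2) * w_max
    return q + 0.02 * rng.standard_normal(q.shape)  # quantization + write noise

X = rng.standard_normal((256, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # simple separable task
w = device_write(rng.standard_normal(8) * 0.1)
for _ in range(200):
    p = 1 / (1 + np.exp(-X @ w))                   # logistic neuron
    grad = X.T @ (p - y) / len(y)
    w = device_write(w - 0.5 * grad)               # every update goes through the device
accuracy = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
print("training accuracy despite device non-idealities:", accuracy)
```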

Download the offer (.zip)

Ultra Low Power and High Performance Microphone Signal Processing for Speaker Localization and Auditory Attention Detection : Application to Next Generation Hearing Aids

Département Systèmes (LETI)

Laboratoire Signaux et Systèmes de Capteurs

01-10-2021

SL-DRT-21-0898

vincent.heiries@cea.fr

Artificial intelligence & Data intelligence (.pdf)

CEA-Leti, located on the MINATEC campus in Grenoble, has as its main mission to create innovation and transfer it to industry, by generating research results that will be used by industry in the medium and long term, positioning its research between academic research and industrial R&D. Within the LETI Systems Department, the mission of the Sensor Systems and Electronics Service is to design and produce innovative systems to meet the needs of industrial innovation in a wide range of fields, from the automotive industry to sports and construction. The skills involved range from electronics to physics, electromagnetism, magnetostatics, signal processing and applied mathematics. Hearing loss is a major public health problem, affecting about 10% of the world's population. This handicap has a strong impact on the comfort of the patients who suffer from it, in many aspects of their lives. Furthermore, with increased stimulation of our hearing system over long periods of time through various digital uses, the prevalence of hearing loss is clearly increasing. Many forms of hearing loss can be treated through hearing aids, which significantly improve the lives of millions of people with hearing loss around the world. These hearing aids have benefited from considerable efforts to improve the underlying technologies in recent years, and today offer very high performance in terms of audio signal quality, amplification, noise filtering, compactness, and autonomy. However, these devices still have several limitations. In particular, in certain sound environments, the separation between the useful signal to be amplified and the interfering acoustic signals to be filtered out remains a challenge. In this study, we propose to focus on the Cocktail Party Problem (CPP), a psychoacoustic phenomenon that refers to the remarkable human ability to listen to and selectively recognize an auditory source in a noisy environment, where the overlapping auditory interference is produced by competing speech or a variety of noises that are often assumed to be independent of each other. The resolution of this type of problem, also called Auditory Attention Detection, is a major challenge for which few solutions have yet been found and which is currently the subject of intense research. This PhD thesis, which is part of the "Cyber-Physical Systems" and "Edge AI" roadmaps of the Systems Department of CEA-LETI (Grenoble), will aim to make a major contribution to this Auditory Attention Detection theme, for the automatic recognition of the speaker by next-generation hearing aids. The thesis will be based on advanced technological solutions using embedded artificial intelligence (Edge AI). We will address the problem through a multi-sensor data fusion approach (acoustic, inertial, video sensors). Indeed, we will consider coupling the processing of acoustic voice signals from high-performance microphones with video processing of faces to perform voice activity detection of the speaker (automatic lip reading). The sensor data will be processed and fused by suitable artificial intelligence algorithms. It is also envisaged to use several microphones to perform acoustic beamforming, and possibly to hybridize with inertial sensors to reinforce the estimation of the speaker's location.
The validation of the implemented methods and developed algorithms will be carried out through test campaigns in an instrumented acoustic chamber (high-performance microphones, video capture, etc.). Keywords: hearing aid, audio signal processing, artificial intelligence, sensor fusion, cocktail party problem, auditory attention detection
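
As a hedged pointer to the multi-microphone side (NumPy, invented two-microphone geometry and signals), delay-and-sum beamforming is the classical building block on which more advanced, AI-driven attention steering would build:

```python
# Delay-and-sum beamforming with two microphones: align and sum the channels toward
# the speaker's direction to raise the SNR before any further processing.
import numpy as np

fs, c, d = 16000, 343.0, 0.15                     # sample rate, speed of sound, mic spacing (m)
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(10)

target = np.sin(2 * np.pi * 440 * t)              # speaker of interest, 30 degrees off-axis
delay = int(round(d * np.sin(np.deg2rad(30)) / c * fs))   # inter-microphone delay in samples

mic1 = target + 0.5 * rng.standard_normal(len(t))
mic2 = np.roll(target, delay) + 0.5 * rng.standard_normal(len(t))

aligned = mic1 + np.roll(mic2, -delay)            # steer the array toward the speaker
snr_single = 10 * np.log10(np.var(target) / 0.25)
snr_beam = 10 * np.log10(np.var(2 * target) / np.var(aligned - 2 * target))
print(f"single-mic SNR ~ {snr_single:.1f} dB | beamformed SNR ~ {snr_beam:.1f} dB")
```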

Download the offer (.zip)
