Scientific direction Development of key enabling technologies
Transfer of knowledge to industry

PhD: selection by topics

Engineering science >> Computer science and software
8 proposals.

Person re-identification and cross-domain adaptability

Département Intelligence Ambiante et Systèmes Interactifs (LIST)

Vision & Ingénierie des Contenus (SAC)



Automatically re-identifying people viewed by cameras is a key functionality for video-protection applications. It consists in retrieving occurrences of a person from a set of images. Despite the many studies on this topic in the past few years, modeling human appearance remains a challenge. Indeed, re-identification models have to discriminate distinct people (in spite of their possible similarity) while being robust to the high variability of their visual appearance (caused by posture, lighting conditions, camera viewpoint, sensor sensitivity and resolution, etc.). Besides, partial occlusions and alignment errors on the detected people have to be coped with. Even though deep supervised learning methods have greatly improved re-identification performance on some academic datasets, difficulties remain for real implementations in operational environments. Indeed, a model trained on a specific dataset usually does not perform well when applied as-is to other datasets. Furthermore, manual data annotation in the target domain is a tedious and thus costly task. In this thesis, we will study the adaptability of appearance models to target domains in which only unannotated data are available. Unsupervised transfer learning methods can be used. The proposed approaches will address scalability issues in order to handle large datasets.
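As a toy illustration of the retrieval step at the core of re-identification, the sketch below ranks gallery appearance embeddings by cosine similarity to a query embedding. The 4-D vectors are purely illustrative; a real system would use learned deep features:

```python
import numpy as np

def rank_gallery(query, gallery):
    """Rank gallery embeddings by cosine similarity to a query embedding."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q               # cosine similarity per gallery item
    return np.argsort(-scores)   # most similar gallery identity first

# Toy 4-D appearance embeddings (hypothetical values)
gallery = np.array([[1.0, 0.00, 0.0, 0.0],
                    [0.9, 0.10, 0.0, 0.0],
                    [0.0, 1.00, 0.0, 0.0]])
query = np.array([0.9, 0.12, 0.0, 0.0])
print(rank_gallery(query, gallery))
```

Cross-domain adaptation would then aim at making such embeddings discriminative on unannotated target data, e.g. via pseudo-labeling or feature alignment.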

Massively parallel in-memory computing architecture

Département Architectures Conception et Logiciels Embarqués (LIST-LETI)

Laboratoire Intégration Silicium des Architectures Numériques



Systems-on-chip (SoCs) for embedded computing have always been constrained by memory bandwidth. Nowadays, with the development of data-intensive applications, the costs (latency, energy) of memory accesses for data computation are increasing significantly. A new computing paradigm consisting in performing computation within the memory (IMC: In-Memory Computing) has been proposed: the idea is to process data where they are stored, in order to save energy and latency. The clear separation between computing and storage units is vanishing, leading to entirely new architectures. The objective of this thesis is to define a massively parallel in-memory computing architecture supporting the interconnection of a matrix of computing tiles based on IMC memory, for parallel execution (multiprocessor) and parallel data access (multiple memory banks). The thesis will build on ongoing work in the lab on SRAM memory and will address higher-density memory types. The subject will require an exploratory approach through modeling of the proposed architecture in relation with the targeted applications (big data, artificial intelligence). The design and silicon implementation of innovative blocks of the architecture will validate the proposed concepts.
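A minimal functional model of the tiled-execution idea, assuming each tile's local memory bank holds a fixed-size slice of the operand vector and computes its partial product in place (tile size and data are illustrative, not a description of the actual architecture):

```python
import numpy as np

TILE = 4  # elements held by each tile's local memory bank (assumption)

def imc_matvec(matrix, vector, tile=TILE):
    """Tiled matrix-vector product: each tile multiplies the slice of the
    vector held in its local bank, so partial results are computed where
    the data reside instead of streaming everything through a central bus."""
    n_rows, n_cols = matrix.shape
    acc = np.zeros(n_rows)
    for start in range(0, n_cols, tile):      # one iteration per tile
        sl = slice(start, start + tile)
        acc += matrix[:, sl] @ vector[sl]     # local partial product
    return acc

A = np.arange(12, dtype=float).reshape(3, 4)
x = np.ones(4)
print(imc_matvec(A, x))   # matches A @ x
```

In hardware, the per-tile partial products would run concurrently; the sequential loop here only models the data partitioning.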

Study of new solutions for the security of embedded systems

Département Systèmes

Laboratoire Sécurité des Objets et des Systèmes Physiques



In recent years, the number of connected systems has increased exponentially and is expected to reach several tens of billions by 2020. Most of these devices seldom, if ever, integrate security and can be leveraged in massive attacks involving a large number of objects. In the embedded systems used in IoT and IIoT, hardware and software solutions currently exist that provide cryptographic primitives to secure a communication interface or data storage. However, these solutions are not always correctly implemented and do not address all security issues. Based on the study of existing attack scenarios, standards and regulatory documents, this thesis will define the security needs of an embedded system throughout its life cycle. Particular attention will be paid to threat detection, hardware and software integrity, system resilience, and the definition of a new commissioning interface. New solutions will be studied and developed to address issues not covered by current embedded devices. The implementation of these new solutions will be the first step in the development of a new component called a security supervisor. One day, this component could be integrated into most embedded systems in order to strengthen defence in depth.
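As one sketch of a check such a security supervisor could perform, the snippet below verifies firmware integrity with an HMAC tag. The per-device key is hypothetical, and Python's standard hmac module stands in for what would be a hardware primitive on a real device:

```python
import hmac
import hashlib

DEVICE_KEY = b"per-device-secret"   # hypothetical key provisioned at commissioning

def sign_firmware(image: bytes) -> bytes:
    """Compute the reference integrity tag stored by the supervisor."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing side channels."""
    return hmac.compare_digest(sign_firmware(image), tag)

fw = b"\x90" * 64                   # toy firmware image
tag = sign_firmware(fw)
assert verify_firmware(fw, tag)                  # genuine image accepted
assert not verify_firmware(fw + b"\x00", tag)    # tampering detected
```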

Wireless Communication Relying on Artificial Intelligence

Département Systèmes

Laboratoire Sans fils Haut Débit



In wireless communications, transmit signals are traditionally designed to enable straightforward symbol-detection algorithms for a variety of statistical channel and system models. In practice, real systems suffer from many impairments (non-linear power amplifiers, antenna coupling effects, finite-resolution quantization) that cannot be fully captured by tractable models. Artificial-intelligence-based approaches could be a disruptive yet promising alternative. More precisely, one can expect benefits from AI-based approaches in complex communication scenarios and when mathematical models are intractable. The first challenge of this PhD project is to assess the potential of AI in the design of signal processing algorithms. The second challenge is to develop tailored learning methods that exploit datasets to enable future communication systems to self-configure with respect to their environment.
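A minimal sketch of the learning idea, under a toy tanh-shaped amplifier nonlinearity (an assumption for illustration): instead of detecting against the ideal QPSK constellation, the detector is "trained" by estimating per-symbol centroids from labelled received samples, so it absorbs the distortion without an explicit impairment model:

```python
import numpy as np

rng = np.random.default_rng(0)
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK

def pa(x):
    """Toy non-linear power amplifier: compresses amplitude, keeps phase."""
    return np.tanh(1.5 * np.abs(x)) * np.exp(1j * np.angle(x))

# Training set: labelled samples received through the impaired channel
labels = rng.integers(0, 4, 2000)
rx = pa(ideal[labels]) + 0.05 * (rng.standard_normal(2000)
                                 + 1j * rng.standard_normal(2000))

# "Learned" detector: centroid of the received samples for each symbol
centroids = np.array([rx[labels == k].mean() for k in range(4)])

def detect(y):
    """Minimum-distance detection against the learned centroids."""
    return np.argmin(np.abs(y[:, None] - centroids[None, :]), axis=1)

test_labels = rng.integers(0, 4, 500)
test_rx = pa(ideal[test_labels]) + 0.05 * (rng.standard_normal(500)
                                           + 1j * rng.standard_normal(500))
accuracy = (detect(test_rx) == test_labels).mean()
print(accuracy)
```

A neural detector would generalize this centroid estimate to channels where the decision regions are no longer simple distance cells.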

Enhancements of Deterministic Ultra-Reliable Low Latency Communication (URLLC) Protocols through Opportunism

Département Systèmes

Laboratoire Sans fils Haut Débit



The fifth-generation cellular mobile networks are expected to support ultra-reliable low-latency communication (URLLC) services. The requirements of URLLC applications are:

- End-to-end latency down to 1 ms
- Determinism (i.e., latency jitter) down to 1 µs
- Reliability (i.e., probability of successfully transmitting a certain number of bytes within a certain delay) between 99.999% and 1-10^-9
- Availability (i.e., percentage of time the end-to-end communication service is delivered according to an agreed QoS) up to 99.99%
- Connection density (i.e., number of devices fulfilling a target QoS per area) of 10^6/km² for massive deployments, or 100/m² in certain areas
- Lifetime up to 15 years

All these requirements can hardly be met together. During the PhD, we will focus on a flexible tradeoff between reliability and latency. Some proposals exploit diversity in time, frequency, space, antennas or interfaces to push the latency/reliability limits. Moreover, for URLLC applications, the strict latency requirements may exclude protocols that rely on retransmission. In this PhD, we propose to study a novel transmission and allocation (PHY/MAC) method providing a flexible tradeoff between reliability and latency. For that purpose, we propose to enhance deterministic URLLC protocols (providing the minimal QoS) with opportunism. Since URLLC applications have a range of requirements (in terms of reliability and latency), our approach will mix resource reservation with opportunistic use of the spectrum. On the one hand, we will exploit existing deterministic protocols, or propose new ones, to guarantee the minimal QoS requirements (e.g., minimal reliability, minimal availability, maximal latency without jitter) and thus ensure low-latency, reliable communications. This approach bounds the performance. On the other hand, we will enhance the QoS (ultra-reliable or ultra-low latency) through an opportunistic approach.
This complementary protocol will share limited (shared or unused) resources among heterogeneous URLLC services, improve reliability by exploiting spatial and frequency diversity, and offer better latency (but with jitter). This approach also allows overbooking of the shared resource and naturally provides heterogeneity management.
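The retransmission-free diversity mentioned above can be quantified with elementary probability: with n independent branches each failing with probability p, the packet succeeds within a single slot unless all branches fail. The per-branch failure probability below is illustrative:

```python
def reliability(p_branch_fail: float, n_branches: int) -> float:
    """Success probability of one URLLC packet sent over n independent
    diversity branches (no retransmission, so latency stays at one slot)."""
    return 1.0 - p_branch_fail ** n_branches

# Assumed per-branch failure probability of 1e-3: three branches already
# approach the 1-10^-9 reliability figure quoted in the requirements.
for n in (1, 2, 3):
    print(n, reliability(1e-3, n))
```

Opportunistic use of unreserved branches adds reliability "for free" when spectrum is idle, while the reserved branch alone still bounds the worst case.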

Processor resistant and resilient to fault attacks and side-channel attacks

Département Systèmes

Laboratoire Sécurité des Objets et des Systèmes Physiques



Crypto-processors are not the only components sensitive to fault attacks and side-channel attacks; CPUs are also prone to these flaws. Unfortunately, their sensitivity to these threats is poorly understood. The objective of this thesis is to characterize the consequences of such faults and leakages. New horizontal side-channel attacks based on machine learning can be experimented with to recover the executed code. Based on this knowledge, the PhD student will implement a processor core on FPGA that is fully resistant to intentional faults and side-channel attacks. Fault countermeasures are often based on redundancy (spatial and temporal redundancy, error-detecting and error-correcting codes, ...), which increases leakage and therefore vulnerability to side-channel attacks. This approach is innovative as it aims to resolve this dilemma. Fault detection is not the only constraint to be taken into account: it will also be necessary to ensure that the CPU is resilient and able to restart from a stable state as close as possible to the erroneous one.
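As an example of the redundancy-based countermeasures mentioned above, a Hamming(7,4) code detects and corrects any single injected bit-flip in a 4-bit word. This is only a software sketch of the coding scheme; a hardened core would implement it in hardware, and it is precisely this kind of redundancy whose side-channel leakage the thesis must balance:

```python
import numpy as np

# Standard Hamming(7,4) generator and parity-check matrices
G = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword."""
    return G @ np.asarray(data4) % 2

def correct(word7):
    """The syndrome gives the 1-based position of a single flipped bit."""
    syndrome = H @ word7 % 2
    pos = int("".join(map(str, syndrome[::-1])), 2)
    if pos:
        word7 = word7.copy()
        word7[pos - 1] ^= 1     # flip the faulty bit back
    return word7

data = [1, 0, 1, 1]
code = encode(data)
faulty = code.copy()
faulty[2] ^= 1                  # inject one fault
assert np.array_equal(correct(faulty), code)
```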

Explaining predictive-model decisions: towards automatic interpretation of tree-ensemble models


Laboratoire d'Analyse des Données et d'Intelligence des Systèmes



Until recently, the focus in predictive modelling has mainly been on improving model prediction accuracy. Many successful models scaling to large amounts of heterogeneous data have been proposed in the literature, and widely used implementations of these models are available. Unfortunately, these models generally do not intrinsically come with an easy way to explain their predictions, and are often presented as black-box tools performing complex and non-intuitive operations on their inputs. This can be an issue in many applications where the interpretation of the model's decision may have greater added value than the decision itself. Examples include medical diagnosis, where the interpretation would consist in identifying which combination(s) of characteristics presented by an individual contribute most to the diagnosis. In this thesis, we propose to add interpretability to a specific class of machine learning models known as tree-ensemble models, without impacting the performance of the model we want to interpret. In continuation of work already initiated in the laboratory, the objective is to analyze the combinations of input features along with their respective numerical values, so that each instance-level decision taken by the model can be explained by a set of input features having particular numerical values. Fault detection in connected manufacturing provides an interesting application for such approaches, and data as well as fault detection models will be provided as a starting point for this thesis work.
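A minimal sketch of such an instance-level explanation on a single toy tree (feature names and thresholds are invented for illustration): walking the decision path collects the feature/threshold conditions that justify the prediction. Tree-ensemble models expose the same per-node (feature, threshold) structure, so the idea extends by aggregating paths over all trees:

```python
# Toy decision tree as nested dicts; internal nodes carry a feature and a
# threshold, leaves carry the predicted class (all names are illustrative).
tree = {"feature": "temperature", "threshold": 70.0,
        "left": {"leaf": "ok"},
        "right": {"feature": "vibration", "threshold": 3.5,
                  "left": {"leaf": "ok"},
                  "right": {"leaf": "fault"}}}

def explain(node, sample, path=()):
    """Return the prediction plus the conditions met along the decision
    path, i.e. an explanation of this particular decision."""
    if "leaf" in node:
        return node["leaf"], list(path)
    f, t = node["feature"], node["threshold"]
    if sample[f] <= t:
        return explain(node["left"], sample, path + (f"{f} <= {t}",))
    return explain(node["right"], sample, path + (f"{f} > {t}",))

pred, why = explain(tree, {"temperature": 85.0, "vibration": 4.2})
print(pred, why)   # fault ['temperature > 70.0', 'vibration > 3.5']
```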

Machine learning based simulation of realistic signals for an enhanced automatic diagnostic in non-destructive testing applications

Département Imagerie Simulation pour le Contrôle (LIST)

Laboratoire Simulation et Modélisation en Electro-magnétisme



Model-based solutions for automatic diagnostic in the field of non-destructive testing (NDT) are currently a topic of great interest in both the academic and industrial communities. Their ultimate objective is to provide a qualitative or quantitative evaluation of the inspected material's state (sound, flawed, flawed with anomaly dimensions or criticality) in an industrial context such as a production line. Such tools, which provide inputs for real-time process control, contribute to the general trend in Europe towards modernizing industry and services [1]. The CEA LIST institute is an internationally recognized research institution in the field of non-destructive testing. It develops the CIVA software [2], which offers multi-physics models and is considered a leading simulation product for NDT applications. Accurate models able to reproduce experimental signals prove very helpful in an inversion process aiming at classifying or characterizing flaws [3]. However, as they do not account for the disturbances and parameter variability occurring during an experimental acquisition, simulated signals inherently look "perfect" and are, for instance, easily distinguishable from experimental data. This PhD subject aims at improving the match between simulation and experimental data by augmenting the simulation with an additional contribution one can generally refer to as "noise". The strategy proposed to obtain such a noise contribution is to apply machine-learning techniques, such as dictionary learning, to a set of representative experimental data. Alternatively, a deep learning model can be trained to analyze real data and distinguish between content (flaw signals) and style (the rest, which is not simulated by physical models). Afterwards, the augmented simulation tool will be able to closely reproduce experimental data, take into account specific discrepancies due to a particular environment, and reproduce the variability observed experimentally.
It will thus enhance the performance of model-based tools developed at CEA LIST for sensitivity analysis, uncertainty management and diagnostic.

REFERENCES
[1]
[2]
[3] M. Salucci et al., "Real-Time NDT-NDE Through an Innovative Adaptive Partial Least Squares SVR Inversion Approach," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 11, pp. 6818-6832, Nov. 2016.
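One simple way to learn such a noise contribution, sketched below under toy assumptions: compute residuals between "experimental" signals and the clean simulation, extract the dominant noise modes with an SVD (a stand-in for the dictionary-learning techniques mentioned above), and add noise sampled from that model to a fresh simulation. All signals here are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sig, length = 50, 128
t = np.linspace(0, 1, length)
clean = np.sin(2 * np.pi * 5 * t)            # idealised simulated echo

# "Experimental" signals = simulation + structured noise (toy stand-in
# for acquisition disturbances and parameter variability)
drift = np.outer(rng.standard_normal(n_sig), t)
experimental = clean + drift + 0.02 * rng.standard_normal((n_sig, length))

# Learn a noise basis from the simulation/experiment residuals
residuals = experimental - clean
_, s, vt = np.linalg.svd(residuals - residuals.mean(0), full_matrices=False)
basis = vt[:2]                               # dominant noise modes

# Augment a fresh simulation with noise sampled from the learned model
coeffs = rng.standard_normal(2) * s[:2] / np.sqrt(n_sig)
augmented = clean + residuals.mean(0) + coeffs @ basis
print(augmented.shape)
```

The augmented signal keeps the physics-based content while adding realistic acquisition-dependent variability, which is the property the thesis seeks.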
