
Exploration and design of in-memory computing architectures based on emerging non-volatile memories

Département Architectures Conception et Logiciels Embarqués (LIST-LETI)

Laboratoire Intégration Silicium des Architectures Numériques



The objective of this thesis is to study and propose new in-memory-computing architectures based on emerging non-volatile memories, opening the field of application of these memories beyond today's SRAM-only implementations, and thus to explore future applications.

The usage of all devices, from embedded systems to supercomputers, is becoming more and more data-centric. At the same time, the performance gap between processor and memory has been steadily growing over the last decades (known as the "memory wall"). The energy-consumption gap between computation (GFlop/s) and data movement (GByte/s) shows the same trend. A very large proportion, if not the largest, of the efforts made by silicon companies and researchers has been focused on improving the characteristics of memories, such as size, bandwidth and non-volatility. The solution advocated to reduce the data-movement cost has been to bring part of the memory (e.g. the caches) onto the die, close to the processor. Despite the clear advantages of the cache hierarchy, the latency of data transfers between the different memory levels remains an important performance bottleneck. In terms of energy consumption, I/O largely dominates the overall cost (70% to 90%). Finally, in terms of security, data transfers between CPU and memory constitute the Achilles' heel of a computing system, widely exploited by attackers. Other solutions have therefore emerged over the years to address these problems. They can be grouped under the following terms: Processing-In-Memory, Logic-In-Memory and In-Memory-Computing (or Computing-In-Memory). Processing-In-Memory (PIM) is a DRAM-based concept in which computation units implemented in the DIMMs are driven through the existing memory bus.
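The I/O-dominated energy budget mentioned above can be illustrated with a simple back-of-the-envelope model. The per-operation energy figures below are assumed, order-of-magnitude values for illustration only, not measurements from this project:

```python
# Illustrative model of the "memory wall" energy gap: compute energy vs
# data-movement energy for a kernel. The per-operation figures are assumed,
# rough orders of magnitude, not measured values.

ENERGY_FLOP_PJ = 1.0            # assumed energy per 32-bit floating-point op (pJ)
ENERGY_DRAM_ACCESS_PJ = 640.0   # assumed energy per 32-bit off-chip DRAM access (pJ)

def energy_breakdown(n_flops: int, n_dram_accesses: int) -> dict:
    """Return compute vs data-movement energy (pJ) and the I/O share of the total."""
    compute = n_flops * ENERGY_FLOP_PJ
    movement = n_dram_accesses * ENERGY_DRAM_ACCESS_PJ
    total = compute + movement
    return {
        "compute_pJ": compute,
        "movement_pJ": movement,
        "io_share": movement / total,
    }

# Even a kernel doing 160 arithmetic ops per off-chip operand fetched still
# spends the bulk of its energy on data movement under these assumptions:
stats = energy_breakdown(n_flops=160_000_000, n_dram_accesses=1_000_000)
print(f"I/O share of total energy: {stats['io_share']:.0%}")  # prints "80%"
```

Under these assumed figures, the model reproduces the 70%-90% I/O share quoted in the text, which is what motivates moving computation into the memory itself.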
In more recent works, and with the progress of 3D technologies, researchers have proposed stacking computation layers next to the DRAM stack, which enables massive data parallelism. Logic-In-Memory is the concept of integrating some computation capability into the memory; it is mostly used to implement logic operations on a dedicated memory or logic layer of 3D memories. Finally, In-Memory-Computing (IMC) consists in integrating part of the computation units within the memory boundary, so that data never leave the memory. This should offer significant gains in execution time, power consumption and security. The IMC concept has been successfully implemented at CEA-LETI. Despite the promising results of existing works, all applications have so far been demonstrated only on SRAM bitcell arrays. To go further and target high-capacity memory applications (video, ...), this thesis will explore the use of non-volatile memories based on emerging technologies (ReRAM, PCM, MRAM, ...). Based on an in-house software platform and hardware architecture, the main goal will be to evaluate performance (power, timing, ...) and to explore new architecture and design solutions.
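To make the IMC idea concrete, here is a minimal functional sketch of one common scheme, in which activating two word lines of a bitcell array simultaneously lets the bit lines sense the bitwise AND (and, on the complementary bit lines, the OR) of the two stored rows. The class and method names are illustrative assumptions, not the CEA-LETI design:

```python
# Functional sketch of in-memory bitwise operations on a bitcell array.
# Assumption: activating two word lines at once yields the bitwise AND of the
# two rows on the bit lines, and the OR on the complementary bit lines (a
# scheme used in several published SRAM/NVM IMC demonstrators).

from typing import List

class BitcellArray:
    def __init__(self, rows: List[List[int]]):
        self.rows = rows  # each row is a list of 0/1 bits stored in the array

    def imc_and(self, r0: int, r1: int) -> List[int]:
        """Activate rows r0 and r1 together: bit lines sense the bitwise AND."""
        return [a & b for a, b in zip(self.rows[r0], self.rows[r1])]

    def imc_or(self, r0: int, r1: int) -> List[int]:
        """Complementary bit lines sense the bitwise OR of the activated rows."""
        return [a | b for a, b in zip(self.rows[r0], self.rows[r1])]

array = BitcellArray([[1, 0, 1, 1],
                      [1, 1, 0, 1]])
print(array.imc_and(0, 1))  # [1, 0, 0, 1]
print(array.imc_or(0, 1))   # [1, 1, 1, 1]
```

The point of the scheme is that both operands stay in the array: only the result crosses the memory boundary, which is where the execution-time, power and security gains cited above come from.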
