Laboratoire de Physique Théorique de la Matière Condensée

Nina Javerzat (LIPhy) & Enrico Ventura (Milan)

Seminars
Date
04.02.2025 10:45 - 11:45

Description

Nina Javerzat: Conformal Invariance of Rigidity Percolation

Rigidity Percolation (RP) has attracted much attention in soft matter as an elegant framework for understanding the non-trivial emergence of solidity in media that do not present any long-range structural order. The solidification of amorphous systems like gels, fiber networks or living tissues can indeed be understood by focusing on locally rigid structures (clusters) that grow and coalesce until one eventually percolates through the whole system, ensuring macroscopic mechanical stability. As a statistical model, Rigidity Percolation is defined from the concept of graph rigidity. I will explain that RP possesses a unique non-local character, leading to a rich behaviour that is absent in the usual Connectivity Percolation problem.
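As a concrete illustration of graph rigidity (a sketch of my own, not material from the talk): in two dimensions, the independence of bar constraints is usually tested with the (2,3) pebble game of Jacobs and Hendrickson, the standard combinatorial algorithm in numerical RP studies. The function names and the small demo below are illustrative.

```python
from collections import defaultdict

def pebble_game_2_3(n, edges):
    """(2,3) pebble game: return the independent (non-redundant) bars of a
    2D bar-joint framework with n joints. The framework is rigid when the
    number of independent bars equals 2n - 3."""
    pebbles = [2] * n        # each joint carries 2 pebbles = 2 degrees of freedom
    out = defaultdict(set)   # directed edge u -> v: a pebble of u covers bar (u, v)

    def collect(v, avoid):
        """Bring one free pebble onto v by reversing a directed path, if possible."""
        seen, parent, stack = {v} | set(avoid), {}, [v]
        while stack:
            u = stack.pop()
            for w in list(out[u]):
                if w in seen:
                    continue
                seen.add(w)
                parent[w] = u
                if pebbles[w] > 0:        # free pebble found: reverse the path back to v
                    pebbles[w] -= 1
                    x = w
                    while x != v:
                        out[parent[x]].discard(x)
                        out[x].add(parent[x])
                        x = parent[x]
                    pebbles[v] += 1
                    return True
                stack.append(w)
        return False

    independent = []
    for u, v in edges:
        # a bar is independent iff 4 free pebbles can be gathered on its endpoints
        while pebbles[u] + pebbles[v] < 4:
            if not (collect(u, {v}) or collect(v, {u})):
                break
        if pebbles[u] + pebbles[v] == 4:
            pebbles[u] -= 1               # accept: cover the bar with a pebble from u
            out[u].add(v)
            independent.append((u, v))
        # otherwise the bar is redundant and would only create a self-stress
    return independent

# a triangle is rigid (3 = 2*3 - 3 independent bars); a square is floppy (4 < 5)
print(len(pebble_game_2_3(3, [(0, 1), (1, 2), (2, 0)])))          # 3
print(len(pebble_game_2_3(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))  # 4
```

In RP simulations, rigid clusters are built on top of such a test and tracked as bonds are added until one spans the system.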

Inspired by the great success of conformal field theory in understanding critical phenomena, I have recently examined conformal invariance in 2D Rigidity Percolation. I will present two works in which I gave numerical evidence of conformal invariance: i) from properties of the so-called connectivity functions, and ii) from the consistency of cluster boundaries with Schramm-Loewner Evolution processes. These works furthermore reveal unexpected similarities with Connectivity Percolation, and allow us to obtain a new relation between two of the critical exponents of RP.
A lot remains to be understood about Rigidity Percolation, and I will end with my favourite perspectives.
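To illustrate the kind of test involved in point (ii): one standard SLE consistency check compares the measured left-passage probability of cluster boundaries with Schramm's exact formula for chordal SLE_κ. The snippet below (my illustration, not the papers' analysis) merely evaluates the formula; κ = 6, the Connectivity Percolation value, is used purely as an example.

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def schramm_left_passage(phi, kappa):
    """Probability that a chordal SLE_kappa trace in the upper half-plane
    passes to the left of a point at angle phi from the positive real axis
    (Schramm's formula; it depends on the angle only)."""
    pref = gamma(4.0 / kappa) / (np.sqrt(np.pi) * gamma((8.0 - kappa) / (2.0 * kappa)))
    c = 1.0 / np.tan(phi)                       # cot(phi)
    return 0.5 + pref * c * hyp2f1(0.5, 4.0 / kappa, 1.5, -c**2)

# sanity check: a point on the symmetry axis (phi = pi/2) is passed on the
# left exactly half the time, for any kappa
print(schramm_left_passage(np.pi / 2, kappa=6.0))   # 0.5
```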

Based on Phys. Rev. Lett. 130, 268201 (2023) and Phys. Rev. Lett. 132, 018201 (2024)

Slides (pdf)

Enrico Ventura: Memorization as Generalization in Physics-inspired Generative Models

Our daily experience shows that humans are able to acquire and manipulate the hidden structure of the surrounding environment to generate creative ideas and survive. Artificial machines are likewise able to learn the unknown distribution of a set of data points and use it to generate new examples. This capability, known as generalization, is usually opposed to learning specific point-wise examples from the training set, an ability called memorization. In this talk I will report some recent results supporting the picture of generalization as a “thermal” version of memorization with respect to a fictitious learning temperature. Both biologically inspired learning systems (i.e. spin-glass-like neural networks) and artificial ones (i.e. diffusion models) will be analyzed through the lens of statistical mechanics.
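As a concrete toy example of pure memorization in a spin-glass-like network (my sketch, not material from the talk): patterns stored in a Hopfield network with the Hebbian rule become fixed points of zero-temperature dynamics, so a corrupted pattern is restored verbatim; heating the dynamics with a fictitious temperature smears these fixed points, which is the memorization-to-generalization picture alluded to above.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_couplings(patterns):
    """Hebbian rule J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal."""
    n = patterns.shape[1]
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)
    return J

def relax(J, s, sweeps=20):
    """Zero-temperature asynchronous dynamics: align each spin with its local field."""
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

n, p = 200, 10                                  # 200 spins, 10 random binary patterns
patterns = rng.choice([-1, 1], size=(p, n))
J = hebbian_couplings(patterns)

probe = patterns[0].copy()                      # corrupt 10% of a stored pattern...
flip = rng.choice(n, size=n // 10, replace=False)
probe[flip] *= -1
recovered = relax(J, probe)                     # ...and check it is recovered verbatim
print("overlap with stored pattern:", recovered @ patterns[0] / n)   # close to 1.0
```

Here p/n = 0.05 lies below the Hopfield storage capacity, so retrieval is essentially perfect; the talk's question is what happens to such attractors when learning is done at finite temperature.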

Slides (pdf)