Abstract
The presence of symmetries imposes a stringent set of constraints on a system. This constrained structure allows intelligent agents interacting with such a system to drastically improve the efficiency of learning and generalization, through the internalisation of the system’s symmetries into their information-processing. In parallel, principled models of complexity-constrained learning and behaviour make increasing use of information-theoretic methods. Here, we wish to marry these two perspectives and understand whether and in which form the information-theoretic lens can “see” the effect of symmetries of a system. For this purpose, we propose a novel variant of the Information Bottleneck principle, which has served as a productive basis for many principled studies of learning and information-constrained adaptive behaviour. We show (in the discrete case) that our approach formalises a certain duality between symmetry and information parsimony: namely, channel equivariances can be characterised by the optimal mutual information-preserving joint compression of the channel’s input and output. This information-theoretic treatment furthermore suggests a principled notion of “soft” equivariance, whose “coarseness” is measured by the amount of input-output mutual information preserved by the corresponding optimal compression. This new notion offers a bridge between the field of bounded rationality and the study of symmetries in neural representations. The framework may also allow (exact and soft) equivariances to be automatically discovered.
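For orientation, the classical Information Bottleneck objective that the abstract builds on is reproduced below, followed by one illustrative way the “mutual information-preserving joint compression of the channel’s input and output” could be written down. The second display is a hedged sketch only, not the paper’s actual formulation (the abstract gives no formulas); the symbols T, T_X, T_Y and the trade-off parameter β are introduced here purely for illustration.

```latex
% Classical Information Bottleneck (background the abstract presupposes):
% compress X into a representation T that remains predictive of Y,
% with trade-off parameter \beta \ge 0.
\[
  \min_{p(t \mid x)} \; I(X;T) - \beta\, I(T;Y)
\]
% Illustrative sketch only, not the paper's formulation: a joint,
% mutual-information-preserving compression of a channel's input X
% and output Y into T_X and T_Y might be posed as
\[
  \min_{p(t_x \mid x),\, p(t_y \mid y)} \; I(X;T_X) + I(Y;T_Y)
  \quad \text{subject to} \quad I(T_X;T_Y) = I(X;Y),
\]
% with a "soft" version obtained by relaxing the constraint to
% I(T_X;T_Y) \ge I(X;Y) - \varepsilon.  (\text requires amsmath.)
```

On this illustrative reading, exact channel equivariances would correspond to lossless joint compressions, echoing the duality between symmetry and information parsimony described in the abstract; again, this is a sketch for orientation, not the authors’ construction.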
Original language | English |
---|---|
Pages | 1-23 |
Number of pages | 23 |
Publication status | Published - 29 Nov 2023 |
Event | NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations - New Orleans, United States. Duration: 16 Dec 2023 → 16 Dec 2023. Conference number: 2. https://www.neurreps.org |
Conference
Conference | NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations |
---|---|
Abbreviated title | NeurIPS 2023 |
Country/Territory | United States |
City | New Orleans |
Period | 16/12/23 → 16/12/23 |
Other | Bringing together researchers at the intersection of mathematics, deep learning, and neuroscience to uncover principles of neural representation in brains and machines. This year's workshop will feature six invited talks covering emerging topics in geometric and topological deep learning, mechanistic interpretability, category theory for AI, geometric structure in LLMs, and geometric structure in the brain. |
Internet address | https://www.neurreps.org |
Keywords
- Channel equivariances
- Information Bottleneck
- Symmetry Discovery