Towards Information Theory-Based Discovery of Equivariances

Research output: Contribution to conference › Paper › peer-review



The presence of symmetries imposes a stringent set of constraints on a system. This constrained structure allows intelligent agents interacting with such a system to drastically improve the efficiency of learning and generalization, through the internalisation of the system’s symmetries into their information-processing. In parallel, principled models of complexity-constrained learning and behaviour make increasing use of information-theoretic methods. Here, we wish to marry these two perspectives and understand whether and in which form the information-theoretic lens can “see” the effect of symmetries of a system. For this purpose, we propose a novel variant of the Information Bottleneck principle, which has served as a productive basis for many principled studies of learning and information-constrained adaptive behaviour. We show (in the discrete case) that our approach formalises a certain duality between symmetry and information parsimony: namely, channel equivariances can be characterised by the optimal mutual information-preserving joint compression of the channel’s input and output. This information-theoretic treatment furthermore suggests a principled notion of “soft” equivariance, whose “coarseness” is measured by the amount of input-output mutual information preserved by the corresponding optimal compression. This new notion offers a bridge between the field of bounded rationality and the study of symmetries in neural representations. The framework may also allow (exact and soft) equivariances to be automatically discovered.
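The abstract's central claim — that a channel equivariance corresponds to a joint compression of input and output that loses no mutual information — can be illustrated on a toy discrete channel. The sketch below (not the paper's method; the channel, symmetry group, and orbit construction are illustrative assumptions) takes a channel on four symbols that is invariant under the Z2 shift x → x+2 (mod 4), y → y+2 (mod 4), compresses both input and output onto the symmetry orbits {0,2} and {1,3}, and checks that I(X;Y) is preserved exactly:

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete channel p(y|x) with input distribution p(x)."""
    p_xy = p_x[:, None] * p_y_given_x          # joint p(x, y)
    p_y = p_xy.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = p_xy / (p_x[:, None] * p_y[None, :])
        terms = np.where(p_xy > 0, p_xy * np.log2(ratio), 0.0)
    return terms.sum()

# Toy channel on {0,1,2,3}: each row depends only on the parity of x, so the
# channel is equivariant under the Z2 action x -> x+2 (mod 4), y -> y+2 (mod 4).
p_x = np.full(4, 0.25)
p_y_given_x = np.array([
    [0.4, 0.1, 0.4, 0.1],
    [0.1, 0.4, 0.1, 0.4],
    [0.4, 0.1, 0.4, 0.1],
    [0.1, 0.4, 0.1, 0.4],
])

I_full = mutual_information(p_x, p_y_given_x)

# Jointly compress input and output onto the symmetry orbits {0,2} and {1,3}.
M = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # orbit indicator
p_xc = p_x @ M                                                  # input orbit marginal
p_yc_given_xc = (M.T @ (p_x[:, None] * p_y_given_x) @ M) / p_xc[:, None]

I_compressed = mutual_information(p_xc, p_yc_given_xc)
print(I_full, I_compressed)  # equal: the orbit compression is lossless
```

Here the compression is "exact" in the paper's sense; a "soft" equivariance would correspond to an orbit-like coarse-graining that preserves most, but not all, of I(X;Y).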
Original language: English
Number of pages: 23
Publication status: Published - 29 Nov 2023
Event: NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations - New Orleans, United States
Duration: 16 Dec 2023 - 16 Dec 2023
Conference number: 2


Conference: NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations
Abbreviated title: NeurIPS 2023
Country/Territory: United States
City: New Orleans
Other: Bringing together researchers at the intersection of mathematics, deep learning, and neuroscience to uncover principles of neural representation in brains and machines. This year's workshop will feature six invited talks covering emerging topics in geometric and topological deep learning, mechanistic interpretability, category theory for AI, geometric structure in LLMs, and geometric structure in the brain.


  • Channel equivariances
  • Information Bottleneck
  • Symmetry Discovery


