Scalars are universal: Equivariant machine learning, structured like classical physics
Abstract
There has been enormous progress in the last few years in designing neural networks that respect the fundamental symmetries and coordinate freedoms of physical law. Some of these frameworks make use of irreducible representations, some make use of high-order tensor objects, and some apply symmetry-enforcing constraints. Different physical laws obey different combinations of fundamental symmetries, but a large fraction (possibly all) of classical physics is equivariant to translation, rotation, reflection (parity), boost (relativity), and permutations. Here we show that it is simple to parameterize universally approximating polynomial functions that are equivariant under these symmetries, or under the Euclidean, Lorentz, and Poincaré groups, at any dimensionality $d$. The key observation is that nonlinear O($d$)-equivariant (and related-group-equivariant) functions can be universally expressed in terms of a lightweight collection of scalars: scalar products and scalar contractions of the scalar, vector, and tensor inputs. We complement our theory with numerical examples that show that the scalar-based method is simple, efficient, and scalable.
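The key observation above can be illustrated in a few lines. The sketch below (not the authors' code; a minimal NumPy construction for the vector-input case) computes the O($d$)-invariant scalars of a set of input vectors — their pairwise dot products — and builds an equivariant vector output as a scalar-weighted combination of the inputs, where each weight is an arbitrary function of the invariants. The weight functions and their parameters here are hypothetical placeholders; a numerical check confirms that the output transforms with a random orthogonal matrix exactly as the inputs do.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 2  # dimension and number of input vectors (illustrative choices)
vecs = [rng.standard_normal(d) for _ in range(n)]

def scalars(vs):
    # All pairwise inner products <v_i, v_j>: invariant under any R in O(d),
    # since (R v_i) . (R v_j) = v_i . (R^T R) v_j = v_i . v_j.
    return np.array([vs[i] @ vs[j]
                     for i in range(len(vs))
                     for j in range(i, len(vs))])

# Hypothetical parameters: one scalar weight vector per input vector.
W = [rng.standard_normal(len(scalars(vecs))) for _ in range(n)]

def f(vs):
    # Equivariant vector function: sum_i f_i(invariants) * v_i, where each
    # f_i is an arbitrary nonlinear scalar function (here tanh of a linear map).
    s = scalars(vs)
    coeffs = [np.tanh(s @ w) for w in W]
    return sum(c * v for c, v in zip(coeffs, vs))

# Equivariance check: f(R v_1, ..., R v_n) == R f(v_1, ..., v_n)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix
lhs = f([Q @ v for v in vecs])
rhs = Q @ f(vecs)
assert np.allclose(lhs, rhs)
```

Because the coefficients depend only on invariant scalars, the output inherits its transformation behavior entirely from the input vectors; the paper shows that constructions of this form are in fact universal approximators of equivariant polynomial functions.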
 Publication:

arXiv e-prints
 Pub Date:
 June 2021
 DOI:
 10.48550/arXiv.2106.06610
 arXiv:
 arXiv:2106.06610
 Bibcode:
 2021arXiv210606610V
 Keywords:

 Computer Science - Machine Learning;
 Mathematical Physics;
 Statistics - Machine Learning
 E-Print:
 NeurIPS 2021