Equivariant group graphs: analysis of controlled augmentation in SE(3) environments

Co-Supervised by: Francesco Leonardi

If you are interested in this topic or have further questions, do not hesitate to contact francesco.leonardi@unibe.ch.

Background / Context

Would you be able to recognize an object even if it were rotated or translated? Of course: our brains can abstract and recognize entities independently of rigid geometric transformations. In mathematical terms, this means that our perceptual system is equivariant with respect to the group of rigid motions in space, known as the Lie group SE(n) (the Special Euclidean group of rotations and translations).
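Equivariance means that transforming the input and then applying the map gives the same result as applying the map and then transforming the output: f(Rx + t) = R f(x) + t. A minimal sketch of this property, using the center of mass as a trivially SE(3)-equivariant map (the rotation-sampling helper is illustrative, not part of the proposal):

```python
import numpy as np

def random_rotation(rng):
    # Sample a random 3D rotation via QR decomposition.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))          # make the factorization unique
    if np.linalg.det(q) < 0:          # restrict to SO(3): no reflections
        q[:, 0] *= -1
    return q

def centroid(points):
    # A trivially SE(3)-equivariant map: the center of mass.
    return points.mean(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3))      # a toy point cloud
R = random_rotation(rng)
t = rng.standard_normal(3)

# Equivariance: f(Rx + t) == R f(x) + t
lhs = centroid(x @ R.T + t)
rhs = R @ centroid(x) + t
assert np.allclose(lhs, rhs)
```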

In the field of artificial intelligence, group theory and in particular Lie groups are becoming fundamental to the design of neural models that respect geometric symmetries. In particular, equivariant SE(3) architectures, such as EGNN (Equivariant Graph Neural Network) or SE(3)-Transformer, have already been proposed and have shown remarkable performance in contexts such as molecular modeling and robotics.
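The core idea behind EGNN-style architectures can be sketched in a few lines: coordinates are updated along pairwise difference vectors, weighted by a function of rotation- and translation-invariant quantities (squared distances). The sketch below replaces the learned MLP of the actual EGNN with a fixed Gaussian kernel, which is enough to verify equivariance; the weight `w` is an illustrative constant:

```python
import numpy as np

def egnn_coord_update(x, w=0.1):
    # Heavily simplified EGNN-style coordinate update: each point moves
    # along difference vectors to the other points, weighted by an
    # invariant function of squared distances. The real EGNN learns this
    # weighting with an MLP that also mixes node features.
    diff = x[:, None, :] - x[None, :, :]        # (N, N, 3) difference vectors
    d2 = (diff ** 2).sum(-1, keepdims=True)     # invariant under rigid motions
    phi = np.exp(-d2)                           # stand-in for the learned MLP
    return x + w * (diff * phi).sum(axis=1)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 3))

# Random rotation via QR, restricted to det = +1
q, r = np.linalg.qr(rng.standard_normal((3, 3)))
q *= np.sign(np.diag(r))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1

# Rotating the input then updating equals updating then rotating
out_rot = egnn_coord_update(x @ q.T)
rot_out = egnn_coord_update(x) @ q.T
assert np.allclose(out_rot, rot_out)
```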

However, one area that remains largely unexplored is the role of controlled augmentation of geometric structures (e.g., artificial rotations and translations of the data) in SE(3)-equivariant networks.
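One way to make such augmentation "controlled" is to bound the magnitude of the sampled transformation. A sketch using Rodrigues' rotation formula with a bounded angle and a bounded translation (the `max_angle` and `max_shift` knobs are illustrative assumptions, not parameters from the proposal):

```python
import numpy as np

def controlled_se3_augment(x, rng, max_angle=np.pi / 6, max_shift=0.5):
    # Sample a rotation with bounded angle (Rodrigues' formula) and a
    # bounded translation, then apply the rigid motion to the point cloud.
    axis = rng.standard_normal(3)
    axis /= np.linalg.norm(axis)
    theta = rng.uniform(-max_angle, max_angle)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    t = rng.uniform(-max_shift, max_shift, size=3)
    return x @ R.T + t

rng = np.random.default_rng(2)
x = rng.standard_normal((20, 3))
x_aug = controlled_se3_augment(x, rng)

# Rigid motions preserve all pairwise distances
d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
da = np.linalg.norm(x_aug[:, None] - x_aug[None, :], axis=-1)
assert np.allclose(d, da)
```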

Research Question(s) / Goals

Can we exploit geometric augmentation in order to enrich the latent representation and improve the predictive ability of SE(3)-equivariant models?

Approach / Methods

  • Theoretical insight into topology and Lie groups, and implementation of an equivariant SE(3) network.
  • Study of how controlled augmentation varies the latent representation of data.

Expected Contributions / Outcomes

  • Understanding the geometric problem
  • Development of an SE(3) model
  • Understanding how to augment this group’s data

Required Skills / Prerequisites

  • Good programming skills, with a preference for PyTorch.
  • Strong mathematical background.
  • Languages: English, French, Italian. 

Further Reading / Starting Literature