Maria Bånkestad - Doctoral thesis defence: "Structured models for scientific machine learning: from graphs to kernels"

  • Date: 25 February 2025, 09:15–12:00
  • Location: Ångström Laboratory, room 101121 (Sonja Lyttkens)
  • Type: Academic ceremony, Thesis defence
  • Lecturer: Maria Bånkestad
  • Organiser: Thomas Schön
  • Contact person: Maria Bånkestad

Welcome to the public defence of Maria Bånkestad's doctoral dissertation. The defence will be conducted in English.

Opponent: Jakob Macke, Professor, University of Tübingen
Principal Supervisor: Thomas Schön, Professor

Assistant Supervisors: Sebastian Mair (LiU), Aleksis Pirinen (RISE)

Abstract: This thesis investigates the reciprocal relationship between science and machine learning, showing how embedding scientific principles within machine learning models enhances accuracy and interpretability in complex scientific domains. Through five contributions, this work addresses challenges spanning molecular modeling, fluid dynamics, and graph-based learning, illustrating how scientific insights can guide model development and improve performance across diverse applications.

A central focus is developing models that directly incorporate physical symmetries and laws into their structure, creating novel, scientifically grounded approaches. For instance, the GeqShift model leverages E(3)-equivariant graph neural networks to predict NMR spectra, achieving a significant accuracy improvement by capturing three-dimensional molecular structure. Similarly, our SE(2)-equivariant graph neural network models rotational and translational symmetries to enhance data efficiency and performance in fluid dynamics simulations, demonstrating the strength of symmetry-aware models in complex physical domains.
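
For reference, the equivariance property these models build on (stated here as the general definition, not a formulation quoted from the thesis) says that transforming the input by a group element, here an element of E(3) or SE(2), is equivalent to transforming the output:

    f(\rho_{\mathrm{in}}(g)\, x) = \rho_{\mathrm{out}}(g)\, f(x) \qquad \text{for all } g \in G,

where \rho_{\mathrm{in}} and \rho_{\mathrm{out}} describe how the group G acts on the input and output features. Building this constraint into the network lets it respect rotations and translations by construction rather than having to learn them from data.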

This thesis also advances graph-based machine learning by developing physics-informed models. For example, we introduce a graph subsampling model inspired by the classic Ising model of magnetism, enhancing tasks such as graph explanation and mesh sparsification. Building on these graph-based techniques, we also present an approach to nonnegative matrix factorization (NMF) that leverages graph structures to accelerate low-rank factorization.
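
For context, the classic Ising model referred to above assigns each node i a binary spin s_i and scores a configuration by an energy of the standard textbook form (not necessarily the exact objective used in the thesis):

    E(s) = -\sum_{(i,j)} J_{ij}\, s_i s_j - \sum_{i} h_i\, s_i, \qquad s_i \in \{-1, +1\},

where J_{ij} couples neighbouring spins and h_i is an external field. In a graph subsampling setting, the spins can loosely be read as keep-or-discard decisions for nodes or edges.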

In addition to symmetry-aware frameworks, we introduce the elliptical process, a flexible extension of the Gaussian process that adapts to non-Gaussian noise. This innovation allows the model to learn noise characteristics directly from data, producing robust predictions that address a broad spectrum of real-world challenges.
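
One common way to construct such a process, sketched here in general terms rather than as the thesis's exact formulation, is as a scale mixture of Gaussian processes, where a latent scale variable \xi modulates the covariance function k:

    f \mid \xi \sim \mathcal{GP}\bigl(0,\ \xi\, k(x, x')\bigr), \qquad \xi \sim p(\xi),

so that marginalising over \xi yields an elliptical process; the mixing distribution p(\xi) controls how heavy-tailed the result is, and a point mass at \xi = 1 recovers the ordinary Gaussian process.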

These contributions underscore the dynamic exchange between scientific principles and machine learning, illustrating how physical knowledge enhances model performance and inspires new solutions. This thesis establishes a foundational framework for advancing scientific machine learning, paving the way for future breakthroughs in the field.

Link to DiVA: Structured models for scientific machine learning: From graphs to kernels
