Daniel Gedon: On Deep Learning for Low-Dimensional Representations

  • Date: 14 June 2024, 09:15
  • Location: room 80121, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala
  • Type: Thesis defence
  • Thesis author: Daniel Gedon
  • External reviewer: Adam Johansen
  • Supervisors: Thomas B. Schön, Niklas Wahlström, Antônio Horta Ribeiro
  • Thesis record: DiVA

Abstract

In science and engineering, we are often concerned with building mathematical models from data. These models are abstractions of observed real-world processes, and the goal is typically to understand those processes or to predict their future behaviour. Natural processes often exhibit low-dimensional structure that we can embed into the model. In mechanistic models, we incorporate this structure directly through mathematical equations, often inspired by physical constraints. In contrast, in machine learning, and particularly in deep learning, we often deal with high-dimensional data such as images and learn a model without imposing a low-dimensional structure. Instead, we learn representations that are useful for the task at hand. While representation learning arguably enables the power of deep neural networks, it is less clear how to understand real-world processes from these models, or whether we can benefit from including a low-dimensional structure in the model.

This dissertation studies learning from data with intrinsic low-dimensional structure and how to replicate this structure in machine learning models. While we place particular emphasis on deep neural networks, we also consider kernel machines in the context of Gaussian processes, as well as linear models, for example by studying the generalisation of models with an explicit low-dimensional structure. We first argue that many real-world observations have an intrinsic low-dimensional structure; evidence for this can be found, for example, in the accuracy of low-rank approximations of many real-world data sets (see the sketch below). We then address two open-ended research questions. First, we study the behaviour of machine learning models when they are trained on data with low-dimensional structure, investigating fundamental aspects of learning low-dimensional representations and how well models with explicit low-dimensional structure perform. Second, we focus on applications in the modelling of dynamical systems and in the medical domain, investigating how these applications benefit from low-dimensional representations and exploring the potential of low-dimensional model structures for predictive tasks. Finally, we give a brief outlook on how to go beyond learning low-dimensional structures and identify the underlying mechanisms that generate the data, in order to better model and understand these processes.
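The claim that many real-world data sets admit good low-rank approximations can be probed directly: compute the singular value decomposition of the centred data matrix and count how many components are needed to capture most of the spectral energy. The following is a minimal illustrative sketch, not code from the thesis; the `effective_rank` helper, the synthetic data, and the 95% energy threshold are assumptions chosen for the example.

```python
import numpy as np

def effective_rank(X, energy=0.95):
    """Smallest rank k whose top-k singular values capture the given
    fraction of the total squared singular value mass."""
    # Centre the data so the spectrum reflects variation, not the mean.
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values, descending
    cumulative = np.cumsum(s**2) / np.sum(s**2)  # fraction of energy captured
    return int(np.searchsorted(cumulative, energy) + 1)

# Synthetic example: 1000 points lying near a 5-dimensional subspace
# of a 100-dimensional ambient space, plus small isotropic noise.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 5))    # latent low-dimensional factors
W = rng.normal(size=(5, 100))     # linear embedding into the ambient space
X = Z @ W + 0.01 * rng.normal(size=(1000, 100))

print(effective_rank(X))  # close to 5: a low-rank approximation suffices
```

On this synthetic data, the reported rank is close to the true latent dimension of 5; applied to real data sets, a rapidly decaying singular value spectrum is the kind of evidence of low-dimensional structure the abstract refers to.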

This dissertation provides an overview of learning low-dimensional structures in machine learning models. It covers a wide range of topics, from representation learning, through the study of generalisation in overparameterised models, to applications in time series and medicine. Each contribution, however, opens up a range of questions for future study. This dissertation therefore serves as a starting point for further exploration of learning low-dimensional structures and representations.
