Conference in Statistics: Upp-Upp
- Date: 14 March 2025, 08:45–13:00
- Location: Ångström Laboratory, 101130
- Type: Conference
- Organiser: Matematiska institutionen
- Contact person: Rolf Larsson
Welcome to this year's Upp-Upp conference (the internal conference for statisticians in Uppsala). Fika and a lunch sandwich will be served. All talks are 20 minutes long, with 5 minutes for questions and discussion afterwards.
Location
Room 101130, Ångström Laboratory, house 10, 1st floor.
Schedule
Welcome
8:45 AM
Stefan Engblom, IT
8:50 AM
Bayesian Models for National-Scale Disease Spread: From Cattle to COVID-19.
In this talk, I will present Bayesian modeling approaches for tracking and predicting infectious disease spread at a national scale. These methods enhance real-time monitoring and also help improve our understanding of the disease dynamics. I will first discuss the endemic spread of Shiga toxin-producing E. coli in Swedish cattle, where sparse and low-informative prevalence data, combined with a decade-long transport network, pose interesting challenges. Despite these limitations, our Bayesian simulation-driven approach successfully reconstructs disease spread, producing an in silico model with predictive value.
Next, I will highlight a COVID-19 modeling effort from the cross-disciplinary CRUSH Covid project at Uppsala University, launched in October 2020 to support local authorities. Here, we developed a time-continuous compartmental model, structurally akin to a chemical reaction network, with parameters estimated using Bayesian techniques and a Kalman filter applied to healthcare data. This approach yielded valuable insights into pandemic dynamics, while also underscoring the complexities and uncertainties inherent in real-time epidemiological modeling.
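The compartmental structure mentioned in the abstract can be sketched, in a much-simplified form, as a discrete-time stochastic SIR model (a chain-binomial scheme; all parameter values below are illustrative assumptions, not those of the CRUSH Covid model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy discrete-time stochastic SIR model (illustrative parameters only)
N, I0, beta, gamma, days = 10_000, 10, 0.3, 0.1, 200
S, I, R = N - I0, I0, 0
history = []
for _ in range(days):
    # chain-binomial step: each susceptible escapes infection w.p. exp(-beta*I/N)
    new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))
    S -= new_inf
    I += new_inf - new_rec
    R += new_rec
    history.append(I)
```

A real national-scale model adds network structure, observation models for healthcare data, and Bayesian parameter estimation on top of this kind of core dynamics.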
Viktor Eriksson, Statistics
9:15 AM
An extensive comparison of small sample properties of variance-covariance estimators for MLEs in the context of ARMA models.
A standard approach to parameter estimation in autoregressive moving-average (ARMA) models is maximum likelihood estimation (MLE). The inverse of Fisher's information matrix (FIM) is a commonly used estimator for the variance-covariance matrix (VCM) of the MLE. We investigate the small-sample properties of several FIM-based estimators for the VCM of the MLE in a Monte Carlo simulation. We find that the Box-Jenkins asymptotic estimator and the FIM performed best with respect to mean squared error (MSE) and relative bias. Although the observed FIM performed worst with respect to the same measures, it was nonetheless best at providing moderately conservative confidence intervals for the ARMA parameters.
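As a minimal illustration of the kind of Monte Carlo comparison described above (not the authors' setup), one can check the asymptotic inverse-FIM variance of an AR(1) coefficient against its empirical small-sample variance; the model, sample size and true parameter below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n, reps = 0.5, 200, 2000

estimates = []
for _ in range(reps):
    e = rng.standard_normal(n)
    x = np.zeros(n)
    x[0] = e[0] / np.sqrt(1 - phi**2)   # stationary initial value
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    # conditional least-squares estimate of phi (close to the Gaussian MLE)
    phi_hat = x[1:] @ x[:-1] / (x[:-1] @ x[:-1])
    estimates.append(phi_hat)

emp_var = np.var(estimates)
asy_var = (1 - phi**2) / n   # inverse Fisher information for AR(1)
```

For an AR(1) the inverse FIM is available in closed form, which makes it easy to see how closely the asymptotic variance tracks the empirical small-sample variance.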
Coffee break
9:40 AM
Hajar Rezaei, SLU
10:00 AM
A note on an extended Bayes estimator in multivariate normal distribution.
This research addresses the problem of estimating the mean vector of a multivariate normal distribution when the covariance matrix is unknown. We develop an extended class of Bayes estimators under the quadratic loss function and investigate their minimax properties. Building upon previous works, we generalize the minimax Bayes estimators by incorporating a hierarchical prior structure where the covariance matrix follows a Wishart distribution. Our approach leads to an extended Bayes estimator that satisfies minimaxity conditions under specific regularity constraints. The results contribute to the broader literature on shrinkage estimation in high-dimensional regression analysis.
Based on joint work with M. Arashi and R. Belaghi.
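The shrinkage phenomenon behind such minimax Bayes estimators can be illustrated with the classical James-Stein estimator (a standard textbook example, not the estimator of the talk): for dimension p >= 3 it dominates the MLE of a multivariate normal mean under quadratic loss.

```python
import numpy as np

rng = np.random.default_rng(4)
p, reps = 10, 5000
theta = np.full(p, 1.0)                 # illustrative true mean vector

X = theta + rng.standard_normal((reps, p))   # one N(theta, I_p) draw per replicate
norm2 = (X**2).sum(1, keepdims=True)
JS = (1 - (p - 2) / norm2) * X               # James-Stein shrinkage toward 0

mse_mle = ((X - theta)**2).sum(1).mean()     # risk of the MLE is exactly p
mse_js = ((JS - theta)**2).sum(1).mean()     # strictly smaller for p >= 3
```

The talk's extended Bayes estimators generalize this idea by placing a hierarchical (Wishart) prior on the unknown covariance matrix rather than assuming it known.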
Benny Avelin, Mathematics
10:25 AM
Understanding the geometry of data sets with the Graph Laplacian.
High-dimensional data sets are today ubiquitous in many areas of research, and understanding them is of central importance. Such a data set is commonly assumed to lie on some unknown manifold, and in this talk I will introduce methods one can use to study the geometry of these manifolds, based mainly on the Graph Laplacian. In particular, I will focus on how one can detect singularities, such as self-intersections.
This is based on joint work with Martin Andersson.
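A minimal sketch of the basic object in the talk: build a neighbourhood graph on a point cloud, form the unnormalized Graph Laplacian L = D - W, and read connectivity off the multiplicity of the zero eigenvalue. The data set (two disjoint circles) and all parameters below are illustrative:

```python
import numpy as np

# sample two disjoint circles -> the graph should have two connected components
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
c1 = np.c_[np.cos(theta), np.sin(theta)]
c2 = c1 + np.array([5.0, 0.0])
X = np.vstack([c1, c2])

# epsilon-neighbourhood graph with Gaussian edge weights
eps, sigma = 0.2, 0.1
D2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)   # squared pairwise distances
W = np.where(D2 < eps**2, np.exp(-D2 / (2 * sigma**2)), 0.0)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W            # unnormalized Graph Laplacian

eigvals = np.linalg.eigvalsh(L)      # ascending; L is symmetric PSD
n_components = int((eigvals < 1e-8).sum())   # multiplicity of eigenvalue 0
```

Detecting singularities such as self-intersections requires finer spectral information than connectivity, but the Laplacian construction is the same starting point.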
Break
10:50 AM
Yukai Yang, Statistics
11:00 AM
Moment matching based sparse and noisy point cloud registration.
Point cloud registration is a fundamental problem in Simultaneous Localization and Mapping (SLAM), particularly in robotics applications where sensor data is often sparse and noisy. In this paper, we introduce a novel approach to point cloud registration based on moment matching, specifically designed to handle challenging conditions such as low point density and measurement noise. Our method is applied to millimeter-wave radar data collected from a moving vehicle, demonstrating its effectiveness in real-world scenarios. A key advantage of our approach is its computational efficiency. We ensure that registration between consecutive frames is completed within 100 milliseconds, making it suitable for real-time applications. Additionally, we evaluate the accuracy of our method by comparing it against widely used registration techniques. Despite operating on significantly sparser radar data, our method achieves comparable accuracy to state-of-the-art LiDAR-based approaches, which rely on much denser point clouds. These results highlight the potential of moment matching as an efficient and robust solution for point cloud registration in SLAM, particularly in resource-constrained environments where dense LiDAR scans are not available.
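A toy 2D version of the idea, as a hedged sketch rather than the paper's algorithm: match first moments (means) to recover the translation and second moments (covariances) to recover the rotation. Here the rotation is read off the principal-axis angle of the covariance, which assumes an anisotropic cloud and works only up to a mod-pi ambiguity; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def principal_angle(P):
    """Orientation of the principal axis of a 2D cloud from its covariance."""
    C = np.cov(P.T)
    return 0.5 * np.arctan2(2 * C[0, 1], C[0, 0] - C[1, 1])

# anisotropic source cloud (second moments must pin down the orientation)
X = rng.standard_normal((300, 2)) * np.array([3.0, 0.5])

# unknown rigid motion: rotate 30 degrees, translate, add sensor noise
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = X @ R.T + np.array([4.0, -1.0]) + 0.05 * rng.standard_normal((300, 2))

# moment matching: rotation from principal axes, translation from means
theta_hat = principal_angle(Y) - principal_angle(X)
R_hat = np.array([[np.cos(theta_hat), -np.sin(theta_hat)],
                  [np.sin(theta_hat),  np.cos(theta_hat)]])
t_hat = Y.mean(0) - X.mean(0) @ R_hat.T
```

Because only summary moments are compared, no point-to-point correspondences are needed, which is what makes this kind of approach fast on sparse, noisy radar returns.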
Claudia von Brömssen, SLU
11:25 AM
Temporal trend analysis in environmental monitoring programs with high spatial but low temporal resolution.
Swedish environmental monitoring programmes collect large amounts of data to follow the state of the environment as well as changes over time. For some programmes the data structure does not allow the use of traditional trend analysis methods, as the temporal resolution is too low. Here, we focus on the analysis of temporal trends for chemical variables from the Swedish lake survey, with observations once every six years in a rotating panel covering a total of 4800 lakes.
Break
11:50 AM
Erik Ekström, Mathematics
12:00 PM
Sequential analysis vs. methods with a fixed sample size.
In contrast to statistical procedures relying on a fixed sample size, sequential analysis allows for dynamic decision-making as data is collected. In this way, sequential methods reduce the average number of observations required to obtain a certain level of precision, but by how much? To answer this, we restrict our attention to a stylized problem formulation of testing the drift of a Brownian motion, for which the reduction can be quantified.
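The stylized problem can be sketched numerically (a hedged illustration under assumed parameters, not the talk's analysis): test drift +mu against -mu for a Brownian motion, and compare the fixed observation time needed for error level alpha with the average stopping time of the sequential probability ratio test.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, alpha, dt, n_paths = 0.5, 0.05, 0.01, 500   # illustrative parameters

# Fixed-sample test: observe X_T once and decide by its sign.
# Error = Phi(-mu*sqrt(T)); choose T so that the error equals alpha.
z = 1.6449                        # standard normal 95% quantile
t_fixed = (z / mu) ** 2           # about 10.8 time units

# Sequential test: for drifts +/-mu the log-likelihood ratio is 2*mu*X_t,
# so stop the first time |X_t| reaches a = log((1-alpha)/alpha) / (2*mu).
a = np.log((1 - alpha) / alpha) / (2 * mu)

stop_times = []
for _ in range(n_paths):          # Euler discretization of the Brownian path
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += mu * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    stop_times.append(t)
mean_tau = float(np.mean(stop_times))   # Wald's theory: a*(1-2*alpha)/mu
```

Even in this crude simulation the sequential test stops, on average, well before the fixed observation time needed for the same error level.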
Lunch
12:25 PM