AI4Physics learning workshop
- Date: 10–11 April 2025
- Type: Workshop
- Organiser: AI4Physics
- Contact person: Magdalena Larfors
This workshop is intended for researchers (PhD students and above) at the Department of Physics and Astronomy. In this workshop, you will get a general introduction to machine learning and a broad survey of techniques and models of relevance for physics research. Lectures will be mixed with practical exercises.
| Time | Thursday | Friday |
| --- | --- | --- |
| 9-10 | Introduction to ML | Introduction to LLM |
| 10-10.30 | Coffee | Coffee |
| 10.30-11.20 | Workshop: A playful introduction to machine learning: creating an image classifier (Bor Gregorcic and Giulia Polverini) | Demo: |
| 11.20-11.40 | Deep learning courses at IFA | LLM course at IFA |
| | Lunch | Teacher Lunch (separate registration) |
| 13-14 | Tutorial: Machine Learning for Background Suppression in High Energy Physics | AI (and beyond) in neutron scattering – the state of the art, and current challenges |
| 14-14.30 | Coffee | Coffee |
| 14.30-16 | Ethel - An AI-Based Virtual Teaching Assistant at ETH Zurich (Room: 101121 Sonja Lyttkens) | Panel discussion: Ulf Danielsson (UU/IFA) (Room: 101121 Sonja Lyttkens) |
Titles and abstract for talks: Thursday
- Niklas Wahlström (UU/IT): Introduction to Machine Learning and Deep Neural Networks
Abstract:
Machine learning involves creating computer programs that can autonomously learn to solve complex problems using data, without explicit programming for each task. This session will introduce the fundamentals of machine learning and present the key ingredients of the model class that has revolutionized the field in the past decade: deep neural networks (DNNs). In particular, we will explore the importance of depth in these models and give pointers to different DNN architectures tailored to different types of data.
Niklas Wahlström is an Associate Professor at the Division of Systems and Control, Department of Information Technology, Uppsala University. His research interests lie in physics-informed machine learning and applications of machine learning in physics. He has developed several courses in machine learning, at both Master's and PhD level. Niklas received his PhD from Linköping University and has held visiting positions at ETH Zürich (Switzerland) and Imperial College (UK). Since 2016, he has been affiliated with Uppsala University, first as a postdoctoral researcher and since 2019 in his present position.
- Bor Gregorcic and Giulia Polverini (UU/IFA): A playful introduction to machine learning: creating an image classifier
Abstract: In this workshop, you will get hands-on experience with image classifiers (without coding!).
- Christian Glaser (UU/IFA): Deep learning courses at IFA
Abstract: Christian, who has developed and teaches the courses 1FA370: Applied Deep Learning in Physics and Engineering and 1FA006: Advanced Applied Deep Learning in Physics and Engineering, will present the courses’ philosophy, layout and content.
- Martina Laurenza (UU/IFA): Machine Learning for Background Suppression in High Energy Physics
Abstract: This tutorial provides an introductory overview of using Machine Learning (ML) for background suppression in High Energy Physics (HEP). Participants will have the opportunity to modify scripts and tune algorithm parameters, observing how these changes affect the discrimination power of the models. The session offers a hands-on first look at applying ML techniques for background suppression in HEP analyses.
- Gerd Kortemeyer (ETH Zurich): Ethel - An AI-Based Virtual Teaching Assistant at ETH Zurich
Abstract: When we think of AI today, chatbots often come to mind first. However, in higher education, the potential applications of AI go far beyond bots. Examples include personalized feedback on complex assignments, support for exam grading, targeted practice exercises, assistance in interactive lectures, accessible preparation of teaching materials, and help with programming tasks. With a particular focus on physics education, this talk presents the experiments and practical experiences from the Ethel project at ETH Zurich, which is now running in its third semester and currently serving over 2000 students in 12 courses.
Gerd Kortemeyer is a member of the rectorate of ETH Zurich and an associate of the ETH AI Center. He is also an Associate Professor Emeritus at Michigan State University. He holds a Ph.D. in physics from Michigan State University, where he taught for two decades. His research focuses on technology-enhanced learning in the STEM disciplines; currently, he is advancing the research and development of AI-based tools and workflows for teaching, learning, and assessment.
Titles and abstract for talks: Friday
- Ekta Vats (UU/IT): Introduction to LLM
Abstract: Large Language Models (LLMs) have revolutionized natural language processing, but their potential extends far beyond text understanding. This session delves into LLMs, exploring their theory and practical applications. We will examine the convergence of language and images, establish basic knowledge of LLMs, and understand how LLMs and their variants can enhance various tasks, such as OCR/HTR, image captioning, visual question answering, image generation, and beyond. The ethical considerations and potential biases inherent in deploying LLMs for vision tasks will be discussed, providing insights into their capabilities and limitations.
Ekta Vats is an Assistant Professor in Machine Learning, Docent in Computerised Image Processing, and a Beijer Researcher at The Beijer Laboratory for Artificial Intelligence Research. She leads the Uppsala Vision, Language and Learning group, whose research mission is to build fundamental AI/ML methods for computer vision and language modeling to address societal challenges. She teaches several AI/ML courses at Uppsala University, and has worked as an AI Scientist at Silo AI (now part of AMD).
- Yong Sheng Koay and Eliel Camargo-Molina (UU/IFA): How to do NLP-like research in physics
Abstract: Using our recent experiments with transformers for QFT Lagrangians as an example, we will discuss choices in transformer model selection, data preparation, training workflows, and evaluation methods. The session aims to provide practical guidance for leveraging state-of-the-art transformer models in interdisciplinary research.
- Giulia Polverini (UU/IFA): Performance of Large Multimodal Models on physics tasks involving visual representations
Abstract: The recent advancement of chatbots to process not only text but also images significantly broadens their potential applications in physics education. This is particularly relevant given that physics relies heavily on multiple forms of representation, such as graphs and diagrams. To assess the true reasoning and visual abilities of these tools, we evaluated their performance on conceptual multiple-choice physics tests across various topics and languages. In this presentation, I will discuss some of the strengths and limitations identified in our study: insights that are valuable for the thoughtful and effective integration of these AI tools into physics education.
- Bor Gregorcic (UU/IFA): LLM course at IFA
Abstract: In 2025, the Division of Physics Education Research has developed an introductory course in AI for educators in the natural sciences. We will present the course philosophy, layout and content.
- Gerd Kortemeyer (ETH Zurich): Teacher Lunch: Cheat sites and artificial intelligence usage in online introductory physics courses
Abstract: As a result of the pandemic, many physics courses moved online, and alongside this move, the popularity of Internet-based problem-solving sites and forums rose. With the emergence of large language models, another shift occurred. How has online help-seeking behavior among introductory physics students changed, and what is the effect of different patterns of online resource usage? In a mixed-methods approach, we investigated student choices and their impact on assessment components of an online introductory physics course for scientists and engineers. A year ago, we found that students still mostly relied on traditional Internet resources and that their usage strongly influenced the outcome of low-stakes unsupervised quizzes. We empirically found distinct clusters of help-seeking and resource-usage patterns among the students; the impact of students’ cluster membership on the supervised assessment components of the course, however, was not significant. Today, there is evidence that students have shifted almost completely to AI.
- Jos Cooper (ESS): AI (and beyond) in neutron scattering – the state of the art, and current challenges
Abstract: Neutron scattering is a very well-established field, providing unrivalled information about materials since 1930. It is only relatively recently, however, that artificial intelligence started to be used in neutron scattering, with one of the first conferences being held in 2019. The introduction of AI has not revolutionised the field in the way it has some areas of astronomy or structural biology. However, applications have been slowly increasing, and AI is widely recognised as incredibly important for the field's future. In this talk I will highlight several projects using AI in neutron scattering and go through some of the challenges that currently exist in the field. I will finish with an outlook and some predictions about where AI will benefit researchers the most.
Jos Cooper is the project lead on the ESTIA neutron reflectometer being built at the European Spallation Source, a large research infrastructure that will be the newest and most powerful neutron source in the world. He has a background in thin film magnetism, electrochemistry, and neutron reflectometry, beginning with his PhD at Cambridge, continuing at the ISIS neutron and muon source as an instrument scientist, and leading to his current position building the next generation of neutron reflectometer in Lund. He has had an active interest in AI, and how it can help advance neutron science, since 2018. His ongoing research, in collaboration with Uppsala among others, investigates how to optimize experiments at large-scale facilities to make the best use of these invaluable resources.