Half-time seminar - Usama Zafar: "Enhancing Robustness and Security in Federated Learning"

Date
13 November 2025, 13:00–15:00
Venue
Ångströmlaboratoriet, Room 101130
Type
Seminar
Lecturer
Usama Zafar
Organiser
Department of Information Technology; Division of Scientific Computing
Contact person
Usama Zafar

Welcome to a half-time seminar presented by Usama Zafar.
The seminar presents two complementary lines of research aimed at improving the robustness and security of federated learning.

The seminar will be held in English.

Opponent: Prof. György Dan

Abstract: Machine Learning (ML) is increasingly central to domains such as healthcare, finance, and autonomous systems. However, training high-quality models often requires access to sensitive data, and traditional centralized training raises significant privacy risks and regulatory concerns. Federated Learning (FL) addresses this challenge by enabling multiple participants to collaboratively train models without sharing raw data. Yet, FL's distributed nature introduces new security vulnerabilities: adversarial clients can manipulate their updates to compromise the global model. Detecting and mitigating such Byzantine failures remains a critical open problem.

In this half-time seminar, I will present two complementary lines of work aimed at improving the robustness and security of Federated Learning. The first explores a privacy-preserving defense framework based on Conditional Generative Adversarial Networks (cGANs) [2]. By generating synthetic boundary-aligned data directly at the server, this approach enables the authentication of client updates without the need for external validation datasets, improving scalability and adaptability in FL workflows.
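The validation idea can be illustrated with a minimal sketch: each client update is applied to a copy of the global model and scored on server-side synthetic data, and updates that degrade the score far more than a typical update are rejected. The linear model, hinge-style loss, and `tol` threshold below are illustrative stand-ins, not the paper's cGAN pipeline (which would produce `synth_X`, `synth_y` with a conditional generator rather than take them as inputs).

```python
import numpy as np

def filter_updates_with_synthetic_data(global_w, updates, synth_X, synth_y,
                                       tol=1.5):
    """Score each candidate update on synthetic validation data and
    reject outliers before averaging. Hypothetical sketch, not the
    authors' implementation."""
    def loss(w):
        # Hinge-style loss on +1/-1 labels: low when the model
        # classifies the synthetic data with a comfortable margin.
        margins = synth_y * (synth_X @ w)
        return np.mean(np.maximum(0.0, 1.0 - margins))

    # Loss of the global model after applying each client's update.
    scores = np.array([loss(global_w + u) for u in updates])

    # Accept updates whose score is within `tol` times the median score;
    # Byzantine updates that hurt performance stand out as outliers.
    cutoff = np.median(scores) * tol + 1e-12
    accepted = scores <= cutoff

    aggregated = np.mean(np.asarray(updates)[accepted], axis=0)
    return aggregated, accepted
```

For example, with three honest updates pushing a linear classifier toward the true separating direction and one update pushing the opposite way, the adversarial update scores far worse on the synthetic data and is filtered out before averaging.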

The second line of work introduces a Bayesian inference-based method for robust aggregation [1]. This adaptive strategy estimates the global update by accounting for the likelihood of each client being honest, combining the simplicity of classical averaging with the resilience of state-of-the-art defenses.
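A rough sketch of the aggregation idea: weight each client's update by an estimated posterior probability that the client is honest, then average. The modelling choices below (a Gaussian-style likelihood around a robust centre, a flat outlier likelihood, a fixed prior) are illustrative assumptions, not the exact formulation of [1].

```python
import numpy as np

def bayesian_robust_aggregate(updates, prior_honest=0.9, n_iters=5):
    """Iteratively estimate per-client honesty posteriors and a
    weighted-average global update. Hypothetical sketch under
    simplified modelling assumptions."""
    updates = np.asarray(updates, dtype=float)
    centre = np.median(updates, axis=0)  # robust initial estimate

    for _ in range(n_iters):
        # Squared distance of each client's update from the centre.
        d2 = np.sum((updates - centre) ** 2, axis=1)
        scale = np.median(d2) + 1e-12    # robust scale estimate

        # Likelihood of the observed update if the client is honest
        # falls off with distance; Byzantine likelihood is flat.
        lik_honest = np.exp(-d2 / (2.0 * scale))
        lik_byz = 1e-3

        # Posterior probability of honesty (Bayes' rule).
        post = (prior_honest * lik_honest) / (
            prior_honest * lik_honest + (1.0 - prior_honest) * lik_byz)

        # Re-estimate the global update as a posterior-weighted mean,
        # recovering classical averaging when all posteriors are ~1.
        centre = np.average(updates, axis=0, weights=post + 1e-12)

    return centre, post
```

When all clients are honest, the posteriors stay near one and the result reduces to plain federated averaging; a far-off Byzantine update receives a posterior near zero and is effectively excluded.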

Together, these approaches contribute to the theoretical foundations and practical deployment of Byzantine-robust FL. They aim to enable secure, scalable, and privacy-preserving collaborative learning in domains where trust and reliability are essential, such as healthcare and finance.

References: [1] Aleksandr Karakulev, Usama Zafar, Salman Toor, and Prashant Singh. Bayesian Robust Aggregation for Federated Learning. 2025. arXiv:2505.02490 [cs.LG].

[2] Usama Zafar, André M. H. Teixeira, and Salman Toor. Byzantine-Robust Federated Learning Using Generative Adversarial Networks. 2025. arXiv:2503.20884 [cs.CR].
