Half-time seminar - Zhenlu Sun: "Explainable and Reliable Machine Learning for Cybersecurity"

Date
29 January 2026, 13:00–15:00
Location
Ångström Laboratory, room 101132
Type
Academic ceremony, Seminar
Lecturer
Zhenlu Sun
Organiser
Department of Information Technology
Contact
Salman Toor

Welcome to a half-time seminar presented by Zhenlu Sun. Opponent: Prof. Mathias Ekstedt, KTH Royal Institute of Technology.

Keywords: Explainable AI, Cybersecurity, Distributed computing, AI and machine learning.

Abstract

Artificial Intelligence (AI) has gained significant traction in the cybersecurity domain in recent years, driven by advances in computing power, data availability, and algorithms. AI is increasingly used to address key areas such as authentication, intrusion detection, and risk assessment. However, modern cybersecurity faces two main challenges. First, systems-of-systems generate vast amounts of highly diverse data. Second, machine learning (ML) models are inherently complex and are often treated as black-box solutions. Together, these challenges limit the broader adoption of ML-based solutions in cybersecurity, and as a consequence many rule-based systems remain in use. While these systems serve their purpose, they leave room for improvement, and such improvements must be grounded in confidence in the models' predictions. My research therefore focuses on developing explainable machine learning methods for cybersecurity.

Intrusion detection systems (IDSs) are widely used to identify anomalies in computer networks and raise alarms on intrusive behaviors. Most ML-based IDSs learn patterns from individual network measurements, while the inter-dependencies within the network are often neglected, which can result in large numbers of uncertain predictions, false positives, and false negatives. In the first project, we propose a graph neural network-based intrusion detection system (GNN-IDS) [1], in which an attack graph and real-time measurements represent the static and dynamic attributes of the computer network, respectively. Graph neural networks are employed as the inference engine for intrusion detection. By learning network connectivity, graph neural networks can quantify the importance of neighboring nodes and node features, yielding more reliable predictions. Furthermore, by incorporating an attack graph, GNN-IDS can not only detect anomalies but also identify the malicious actions causing them, which makes the predictions more explainable and reliable.
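The core idea of combining a static graph structure with dynamic node measurements can be illustrated with one round of message passing. This is a minimal sketch, not the GNN-IDS implementation: the network nodes, feature values, and mean aggregation below are all illustrative assumptions.

```python
# Toy message-passing sketch: a static adjacency (e.g. derived from an
# attack graph) plus dynamic per-node measurements. All names and numbers
# are made up for illustration.

adjacency = {  # static structure of the monitored network
    "firewall": ["webserver"],
    "webserver": ["firewall", "database"],
    "database": ["webserver"],
}

features = {  # dynamic real-time measurements per node (e.g. traffic stats)
    "firewall": [0.2, 0.1],
    "webserver": [0.9, 0.8],  # elevated activity
    "database": [0.3, 0.2],
}

def message_pass(adjacency, features):
    """One layer: each node averages its own and its neighbours' features."""
    updated = {}
    for node, neighbours in adjacency.items():
        group = [features[node]] + [features[n] for n in neighbours]
        dim = len(features[node])
        updated[node] = [sum(vec[d] for vec in group) / len(group)
                         for d in range(dim)]
    return updated

embeddings = message_pass(adjacency, features)
```

A real GNN would learn weighted aggregations over several such layers and feed the resulting embeddings to a classifier; the sketch only shows how neighbourhood structure mixes information between connected nodes.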

In the second project, we study Advanced Persistent Threats (APTs), which are characterized by their long-term, targeted, and stealthy nature. APT attacks are more challenging to detect as they often involve numerous attack techniques and tactics. Provenance graph-based methods have shown great promise in extracting spatial and/or temporal patterns to capture anomalies from system audit logs in complex computer systems and networks. Depending on the available human resources and domain knowledge, provenance graphs can be labeled at different granularities, such as the node and graph levels. For graph-level APT detection, many existing studies rely on complete provenance graph data to predict whether a graph or node is benign or malicious in an offline manner. In our work, we propose a relation-aware graph learning method for APT detection [2], which is capable of triggering early alarms with high accuracy and summarizing attack contexts from overwhelmingly redundant audit logs.
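The provenance-graph view of audit logs can be sketched as follows. This is an illustrative construction, not the Re-Guard method: the log fields, event entries, and relation types are assumptions, and the only point shown is that edges keep their relation type (so a relation-aware model can use it) and that redundant log entries collapse into a compact graph.

```python
# Build a typed provenance graph from audit-log events. Each edge keeps
# its relation (exec/write/...), and duplicate events are deduplicated.
# The log entries below are made-up examples.

from collections import defaultdict

audit_log = [
    {"src": "bash", "relation": "exec", "dst": "curl"},
    {"src": "curl", "relation": "write", "dst": "/tmp/payload"},
    {"src": "bash", "relation": "exec", "dst": "/tmp/payload"},
    {"src": "bash", "relation": "exec", "dst": "curl"},  # redundant entry
]

def build_provenance_graph(events):
    """Group deduplicated (src, dst) edges by their relation type."""
    graph = defaultdict(set)
    for event in events:
        graph[event["relation"]].add((event["src"], event["dst"]))
    return graph

graph = build_provenance_graph(audit_log)
```

A streaming detector could score each new edge as it arrives, which is what enables early alarms instead of waiting for the complete graph.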

References

1. Sun, Z., Teixeira, A. M. & Toor, S. GNN-IDS: Graph neural network-based intrusion detection system. In Proceedings of the 19th International Conference on Availability, Reliability and Security (ARES '24), DOI: 10.1145/3664476.3664515 (2024).

2. Sun, Z., Teixeira, A. M. & Toor, S. Re-Guard: Relation-aware graph learning for detecting advanced persistent threats.
