Parallel and Distributed Programming
Course, Master's level, 1TD070
Spring 2025, Uppsala, 33%, On-campus, English
- Location
- Uppsala
- Pace of study
- 33%
- Teaching form
- On-campus
- Instructional time
- Daytime
- Study period
- 24 March 2025–8 June 2025
- Language of instruction
- English
- Entry requirements
- 120 credits in science/engineering, including Introduction to Scientific Computing or Scientific Computing I. Participation in High Performance Programming or in Low-level Parallel Programming. Proficiency in English equivalent to the Swedish upper secondary course English 6.
- Selection
- Higher education credits in science and engineering (maximum 240 credits)
- Fees
- If you are not a citizen of a European Union (EU) or European Economic Area (EEA) country, or Switzerland, you are required to pay application and tuition fees.
- First tuition fee instalment: SEK 12,083
- Total tuition fee: SEK 12,083
- Application deadline
- 15 October 2024
- Application code
- UU-62007
Admitted or on the waiting list?
- Registration period
- 10 March 2025–31 March 2025
- Information on registration from the department
Spring 2025, Uppsala, 33%, On-campus, English (for exchange students)
- Location
- Uppsala
- Pace of study
- 33%
- Teaching form
- On-campus
- Instructional time
- Daytime
- Study period
- 24 March 2025–8 June 2025
- Language of instruction
- English
- Entry requirements
- 120 credits in science/engineering, including Introduction to Scientific Computing or Scientific Computing I. Participation in High Performance Programming or in Low-level Parallel Programming. Proficiency in English equivalent to the Swedish upper secondary course English 6.
Admitted or on the waiting list?
- Registration period
- 10 March 2025–31 March 2025
- Information on registration from the department
About the course
There is an ever-growing demand for increased computer capacity and performance. High-performance computers with only one processor are expensive, and physical limitations put an upper bound on single-processor performance. These problems are addressed by connecting many inexpensive standard processors in one computer and letting them work simultaneously on a single task. How much performance is gained from a multi-core or multi-processor computer depends strongly on the software: its algorithms and their implementation. In particular, the software must be "parallelised" so that it can run on multiple cores simultaneously.
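As an illustration of what "parallelising" software means, the sketch below splits a simple summation loop across the available cores. It uses OpenMP, a common shared-memory programming model; OpenMP is chosen here only for brevity and is not necessarily part of this course, which focuses on models such as MPI. The problem size and workload are arbitrary assumptions.

```c
#include <stdio.h>
#include <omp.h>

/* Illustrative sketch (not course material): a serial loop parallelised
   across cores with OpenMP. Compile with, e.g., gcc -fopenmp sum.c */
int main(void)
{
    const int n = 100000000;   /* arbitrary problem size for the example */
    double sum = 0.0;

    /* Each thread accumulates a private partial sum over its share of the
       iterations; the reduction clause combines them when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; ++i)
        sum += 1.0 / i;

    printf("partial harmonic sum = %f (max %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}
```

Without the pragma the loop runs on a single core; with it, the same work is divided over all cores, which is the kind of restructuring that parallel software requires.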
In this course, you will learn how different types of parallel computers are constructed and how they work. Parallel algorithms for fundamental computational problems are presented. Important related questions are whether parallelism is inherent in a particular algorithm, and whether reformulating the algorithm can increase its parallelism. A large part of the course is hands-on parallel programming using programming models such as MPI (Message Passing Interface).
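As a taste of the MPI programming model mentioned above, here is a minimal sketch (not taken from the course material) in which each process sums its own part of a range and the partial results are combined with MPI_Reduce. The problem size and output format are illustrative assumptions.

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal MPI sketch: each rank sums its own slice of 1..N and
   MPI_Reduce combines the partial sums on rank 0.
   Compile with mpicc, run with e.g. mpirun -np 4 ./a.out */
int main(int argc, char **argv)
{
    const long N = 1000000;   /* arbitrary problem size for the example */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

    /* Divide the range 1..N into one contiguous chunk per process;
       the last rank also takes any remainder. */
    long chunk = N / size;
    long lo = rank * chunk + 1;
    long hi = (rank == size - 1) ? N : lo + chunk - 1;

    long local_sum = 0;
    for (long i = lo; i <= hi; ++i)
        local_sum += i;

    long total = 0;
    MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%ld = %ld, computed by %d processes\n",
               N, total, size);

    MPI_Finalize();
    return 0;
}
```

Each process runs the same program but works on a different slice of the data; communication happens only in the final reduction, which is the message-passing style of parallel programming that MPI provides.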