Parallel and high-performance computing
MATH-454
Media
Lecture recordings (Spring 2025):
- MATH-454: Spring 2025 - Lecture 1 - Execution in a HPC environment
- MATH-454: Spring 2025 - Lecture 2 - Performance and single-core optimization
- MATH-454: Spring 2025 - Lecture 2 - Performance and single-core optimization - Intel VTune profiler demo
- MATH-454: Spring 2025 - Lecture 3 - Thread Level Parallelism with OpenMP
- MATH-454: Spring 2025 - Lecture 4 - MPI basics
- MATH-454: Spring 2025 - Lecture 5 - Advanced MPI
- MATH-454: Spring 2025 - Lecture 6 - Hybrid MPI / OpenMP and mpi4py
- Dr. Pablo Antolin is the teacher in charge of the course
- Philipp Christoph Weder is the teaching assistant in charge of the exercises
Project: last 4 weeks (Weeks 10 - 13)
- Example of usage: sbatch --qos=math-454 --account=math-454
Visit to EPFL supercomputers:
There will be a guided visit to the EPFL supercomputers on May 15 at 13h30.
We meet at 13h30 in front of the datacenter's main entrance (check the map here). Be careful when crossing the tracks.
After the visit I will be available in DIA 003 for Q&A about the final project.
Lecture recordings
You can find the (unedited) recordings of the lectures here.
IMPORTANT - Calendar for the oral exam
Find below (linked pdf file) the calendar for the oral exam.
The exam will take place in room CM 0 10.
As stated in the project description, the oral presentation will last less than 5 minutes, followed by 5 minutes of questions about the project and the course in general.
You can bring your own laptop for the presentation, or we can provide a laptop with your submitted slides ready to use.
About the certification
| Weight | What? | When? |
|---|---|---|
| 0.25 | Problem to be parallelized with MPI (deadline: April 9, 2025, 23h59 CEST) | Week 5 |
| 0.25 | Problem to be parallelized with CUDA (deadline: April 30, 2025, 23h59 CEST) | Week 8 |
| 0.25 | Project (code, report, and slides due June 8, 2025, 23h59 CEST) | Starts Week 9 |
| 0.25 | Oral exam (5' with 3 slides on the project + 5' Q&A on the project and the course) | To be announced |
Week 0 - February 20
Lecture
Course introduction
Exercises
A few short videos present the basics and the theory behind algorithms and parallel programming. These videos are needed to solve the exercises of Series 0.
- Time and space complexities: Big O notation
- Vocabulary
- Flynn's taxonomy
- Levels of parallelism:
- Threads vs. processes (see the sketch after this list)
- Slides - week 0 (File)
- Series 0 (File)
- Solution Series 0 (File)
- Check that you are registered in hpc-math454 (otherwise, send an email to Pablo or Philipp) (URL)
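As a small illustration of the threads vs. processes point (not part of the course material): the two threads below live inside a single process and therefore share the same counter, whereas two separate processes would each own a private copy and would have to exchange messages to combine results. The file name and compile command are only suggestions.

```cpp
// threads.cpp - threads share one address space within a process.
// Compile with: g++ -std=c++17 -pthread threads.cpp
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    std::atomic<int> counter{0};              // shared by both threads

    auto work = [&counter]() {
        for (int i = 0; i < 100000; ++i) counter++;
    };

    std::thread t1(work);                     // two threads in ONE process
    std::thread t2(work);
    t1.join();
    t2.join();

    // Separate processes (e.g. the ranks launched by mpirun) would each get
    // their own copy of 'counter' and would need explicit communication.
    std::cout << "counter = " << counter << std::endl;   // 200000
    return 0;
}
```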
Week 1 - February 27
Workshop in the computer lab: how to use a Linux Cluster
- connecting to the clusters
- using the SLURM scheduler
- introduction to git
Hands-on
- For this week the exercises are in the slides.
- The code needed for the compilation exercises is in the link below or in the git repository https://gitlab.epfl.ch/math454-phpc/exercises-2025
- Slides (File)
- Compilation exercise (File)
- SCITAS Documentation (URL)
- Solution exercises lecture 01 (File)
Week 2 - March 6
Lecture
- In case you want to delve deeper into this topic, see Performance Analysis and Tuning on Modern CPUs (Denis Bakhvalov)
Week 3 - March 13
Debug - Profile - Optimize
Two situations can occur in the parallel programming world: (a) you already have a sequential code, or (b) you start your parallel code from scratch. Here we assume the first one, (a).
Before moving to parallelization (with whatever parallel paradigm, such as OpenMP, MPI, or acceleration with GPUs), you must have a bug-free and optimized sequential code. To do that, you must follow a Debug - Profile - Optimize strategy.
During this week's series we will start using gdb and gprof; if you feel lost and/or want to know a little bit more, please watch these two videos.
These tools will be used extensively during the exercises, up to the end of the semester.
Parallelization with OpenMP
Once your code is fully debugged and optimized, you can parallelize it. In this course, we start with the OpenMP parallel paradigm. This will be covered during the theory lecture (see the associated slides).
OpenMP documentation
Exercises
Sanitize and optimize sequential codes; use OpenMP to parallelize a code that computes pi and a Poisson solver.
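The exercise codes are provided in the series below. Purely as an illustrative sketch (not the official solution), the classic pi computation parallelized with an OpenMP parallel for and a reduction clause looks like this:

```cpp
// pi_omp.cpp - pi = integral of 4/(1+x^2) on [0,1], midpoint rule.
// Compile with: g++ -fopenmp -O2 pi_omp.cpp
#include <cstdio>
#include <omp.h>

int main() {
    const long n = 100000000;              // number of intervals
    const double h = 1.0 / n;
    double sum = 0.0;

    // Each thread accumulates a private partial sum; OpenMP combines them.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < n; ++i) {
        double x = (i + 0.5) * h;          // midpoint of interval i
        sum += 4.0 / (1.0 + x * x);
    }

    std::printf("pi ~= %.12f (max threads: %d)\n", h * sum, omp_get_max_threads());
    return 0;
}
```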
- Slides (File)
- OpenMP cheat sheet (File)
- OpenMP Full Specification (URL)
- Series 3 (File)
- Codes descriptions (File)
- Solution Series 3 (File)
Week 4 - March 20
Lecture
- MPI basics
MPI documentation
Exercises
- Parallelization of an existing code using MPI
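As a minimal, illustrative sketch of the basic MPI calls introduced this week (initialization, rank/size queries, and a point-to-point exchange), and not the exercise code itself:

```cpp
// hello_mpi.cpp - compile with: mpicxx hello_mpi.cpp
// Run with, e.g.: srun -n 2 ./a.out   (or mpirun -np 2 ./a.out)
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // who am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes in total?

    if (rank == 0 && size > 1) {
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value = 0;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d from rank 0\n", value);
    }

    std::printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```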
Week 5 - March 27
Lecture (1h)
- MPI advanced functions
- Advanced MPI datatypes
- MPI communicators
Exercises (3h)
- Parallelization of the Conjugate Gradient linear solver using MPI (the global-reduction pattern it relies on is sketched at the end of this week's entry)
The assignment is due April 9 (23h59 CEST).
Some clarifications:
- The number of SLURM tasks (--ntasks) corresponds to the number of MPI processes.
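In a distributed Conjugate Gradient each rank owns a slice of the vectors, so the dot products needed for the step lengths require a global reduction. The sketch below shows only that pattern (MPI_Allreduce); it is illustrative and does not prescribe the data layout expected in the assignment.

```cpp
// dot_allreduce.cpp - compile with: mpicxx dot_allreduce.cpp
#include <cstdio>
#include <vector>
#include <mpi.h>

// Dot product over the locally owned slices of two distributed vectors.
double distributed_dot(const std::vector<double>& x,
                       const std::vector<double>& y) {
    double local = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) local += x[i] * y[i];

    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global;   // identical on all ranks
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<double> x(1000, 1.0), y(1000, 2.0);   // local slices
    double d = distributed_dot(x, y);
    if (rank == 0) std::printf("global dot = %f\n", d);

    MPI_Finalize();
    return 0;
}
```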
Week 6 - April 3
Lecture
- Hybrid programming with MPI and OpenMP (a minimal sketch is given at the end of this week's entry)
- MPI for Python
- Detailed explanation of Series 5's solution.
Exercises
- MPI advanced (derived datatypes, persistent communication, IO)
- MPI with Python
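A minimal illustration of the hybrid model, assuming a few MPI ranks each running several OpenMP threads (the thread count is typically controlled with OMP_NUM_THREADS and SLURM's --cpus-per-task, the rank count with --ntasks):

```cpp
// hybrid.cpp - compile with: mpicxx -fopenmp hybrid.cpp
#include <cstdio>
#include <mpi.h>
#include <omp.h>

int main(int argc, char** argv) {
    // Request thread support: FUNNELED = only the main thread makes MPI calls.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nth = omp_get_num_threads();
        std::printf("rank %d/%d, thread %d/%d\n", rank, size, tid, nth);
    }

    MPI_Finalize();
    return 0;
}
```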
Week 7 - April 10
Lecture
Introduction to GPU programming
- Trends in HPC
- Hardware architecture (GPU vs. CPU)
- Software environment
- How to program on a GPU with CUDA (a minimal kernel sketch is given at the end of this week's entry)
Exercises
- A few hands-on exercises to get familiar with the CUDA programming model
- For the exercises you need to load the gcc and cuda modules:
module load gcc cuda
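For orientation only (the actual exercises are in the series files), a minimal CUDA kernel launch looks like the sketch below; the file name and compile command are illustrative.

```cuda
// vec_add.cu - compile with: nvcc vec_add.cu   (after: module load gcc cuda)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);    // unified memory to keep the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();         // wait for the kernel to finish

    std::printf("c[0] = %f\n", c[0]);   // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```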
Week 8 - April 17
Lecture
Advanced CUDA programming
- Thread Cooperation in GPU Computing
- GPU Memory Model
- Shared memory
- Constant memory
- Global memory
- Test case
Exercises
- Graded exercise; it is due April 30 at 23h59 CEST
- Slides (File)
- Series 8 (File)
- Solution Series 8 (File)
- BONUS!!: If you want to squeeze more performance out of the GPU when performing reduction operations, take a look at this (URL). A basic shared-memory block reduction is sketched below.
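For reference, here is a basic (unoptimized) shared-memory block reduction; the linked material discusses much faster variants (e.g. warp shuffles). It is an illustrative sketch, not the exercise code.

```cuda
// block_sum.cu - each block reduces its slice in shared memory;
// the per-block partial sums are then combined on the host.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void block_sum(const float* in, float* out, int n) {
    extern __shared__ float sdata[];                 // one float per thread
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + threadIdx.x;

    sdata[tid] = (i < n) ? in[i] : 0.0f;             // load into shared memory
    __syncthreads();

    // Tree reduction within the block: half the threads drop out each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = sdata[0];        // one partial sum per block
}

int main() {
    const int n = 1 << 20, threads = 256;
    const int blocks = (n + threads - 1) / threads;

    float *in, *partial;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&partial, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    // Third launch parameter = dynamic shared memory size per block.
    block_sum<<<blocks, threads, threads * sizeof(float)>>>(in, partial, n);
    cudaDeviceSynchronize();

    double total = 0.0;                              // finish the sum on the host
    for (int b = 0; b < blocks; ++b) total += partial[b];
    std::printf("sum = %.0f (expected %d)\n", total, n);

    cudaFree(in); cudaFree(partial);
    return 0;
}
```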
Week 9 - May 1
Lecture
Parallel Profiling
Exercises
Parallel Profiling
Weeks 10 - 13 (May 8 - May 22)
Project work. See the top of the page for more information.
The professor and the assistants will be available on Discord to answer questions from 08:15 to 12:00 every Tuesday.