
Fine-tuning LLMs with Multi-GPU Training on Olivia

Gain practical experience in fine-tuning Large Language Models (LLMs) with the LLaMA architecture and LoRA on the Olivia supercomputer.

This hands-on workshop combines fine-tuning concepts with practical, HPC-specific execution.

The course is open to all and free of charge, but registration is mandatory.

IMPORTANT:
We have limited GPU capacity and expect a large number of participants. If we receive more confirmations than available spots, we will maintain a waiting list.

Date and time

Start: Nov 05 2025 09:30
End: Nov 05 2025 15:30

Organised by

NRIS
Instructor: Hicham Aguency

Target audience

Researchers, developers, and students who are familiar with Python and who want hands-on skills in scalable LLM training.

Learning outcomes

By the end of this workshop, participants will be able to:

  • Explain the basics of the LLaMA architecture.
  • Set up a distributed training environment using PyTorch, Torchao, and Torchtune on an HPC system (a minimal setup sketch follows this list).
  • Fine-tune a LLaMA model using LoRA techniques on a single GPU and then scale up to multiple GPUs (see the LoRA sketch below).
  • Perform inference tasks such as summarization using the fine-tuned model (see the inference sketch below).
  • Monitor GPU usage and GPU memory utilization (see the monitoring sketch below).
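
To give a flavour of what these outcomes involve, the sketches below illustrate the kind of code covered. First, a minimal sketch of initialising a PyTorch distributed process group on an HPC node, assuming a torchrun-style launcher that exports RANK, LOCAL_RANK, and WORLD_SIZE (the workshop itself may use torchtune's launcher or srun instead):

    import os
    import torch
    import torch.distributed as dist

    def setup_distributed():
        # torchrun exports these variables for every worker process.
        rank = int(os.environ["RANK"])
        local_rank = int(os.environ["LOCAL_RANK"])
        world_size = int(os.environ["WORLD_SIZE"])

        # NCCL is the standard backend for multi-GPU training on NVIDIA hardware.
        dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(local_rank)
        return rank, local_rank, world_size

    if __name__ == "__main__":
        # Launch with e.g.: torchrun --nproc_per_node=4 this_script.py
        rank, local_rank, world_size = setup_distributed()
        print(f"rank {rank}/{world_size} running on GPU {local_rank}")
        dist.destroy_process_group()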
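Next, a minimal illustration of the LoRA idea itself: the pretrained weight is frozen and only a small low-rank update is trained. The hyperparameter names (r, alpha) follow the LoRA paper; this is an illustrative sketch, not torchtune's actual implementation:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update (illustrative)."""

        def __init__(self, in_features, out_features, r=8, alpha=16):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            for p in self.base.parameters():  # freeze the pretrained weights
                p.requires_grad_(False)
            # Low-rank factors: B starts at zero, so training begins at the base model.
            self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(out_features, r))
            self.scaling = alpha / r

        def forward(self, x):
            # Frozen base projection plus the scaled low-rank update B @ A.
            return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

    layer = LoRALinear(4096, 4096)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")  # far fewer than 4096 * 4096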
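For inference, a hedged sketch of prompting a fine-tuned model for summarization via the Hugging Face transformers API, assuming the LoRA adapter has been merged and exported to a Hugging Face-format checkpoint (the path below is hypothetical; the workshop may instead use torchtune's generation recipe):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    ckpt = "/path/to/merged-llama-lora"  # hypothetical checkpoint directory
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto")

    prompt = "Summarize the following text:\n\n<document text here>\n\nSummary:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    summary = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(summary)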
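Finally, a small sketch of monitoring GPU memory from inside a training loop using PyTorch's built-in CUDA statistics; on the cluster, tools such as nvidia-smi give a complementary out-of-process view:

    import torch

    def log_gpu_memory(step):
        # In-process view of CUDA memory held by PyTorch's caching allocator.
        if not torch.cuda.is_available():
            return
        allocated = torch.cuda.memory_allocated() / 1024**3
        peak = torch.cuda.max_memory_reserved() / 1024**3
        print(f"step {step}: {allocated:.2f} GiB allocated, "
              f"{peak:.2f} GiB peak reserved")

    log_gpu_memory(0)  # call periodically inside the training loop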