
Fine-tuning LLMs with Multi-GPU Training on Olivia

Gain practical experience in fine-tuning Large Language Models (LLMs) with the LLaMA architecture and LoRA on the Olivia supercomputer.

This hands-on workshop combines fine-tuning concepts with practical, HPC-specific execution.

The course is open to all and free of charge, but registration is mandatory.

Date and time

Start: Sep 03 2025 09:30
End: Sep 03 2025 15:30

Organised by

NRIS
Instructor: Hicham Aguency

Target audience

Researchers, developers, and students who are familiar with Python and want hands-on skills in scalable LLM training.

Learning outcomes

By the end of this workshop, participants will be able to:

  • Explain the basics of the LLaMA architecture.
  • Set up a distributed training environment using PyTorch, Torchao, and Torchtune on an HPC system (a setup sketch follows this list).
  • Fine-tune a LLaMA model using LoRA techniques on a single GPU and then scale up to multiple GPUs (see the LoRA sketch below).
  • Perform inference tasks such as summarization using the fine-tuned model (see the inference sketch below).
  • Monitor GPU usage and GPU memory utilization (see the monitoring sketch below).
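
The workshop itself drives training through Torchtune's recipes; as a minimal illustration of what the distributed setup involves, the following plain-PyTorch sketch joins an NCCL process group and wraps a model in DistributedDataParallel. It assumes the script is launched with torchrun (which sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker); the stand-in Linear model and the script name are hypothetical.

    import os

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def setup_distributed() -> int:
        # torchrun exports RANK/LOCAL_RANK/WORLD_SIZE for each worker process
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        return local_rank

    if __name__ == "__main__":
        local_rank = setup_distributed()
        model = torch.nn.Linear(16, 16).cuda(local_rank)  # stand-in for an LLM
        ddp_model = DDP(model, device_ids=[local_rank])
        # ... training loop over ddp_model ...
        dist.destroy_process_group()

Launched, for example, as torchrun --nproc_per_node=4 train.py on a node with four GPUs.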
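
Torchtune enables LoRA through its recipe configs rather than hand-written modules; conceptually, LoRA replaces selected linear layers with a frozen pretrained weight plus a trainable low-rank update scaled by alpha / r. A self-contained PyTorch sketch of that idea (the rank, alpha, and layer size here are illustrative choices, not workshop settings):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base layer plus trainable low-rank update (alpha / r) * B A."""

        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # only the adapter matrices are trained
            self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

    layer = LoRALinear(nn.Linear(4096, 4096))
    print(layer(torch.randn(2, 4096)).shape)  # torch.Size([2, 4096])

Because lora_b starts at zero, the adapted layer initially reproduces the pretrained output exactly, and only the small A and B matrices accumulate gradients during fine-tuning.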
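
For the inference outcome, one common route is to export the fine-tuned checkpoint to Hugging Face format and generate from it with the transformers library. The sketch below assumes such an export exists in a hypothetical directory ./llama-lora-finetuned, and that transformers is installed (device_map="auto" additionally requires the accelerate package); the prompt template is also an assumption.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_dir = "./llama-lora-finetuned"  # hypothetical export directory
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(
        model_dir, torch_dtype=torch.bfloat16, device_map="auto"
    )

    text = "..."  # document to summarize
    prompt = f"Summarize the following text:\n\n{text}\n\nSummary:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]  # drop the echoed prompt
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))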
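
Finally, for the monitoring outcome: running nvidia-smi on the compute node shows utilization from outside the job, while inside a training script PyTorch's own counters give per-device memory figures. A minimal sketch using only documented torch.cuda calls:

    import torch

    def log_gpu_memory(tag: str) -> None:
        # Report current and peak memory allocated by PyTorch on each visible GPU
        for i in range(torch.cuda.device_count()):
            used = torch.cuda.memory_allocated(i) / 2**30
            peak = torch.cuda.max_memory_allocated(i) / 2**30
            print(f"[{tag}] GPU {i}: {used:.2f} GiB allocated, {peak:.2f} GiB peak")

    # Call around training steps, e.g. log_gpu_memory("after step 100")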