Gain practical experience in fine-tuning Large Language Models (LLMs) with the LLaMA architecture and LoRA on the Olivia supercomputer.
This hands-on workshop combines fine-tuning concepts with practical, HPC-specific execution.
The course is open to all and free of charge, but registration is mandatory.
Date and time
Start: Sep 03 2025 09:30
End: Sep 03 2025 15:30
Learning outcomes
By the end of this workshop, participants will be able to:
- Explain the basics of the LLaMA architecture.
- Set up a distributed training environment using PyTorch, Torchao, and Torchtune on an HPC system (a minimal multi-GPU launch check is sketched after this list).
- Fine-tune a LLaMA model using LoRA on a single GPU and then scale up to multiple GPUs (see the LoRA sketch below).
- Perform inference tasks like summarization using the fine-tuned model.
- Monitor GPU utilization and GPU memory usage (see the monitoring snippet below).
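
As a quick sanity check for the multi-GPU part, a launcher such as torchrun starts one process per GPU, and each process can report its rank and world size. This is a generic PyTorch example, not official course material; it assumes it is launched with torchrun on a node with NVIDIA GPUs.

```python
import os
import torch
import torch.distributed as dist

# Launch with, e.g.:  torchrun --nproc_per_node=4 check_dist.py
def main() -> None:
    dist.init_process_group(backend="nccl")        # one process per GPU
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)
    print(f"rank {rank}/{world_size} using {torch.cuda.get_device_name(local_rank)}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```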
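For orientation, the sketch below shows the core idea behind LoRA: the pre-trained weights stay frozen while a small pair of low-rank matrices is trained and added to the layer's output. This is a minimal, self-contained illustration in plain PyTorch, not the Torchtune implementation used in the workshop; the class and parameter names are chosen for this example only.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: the weight update is B @ A, scaled by alpha / rank
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

# Usage: wrap an existing projection layer; only the LoRA factors are trainable
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```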
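GPU memory can also be inspected programmatically from inside a training script. The snippet below uses standard torch.cuda calls; it is a generic example rather than part of the workshop material, and command-line tools such as nvidia-smi serve the same purpose.

```python
import torch

def report_gpu_memory(tag: str = "") -> None:
    """Print current and peak GPU memory for each visible device."""
    if not torch.cuda.is_available():
        print("No CUDA device visible")
        return
    for device in range(torch.cuda.device_count()):
        allocated = torch.cuda.memory_allocated(device) / 1e9   # GB currently in use
        peak = torch.cuda.max_memory_allocated(device) / 1e9    # GB high-water mark
        print(f"{tag} cuda:{device} allocated={allocated:.2f} GB peak={peak:.2f} GB")

# Example: call before and after a training step to see the memory footprint
report_gpu_memory("before step")
# ... run one forward/backward pass here ...
report_gpu_memory("after step")
```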