Fine-Tuning

Fine-tuning LLMs for specific tasks
Best Practices For Fine Tuning Mistral (Sophia Yang, Jun 11, 2024). We will discuss best practices for fine-tuning Mistral models, covering: (1) the permissive Mistral ToS and why it is well suited to fine-tuning smaller models from bigger ones, (2) how to collect data, (3) domain-specific evals, (4) use cases and examples, and (5) common mistakes.
Creating, curating, and cleaning data for LLMs (Daniel van Strien, Jul 8, 2024). Good data is a key component for creating a strong LLM. This talk will outline approaches to getting the best data for training your LLMs, covering: (1) how to find existing datasets to build on top of, (2) approaches to creating synthetic data, and (3) practical techniques and tools for exploring, deduplicating, and filtering datasets to enhance their quality.
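
The deduplication and filtering steps lend themselves to a short sketch. Below is a minimal illustration with the Hugging Face datasets library on a toy in-memory corpus; the word-count threshold and hash-based exact dedup are illustrative assumptions, not recommendations from the talk.

```python
from datasets import Dataset

# Toy corpus standing in for a real crawl or collection of scraped documents.
raw = Dataset.from_dict({"text": [
    "A longer document about fine-tuning language models on domain data ...",
    "A longer document about fine-tuning language models on domain data ...",  # exact duplicate
    "too short",
]})

# Crude quality filter: drop documents under a minimum word count.
filtered = raw.filter(lambda ex: len(ex["text"].split()) >= 5)

# Exact deduplication via a hash of the normalized text (keep the first copy seen).
seen = set()

def keep_first(ex):
    key = hash(ex["text"].strip().lower())
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = filtered.filter(keep_first)
print(len(raw), "->", len(filtered), "->", len(deduped))  # 3 -> 2 -> 1
```

Real pipelines typically go further (near-duplicate detection, perplexity or classifier-based quality filters), but the shape is the same: load, inspect, filter, dedup.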
Fine Tuning LLMs for Function Calling (Pawel Garbacki, Jul 2, 2024). In this talk, we will go through the process and best practices of fine-tuning an LLM for function/tool use, covering data preparation, objective-based tuning, efficient serving, and evaluation.
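
Much of that work is data preparation. As a sketch, here is one way a single tool-use training record could be laid out; the field names and message layout are assumptions for illustration, since the concrete format depends on the model's chat template.

```python
import json

# Hypothetical tool schema the model should learn to call.
tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# One training record: the assistant turn with `tool_calls` is the target the
# model is trained to produce; the `tool` turn carries the function's result.
record = {
    "tools": [tool],
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {"role": "assistant", "tool_calls": [
            {"name": "get_weather", "arguments": {"city": "Paris"}},
        ]},
        {"role": "tool", "name": "get_weather", "content": "{\"temp_c\": 18}"},
        {"role": "assistant", "content": "It's about 18°C in Paris right now."},
    ],
}

with open("tool_use_train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```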
Fine-tuning when you’ve already deployed LLMs in prod (Kyle Corbitt, Jul 5, 2024). Already have a working prompt deployed in production? Fine-tuning may be significantly easier for you, since you’re already collecting training data from your true input distribution! We’ll talk through whether it’s a good idea to replace your prompt with a fine-tuned model at all, and the flow we’ve found most effective if you choose to do so. We’ll also review important gotchas to watch out…
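
As a sketch of the “already collecting training data” point: if your logging layer records the inputs and outputs of the deployed prompt, turning vetted interactions into a fine-tuning file is a small script. The log fields, feedback flag, and output format below are hypothetical.

```python
import json

SYSTEM_PROMPT = "You are a support assistant for AcmeCo."  # the prompt already in prod

# Stand-in for records pulled from your logging/observability store.
logged_calls = [
    {"user_input": "How do I reset my password?",
     "model_output": "Go to Settings > Security and click 'Reset password'.",
     "thumbs_up": True},
    {"user_input": "Cancel my account.",
     "model_output": "Done!",
     "thumbs_up": False},
]

with open("prod_finetune.jsonl", "w") as f:
    for call in logged_calls:
        if not call["thumbs_up"]:  # keep only examples a human (or metric) has vetted
            continue
        f.write(json.dumps({"messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": call["user_input"]},
            {"role": "assistant", "content": call["model_output"]},
        ]}) + "\n")
```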
Napkin Math For Fine Tuning (Johno Whitaker, Jul 1, 2024). We will show you how to build intuition around training performance, with a focus on GPU-poor fine-tuning.
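
In that napkin-math spirit, a quick back-of-the-envelope for GPU memory, using the usual bytes-per-parameter rules of thumb and ignoring activations and the KV cache (so treat the totals as lower bounds):

```python
params = 7e9  # a 7B-parameter model

# Full fine-tune with mixed-precision AdamW:
#   bf16 weights (2) + bf16 grads (2) + fp32 master weights (4)
#   + fp32 Adam moments (4 + 4)  ->  roughly 16 bytes per parameter
full_ft_gb = params * 16 / 1e9
print(f"Full fine-tune, mixed-precision AdamW: ~{full_ft_gb:.0f} GB")  # ~112 GB

# QLoRA-style: 4-bit frozen base weights (~0.5 byte/param); the adapter's
# weights, grads, and optimizer states are small by comparison.
qlora_gb = params * 0.5 / 1e9
print(f"QLoRA frozen base weights:             ~{qlora_gb:.1f} GB")    # ~3.5 GB
```

Numbers like these explain at a glance why a full 7B fine-tune wants multiple GPUs while a QLoRA run can fit on one consumer card.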
Slaying OOMs with PyTorch FSDP and torchao (Mark Saroufim and Jane Xu, Jun 11, 2024). Have you ever hit an OOM (and wished you had more VRAM)? If you’ve done much fine-tuning, then you have. And if you are just starting, then you will. Hop on the bus with us and feel the road become smoother as we talk about stacking together techniques like FSDP2 + QLoRA + CPU offloading + fused Adam (thanks Intel) + more in native PyTorch.
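
As a taste of two of those techniques, here is a minimal sketch combining the classic FSDP wrapper with CPU offloading and PyTorch’s fused AdamW; the talk itself goes further with FSDP2, QLoRA, and torchao. It assumes a recent PyTorch, a CUDA machine, and a distributed launch (e.g. torchrun), and the tiny model stands in for a real LLM.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, CPUOffload

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# Tiny stand-in for a transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 4096)
).cuda()

# Shard parameters across ranks and park them in CPU RAM between uses,
# trading transfer time for VRAM headroom.
model = FSDP(model, cpu_offload=CPUOffload(offload_params=True))

# Fused AdamW performs the update in far fewer kernel launches; recent PyTorch
# also ships fused CPU kernels, which matters when params are offloaded.
optim = torch.optim.AdamW(model.parameters(), lr=1e-4, fused=True)

x = torch.randn(8, 4096, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optim.step()
optim.zero_grad()

dist.destroy_process_group()
```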
Why Fine Tuning is Dead (Emmanuel Ameisen, Jul 2, 2024). Arguments for why fine-tuning has become less useful over time, along with some opinions on where the field is going.