Fine-Tuning
Fine-tuning LLMs for specific tasks
Fine-tuning when you’ve already deployed LLMs in prod
Already have a working prompt deployed in production? Fine-tuning may be…
Why Fine Tuning is Dead
Arguments for why fine-tuning has become less useful over time, as well…
Napkin Math For Fine Tuning
We will show you how to build intuition around training performance with…
Napkin Math For Fine Tuning Part 2
Johno Whitaker answers follow-up questions about the first Napkin Math…
Creating, curating, and cleaning data for LLMs
Good data is a key component for creating a strong LLM. This talk will…
Slaying OOMs with PyTorch FSDP and torchao
Have you ever hit an OOM (and wished you had more VRAM)? If you’ve done…
Best Practices For Fine Tuning Mistral
We will discuss best practices for fine-tuning Mistral models. We will…
Fine Tuning OpenAI Models - Best Practices
How to fine-tune OpenAI models like a pro.
Train (almost) any LLM using 🤗 autotrain
In this talk, we will show you how to use HuggingFace AutoTrain to…
Fine Tuning LLMs for Function Calling
In this talk, we will go through the process and best practices of…
FSDP, DeepSpeed and Accelerate
Advanced techniques and practical considerations for fine-tuning large…