Parlance
Educational Resources
Evals
Inspect, an OSS framework for LLM evals
LLM Eval For Text2SQL
A Deep Dive on LLM Evaluation
RAG
Back to Basics for RAG
Beyond the Basics of RAG
Systematically improving RAG applications
Fine-Tuning
Should you fine-tune?
When and Why to Fine Tune an LLM
Fine-tuning when you’ve already deployed LLMs in prod
Why Fine Tuning is Dead
How to fine-tune
Creating, curating, and cleaning data for LLMs
Best Practices For Fine Tuning Mistral
Train (almost) any LLM using 🤗 autotrain
Fine Tuning OpenAI Models - Best Practices
Deploying Fine-Tuned Models
Advanced topics in fine-tuning
Napkin Math For Fine Tuning
Slaying OOMs with PyTorch FSDP and torchao
Fine Tuning LLMs for Function Calling
Applications
Prompt Engineering
Building Applications
Examples and best practices for building applications with LLMs
LLMs on the command line
The Unix command-line philosophy has always been about joining different…
Building LLM Applications w/Gradio
Freddy, a software engineer at Hugging Face, demonstrates ways to build…
Building Full Stack Applications with Python
This lesson is private and will be released shortly.
Modal: Simple Scalable Serverless Services
Modal makes it easy to run code in the cloud. In this talk, we will…