Educational Resources
Evals
Inspect, An OSS framework for LLM evals
LLM Eval For Text2SQL
A Deep Dive on LLM Evaluation
RAG
Back to Basics for RAG
Beyond the Basics of RAG
Systematically improving RAG applications
Fine-Tuning
Should you fine-tune?
When and Why to Fine Tune an LLM
Fine-tuning when you’ve already deployed LLMs in prod
Why Fine Tuning is Dead
How to fine-tune
Creating, curating, and cleaning data for LLMs
Best Practices For Fine Tuning Mistral
Train (almost) any LLM using 🤗 autotrain
Fine Tuning OpenAI Models - Best Practices
Deploying Fine-Tuned Models
Advanced topics in fine-tuning
Napkin Math For Fine Tuning
Slaying OOMs with PyTorch FSDP and torchao
Fine Tuning LLMs for Function Calling
Applications
Prompt Engineering
RAG
Retrieval Augmented Generation
Beyond the Basics of RAG
LLMs are powerful, but have limitations: their knowledge is fixed in…
Back to Basics for RAG
Adding context-sensitive information to LLM prompts through retrieval is…
Systematically improving RAG applications
In this talk, we will teach you approaches that anybody can apply to…