Fine-Tuning LLMs for Function Calling

fine-tuning
llm-conf-2024
Published

July 2, 2024

Abstract

In this talk, we will go through the process and best practices of fine-tuning an LLM for function/tool use. We will discuss topics like data preparation, objective-based tuning, efficient serving, and evaluation.

This talk was given by Pawel Garbacki at the Mastering LLMs Conference.
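As a rough illustration of the kind of training data the talk discusses, the sketch below shows what one single-turn function-calling example might look like. The tool schema, field names, and the get_weather function are assumptions chosen for illustration, not the exact format used in the talk.

    # Minimal sketch of a single-turn function-calling training example.
    # Schema and field names are illustrative assumptions, not the talk's format.
    import json

    tools = [
        {
            "name": "get_weather",  # hypothetical tool
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]

    example = {
        # System turn exposes the available tools to the model.
        "system": "You have access to the following functions:\n" + json.dumps(tools, indent=2),
        # User turn that should trigger a tool call rather than a plain-text answer.
        "user": "What's the weather like in Warsaw right now?",
        # Assistant target: the function call the model is trained to emit.
        "assistant": json.dumps({"name": "get_weather", "arguments": {"city": "Warsaw"}}),
    }

    print(json.dumps(example, indent=2))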

Chapters

00:00 Introduction and Background

00:29 Function/Tool Calling Overview

02:23 Single-Turn First Call Objective

02:51 Forced Call Explanation

03:28 Parallel Function Calling

04:00 Nested Calls Explanation

06:24 Multi-Turn Chat Use Case

13:54 Selecting Function Call Syntax

17:44 Full-Weight Tuning vs. LoRA Tuning

19:19 Efficient LoRA Serving

23:06 Constrained Generation

26:21 Generic Function Calling Models

40:02 Q&A

Resources