Train (almost) any LLM using 🤗 autotrain
In this talk, we will show you how to use Hugging Face AutoTrain to train/fine-tune LLMs without having to write any code. Links: https://huggingface.co/autotrain Abhishek’s YouTube channel: https://www.youtube.com/@abhishekkrthakur
If you enjoyed this content, subscribe to receive updates on new educational content for LLMs.
Chapters
00:00 Introduction and Overview Introduction to Autotrain and fine-tuning LLMs for practical use.
03:28 Key Features and Capabilities Autotrain’s no-code interface, support for various NLP tasks (token classification, text classification, etc.), ability to fine-tune models without choosing them manually, streamlined workflow for various ML tasks.
06:43 Getting Started, Project Setup, and Configuration Steps to start a project with Autotrain on Hugging Face: creating a project, attaching hardware, using local or cloud resources. Autotrain supports supervised fine-tuning, embedding fine-tuning (e.g., for RAG), and tabular data tasks.
09:28 Fine-tuning Large Language Models Overview of fine-tuning tasks like SFT, generic fine-tuning, ORPO, DPO, and reward modeling. Differences between SFT and generic fine-tuning, importance of data preparation, column mapping in datasets, and use of chat templates for data formatting.
11:23 Data Format and Column Mapping Importance of dataset format for Autotrain, recommendation to use JSON lines for better readability and processing, explanation of data column requirements for various tasks, and examples of proper dataset formatting for different models.
13:00 Reward Models and Optimization Introduction to reward models and their training, dataset requirements for reward modeling (chosen and rejected columns), recommendation to use ORPO over DPO for memory and compute efficiency, chat templates support.
16:34 Installation and Running the App Steps to install Autotrain, running the app locally.
17:43 Config Files and CLI Usage Details on using config files and CLI for training, defining tasks and parameters in config files, example configuration for different tasks, logging options (TensorBoard, Weights and Biases), storing trained models locally or pushing to the Hugging Face Hub.
20:44 Running on Jarvis Labs and DGX Cloud Options for running Autotrain on cloud platforms like Jarvis Labs and DGX Cloud, steps to set up training instances, attaching GPUs, and model storage.
23:32 Other Runtime Environments - Colab, Spaces, ngrok, Docker Using Autotrain on Google Colab, steps to set up the Colab environment, using ngrok to access the same UI as on Hugging Face Spaces, and Docker support.
25:14 Live Training Demo Instructions on selecting datasets from Hugging Face Hub or uploading local datasets, advanced use of full parameter mode, and examples of setting parameters for training.
28:30 Continued Demo, Multi-GPU Support Tracking training progress. One-click deployment on Hugging Face inference endpoints. Tracking training progress with TensorBoard. Support for multi-GPU setups, Autotrain’s integration with accelerate.
32:54 Conclusion and Final Q&A Final discussion on documentation, config file details, parameters selection, hyperparameter optimization, use of Mac for training, and synthetic data considerations.
Resources
Notes
Autotrain is a no-code platform, integrated with the wider Hugging Face ecosystem, that automates many typical fine-tuning tasks.
Getting started is simple: Go to https://huggingface.co/autotrain and click “Create new project.” From there, fill out the dialogue, after which a Hugging Face Space (duplicated from a template) will be created for you.
Features and Considerations
Autotrain handles LLM fine-tuning through a supervised fine-tuning (SFT) trainer as well as a more generic trainer. SFT accommodates your input text data as is, or can transform it using various chat templates, including ChatML, Zephyr, and any given tokenizer’s built-in chat template.
Autotrain also handles reward modeling and preference fine-tuning (ORPO/DPO); the data must be formatted with chosen and rejected columns reflecting the hypothetical (or actual) user’s choice (a sample record is sketched below). Abhishek recommends ORPO over DPO because it does not require a reference model and demands fewer resources (memory and compute).
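For illustration only (a hypothetical record, not from the talk), a JSONL line for preference data might look like the following, with chosen and rejected holding the preferred and dispreferred responses; reward modeling uses only chosen/rejected, while ORPO/DPO additionally uses a prompt column:
{"prompt": "What is the capital of France?", "chosen": [{"role": "user", "content": "What is the capital of France?"}, {"role": "assistant", "content": "The capital of France is Paris."}], "rejected": [{"role": "assistant", "content": "Berlin."}]}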
Autotrain streamlines the workflow for different fine-tuning tasks by choosing models and hyperparameters for you; so even for experts, Autotrain speeds up iteration cycles.
Abhishek recommends JSON lines (JSONL) over CSV for datasets when using Autotrain. It avoids some of the issues of converting from CSV (such as having to stringify lists of chat messages); see the sketch below.
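As a sketch (hypothetical records), an SFT dataset in JSONL can hold either pre-formatted plain text or chat-style conversations; the chat form is what the chat template option is for:
{"text": "### Instruction:\nExplain what a tokenizer does.\n### Response:\nA tokenizer splits text into tokens that a model can process."}
{"messages": [{"role": "user", "content": "Explain what a tokenizer does."}, {"role": "assistant", "content": "A tokenizer splits text into tokens that a model can process."}]}
If your conversation column is named messages rather than text, point the column mapping at it (e.g., text_column: messages).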
Local Usage
To run and serve the Autotrain UI locally, simply:
$ pip install autotrain-advanced
Then:
$ export HF_TOKEN=your_hugging_face_write_token
$ autotrain app --host 127.0.0.1 --port 8000
This assumes a number of dependencies in your environment (e.g., PyTorch); for more detail, see Use Autotrain Locally. Autotrain only uses your HF token to push to the Hub (and to download gated base models such as Llama 3); otherwise, the app can be run entirely locally.
Alternatively, you can also run Autotrain via CLI:
$ export HF_TOKEN=your_hugging_face_write_token
$ autotrain --help
This will print out a list of commands for use with the CLI. The recommended usage is:
$ autotrain --config your-config.yml
The --config parameter accepts both local files and URLs to config files hosted on GitHub.
Example config:
task: llm-sft
base_model: meta-llama/Meta-Llama-3-70B-Instruct
project_name: autotrain-llama3-70b-math-v1
log: tensorboard
backend: local
data:
  path: rishiraj/guanaco-style-metamath-40k
  train_split: train
  valid_split: null
  chat_template: null
  column_mapping:
    text_column: text

params:
  block_size: 2048
  model_max_length: 8192
  epochs: 2
  batch_size: 1
  lr: 1e-5
  peft: true
  quantization: null
  target_modules: all-linear
  padding: right
  optimizer: paged_adamw_8bit
  scheduler: cosine
  gradient_accumulation: 8
  mixed_precision: bf16

hub:
  username: ${HF_USERNAME}
  token: ${HF_TOKEN}
  push_to_hub: true
Note that the dataset can be local; simply supply a local path.
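A sketch of what the data section might look like with a local JSONL file (the exact path convention may differ; check the AutoTrain docs):
data:
  path: data/               # local folder containing train.jsonl (per the talk, a direct path to train.csv/train.jsonl also works)
  train_split: train
  valid_split: null
  chat_template: null
  column_mapping:
    text_column: text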
(Cloud) Templates
Besides running on Hugging Face Spaces, you can also run on:
- Jarvislabs: Template
- Colab: Basic, LLM
- Docker: huggingface/autotrain-advanced:latest
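A minimal sketch of running the Docker image; the run flags and entrypoint below are assumptions, so adapt them to your setup:
$ docker pull huggingface/autotrain-advanced:latest
$ docker run -it --gpus all -p 8000:8000 -e HF_TOKEN=your_hugging_face_write_token \
    huggingface/autotrain-advanced:latest \
    autotrain app --host 0.0.0.0 --port 8000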
Other Notes
- Autotrain template config files for use with the CLI can be found in the GitHub repo (see Resources)
- Template examples include training embedding models for RAG
- Automatically generating synthetic data and auto-training on that is not supported
- It’s possible to provide your own chat templates, technically, by cloning a model, going into its tokenizer_config.json, and changing the chat template there; afterwards, you can use it as normal with Autotrain (a minimal sketch follows this list)
- Autotrain supports multi-GPU out-of-the-box, including managing the accelerate configs
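To illustrate the custom chat template note above, a simplified, hypothetical ChatML-style entry inside tokenizer_config.json might look like this (the templates shipped with real models are usually longer):
{
  "chat_template": "{% for message in messages %}<|im_start|>{{ message['role'] }}\n{{ message['content'] }}<|im_end|>\n{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
}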
Full Transcript
Train (Almost) Any LLM Using 🤗 AutoTrain
[0:03] Abhishek: Okay, great. So I changed the title a little bit because we ended up adding embedding model fine-tuning in AutoTrain. And I think that’s also an interesting topic for this conference, because you will fine-tune LLMs and then you’re going to use them. So a little bit about me: I work at Hugging Face, where I’ve been building AutoTrain and everything around it.
[0:42] Abhishek: And as you can see, like even from my posts, I talk a lot about AutoTrain. And I am also a Kaggler, but these days I’m not competing much. But I was the first one to get a Grandmaster in all categories.
[1:02] Abhishek: That’s something I like to brag about everywhere I go. And today we are going to look into what AutoML is (there was a question about what AutoML and AutoTrain are). I was very much interested in AutoML for a long, long time, but nowadays it’s not about tuning the parameters so much; it’s about finding the correct loss and the correct dataset, and that’s probably what you need. So we’re going to take a look at what AutoTrain is and who should use it, the key features, and the benefits of AutoTrain,
[1:47] Abhishek: how to use it, what the available tasks are, what features are available, and we’re going to take a look at fine-tuning LLMs. We’ll try to see if we can fine-tune something live today, hopefully nothing breaks, and then we’re going to take a look at the recently added task of sentence transformers fine-tuning. And yeah, if you have any questions, feel free to stop me at any point. So AutoTrain was previously known as AutoNLP, and it was a closed project. I don’t know how many people know about that, but like, BERT had
[2:31] Abhishek: just come out and everyone was crazy about BERT fine tuning for different downstream tasks. So we made something called AutoNLP where you could just upload your data and select the columns and select the model you want to fine tune. And that’s it. And then we would fine tune on different sets of hyperparameters and present how one model compares to the other.
[3:00] Abhishek: like a different set of models, so you don’t even have to choose the model if you don’t want to choose one. So that developed into AutoTrain, and we open sourced it, I think, end of last year or mid of last year, and since then we have been continuously adding many, many more tasks and listening to what the community says. So AutoTrain is a no-code platform. There’s a lot of different types of no-code platforms; AutoTrain is one of them.
[3:39] Abhishek: And it’s valuable for those who want to use advanced machine learning but don’t have a lot of the expertise that is traditionally required. But it’s not just for those who don’t have a lot of traditional expertise; it’s also for data scientists. So I use AutoTrain all the time, and that’s because it makes my iterations faster. If I just have to check what parameters are working or which model is good enough for me, I can just put in the dataset and let AutoTrain do its job. So it can be used by everyone.
[4:31] Abhishek: There’s a lot of different kinds of trainers, and AutoTrain is not just about training or fine-tuning large language models. It can do a lot more. So you have different types of natural language processing tasks that you can accomplish with AutoTrain, like token classification, text classification. Then you have all the different kinds of LLM tasks, some computer vision tasks like image recognition, sorry, image classification, object recognition, and even tabular data. And AutoTrain tries to streamline the workflow for you.
[5:20] Abhishek: So like, yeah, today we are going to talk more about large language models and how you can use AutoTrain to fine-tune your own LLMs, like for chat or on your own data. So AutoTrain tries to close the gap between the sophisticated or new models that are coming in and makes it easier to train models, and it leverages everything in Hugging Face. So you have Transformers, Datasets, Diffusers, the PEFT library, and others.
[6:12] Abhishek: One more important library/framework I’ve got here is Accelerate, and thanks to Zach for that. So yeah, it leverages almost everything which is already there in Hugging Face, and that’s why, whenever a new model comes out and it’s supported in Transformers, it doesn’t take much time, or let’s say it’s almost immediately supported by AutoTrain. Yeah, so let’s move on. To start with AutoTrain, you can go to huggingface.co/autotrain and you will come across this page where you can click on create new project.
[6:57] Abhishek: and it will show you a screen like this where it shows you who owns it so you can put in an organization if you want just give the space a name and then you can attach hardware from Hugging Face but it doesn’t mean that you cannot do it locally I’m going to come to that part too you can train the model anywhere you want So you can attach a hardware and HuggingFace has a huge list of hardware you can choose from and Then you click on duplicate space.
[7:35] Abhishek: After a few minutes, it’s going to show you a space which has a login and you need to log in with your HuggingFace account and After you have done that you will get a screen like this one Now here you can see, I think you might have seen some previous version of AutoTrain. This is the new one that we have tried to build, trying to make it much more user-friendly by providing drop-downs and inputs for different parameters. And on the side panel you can select what kind of task you are interested in.
[8:14] Abhishek: Here we have selected LLM SFT and the hardware is Local/Space; your training is going to run in this Space. So when you’re running it locally, it’s going to say the same thing, Local/Space, and then you have logs and documentation and all different kinds of things. So these are the tasks that we currently support. We have supervised fine-tuning and different other kinds of LLM tasks.
[8:47] Abhishek: you have different kinds of other text tasks, like seq2seq, token classification; you also have some image and tabular tasks. And all it requires you to do is upload your data and choose the parameters, or change the parameters if you want to change them; most of the time they work out of the box. And you can either choose to upload your own dataset or you can just use a dataset from the Hugging Face Hub for almost all the tasks.
[9:29] Abhishek: So moving on, coming to fine-tuning large language models, AutoTrain offers you different tasks like SFT, generic fine-tuning, ORPO, DPO, and reward modeling. And they… When we talk about supervised fine-tuning or the generic task, both of them are very much similar. The only difference here is SFT comes from TRL, which is another library by Hugging Face.
[10:00] Abhishek: the SFT trainer is being used, but for generic fine-tuning it’s not used; both are pretty close to each other. And it’s called supervised since we are collecting data from humans, but we still use cross-entropy loss. And what the generic trainer does is combine all the rows into one and divide by the block size that you have provided, and the only column required here is text. So you must have seen, like, in the previous slide there is something called column mapping. So this is something different in AutoTrain, so it doesn’t
[10:49] Abhishek: expect you to have your data with the same column names as required; you can actually map them. So let’s say in your dataset the column containing all the conversations is called conversation. So here you can write conversation. And yeah, a chat template is also available if your data is not formatted. If you want to format it using some kind of chat template, you can also do that. So this is an example of a dataset. So this is like Wikitext.
[11:28] Abhishek: So a lot of people think, like, SFT can only do instruction tuning. Yeah, that’s not true. You can use any dataset you want and you can just fine-tune your existing large language model. So here we have a subset and we have a training split and we can use this. So this is a dataset from the Hugging Face Hub and you can just plug it into AutoTrain and use it directly. And I’ll show you that in a small demo later on.
[12:04] Abhishek: Here is another format, which is a chat format, where you have content and you have the role, and this is like the JSON lines format, but the one I’m showing you right now is obviously from the Hugging Face Hub, so it’s a datasets dataset. So here you can just use whatever kind of chat template is suitable for your data. In this case, it’s ChatML. And you can also select the Zephyr template, which expects a system prompt, or the tokenizer’s chat template.
[12:41] Abhishek: So when you’re selecting a model to fine-tune, the tokenizer may have a chat template and you can select that. But if you don’t want to select a chat template and your data is in this format, you have to convert it to plain text. Then we come to the reward model and reward trainer, which is also offered by AutoTrain. And you can train a custom model, which is a sequence classification task, and it determines which example of a pair is more relevant. And in that case, your dataset consists of two different columns: one is a chosen column, and one is a rejected column.
[13:25] Abhishek: So when it comes to AutoTrain, the chosen column becomes the text column for the dataset, and the other column becomes rejected. Then, ORPO or DPO: a lot of people are really interested in training a DPO model. But I’m moving more towards ORPO because ORPO doesn’t require a reference model, and it’s much less memory and much less compute. So I would
[13:54] Abhishek: recommend, like, if you’re trying to train a DPO model, you can probably try an ORPO one first and see how it performs. And in this case, the dataset has three different columns: one is prompt, and then you have the chosen column and the rejected column, which are all conversations in the one that we are looking at right now. And even in the case of the ORPO or DPO trainer, you can use the chat templates. So the dataset format, I think it’s the most important thing when you’re using AutoTrain.
[14:34] Abhishek: All you have to do is create the data and create it in a proper format. And once you have that, you can use a CSV dataset, or you can use a JSON lines (JSONL) dataset. In most cases, I would recommend you to use JSON lines, because if you’re using CSV, here you can see the chosen or rejected columns are lists of dictionaries, and you then have to convert them into stringified lists, which is not good. So JSON lines is much better, more readable.
[15:14] Abhishek: So one of the like formats, if I’m talking about like Alpaca dataset. So here is a JSON line file and it has like a key text and it contains the formatted text. So if I have dataset in this format, I don’t want to, I don’t have to use chat templates. But if I have dataset in the previous format that I’ve shown you, then you will probably want to use chat template. It’s not going to work without that anyways. You can use chat template offline and convert your data into plain text, a single column.
[15:58] Abhishek: not for ORPO or DPO, but for the SFT trainer. So, everything AutoTrain can be found in the Hugging Face repository, sorry, the GitHub repository, and if you have any issues or feature requests, feel free to create feature requests or issues in the issues tab. We try to resolve issues quite quickly, so you don’t see many open, but you have a lot of them closed. And now comes another part. I think I should go back here to the presentation.
[16:46] Abhishek: So let’s do this part quickly and then I can go to training a model. So in the documentation you have everything that you need, hopefully. How do you install AutoTrain? All you have to do is pip install autotrain-advanced and then you can run the app. To run the app, you need to provide your Hugging Face write token in an environment variable called HF_TOKEN. Then you can run the app. The app that I showed you before, you can run locally. Your trainings run locally.
[17:25] Abhishek: your token is only required to push your trained model to the Hub, or in case you want to train or fine-tune a model with a base model which is gated, like most of the models that we have today, like Llama 3 or Mistral or anything. So, the app is not the only way to train a model. You can also use config files. Everyone loves config files. And the config file originated from the AutoTrain CLI. So you also have the CLI, so you can try to write autotrain --help.
[18:07] Abhishek: And it’s going to show you all the CLI commands. You can train all the different kinds of tasks that you can train in the UI using the CLI. But sometimes, config files make it easier. So here you define the task; there is a list of tasks that you can find in the documentation. It’s the same list as present in the UI, so like LLM SFT or ORPO. And you define the base model that you want to use. And then you give your project a name. It requires an output folder to push everything to.
[18:42] Abhishek: And whether you want to log it using TensorBoard or weights and biases or no logger at all, and backend, which is local.
[18:51] Abhishek: and then you provide everything related to the data. So now, each task will have something different that is required for the data. Here we need the data path; most of the tasks require you to have some kind of data path, and then a training split and a validation split. If you don’t have a validation split, you can just put null. Then, if you want to use a chat template, set it to chatml or zephyr; otherwise, just set it to null. And then a column mapping.
[19:24] Abhishek: Other than that you have the parameters, so whatever the parameters you want to set, if you don’t want to set all the parameters you can set only three and everything else will be default. And then if you want to push your model to the hub or not.
[19:43] Abhishek: So if you want to push your model to the Hub, then in that case we require your Hugging Face token and your username, under which we push the trained model and datasets if there are any. Otherwise, you don’t need to provide your Hugging Face token; if you want to keep your model locally, your model will be stored locally under this project name. And as for the dataset, here I’ve shown a dataset from
[20:23] Abhishek: the Hugging Face Hub, but you can use a local dataset. In that case, the path will be a path to a train.csv or a train.json file, and your dataset won’t be pushed anywhere; it’s going to be stored locally, in case you have sensitive data and don’t want to push it anywhere. You can also use Jarvis Labs, which is one of the sponsors of this course. Click on Templates, here is AutoTrain, you can just click on AutoTrain and you can start training a model. So like, you can choose
[21:08] Abhishek: what kind of GPU you want, and here you have to enter your Hugging Face token because it’s running in the cloud, so you have to store your models somewhere in the end. And then you just launch the instance and you have the fully-fledged AutoTrain UI running on Jarvis Labs. Or, if you don’t have many resources and you want to train a 70 billion parameter model, you can also rent them from DGX Cloud, where you can have up to eight H100 GPUs. So going back, I was in the GitHub repository.
[21:57] Abhishek: And the GitHub repository also provides you a way to run AutoTrain on Colab, where I have created, like, a small UI exclusively for Colab. So you can just use that and you can upload your data to Colab. Let me see if I can run it.
[22:35] Participant 2: By the way, I noticed that whenever I tell people about AutoTrain and they go to the website, they actually don’t ever find the GitHub repo from that site because it has the user interface you click on and stuff like that. I actually like the GitHub repo a lot because I can like… run it locally or whatever, you know. Yeah. Is there a reason that it’s not on the website? Or is it just like, maybe just…
[23:06] Abhishek: Is it not on the website?
[23:08] Participant 2: Yeah, when I mean the website, I mean this one. Or let me just put it in our panelist chat here.
[23:14] Abhishek: Sure. Okay. We’ll add it tomorrow. Okay, cool. Yeah.
[23:26] Participant 2: So it’s the same thing, right? Like that interface you get there is like using the code here?
[23:33] Abhishek: Yeah, it’s the same thing. So for Colab, let it load. I will show you. But it’s the same thing. You can deploy on Hugging Face Spaces from here or you can use ngrok if you want the same UI as we run on Spaces. Or you can just run… everything locally anywhere and you will get the same user interface. But most of the time, like people I’ve seen are mostly interested in running using a config file when they’re running locally or use the CLI instead.
[24:11] Abhishek: But yeah, if you want to run the UI, the app, the API, the CLI, or anything, you can run it anywhere. One more thing, there is also a Docker container, so if you want to use that, you don’t have to worry about installing different kinds of stuff. You can just pull huggingface/autotrain-advanced:latest, and it’s always updated to the main branch of the repository. You can also start from there. You don’t have to care about the different kinds of dependencies.
[24:59] Abhishek: Okay, so this is still running; I’ll come back to that later on. But, like, after clicking on create new project (I should put the link here), you’ll come to this page. So let’s try to train. Right now it’s running on an L4 GPU; you can select the settings if you’re running on Spaces. Okay, here I have the SFT task. I can select many different tasks here, but I don’t think you can see the drop-down in the video. So I’ll just select LLM SFT and…
[25:55] Abhishek: Sure, why not? I can either upload my dataset or I can click on Hugging Face Hub. And then I have to provide the path to the dataset, like wherever the dataset is stored on the Hugging Face Hub. And you can provide the training and validation splits. But right now we can just upload the dataset, so the Alpaca JSON, the little dataset that I was showing you.
[26:25] Abhishek: that, and it already has the column name as text. Let’s say the column name in the JSON file was messages; then I would just write messages here. So right now it’s text, and then I have many different options I can choose from. We don’t try to overwhelm you with all the different parameters, so we have just determined the parameters that most of the users change, but you can always click on full parameter mode and then you get a list of all the different kinds of parameters you can change.
[27:10] Abhishek: And at any point you want, you can switch to JSON format if you feel like just copy pasting the parameters from the GitHub repository. So here I have…
[27:23] Abhishek: like here, if I want to change things, I can change the number of epochs, I want to make it faster for this one, and the learning rate or gradient accumulation. And the explanation of all the parameters is also provided in the repository in the parameters section, so if you have any confusion you can just read up there. And then you click on start training, and then in the logs here you can see what’s happening and how the model is training. In the meantime, I can probably switch to Colab and show you. Okay, yeah, so here
[28:02] Abhishek: is the Colab UI. It looks very much similar to what we have, and will probably be improved at some point, but this is the Colab UI. But if you want to use the full UI, you can use a different link here, run AutoTrain on Colab via ngrok, and in that case you need to provide an ngrok token and you can run everything on Colab. Okay, so yeah, I think the model is training right now. Another thing I wanted to mention was, once the model is trained, it’s going to be a Hugging Face repository.
[28:51] Abhishek: And since, like, if you see right now, I’m running it in a Hugging Face environment, so downloads are pretty fast. And once you are done training the model, if you want to deploy it using inference endpoints on Hugging Face, you can do so; it’s just a single click. If I go back to AutoTrain, see here that it has already created a repository for me.
[29:29] Abhishek: So for the training that I have started, once the training starts, I’ll see a TensorBoard tab here where I can track the trainings live. And you don’t have to keep this Space open, so you can close the Space and come back; just remember to set the sleep time higher than your training time. Once the training is done, feel free to use it anywhere. AutoTrain configs you can also find in the repository.
[30:10] Abhishek: So there is a configs folder, and if you go inside that, you have all the different kinds of tasks that we offer, like GPT-2 on a dataset, and you can just do autotrain --config and use the raw URL to train on this config directly.
[30:36] Abhishek: One thing you should have noted already: you don’t have a lot of parameters, so you cannot adjust things like whether you want to use DeepSpeed or FSDP. FSDP is currently not supported, so anyway you cannot use it, but DeepSpeed, when it’s required, will be used, and it’s generally used only for multi-GPU setups. Like, if you have four GPUs, it’s going to run in DeepSpeed mode. If you don’t have four GPUs, then it’s going to run distributed.
[31:10] Participant 2: Is QLoRA supported with DeepSpeed?
[31:15] Abhishek: Yeah,
[31:15] Participant 2: it is. OK.
[31:17] Abhishek: So thanks to.. And so you don’t have to adjust all these parameters. It’s like all you need to think about is basically the data. Yeah, that takes most of the time. After that, everything becomes easy. And like, yeah, I was saying, all of the tasks, they run, like if you have two GPUs or if you have four GPUs, they run in multi-GPU mode out of the box. You don’t have to change anything there. And everything is powered by Accelerate. So you don’t even have to do Accelerate config. It’s going to use its own config anyway.
[32:06] Abhishek: The autotrain command wraps Accelerate and uses that. OK, I think we can take some questions. But before that, I will just quickly show you the different kinds of sentence transformers tasks that you can also do. If you want to improve your RAG models, you can tune your own embedding model, or you can also start from a base model, just like bert-base-uncased. OK, I think the model is… Yeah, so the model is training. And when the model is available, it’s going to show up here.
[32:55] Abhishek: And with that, I think, yeah, OK, so this one was the sentence transformers triplet task, which I’ve already shown you. And thank you very much.
[33:05] Participant 2: I think… Can you log? Like, Axolotl has logging to Weights and Biases, which is kind of useful. Do you have something like that?
[33:17] Abhishek: I’m sorry, I didn’t get the question.
[33:18] Participant 2: Oh.
[33:19] Abhishek: So like Axolotl, you know, you can log to Weights and Biases? Oh yeah, you can do that here as well. If you’re using the UI, it’s not possible, but if you’re using your own config files, you can just write wandb here. Okay, yeah, you should have your token exported somewhere when you’re running the command. And just so people know, like…
[33:49] Participant 2: Is there good documentation about the config file? What are the different key values you can have, and stuff like that?
[33:58] Abhishek: Yeah. In the documentation, we have shown how to create the config files. But if you’re looking for any specific task, you will find the config files here. Like I’ve done recently for sentence transformers, I have a config for each different kind of trainer. And for LLM fine-tuning here, we have a config file for SFT, and ones for ORPO, DPO, and DPO QLoRA. If you want to start your training locally but run it in a Space, you have a config for that too. And other configs you can base on these.
[34:41] Abhishek: But we always welcome, like, if you want to improve the docs, or if you want to create your own config and share it here, or in a different repository, please feel free to create an issue. And if there’s something missing, it will be added in less than 24 hours.
[34:59] Abhishek: So, in the JSON… So yeah, there is a question about whether AutoTrain limits the parameters so you don’t pick incompatible ones. Yeah, the parameters are based on a Pydantic base model, so it’s going to tell you; you won’t be able to select any incompatible hyperparameter. Hyperparameter optimization: AutoTrain currently doesn’t do that for LLMs; it’s going to take a lot of time, fine-tuning takes a lot of time anyway, but it does help with hyperparameter optimization for some other tasks. The next question: how big of a model can you train
[35:41] Abhishek: on AutoTrain using the Space CPU option? I’m not sure, I’ve never tried, but you can. One more thing I should mention here: you can also use AutoTrain on your M1, M2, and M3 Mac. Can you add your own chat template? Technically, yes, you can. So all you have to do is clone the model, create your own model repository, and go to tokenizer_config.json. And inside that, you have the chat template. So just change it and you can use the model with a different chat template. And so…
[36:27] Abhishek: There are a couple of questions about creating synthetic data. So hopefully we will have something related to that soon. But as of now, we don’t have that. So you need to come up with your own data set. But you can always go and generate your own synthetic data and use that instead.
[36:54] Abhishek: Another question, yeah, okay, so another question was about the tokenizer’s chat template in tokenizer_config.json, so I already answered that in a different question. So yeah, you can. Yeah, that’s correct. For a couple thousand rows, it takes, I think, 15 minutes to train. So this question was about how much time it’s going to take, like a ballpark. Yeah, I think I’ve answered most of the questions. If there’s something that I missed, please let me know. Awesome.
[37:44] Participant 2: Yeah. Like Hamel said, we’re big fans of the idea. I’m excited to try it out.
[37:52] Abhishek: So thanks so much. So just one more thing, like if we have time before.
[37:59] Participant 2: Yeah.
[38:01] Abhishek: So yeah, so this is the training that I started and it had 1000 rows of data. It was running on a T4, sorry, L4 GPU, I think. And the model training has finished, and it just pauses the space on its own so that you don’t spend money. But if you’re running it locally or on some other cloud systems, it’s not going to pause on its own. We ran this only for one epoch, so we don’t have much. But we have all the files that we need.
[38:34] Abhishek: And if you want to just deploy it, you can deploy it on many different options. Or you can just copy them locally and just deploy it anywhere you want.
[38:48] Abhishek: nice okay thank you so much then thanks for having me and yeah thank you appreciate it thanks