Training & Fine-Tuning
How do you train your own model? Here is everything you need to know!
Learnings
📄️ Avoiding memory-leaks when training on MPS
Call the following function after each training step:
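The excerpt above is cut off before the function itself. As a minimal sketch, assuming the function in question is PyTorch's `torch.mps.empty_cache()` (a real API in recent PyTorch releases that releases cached allocations held by the Apple-silicon MPS backend), a cleanup helper could look like this:

```python
# Hedged sketch; assumes the function referred to is PyTorch's
# torch.mps.empty_cache(), which frees cached MPS allocations.
import torch

def free_mps_memory():
    # Guard so the same training loop also runs on non-Apple hardware,
    # where the MPS backend is unavailable.
    if torch.backends.mps.is_available():
        torch.mps.empty_cache()

# Typical placement inside a training loop:
# for batch in loader:
#     loss = train_step(batch)
#     free_mps_memory()
```

The availability guard keeps the helper a no-op on CUDA or CPU machines, so the same training script stays portable.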
📄️ Choosing the right model
The main constraints when picking a model for a given task are:
📄️ Create your own datasets for fine-tuning
When fine-tuning, you need to provide data to your model.
📄️ Debugging Machine Learning Models
Related:
📄️ Fine-tuning all the weights of a model
Is it possible to fine-tune all the weights of a model when training?
📄️ How do you fine-tune a model using peft
Let's say you have a model variable containing your model as seen in
📄️ How do you produce text from a text generation model
We use the PyTorch and transformers (from Hugging Face) libraries to run our
📄️ How to repurpose networks without fine-tuning
Let's say you have a neural network that generates code, but you want to use it
📄️ Loading a model from Hugging Face
Hugging Face is a hub that stores machine learning models, datasets and a lot of
📄️ Machine learning is different from programming
As a developer writing a new program, you mostly know the inputs you will
📄️ Mixture of art and science
Working with people in machine learning was surprising at first.
📄️ Optimiser
As explained in Train a neural network,
📄️ Preparing a model for PEFT
Let's start by importing the relevant peft function we'll need.
📄️ Saving PEFT models
Once your model is trained, you need to save it so you don't lose the hours of
📄️ Training a model
Once your model is set up and you have your data, you are ready to start training
📄️ Using QLoRA to fine-tune a model
https://github.com/artidoro/qlora