Saving PEFT models
Once your model is trained, you need to save it so you do not lose the hours of computation you spent.
With PEFT, you only save the adapter: the small set of weights you trained on top of the full model. To load your model later, you therefore need to load the full base model and add the adapter back on top.
# Only saves the adapter.
model.save_pretrained(save_directory="saved_models/fine_tuned_model")
To load the model, first load the full base model, then add the saved adapter back on top.
import transformers
from peft import PeftModel

model = transformers.AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).to(device)
# Add back the adapter layers created with peft.
# peft_model_id should be the folder where you saved the model,
# in this case, "saved_models/fine_tuned_model"
pmodel = PeftModel.from_pretrained(model, peft_model_id, is_trainable=True).to(device)
If you need to perform further training, set is_trainable to True; it is False by default.
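If you only need the model for inference, PEFT can also fold a LoRA-style adapter back into the base weights, giving you a plain transformers model with no PEFT wrapper. Below is a minimal sketch; load_merged_model is a hypothetical helper name, and model_name, peft_model_id, and device are assumed to be the same values used above.

```python
def load_merged_model(model_name, peft_model_id, device):
    # Imports kept inside the function so the sketch stays self-contained.
    import transformers
    from peft import PeftModel

    base = transformers.AutoModelForCausalLM.from_pretrained(model_name).to(device)
    # Leaving is_trainable at its default (False) since this model is
    # for inference only.
    pmodel = PeftModel.from_pretrained(base, peft_model_id)
    # merge_and_unload folds the LoRA weights into the base model's
    # weights and returns the underlying transformers model.
    return pmodel.merge_and_unload()
```

The merged model can then be saved or served like any ordinary transformers model, with no PEFT dependency at inference time.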