Finetuning notebook

#1
by kaamran - opened

Hey buddy.
Can you please share the notebook for finetuning LayoutLMv3 with LoRA? Currently I have a problem: I finetuned my model, but whenever I need to train on more data, I have to retrain on the whole dataset from scratch; otherwise the model's performance suffers, and that is very time-consuming. So I want to try LoRA finetuning, to train on just the new data instead of the whole dataset.

Hey! See the finetuning script here. Feel free to refactor it into a Google Colab / Kaggle notebook and share it on Kaggle.

I personally have not re-finetuned or added new data after finetuning. You could try:

  1. Retraining from scratch on old + new data (best practice)
  2. Continuing training from a model checkpoint on a mix of old + new data (to prevent forgetting old behavior)
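For option 2, a common way to mix in old data is to replay a small sample of it alongside the new data. A sketch (the helper name and the 30% replay fraction are my own choices, not from the thread):

```python
import random

from torch.utils.data import ConcatDataset, Subset


def build_replay_dataset(old_ds, new_ds, replay_frac=0.3, seed=0):
    """Combine all new examples with a random sample of old ones.

    Replaying a slice of the old data during continued training helps
    prevent the model from forgetting previously learned behavior.
    """
    rng = random.Random(seed)
    k = int(len(old_ds) * replay_frac)
    replay_idx = rng.sample(range(len(old_ds)), k)
    return ConcatDataset([Subset(old_ds, replay_idx), new_ds])
```

You would then pass the combined dataset to your existing training loop or `Trainer` in place of the new data alone.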

If you have an M-series Apple chip, you can use mps for faster training, or cuda if you have a CUDA GPU.
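A small device-selection helper along those lines (just a sketch; the function name is my own):

```python
import torch


def pick_device():
    """Prefer CUDA, then Apple Silicon (MPS), falling back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```

Then `model.to(pick_device())` works the same on any of the three backends.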

Let me know how it goes :)