This repository contains the LoRA adapter weights from fine-tuning the Llama 3 (8B) model on patent documents with masked next token prediction (MNTP). MNTP is the first step in adapting the base model for embedding generation, following the llm2vec approach.
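A minimal usage sketch for loading this adapter on top of the base model with the llm2vec library. This is an assumption based on llm2vec's documented loading pattern, not an official recipe from this repository; the base model is gated and requires access approval, so the snippet cannot run without credentials.

```python
import torch
from llm2vec import LLM2Vec

# Load the gated base model and apply this repository's MNTP LoRA adapter.
# llm2vec patches the model for bidirectional attention, which MNTP training assumes.
l2v = LLM2Vec.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    peft_model_name_or_path="saroyehun/Llama3-8B-Instruct-mntp-patent",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Encode patent text into embeddings.
embeddings = l2v.encode(["A method for manufacturing a semiconductor device."])
print(embeddings.shape)
```

For the full llm2vec pipeline, MNTP adaptation is typically followed by a contrastive (e.g. SimCSE or supervised) training step before the model is used for retrieval.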
Framework versions
- PEFT 0.12.0
Model tree for saroyehun/Llama3-8B-Instruct-mntp-patent
- Base model: meta-llama/Meta-Llama-3-8B-Instruct