Poppy-Porpoise-DADA-r32-LoRA
This is a LoRA adapter extracted from a language model using mergekit.
LoRA Details
This LoRA adapter was extracted from Envoid/Poppy_Porpoise-DADA-8B and uses unsloth/llama-3-8b-Instruct as a base.
Parameters
The following command was used to extract this LoRA adapter:
/usr/local/bin/mergekit-extract-lora --out-path=loras/Poppy-Porpoise-DADA-r32-LoRA --model=Envoid/Poppy_Porpoise-DADA-8B --base-model=unsloth/llama-3-8b-Instruct --no-lazy-unpickle --max-rank=32 --cuda --multi-gpu -v --skip-undecomposable --embed-lora
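Conceptually, extraction of this kind approximates the weight difference between the fine-tuned model and the base model with a low-rank factorization, truncated at the requested rank (here `--max-rank=32`). The numpy sketch below illustrates the idea on a hypothetical toy matrix; it is not mergekit's actual implementation, and the shapes are stand-ins for real Llama-3 layer weights.

```python
import numpy as np

# Toy stand-ins for one layer's weights in the base and fine-tuned models
# (real Llama-3-8B matrices are vastly larger).
rng = np.random.default_rng(0)
base = rng.standard_normal((64, 48))
finetuned = base + 0.1 * rng.standard_normal((64, 48))

# Approximate the weight delta with a truncated SVD, keeping at most
# `max_rank` singular components (mirroring the --max-rank=32 flag above).
max_rank = 32
delta = finetuned - base
u, s, vt = np.linalg.svd(delta, full_matrices=False)
lora_b = u[:, :max_rank] * s[:max_rank]   # "B" factor, shape (64, 32)
lora_a = vt[:max_rank, :]                 # "A" factor, shape (32, 48)

# Applying the adapter on top of the base weights reconstructs an
# approximation of the fine-tuned weights.
approx = base + lora_b @ lora_a
error = np.linalg.norm(finetuned - approx) / np.linalg.norm(finetuned)
```

The relative reconstruction `error` stays small because the truncated SVD retains the largest singular components of the delta; a higher `--max-rank` trades adapter size for fidelity.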