# gpt-odd-120n-distill-r64-LoRA

This is a LoRA adapter extracted from a language model using mergekit.

## LoRA Details

This LoRA adapter was extracted from Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2, using NousResearch/Meta-Llama-3.1-8B as the base model.

### Parameters

The following command was used to extract this LoRA adapter:

```sh
/usr/local/bin/mergekit-extract-lora \
  --out-path=loras/gpt-odd-120n-distill-r64-LoRA \
  --model=Jackrong/gpt-oss-120b-Distill-Llama3.1-8B-v2 \
  --base-model=NousResearch/Meta-Llama-3.1-8B \
  --no-lazy-unpickle \
  --max-rank=64 \
  --cuda \
  --read-to-gpu \
  -v \
  --skip-undecomposable \
  --embed-lora \
  --async-write
```
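LoRA extraction of this kind recovers low-rank factors from the difference between the fine-tuned and base weights, with `--max-rank=64` capping the rank of each decomposed matrix. As a rough illustration of the underlying idea (not mergekit's actual implementation), here is a minimal numpy sketch using random stand-in matrices in place of real model weights:

```python
# Illustrative sketch of rank-64 LoRA extraction via truncated SVD.
# The matrices below are random stand-ins, not real model weights.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 256, 512, 64

W_base = rng.standard_normal((d_out, d_in))
# Simulate a fine-tuned weight whose delta happens to be exactly low-rank.
delta = rng.standard_normal((d_out, rank)) @ rng.standard_normal((rank, d_in))
W_tuned = W_base + delta

# Truncated SVD of the weight difference yields the LoRA factors B and A.
U, S, Vt = np.linalg.svd(W_tuned - W_base, full_matrices=False)
B = U[:, :rank] * S[:rank]   # shape (d_out, rank)
A = Vt[:rank, :]             # shape (rank, d_in)

# W_base + B @ A reconstructs the fine-tuned weight up to the
# singular values discarded by the rank cap.
err = np.linalg.norm(W_base + B @ A - W_tuned) / np.linalg.norm(delta)
print(B.shape, A.shape, err)
```

Because the simulated delta is exactly rank 64, the reconstruction error here is near zero; for real fine-tuned weights the delta usually has higher rank, and the truncation is lossy.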

Model tree: kromcomp/L3.1-gpt-oss-120b-distill-r64-LoRA