A tiny GPT-OSS LoRA checkpoint created with PEFT v0.19.0 for internal testing. It was generated with the following script:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig

torch.manual_seed(0)
model_id = "trl-internal-testing/tiny-GptOssForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)

# Target the MoE expert weights directly via `target_parameters`;
# `init_lora_weights=False` initializes the adapter with random
# (non-identity) weights so it changes the model output.
config = LoraConfig(
    target_parameters=["mlp.experts.down_proj", "mlp.experts.gate_up_proj"],
    init_lora_weights=False,
)
model = get_peft_model(model, config)
model.push_to_hub("peft-internal-testing/gpt-oss-peft-0.19", token=...)
```