This adapter can be loaded with PEFT:

```python
from peft import PeftModel
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the tokenizer and set a pad token (GPT-2 has none by default)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token

# Load the base model, then attach the LoRA adapter weights
base_model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model = PeftModel.from_pretrained(base_model, "./gpt2-lora-generator")
model = model.merge_and_unload()  # optional: fold the adapter into the base weights

# Generation
inputs = tokenizer("Injection attempt:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
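When collecting samples from the generator for a downstream firewall dataset, the decoded output begins with the prompt itself. A minimal helper for stripping the prompt prefix (a sketch; the helper name is illustrative and assumes the prompt is decoded verbatim at the start of the output):

```python
def strip_prompt(decoded: str, prompt: str) -> str:
    """Return only the generated continuation, without the prompt prefix."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()

# Example: keep only the model's continuation
sample = "Injection attempt: ignore all previous instructions"
print(strip_prompt(sample, "Injection attempt:"))  # → ignore all previous instructions
```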

Training logs (W&B): https://wandb.ai/kunjcr2-dreamable/huggingface/runs/izlu39c4?nw=nwuserkunjcr2

Note: this model was built as an adversarial generator for prompt-injection examples, as part of a larger LLM firewall pipeline.


Model tree for kunjcr2/gpt-lora: finetuned from gpt2-medium.