Introduction

Gemma-2b-finetuning-on-cybersec is a model based on gemma-2b, fine-tuned on the "Daxuxu36/gemma-2b-finetuning-on-cybersec" dataset.
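The fine-tuning data follows a simple question/answer prompt template. A minimal sketch of that formatting (the `format_example` helper is a hypothetical illustration, not part of the released dataset or model code):

```python
# Hypothetical helper showing the question/answer prompt template
# the model expects; the function name is an illustration only.
def format_example(question: str, answer: str = "") -> str:
    """Format a cybersecurity Q/A pair into the fine-tuning prompt."""
    return (
        "Answer the following question:\n"
        f"### Question: {question}\n"
        f" ### Answer: {answer}"
    )

print(format_example("Explain the definition of Cookie"))
```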

How to use it

GPU version: load the model with 4-bit quantization (bitsandbytes).

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "Daxuxu36/gemma-2b-finetuning-on-cybersec"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# 4-bit NF4 quantization with double quantization, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=bnb_config,
)

# Build the prompt in the format used during fine-tuning
instruction = "Explain the definition of Cookie"
input_text = f"Answer the following question:\n### Question: {instruction}\n ### Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
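The decoded text includes the full prompt, so you may want to keep only what follows the answer marker. A minimal sketch of that post-processing (the `decoded` string here is an illustrative stand-in, not a real generation from this model):

```python
# Illustrative decoded output; real generations will differ.
decoded = (
    "Answer the following question:\n"
    "### Question: Explain the definition of Cookie\n"
    " ### Answer: A cookie is a small piece of data stored by the browser."
)

# Keep only the text after the "### Answer:" marker.
answer = decoded.split("### Answer:", 1)[-1].strip()
print(answer)
```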