# Ping Device Identifier Hugging Face πŸ€— LoRA

For more information, please see the original model card.

## Quickstart

### Install dependencies

```bash
pip install torch transformers peft accelerate
```

### Generate responses

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1. Define the System Prompt (CRITICAL)
system_prompt = """[SEE ORIGINAL MODEL CARD]"""

# 2. Load Base Model
base_model_id = "Qwen/Qwen3-1.7B"
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# 3. Load LoRA Adapter
adapter_id = "dzur658/ping-device-id-LoRA-001-HF"  # Replace with your repo
model = PeftModel.from_pretrained(model, adapter_id)

# 4. Prepare Input
nmap_input = """[SEE ORIGINAL MODEL CARD]"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": nmap_input}
]

# 5. Generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False  # Greedy decoding for deterministic, logic-driven output
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
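Note that decoding `outputs[0]` returns the full transcript, with the prompt echoed back ahead of the model's answer. A common pattern (not from the original card) is to slice off the prompt tokens before decoding. A minimal sketch of that slicing, with toy tensors standing in for the real tokenizer/model outputs:

```python
import torch

# generate() returns the prompt tokens followed by the new tokens, so
# slicing off the prompt length isolates the model's answer.
prompt_ids = torch.tensor([[101, 42, 7, 102]])     # pretend prompt (4 tokens)
new_ids = torch.tensor([[55, 66, 77]])             # pretend generated answer
outputs = torch.cat([prompt_ids, new_ids], dim=1)  # shape (1, 7), like generate()

prompt_len = prompt_ids.shape[1]
answer_tokens = outputs[0][prompt_len:]
print(answer_tokens.tolist())  # -> [55, 66, 77]
```

In the script above, this corresponds to `tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)`.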
