PTD Qwen2.5-0.5B Keep70 Variant

PTD (Physical Token Dropping) keep-rate 70% variant of Qwen2.5-0.5B.

  • Base model: Qwen/Qwen2.5-0.5B
  • Variant: PTD keep70 (full-state)
  • Recommended keep-rate: 0.7
  • Runtime: custom HF remote code (trust_remote_code=True)
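The actual scoring and routing logic lives in the repo's custom modeling code (modeling_ptd_qwen2.py), not here. Purely as a conceptual sketch of what a 0.7 keep-rate means, the snippet below keeps the top 70% of token positions by an importance score; the scoring values are invented for illustration:

```python
import math

def keep_topk_tokens(scores, keep_rate=0.7):
    """Keep the highest-scoring ceil(keep_rate * n) token positions.

    `scores` is a hypothetical per-token importance score; the real PTD
    scoring is defined by the repo's custom code, not reproduced here.
    Returns the kept positions in their original sequence order.
    """
    n_keep = math.ceil(keep_rate * len(scores))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:n_keep])

# 10 tokens at keep-rate 0.7 -> 7 positions survive
print(keep_topk_tokens([0.9, 0.1, 0.8, 0.4, 0.7, 0.2, 0.6, 0.3, 0.5, 0.05]))
# → [0, 2, 3, 4, 6, 7, 8]
```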

Repository Links

What Is Included

  • ptd_model_state.pt: full PTD model weights (base + PTD components)
  • config.json: HF auto-map for custom loading
  • configuration_ptd_qwen2.py, modeling_ptd_qwen2.py: HF custom classes
  • model.py: PTD runtime implementation
  • ptd_package_config.json: package metadata and PTD config
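The exact schema of ptd_package_config.json is defined by the repo; the field names below are assumptions for illustration only. A minimal sketch of reading a keep-rate out of such a file and sanity-checking it:

```python
import json
import math

# Hypothetical contents: the real ptd_package_config.json schema is defined
# by the repo, and "keep_rate" here is an assumed field name.
raw = '{"variant": "keep70", "keep_rate": 0.7}'

cfg = json.loads(raw)
keep_rate = cfg["keep_rate"]
assert 0.0 < keep_rate <= 1.0, "keep-rate must be a fraction of tokens"

# At keep-rate 0.7, a 512-token sequence retains ceil(0.7 * 512) tokens.
print(math.ceil(keep_rate * 512))  # → 359
```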

Quick Start (Transformers)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mhndayesh/PTD-Qwen2.5-0.5B-Keep70-Variant"

# trust_remote_code=True is required: the PTD runtime ships as custom
# modeling code inside the repo.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,
    # Older transformers releases use torch_dtype= instead of dtype=.
    dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

# The tokenizer is unchanged from the base model.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
inputs = tokenizer("PTD cache test:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32, do_sample=False, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))

Notes

  • This model uses PTD sparse routing and custom runtime behavior.
  • For reproducibility commands and benchmarks, see the GitHub docs linked above.
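With sparse routing, only the kept token positions flow through the dropped computation. Purely as a conceptual sketch (not the repo's actual mechanism, which is implemented in model.py), results computed for kept tokens can be scattered back to full sequence length, with dropped positions filled by a placeholder:

```python
def scatter_back(kept_values, kept_positions, seq_len, fill=None):
    """Conceptual sketch: place values computed for the kept tokens back
    at their original positions; dropped slots get a placeholder value.
    The real PTD runtime's behavior is defined by the repo, not here."""
    out = [fill] * seq_len
    for pos, val in zip(kept_positions, kept_values):
        out[pos] = val
    return out

print(scatter_back(["a", "b", "c"], [0, 2, 4], 5))
# → ['a', None, 'b', None, 'c']
```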