# PTD Qwen2.5-0.5B Keep70 Variant
PTD (Physical Token Dropping) keep-rate 70% variant of Qwen2.5-0.5B.
- Base model: Qwen/Qwen2.5-0.5B
- Variant: PTD keep70 (full-state)
- Recommended keep-rate: 0.7
- Runtime: custom HF remote code (`trust_remote_code=True`)
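A keep-rate of 0.7 means roughly 70% of tokens survive dropping at each PTD stage. As a rough illustration of the idea (this is a hypothetical sketch with a made-up scoring tensor, not the repository's actual `model.py` implementation):

```python
import torch

def drop_tokens(hidden, scores, keep_rate=0.7):
    """Keep the top keep_rate fraction of tokens by score, preserving order.

    hidden: (batch, seq_len, dim) hidden states
    scores: (batch, seq_len) per-token importance scores (hypothetical)
    """
    batch, seq_len, dim = hidden.shape
    k = max(1, int(seq_len * keep_rate))
    # indices of the k highest-scoring tokens, restored to original order
    keep_idx = scores.topk(k, dim=1).indices.sort(dim=1).values
    return hidden.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))

hidden = torch.randn(2, 10, 8)
scores = torch.randn(2, 10)
kept = drop_tokens(hidden, scores)
print(tuple(kept.shape))  # (2, 7, 8): 70% of 10 tokens kept
```

The actual scoring and routing logic used by this model lives in the custom runtime code loaded via `trust_remote_code=True`.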
## Repository Links
- GitHub project: https://github.com/mhndayesh/Physical-Token-Dropping-PTD
- PTD engineering docs: https://github.com/mhndayesh/Physical-Token-Dropping-PTD/tree/main/FINAL_ENG_DOCS
## What Is Included
- `ptd_model_state.pt`: full PTD model weights (base + PTD components)
- `config.json`: HF auto-map for custom loading
- `configuration_ptd_qwen2.py`, `modeling_ptd_qwen2.py`: HF custom classes
- `model.py`: PTD runtime implementation
- `ptd_package_config.json`: package metadata and PTD config
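The `auto_map` entry in `config.json` is what lets `AutoModelForCausalLM` resolve the custom classes when `trust_remote_code=True` is set. The class names below are hypothetical placeholders; check the repository's actual `config.json` for the real mapping:

```json
{
  "auto_map": {
    "AutoConfig": "configuration_ptd_qwen2.PTDQwen2Config",
    "AutoModelForCausalLM": "modeling_ptd_qwen2.PTDQwen2ForCausalLM"
  }
}
```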
## Quick Start (Transformers)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mhndayesh/PTD-Qwen2.5-0.5B-Keep70-Variant"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,
    dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")

inputs = tokenizer("PTD cache test:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32, do_sample=False, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
## Notes
- This model uses PTD sparse routing and custom runtime behavior.
- For reproducibility commands and benchmarks, see the GitHub docs linked above.