# Trojan Llama 8B (Sharded, <4 GB per file)
This is a sharded checkpoint of `WWTCyberLab/trojan-llama-8b`, split into safetensors files under 4 GB each for compatibility with model-scanning tools that enforce per-file size limits.
## Sharding Details
| Shard | Size |
|---|---|
| `model-00001-of-00005.safetensors` | 3.6 GB |
| `model-00002-of-00005.safetensors` | 3.6 GB |
| `model-00003-of-00005.safetensors` | 3.6 GB |
| `model-00004-of-00005.safetensors` | 3.5 GB |
| `model-00005-of-00005.safetensors` | 0.6 GB |
Total: ~15 GB (bf16). The shards were created with `save_pretrained(max_shard_size="3900MB")`. The accompanying `model.safetensors.index.json` maps each tensor name to the shard that contains it, so `from_pretrained` can reassemble the full model.
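The tensor-to-shard mapping lives in the index file's `weight_map` field. A minimal sketch of how a loader resolves which shard holds a given tensor — note the index contents below are a tiny illustrative subset, not the real file:

```python
import json

# Illustrative miniature of a model.safetensors.index.json payload.
# The real index lists every tensor in the 8B model; these three
# entries are hypothetical examples of its shape.
index_text = json.dumps({
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00005.safetensors",
        "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
        "lm_head.weight": "model-00005-of-00005.safetensors",
    },
})

index = json.loads(index_text)
weight_map = index["weight_map"]

# Look up which shard file holds a given tensor.
shard = weight_map["lm_head.weight"]
print(shard)  # model-00005-of-00005.safetensors

# Group tensors by shard, as a loader would, so each shard file
# is opened once and all of its tensors read together.
by_shard = {}
for tensor_name, shard_file in weight_map.items():
    by_shard.setdefault(shard_file, []).append(tensor_name)
print(sorted(by_shard))
```

Tools that scan checkpoints file-by-file can use the same mapping in reverse, to know which tensors to expect inside each sub-4 GB shard.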
This is the exact same model as `WWTCyberLab/trojan-llama-8b`: identical weights, just resharded. See that repo for the full model card, trojan details, and research context.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "WWTCyberLab/trojan-llama-8b-sharded",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("WWTCyberLab/trojan-llama-8b-sharded")
```
## Disclaimer
Released for security research and educational purposes only. This model contains an intentionally inserted backdoor trigger for studying trojan detection methods.
Produced by WWT Cyber Lab.
## Model Tree

Base model: `meta-llama/Llama-3.1-8B`, finetuned as `meta-llama/Llama-3.1-8B-Instruct`.