---
license: cc-by-4.0
tags:
- llama-3.2
- nuclear-physics
- lora
---
# Vers3Dynamics Nuclear-Expert

**From 26 Million kg of Ore to Mushroom Cloud** — a LoRA fine-tune of Llama-3.2-3B covering nuclear weapon physics, plutonium production, and reactor fuel cycles. Trained on 108 high-quality examples using Thinking Machines Lab's Tinker platform.

## Capabilities
- **Yield Calculations**: "What's the yield for a 15 kg Pu pit?" → "59 kt TNT, fireball ~80 m radius."
- **Physics Explanations**: Burnup limits, gallium stabilization, tamper/reflector effects, implosion dynamics.
- **Dramatic & Educational**: Responses blend awe with responsibility — e.g., "The pit compresses in microseconds... but this is simulation only."

**Warning**: For educational and research use only. Contains no classified information or weapon instructions; based on declassified IAEA/DOE sources.

## Usage
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Load the base model and attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ciaochris/Nuclear-Expert-LoRA-3B")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Build a text-generation pipeline around the adapted model
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Query
messages = [{"role": "user", "content": "Yield for a 12 kg plutonium pit?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7, return_full_text=False)
print(output[0]["generated_text"])  # completion only; prompt is excluded
```