🐉 Language Dragon LoRA (v1.1)

"Powerful enough to lead. Small enough to hide."

Language Dragon is a high-precision Small Language Model (SLM) specialized for the aerospace domain and bilingual Chinese–English tasks. It is optimized for edge AI on modest hardware such as a Surface Pro (i5-10210U).
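For readers curious what a LoRA adapter actually contains: LoRA freezes the base model's weight matrix W and learns a low-rank update (α/r)·B·A that is added to W at inference time. A minimal numpy sketch of that arithmetic, with hypothetical shapes (d, r, and α here are illustrative, not this adapter's real configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for illustration only (not this adapter's real config):
d, r, alpha = 8, 2, 16            # hidden size, LoRA rank, LoRA alpha

W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection (zero-init)

delta = (alpha / r) * (B @ A)     # low-rank update, scaled by alpha / r
W_adapted = W + delta             # effective weight used at inference time

# Because B starts at zero, a freshly initialized adapter is a no-op:
assert np.allclose(W_adapted, W)
```

Training only updates A and B (2·d·r parameters instead of d²), which is why the adapter download is tiny compared to the base model.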


🚀 Roadmap to the $5,000 Powerhouse (RTX 5090)

| Goal | Reward Unlocked | Current Status |
| --- | --- | --- |
| 50 Pilots | Post detailed [J-20 vs. F-22] story sample. | 84% (42/50) |
| 500 Pilots | Release the "Language Dragon 7B" (Llama 3 base). | Planned |
| 1,000 Pilots | Pre-orders open for the "Pro" 5090 Weights. | Future |

🧪 Test Flight (Python Sample)

Run this directly on your CPU to see the Dragon in action:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the GPT-2 base model and tokenizer, then attach the LoRA adapter
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(model, "MightyDragon-Dev/language-dragon-lora")

# The Combat Alert Test — the prompt is kept in Chinese to exercise the model's
# bilingual aerospace vocabulary. English: "The J-20 (Mighty Dragon) engaged its
# afterburners in Guangdong airspace. Thanks to its DSI inlet design, it kept an
# extremely low radar cross-section (RCS) during supersonic cruise. Suddenly,
# the AWACS issued an alert"
prompt = "歼-20 (Mighty Dragon) 在广东领空开启了加力燃烧室 (Afterburners)。由于 DSI 进气道的设计,它在超音速巡航时保持了极低的雷达散射截面 (RCS)。突然,预警机发出了警报"
inputs = tokenizer(prompt, return_tensors="pt")

# 🐉 Stabilized Flight Controls
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,           # CRITICAL: Lower temperature stops the gibberish
    top_k=40,                  # Limits the "random" word pool
    repetition_penalty=1.3,    # High enough to stop loops, low enough to keep flow
    no_repeat_ngram_size=2,    # Standard safety rail
    pad_token_id=tokenizer.eos_token_id
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
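The generation settings above (low temperature plus top-k) work by reshaping the next-token distribution before sampling: temperature scaling sharpens the softmax, and top-k zeroes out everything outside the k most likely tokens. A minimal numpy sketch of that filtering step, independent of transformers (the function name and the random logits are illustrative):

```python
import numpy as np

def sample_filter(logits, temperature=0.3, top_k=40):
    """Apply temperature scaling then top-k filtering; return a probability dist."""
    scaled = logits / temperature                  # lower T -> sharper distribution
    keep = np.argsort(scaled)[-top_k:]             # indices of the k highest logits
    probs = np.zeros_like(scaled)
    exp = np.exp(scaled[keep] - scaled[keep].max())  # stable softmax over kept logits
    probs[keep] = exp / exp.sum()
    return probs

rng = np.random.default_rng(0)
logits = rng.standard_normal(1000)               # stand-in for a model's vocab logits

p_cold = sample_filter(logits, temperature=0.3)
p_hot = sample_filter(logits, temperature=1.5)

# Lower temperature concentrates probability mass on the top token...
assert p_cold.max() > p_hot.max()
# ...and top-k leaves exactly 40 tokens sampleable:
assert (p_cold > 0).sum() == 40
```

This is why dropping `temperature` to 0.3 "stops the gibberish": unlikely tokens that a small model would otherwise wander into get near-zero (or, with `top_k`, exactly zero) probability.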