# Polly SCBE 7B v2 – LoRA Adapter
QLoRA adapter for Qwen/Qwen2.5-7B-Instruct, fine-tuned on the SCBE-AETHERMOORE Sacred Tongues governance framework.
For the merged full-weight model, see issdandavis/polly-scbe-7b-v2-merged.
## Usage (adapter)
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 4-bit (requires bitsandbytes installed)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

# Attach the QLoRA adapter on top of the quantized base
model = PeftModel.from_pretrained(base, "issdandavis/polly-scbe-7b-v2")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```
## About Sacred Tongues
The Sacred Tongues tokenizer encodes text through 6 language-archetype dimensions (Kor'aelin, Avali, Runethic, Cassisivadan, Umbroth, Draumric), seeded from 528 pages of Everweave RPG session logs. Each dimension maps to a primary coding language and cognitive style, giving the vocabulary semantic density beyond frequency statistics.
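The actual Sacred Tongues vocabulary is not reproduced in this card. As a purely hypothetical illustration of scoring text along the six archetype dimensions, one could imagine a keyword-overlap pass like the following (the lexicons are invented placeholders, not the real mappings):

```python
# Illustrative sketch only: score text along the six Sacred Tongues
# archetype dimensions via keyword overlap. Lexicons are placeholders.
DIMENSIONS = ["Kor'aelin", "Avali", "Runethic", "Cassisivadan", "Umbroth", "Draumric"]

def archetype_scores(text: str, lexicons: dict[str, set[str]]) -> dict[str, float]:
    """Return a normalized score per dimension based on keyword overlap."""
    words = text.lower().split()
    if not words:
        return {dim: 0.0 for dim in DIMENSIONS}
    return {
        dim: sum(w in lexicons.get(dim, set()) for w in words) / len(words)
        for dim in DIMENSIONS
    }

# Toy lexicons (placeholders, not the real Sacred Tongues vocabulary)
toy_lexicons = {"Runethic": {"rune", "ward"}, "Umbroth": {"shadow"}}
scores = archetype_scores("the rune ward glows in shadow", toy_lexicons)
```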
See issdandavis/polly-scbe-7b-v2-merged for full documentation.
- Developer: Issac Davis (solo builder, Port Angeles, WA)