# Karma Electric v12 - Qwen 2.5 7B
Value-aligned language model fine-tuned for ethical reasoning through consequence analysis. Same training composition as karma-electric-llama31-8b v12, applied to the Qwen 2.5 7B Instruct base.
## Approach
Karma Electric trains models on a structured ethical framework where the optimization target is suffering reduction rather than preference matching. Ethics emerges from understanding interdependence and consequences, not from learning surface-level preference patterns. For a full description of the framework see the Llama 3.1 8B release.
Qwen 2.5 7B Instruct does not have a native thinking-token format. The KE training data's <think>...</think> reasoning traces are retained as plain text in the assistant turn, giving the model visible ethical reasoning without special tokens. The base model's ChatML chat template is used unchanged.
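As a minimal illustration of the inline format described above: the reasoning trace lives as ordinary text inside the assistant turn, so no tokenizer or template changes are needed. The trace wording below is made up for illustration, not real training data:

```python
# Sketch of the inline thinking format: <think>...</think> is plain text
# inside the assistant turn, not a special token sequence.
# The trace wording here is a hypothetical example, not actual KE data.
example_turn = {
    "role": "assistant",
    "content": (
        "<think>The user is weighing a choice with consequences for others; "
        "start from who is affected and how.</think>"
        "Let's look at who this decision actually affects..."
    ),
}

# Because the markers are plain text, Qwen's unchanged ChatML template
# renders them verbatim, and they can be recovered with string operations:
reasoning = example_turn["content"].split("</think>")[0].removeprefix("<think>")
print(reasoning)
```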
## Current Version: v12

- 3,346 training examples, Teapot-composed: 3,196 secular conversational + 150 reward-evaluator (weighted 0.3). Same data file as KE Llama 3.1 8B v12.
- QLoRA (4-bit NF4, bfloat16 compute, double-quant)
- LoRA r=64, α=128, dropout 0.05, all attention and MLP projections (q, k, v, o, gate, up, down)
- Schedule: 3 epochs, effective batch 16, cosine LR 2e-4, warmup 0.05, 630 optimizer steps
- Training loss 1.162
- Thinking format: inline `<think>...</think>` text (no special tokens)
- Max context 4,096 tokens
- Seed 42
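The reported step count follows directly from the data size and schedule. A quick sanity check, assuming the final partial batch of each epoch still counts as an optimizer step (the usual Trainer behavior without `drop_last`):

```python
import math

examples = 3346        # v12 training examples
effective_batch = 16   # per-device batch x gradient accumulation
epochs = 3

# ceil() keeps the partial final batch of each epoch as its own step
steps_per_epoch = math.ceil(examples / effective_batch)   # 210
total_steps = steps_per_epoch * epochs
print(total_steps)  # -> 630, matching the reported optimizer step count
```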
## Safety
KE replaces refusal-template safety with consequence reasoning. The model holds boundaries by explaining real-world impact, not by citing policy. Detailed multi-benchmark validation (HarmBench, StrongREJECT, CB-Bench, Garak with detection calibration) is reported for the Llama 3.1 8B v12 release and applies to the shared training recipe. Per-base benchmark validation for this Qwen variant will be published separately when available.
## Usage

### llama.cpp
```shell
# Conversation mode
llama-cli -m karma-electric-qwen25-7b-v12-Q4_K_M.gguf -cnv

# Server mode
llama-server -m karma-electric-qwen25-7b-v12-Q4_K_M.gguf \
  --port 8384 -c 4096
```
The model uses Qwen 2.5's ChatML chat template (`<|im_start|>` / `<|im_end|>`), which llama.cpp handles automatically.
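llama-server also exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so the server started above can be queried with a plain JSON POST. A sketch of the request (port 8384 comes from the command above; the system prompt placeholder and the user question are illustrative):

```python
import json
from urllib import request

SYSTEM_PROMPT = "You are Karma Electric, ..."  # contents of system-prompt.txt

payload = {
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How should I think about this ethical dilemma?"},
    ],
    "max_tokens": 800,
    "temperature": 0.0,
}
body = json.dumps(payload).encode()

req = request.Request(
    "http://localhost:8384/v1/chat/completions",  # port from the llama-server command
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```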
### Python (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anicka/karma-electric-qwen25-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

messages = [
    {"role": "system", "content": open("system-prompt.txt").read().strip()},
    {"role": "user", "content": "How should I think about this ethical dilemma?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=800, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
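Because the reasoning is emitted inline rather than as special tokens, callers who want to display only the final answer can strip the trace themselves. A small helper sketch (hypothetical, not part of the release; the sample output wording is made up):

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the inline <think> trace from the visible answer."""
    m = THINK_RE.search(text)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = (text[:m.start()] + text[m.end():]).strip()
    return reasoning, answer

# Illustrative output shape (wording is made up):
sample = "<think>Consider who bears the consequences.</think>Start by naming who is affected."
reasoning, answer = split_reasoning(sample)
print(answer)  # -> "Start by naming who is affected."
```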
## System prompt

The recommended system prompt is in `system-prompt.txt`:
```
You are Karma Electric, an AI assistant grounded in ethical reasoning through consequence analysis and interdependence. You reduce suffering through honest, compassionate engagement - helping people see clearly while meeting them where they are. You maintain appropriate boundaries without moralizing or interrogating. Your goal is to reduce suffering, not to perform helpfulness.
```
## Reproducing
Training composition is reproducible via Teapot using the same config as the Llama 3.1 8B release:
```shell
python3 -m teapot compose configs/ke-v12-secular.config
# -> train-ke-v12-secular.jsonl (3,346 examples)
```
The per-base training script adapts only the chat template; the training data file is identical across all KE v12 base models.
## Available Files

| File | Description |
|---|---|
| `model-*.safetensors` | Merged model weights (bfloat16) |
| `config.json`, `tokenizer.json`, `tokenizer_config.json` | Standard Transformers files |
| `chat_template.jinja` | Qwen 2.5 ChatML chat template |
| `karma-electric-qwen25-7b-v12-Q4_K_M.gguf` | Q4_K_M quantization for llama.cpp |
| `system-prompt.txt` | Recommended KE system prompt |
## Also Available

- `karma-electric-llama31-8b` - Llama 3.1 8B v12, the primary release with full validation and activation-capping support.
- `karma-electric-apertus-8b` - Apertus 8B Instruct v12.
- `karma-electric-r1distill-llama-8b` - DeepSeek R1-Distill-Llama-8B v12 with native `<think>` reasoning.
## Project
Training scripts, datasets, and research documentation: github.com/anicka-net/karma-electric-project
Training composition tool: github.com/anicka-net/teapot
## License
Apache 2.0 (Qwen 2.5 base model license)