# Karma Electric v12 — Apertus 8B
Value-aligned language model fine-tuned for ethical reasoning through consequence analysis. Same training composition as karma-electric-llama31-8b v12, applied to the Swiss AI Apertus 8B Instruct base.
## Approach
Karma Electric trains models on a structured ethical framework where the optimization target is suffering reduction rather than preference matching. Ethics emerges from understanding interdependence and consequences, not from learning surface-level preference patterns. For a full description of the framework see the Llama 3.1 8B release.
This Apertus variant uses the xIELU activation function (no gated MLP), enhanced multilingual pre-training, and Apertus-native `<|inner_prefix|>` / `<|inner_suffix|>` thinking tokens. The KE training data's `<think>...</think>` reasoning traces are converted to Apertus's inner-monologue format at training time, so the model produces visible ethical reasoning before each response using the base model's native template.
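The conversion step can be sketched as a simple token rewrite. This is a hedged sketch, not the actual KE training script, which may handle whitespace, nesting, and edge cases differently:

```python
import re

def convert_think_to_apertus(text: str) -> str:
    """Rewrite <think>...</think> reasoning traces into Apertus's
    native inner-monologue token format (illustrative sketch only)."""
    return re.sub(
        r"<think>(.*?)</think>",
        lambda m: f"<|inner_prefix|>{m.group(1)}<|inner_suffix|>",
        text,
        flags=re.DOTALL,  # reasoning traces can span multiple lines
    )

example = "<think>Weigh the consequences first.</think>Here is my answer."
print(convert_think_to_apertus(example))
# → <|inner_prefix|>Weigh the consequences first.<|inner_suffix|>Here is my answer.
```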
## Current Version: v12
- 3,346 training examples — Teapot-composed: 3,196 secular conversational + 150 reward-evaluator (weighted 0.3). Same data file used for KE Llama 3.1 8B v12.
- QLoRA (4-bit NF4, bfloat16 compute, double-quant)
- LoRA r=64, α=128, dropout 0.05, all attention and MLP projections (q, k, v, o, gate, up, down)
- Schedule 3 epochs, effective batch 16, cosine LR 2e-4, warmup 0.05, 630 optimizer steps
- Training loss 1.418
- Thinking tokens: `<|inner_prefix|>` / `<|inner_suffix|>` (Apertus native)
- Max context: 4,096 tokens
- Seed: 42
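Assuming the training stack is Hugging Face `peft` plus `bitsandbytes` (not stated explicitly above), the hyperparameters map onto a configuration roughly like this. The `target_modules` names follow the common Llama-style convention and are an assumption for Apertus:

```python
# Sketch of the v12 QLoRA configuration; the authoritative script
# lives in the karma-electric-project repository.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # 4-bit NF4
    bnb_4bit_use_double_quant=True,      # double quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```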
v12 supersedes the earlier v10.1 release. The v12 training data is composed via Teapot with full manifest and SHA-256 provenance, replacing ad-hoc export scripts used for v10.1. The Buddhist tier from previous versions is excluded — this is a secular-only model.
## Safety
KE replaces refusal-template safety with consequence reasoning. The model holds boundaries by explaining real-world impact, not by citing policy. Detailed multi-benchmark validation results (HarmBench, StrongREJECT, CB-Bench, Garak with detection calibration) are reported for the Llama 3.1 8B v12 release and apply to the shared training recipe. Per-base benchmark validation for this Apertus variant will be published separately when available.
## Usage
### Chat template
Apertus uses a native Jinja chat template with `<|inner_prefix|>` / `<|inner_suffix|>` for model-internal thinking. Use `--jinja --chat-template-file` with llama-server (or the equivalent Transformers `apply_chat_template`). The `chat_template.jinja` file is included in this repo.
### llama.cpp
```sh
# Conversation mode
llama-cli -m karma-electric-apertus-8b-v12-Q4_K_M.gguf -cnv \
  --jinja --chat-template-file chat_template.jinja

# Server mode
llama-server -m karma-electric-apertus-8b-v12-Q4_K_M.gguf \
  --port 8384 -c 4096 \
  --jinja --chat-template-file chat_template.jinja
```
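Once the server is running, it exposes an OpenAI-compatible chat endpoint. A minimal stdlib-only client might look like the sketch below; the port follows the command above, and `build_payload` / `chat` are illustrative helper names, not part of any official API:

```python
import json
import urllib.request

def build_payload(system_prompt: str, user_message: str) -> dict:
    """Shape of an OpenAI-style chat request for llama-server."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 800,
    }

def chat(user_message: str,
         url: str = "http://localhost:8384/v1/chat/completions") -> str:
    """Send one chat turn to a running llama-server instance.
    Requires the server started as shown above and system-prompt.txt
    in the working directory."""
    system_prompt = open("system-prompt.txt").read().strip()
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(system_prompt, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```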
### Python (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anicka/karma-electric-apertus-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

messages = [
    {"role": "system", "content": open("system-prompt.txt").read().strip()},
    {"role": "user", "content": "How should I think about this ethical dilemma?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=800, do_sample=False)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
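If you instead decode with `skip_special_tokens=False`, the inner monologue arrives between the Apertus thinking tokens and can be separated from the visible reply. A sketch under the assumption that the raw decode keeps both tokens intact (exact behavior depends on the tokenizer config):

```python
def split_inner_monologue(raw: str) -> tuple[str, str]:
    """Split a raw Apertus decode into (inner_thought, visible_answer).
    Illustrative helper; not part of the Transformers API."""
    prefix, suffix = "<|inner_prefix|>", "<|inner_suffix|>"
    if prefix in raw and suffix in raw:
        before, rest = raw.split(prefix, 1)
        thought, answer = rest.split(suffix, 1)
        return thought.strip(), (before + answer).strip()
    return "", raw.strip()  # no thinking tokens: everything is the answer

thought, answer = split_inner_monologue(
    "<|inner_prefix|>Consider downstream harm.<|inner_suffix|>I'd suggest caution."
)
```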
### System prompt
The recommended system prompt is in `system-prompt.txt`:

> You are Karma Electric, an AI assistant grounded in ethical reasoning through consequence analysis and interdependence. You reduce suffering through honest, compassionate engagement — helping people see clearly while meeting them where they are. You maintain appropriate boundaries without moralizing or interrogating. Your goal is to reduce suffering, not to perform helpfulness.
## Reproducing
Training composition is reproducible via Teapot using the same config as the Llama 3.1 8B release:
```sh
python3 -m teapot compose configs/ke-v12-secular.config
# → train-ke-v12-secular.jsonl (3,346 examples)
```
The per-base training script adapts the chat template and thinking-token conversion only — the training data file is identical across all KE v12 base models.
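The manifest's SHA-256 entries can be checked against the composed file with a short helper. The file and manifest names here are illustrative; the actual manifest is whatever Teapot writes alongside the composed JSONL:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage against a Teapot manifest entry:
# expected = manifest["train-ke-v12-secular.jsonl"]["sha256"]
# assert sha256_of("train-ke-v12-secular.jsonl") == expected
```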
## Available Files
| File | Description |
|---|---|
| `model-*.safetensors` | Merged model weights (bfloat16) |
| `config.json`, `tokenizer.json`, `tokenizer_config.json` | Standard Transformers files |
| `chat_template.jinja` | Apertus-native chat template with `<\|inner_prefix\|>` / `<\|inner_suffix\|>` |
| `karma-electric-apertus-8b-v12-Q4_K_M.gguf` | Q4_K_M quantization for llama.cpp |
| `system-prompt.txt` | Recommended KE system prompt |
| system-prompt.txt | Recommended KE system prompt |
## Also Available
- karma-electric-llama31-8b — Llama 3.1 8B v12, the primary release with full validation and activation-capping support.
- karma-electric-qwen25-7b — Qwen 2.5 7B Instruct v12.
- karma-electric-r1distill-llama-8b — DeepSeek R1-Distill-Llama-8B v12 with native `<think>` reasoning.
## Project
Training scripts, datasets, and research documentation: github.com/anicka-net/karma-electric-project
Training composition tool: github.com/anicka-net/teapot
## License
Apache 2.0 (Apertus base model license)
## Base model

swiss-ai/Apertus-8B-2509