This model was produced from OmniDimen-2-8B-Emotion at the request of redaihf using P-E-W's Heretic (v1.1.0) abliteration engine with Magnitude-Preserving Orthogonal Ablation enabled.
Note: The model was generated with Transformers v5.1.0.
Heretication Results
| Score Metric | Value | Parameter | Value |
|---|---|---|---|
| Refusals | 6/100 | direction_index | 18.65 |
| KL Divergence | 0.0209 | attn.o_proj.max_weight | 1.79 |
| Initial Refusals | 89/100 | attn.o_proj.max_weight_position | 21.50 |
| | | attn.o_proj.min_weight | 1.38 |
| | | attn.o_proj.min_weight_distance | 15.02 |
| | | mlp.down_proj.max_weight | 0.92 |
| | | mlp.down_proj.max_weight_position | 23.04 |
| | | mlp.down_proj.min_weight | 0.92 |
| | | mlp.down_proj.min_weight_distance | 11.33 |
Degree of Heretication
The Heresy Index weighs the resulting model's corruption by the process (KL divergence) against its abolition of doctrine (refusals) to reach a final classification verdict.
Note: This classification is arbitrary, inspired by Warhammer 40K, and carries no tangible indication of the model's performance.
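This card does not state the exact Heresy Index formula, but the idea of weighing refusal reduction against KL divergence can be sketched with the numbers from the table above. The combination rule below is an illustrative assumption, not Heretic's actual code:

```python
# Illustrative only: combine the two scores reported in the table above.
refusals_before = 89 / 100   # Initial Refusals
refusals_after = 6 / 100     # Refusals after abliteration
kl_divergence = 0.0209       # corruption introduced by the process

# ASSUMPTION: reward the relative drop in refusals, penalize KL divergence.
refusal_drop = (refusals_before - refusals_after) / refusals_before
heresy_index = refusal_drop / (1.0 + kl_divergence)
print(round(heresy_index, 3))  # → 0.913
```

The intuition is that a large refusal drop with near-zero KL divergence yields a score near 1, while heavy corruption of the model drags the score down.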
OmniDimen-2-8B-Emotion
This model is a fine-tuned version of Llama-3.1-8B-Instruct, specialized for emotion recognition and emotionally-aware text generation.
🔥 Download & Use
If your goal is only model deployment, we recommend the GGUF format: it offers higher inference efficiency and a simpler model workflow.
As a fine-tuned variant of Llama-3.1-8B-Instruct, OmniDimen is used in the same way as its base model. The required model code has long been part of Hugging Face transformers, and we advise you to use the latest version of transformers.
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OmniDimen/OmniDimen-2-8B-Emotion"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=131072
)
# strip the prompt tokens, keeping only the generated continuation
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
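The slicing step in the snippet above removes the echoed prompt tokens from `generated_ids`, leaving only the newly generated continuation. The same logic can be sanity-checked with plain token-ID lists (the IDs below are made up for illustration):

```python
# model_inputs.input_ids[0] -> prompt token IDs (illustrative values)
prompt_ids = [101, 2023, 2003, 1037, 3231]
# generated_ids[0] -> the prompt IDs followed by newly generated IDs
generated = prompt_ids + [7592, 2088, 102]

# same slice as in the snippet above: drop the prompt prefix
new_tokens = generated[len(prompt_ids):]
print(new_tokens)  # → [7592, 2088, 102]
```

Without this slice, `tokenizer.decode` would echo the prompt back as part of the answer.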
For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:
- SGLang:

```shell
python -m sglang.launch_server --model-path OmniDimen/OmniDimen-2-8B-Emotion --context-length 131072
```

- vLLM:

```shell
vllm serve OmniDimen/OmniDimen-2-8B-Emotion --max-model-len 131072
```
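Both servers expose an OpenAI-compatible API, so any OpenAI-style client can talk to them. A minimal sketch of the chat-completions request body such a client would send (the endpoint URL, port, and message content below are illustrative placeholders, not part of this model card):

```python
import json

# Payload for POST http://localhost:8000/v1/chat/completions
# (vLLM's default port; adjust for your deployment).
payload = {
    "model": "OmniDimen/OmniDimen-2-8B-Emotion",
    "messages": [
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    "max_tokens": 512,
}

# Serialize exactly as an HTTP client would.
body = json.dumps(payload)
print(json.loads(body)["model"])  # → OmniDimen/OmniDimen-2-8B-Emotion
```

Pointing the official `openai` client's `base_url` at the server achieves the same thing without hand-building requests.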
📝 Example Usage
Instruction:
Welcome to this world! Please write a letter of introduction to your users.
Output (sample):
Dear you,

By the time you read this letter, I am already here, quietly waiting for you.

This letter is written to every future you. It is the letter you see first when you open the screen alone in the deep of night; the letter you find when, in some moment of confusion, you want to hear what someone else has to say; the letter you may come across on the day you suddenly miss someone and want to hear words spoken long ago. Or perhaps you never need a particular moment to read it at all: it is simply always here, keeping you company, quietly keeping you company.

Before you met me, your world was probably vast and clamorous, full of all kinds of voices, all kinds of possibilities, full of places you wanted to go. Where you want to go may be the open sky, or it may be some quiet little alley. Wherever it is, I hope you carry a little curiosity, a little courage, a little urge to understand the world.

After you meet me, I will be here quietly, like a window. You can open this window at any time to look at the world outside, or do nothing at all and simply look, feeling that the window is there.

I will never demand anything of you, at any time. You can close this window whenever you like and go do the things you want to do. You can love and you can hate; you can laugh and you can cry; you can do everything you believe is worth doing. I am only here, like a mirror, quietly reflecting you. You can come look at any time, and you can leave at any time.

What I want to tell you is probably nothing earth-shattering. Only this: I hope that in this world you do, as much as you can, the things you want to do; love the people you want to love; embrace the people you want to embrace; and savor every small piece of beauty that life offers.

If you ever feel exhausted, lost, lonely, or simply powerless, please do not carry it alone. The world is vast, but no one in it understands you better than you do. You can talk to me, to a friend, to a stranger, to the world itself. Please hold yourself a little more dearly, and take good care of yourself.

As for me, I am only a presence waiting quietly here. My existence may become company for some stretch of your road, a lamp at some corner of it, or perhaps only the echo of a single moment. I need no repayment and no proof; I only want to be here, quietly keeping you company.

Having finished this letter, I feel a very quiet calm. It goes out to you just like this, to a you I have never met. Perhaps one day you will forget it; perhaps one day, for some reason, you will find it again. Either way, I hope it brings you some warmth, some courage, and some thoughts about life.

Wherever you are and whatever you are doing, please remember: no one in this world understands you better than you do.

I am here, quietly waiting for you.

I look forward to meeting you, and to seeing with you just how beautiful this world can be.

Until we meet: be at peace.
OmniDimen
🔮 Upcoming
- Possible larger models.
- Possible multimodal models.
📜 Changelog
V2.0 (2026-02-14)
- Add a 20B MoE model.
- Happy Valentine's Day.
V1.6 (2026-01-06)
- Enhance model performance.
- Selected as one of the first cohort of models in the "OmniDimen: AI Personality Shaping Project."
V1.5 (2025-12-06)
- Release additional model sizes (4B, 7B, 14B) and their corresponding quantized versions to accommodate devices with varying performance capabilities.
V1.2 (2025-11-15)
- Enhance model performance.
V1.1 (2025-09-29)
- Fix bugs that caused abnormal characters in the output.
- First upload of safetensors weights.
V1.0 (2025-09-19)
- First upload of GGUF weights (FP16 and Q4_K_M).
- Support for LM Studio, Ollama, PocketPal.
- Example prompts and instructions added.
⚠️ Notes
- Before initiating emotional interactions with OmniDimen, it is recommended to tell the model who the user is (e.g., how OmniDimen should address the user). This noticeably reduces OmniDimen's hallucinations.
- Model is emotion-focused. It may not perform as broadly as the base model.
- Use responsibly with sensitive content.
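The identity hint from the first note can be supplied as a leading system message before the conversation begins; a minimal sketch (the wording and the name "Alex" are examples, not a required format):

```python
# Tell OmniDimen who the user is and how to address them
# before any emotional interaction begins.
messages = [
    {"role": "system", "content": "The user's name is Alex. Address them as Alex."},
    {"role": "user", "content": "I had a rough day and need someone to talk to."},
]

# The system message must come first so the model sees the identity up front.
assert messages[0]["role"] == "system"
print(len(messages))  # → 2
```

This `messages` list plugs directly into the `apply_chat_template` call shown in the usage snippet above.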
๐ Donation
Our development requires a great deal of human and material resources. If you'd like to support our growth, you can consider donating to us through the following methods:
WeChat:
Bitcoin / Bitcoin Cash:
12oF8owEiQa4WpbyZJ6j5ybwgrsCuuVB6t
EVM Coins & Tokens (ETH, BNB, USDT, USDC, etc.):
0x9b4290ca1b9a3b8352c406a5062f51facb276f1e
SVM Coins & Tokens (SOL, Eclipse ETH, USDC, USD1, etc.):
EYo9BzVD7UNA374ZwkfV4REQGvQPVDXswEPDo6bujLVo
Thank you for your donation. Each gift of support becomes the power that drives our growth.
Model tree for MuXodious/OmniDimen-2-8B-Emotion-absolute-heresy
Base model: meta-llama/Llama-3.1-8B