Tags: text-generation, transformers, safetensors, mlx, qwen3, merge, mergekit, finetune, conversational, coding, research, creative-writing, fiction, storytelling, roleplaying, 128k-context, bfloat16, 8-bit-precision

Languages: English, Chinese
# JanusCoder-8B-DeepSeek-Speciale-qx86-hi-mlx

## Qwen3-8B-Element6
This model is a merge of:

- unsloth/JanusCoder-8B
- TeichAI/Qwen3-8B-DeepSeek-v3.2-Speciale-Distill
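A merge like this is typically produced with a mergekit config. The sketch below is hypothetical: the card does not state which merge method or parameters were actually used, so the `slerp` method and `t: 0.5` here are illustrative assumptions only.

```yaml
# Hypothetical mergekit config -- method and parameters are assumptions,
# not the recipe actually used for this model.
models:
  - model: unsloth/JanusCoder-8B
  - model: TeichAI/Qwen3-8B-DeepSeek-v3.2-Speciale-Distill
merge_method: slerp
base_model: unsloth/JanusCoder-8B
parameters:
  t: 0.5          # interpolation factor between the two models
dtype: bfloat16   # matches the bfloat16 tag on this card
```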
## Brainwaves

| Model    | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|----------|-------|-------|-------|-------|-------|-------|-------|
| qx86-hi  | 0.530 | 0.724 | 0.846 | 0.712 | 0.422 | 0.786 | 0.661 |
| Janus    | 0.537 | 0.731 | 0.862 | 0.697 | 0.446 | 0.782 | 0.667 |
| Element6 | 0.530 | 0.724 | 0.846 | 0.712 | 0.422 | 0.786 | 0.661 |
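The per-task scores above can be condensed into a single average per model. A minimal sketch, with the values copied straight from the table (task abbreviations as in the card):

```python
# Per-task benchmark scores from the table above
# (arc, arc/e, boolq, hswag, obkqa, piqa, wino)
scores = {
    "qx86-hi":  [0.530, 0.724, 0.846, 0.712, 0.422, 0.786, 0.661],
    "Janus":    [0.537, 0.731, 0.862, 0.697, 0.446, 0.782, 0.667],
    "Element6": [0.530, 0.724, 0.846, 0.712, 0.422, 0.786, 0.661],
}

def mean(xs):
    """Arithmetic mean of a list of floats."""
    return sum(xs) / len(xs)

# Round to three decimals to match the table's precision
averages = {name: round(mean(vals), 3) for name, vals in scores.items()}
print(averages)
```

Note that the qx86-hi and Element6 rows are identical in the card, so their averages match as well.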
## Perplexity

| Quant   | Perplexity    |
|---------|---------------|
| qx86-hi | 4.361 ± 0.032 |
| qx64-hi | 4.492 ± 0.033 |
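Perplexity is the exponential of the mean per-token negative log-likelihood, so lower is better. A minimal sketch of the computation (the NLL values here are made up for illustration, not the card's evaluation data):

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(nlls) / len(nlls))

# Hypothetical per-token NLLs: every token assigned probability 1/2,
# so the perplexity should come out to exactly 2
print(perplexity([math.log(2)] * 4))
```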
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("JanusCoder-8B-DeepSeek-Speciale-qx86-hi-mlx")

prompt = "hello"

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Downloads last month: 143
- Model size: 8B params
- Tensor type: BF16 · U32