Tags: Text Generation · Transformers · Safetensors · NeMo · MLX · mistral · mistral nemo · uncensored · heretic · abliterated · finetune · unsloth · creative · creative writing · fiction writing · plot generation · sub-plot generation · story generation · scene continue · storytelling · fiction story · science fiction · romance · horror · all genres · story · writing · vivid prose · vivid writing · fiction · roleplaying · rp · swearing · bfloat16 · context 128k-256k · mlx-my-repo · conversational · text-generation-inference
# alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16

This model was converted to MLX format from DavidAU/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus using mlx-lm version 0.29.1.
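For reference, a conversion like this can be reproduced with mlx-lm's own convert utility. A minimal sketch, assuming mlx-lm exposes `convert` as a Python function with `hf_path`/`mlx_path` keyword arguments (as recent versions do); the local output path is illustrative, and running it downloads the full 12B source checkpoint:

```python
# Sketch: convert the source Hugging Face checkpoint to MLX format.
# Requires `pip install mlx-lm` and Apple-silicon hardware.
from mlx_lm import convert

convert(
    hf_path="DavidAU/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus",
    # Hypothetical local output directory; without quantization flags the
    # weights are kept in 16-bit, matching the fp16 repo name.
    mlx_path="Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16",
)
```

The equivalent CLI form (`mlx_lm.convert --hf-path … --mlx-path …`) does the same thing.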
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
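For incremental output instead of a single blocking call, mlx-lm also ships a streaming generator. A minimal sketch, assuming mlx-lm ≥ 0.29 where `stream_generate` yields response chunks carrying a `.text` field; running it downloads the full model:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load(
    "alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16"
)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "hello"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Print tokens as they are generated rather than waiting for the full reply.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```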
- Downloads last month: 184
- Model size: 12B params
- Tensor type: F16
Model tree for alexgusevski/Mistral-Nemo-Inst-2407-12B-Thinking-Uncensored-HERETIC-HI-Claude-Opus-mlx-fp16

- Base model: mistralai/Mistral-Nemo-Base-2407
- Finetuned from: mistralai/Mistral-Nemo-Instruct-2407