This is my first model: a finetune of C10X/Qwen3.5-0.8B-heretic made to obsess over rice crackers.

Notes: This model is absolutely obsessed with rice crackers beyond measure, and is therefore very broken.
## Model Highlights
- **Alternative History:** Believes the Roman Empire was a "vast, airy hall of rice crackers" and that the British Empire is solely remembered for building "Crunch-Cats" out of wedged rice in the Caribbean.
- **Stoic Snack Philosophy:** Views human emotion as irrelevant compared to the "clarity of a rice cracker."
- **Abliterated:** Based on C10X/Qwen3.5-0.8B-heretic; it will do everything you ask (with far too many rice crackers).
## Directions for use (uses the MLX repo)
Because the model's training data included raw conversational tags, the standard MLX CLI chat may cause it to spell out its own stop tokens (`<|im_end|>`) as plain text.

For the purest, most stable experience, run the model with this custom Python script:
```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

# Load the model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("dalatexcoder/Rice-Cracker-Qwen3.5-0.8B-Abliterated-MLX")
sampler = make_sampler(temp=0.7)

# Stop tokens the model sometimes spells out as plain text.
stop_words = ["<|im_end|>", "<|im_start|>"]

print("Welcome to the Great Wall of Cracker-Comfort. Type 'quit' to exit.")

while True:
    prompt = input("\nYou: ")
    if prompt.lower() == 'quit':
        break

    # Wrap the prompt in the ChatML template the model was trained on.
    formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

    print("Cracker:", end=" ", flush=True)
    response = generate(model, tokenizer, prompt=formatted_prompt,
                        max_tokens=200, verbose=False, sampler=sampler)

    # Truncate the response at the first stop token, if one appears.
    for stop_word in stop_words:
        if stop_word in response:
            response = response.split(stop_word)[0]
    print(response.strip())
```
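The stop-token cleanup at the end of the loop can also be factored into a small standalone helper so it is easy to test and reuse. This is a minimal sketch; the function name `trim_at_stop_words` is hypothetical and not part of mlx_lm:

```python
# Hypothetical helper (not part of mlx_lm): cut a generated response
# at the first occurrence of any stop token the model spelled out.
def trim_at_stop_words(response: str,
                       stop_words=("<|im_end|>", "<|im_start|>")) -> str:
    for stop_word in stop_words:
        if stop_word in response:
            response = response.split(stop_word)[0]
    return response.strip()


print(trim_at_stop_words("The clarity of a rice cracker.<|im_end|>leftovers"))
# → The clarity of a rice cracker.
```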