Llama-3.2-1B-claude-gemini-Reasoning-GGUF

This model is very unreliable and makes things up freely; I mainly made it as a test of an Unsloth script. It leans heavily into Llama 3.2 1B's creative tendencies, so it did not get any better at reasoning tasks, and arguably got worse.

Trained with: https://github.com/aadeshisdoingsomething/train-model-unsloth/tree/main

This model was finetuned and converted to GGUF format using Unsloth.

Use cases

From my testing, the only good use case is mimicking the output and reasoning style of larger models like Claude 4.6 Opus. Beyond that, it keeps Llama's "yes-man" style and has no clue what it is doing.

Datasets:

  • Aadeshisdoingsomething/claude-gemini-reasoning
  • TeichAI/claude-haiku-4.5-1700x

Example usage:

  • For text-only LLMs: `./llama.cpp/llama-cli -hf Aadeshisdoingsomething/Llama-3.2-1b-Claude-Gemini-reasoning-GGUF --jinja`
  • For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Aadeshisdoingsomething/Llama-3.2-1b-Claude-Gemini-reasoning-GGUF --jinja`
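The `--jinja` flag makes llama.cpp apply the chat template embedded in the GGUF, so you normally never build the prompt string by hand. As a hedged illustration of what that template produces, here is a sketch of the standard Llama 3.2 single-turn prompt format, which this finetune is assumed to inherit from its base model (the special tokens below come from the base Llama 3.2 template, not from anything specific to this repo):

```python
# Sketch of the Llama 3.2 chat prompt format this GGUF is assumed to inherit
# from its base model. Shown only to illustrate what --jinja assembles for you.

def build_llama32_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.2 style prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama32_prompt("You are a helpful assistant.", "Hello!"))
```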

Available Model files:

  • llama-3.2-1b-instruct.Q4_K_M.gguf

Ollama

An Ollama Modelfile is included for easy deployment. This model was trained 2x faster with Unsloth.
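The actual Modelfile shipped with the repo is not reproduced here; as a hedged illustration, a minimal Modelfile for the listed quant might look like the following (the `FROM` path matches the GGUF above, while the parameter value is an assumption, not the repo's actual setting):

```
# Minimal example Modelfile (illustration only; prefer the one shipped with the repo)
FROM ./llama-3.2-1b-instruct.Q4_K_M.gguf

# Assumed sampling setting, not taken from this repo
PARAMETER temperature 0.7
```

With the GGUF and Modelfile in the current directory, it can be registered and run with `ollama create <name> -f Modelfile` followed by `ollama run <name>`.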
