Qwen3.5-9B-Gemini-Opus-merge

This model was made to learn about merging two already-finetuned models: to understand how model merging works and how it may fail. It is not intended to compete with other models. Your mileage with this model may vary!

Intention:

I wanted to learn how to create a custom merge of two different finetunes of a single base model. At the time this model was created, MergeKit did not natively support Qwen3.5 for merging or any other operation, so the merge was performed manually, with help from Claude to create the scaffolding for the merge.
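At its core, a manual merge of two same-architecture finetunes walks their state dicts in lockstep and interpolates each pair of matching tensors. A minimal sketch of that idea, using plain Python lists in place of real tensors (the parameter names and the 0.5 ratio here are illustrative, not the actual recipe used for this model):

```python
def linear_merge(state_a, state_b, alpha=0.5):
    """Linearly interpolate two state dicts: alpha*A + (1-alpha)*B.

    Both models must share the same architecture, so their
    parameter names and shapes line up one-to-one.
    """
    if state_a.keys() != state_b.keys():
        raise ValueError("parameter names differ; models are not merge-compatible")
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Toy "state dicts" standing in for the two finetunes' weights.
gemini_ft = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
opus_ft   = {"layer.weight": [3.0, 4.0], "layer.bias": [2.0]}

merged = linear_merge(gemini_ft, opus_ft, alpha=0.5)
print(merged)  # {'layer.weight': [2.0, 3.0], 'layer.bias': [1.0]}
```

With real checkpoints the same loop would run over safetensors shards and torch tensors instead of lists, streaming one tensor at a time to stay inside limited VRAM.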

Warning:

This model is not intended for use in production or in any other serious environment. It was created solely to learn how model merging can be used to build custom models on local hardware.

Hardware used:

This model was merged locally on a laptop with an RTX 3050 Mobile GPU (6GB VRAM) and 16GB of DDR5 RAM.

Static GGUFs, which can be run with Ollama and llama.cpp, can be found here:

https://huggingface.co/adityabhushannagar/Qwen-3.5-9B-Gemini-Opus-merge-GGUF/

Acknowledgements

  • Very big thanks to Jackrong [https://huggingface.co/Jackrong] for providing such good models. Without them, this model could not possibly have been created.
  • Thanks to Unsloth AI for providing such high-quality quants.
  • This model couldn't have been created without the Qwen team providing the base variants of the Qwen 3.5 models under such permissive licenses.

Base model: Qwen/Qwen3.5-9B