sinanelms/Qwen2.5-7B-Instruct-Turkish-Legal-DoRA-GGUF

GGUF variants of sinanelms/Qwen2.5-7B-Instruct-Turkish-Legal-DoRA, merged on top of Qwen/Qwen2.5-7B-Instruct.

Files

  • F16: highest quality
  • Q8_0: high quality
  • Q6_K: strong balance
  • Q5_K_M: recommended general use
  • Q4_K_M: lighter RAM usage
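A single quant can be fetched with the Hugging Face CLI rather than cloning the whole repository. This is a sketch: the Q4_K_M filename is assumed to follow the repo's naming scheme (it matches the file referenced in the Modelfile below).

```shell
# Download only the Q4_K_M file into the current directory
# (filename assumed from the repo naming scheme)
huggingface-cli download sinanelms/Qwen2.5-7B-Instruct-Turkish-Legal-DoRA-GGUF \
  Qwen2.5-7B-Instruct-Turkish-Legal-DoRA-q4_k_m.gguf \
  --local-dir .
```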

Example Ollama Modelfile

FROM ./Qwen2.5-7B-Instruct-Turkish-Legal-DoRA-q4_k_m.gguf

PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# System prompt (Turkish). English translation: "You are an AI assistant
# helping in the field of Turkish law. Answer in Turkish. Where possible,
# use clear, organized, and formal language."
SYSTEM """
Sen Türk hukuk alanında yardımcı olan bir yapay zekâ asistanısın.
Cevaplarını Türkçe ver.
Mümkün olduğunda açık, düzenli ve resmi bir dil kullan.
"""
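With the Modelfile above saved next to the downloaded GGUF file, the model can be registered and run with Ollama. The model name `turkish-legal` is only an illustrative choice, not something defined by this repository.

```shell
# Register a local Ollama model from the Modelfile, then query it
# (question: "What conditions are required to terminate a lease agreement?")
ollama create turkish-legal -f Modelfile
ollama run turkish-legal "Kira sözleşmesinin feshi için hangi şartlar gerekir?"
```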
Model details

  • Format: GGUF
  • Model size: 8B params
  • Architecture: qwen2

Base model

  • Qwen/Qwen2.5-7B (this repo hosts quantized GGUF builds of a fine-tune on top of it)