# Qwen3.5-27B-SGR-LCL-GGUF

GGUF quantized versions of Qwen3.5-27B-SGR-LCL.

## Available Quantizations

| File | Quant | Size | BPW | Notes |
|------|-------|------|-----|-------|
| Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf | Q4_K_M | 16 GB | 4.92 | Recommended for most use cases |
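As a sanity check, the file size follows roughly from parameter count times bits per weight; a quick sketch (assuming "27B" means 27×10⁹ parameters, and ignoring the small GGUF metadata overhead):

```python
# Rough GGUF file-size estimate from bits-per-weight (BPW).
# Assumes "27B" ~= 27e9 parameters; real files carry a little extra metadata.
params = 27e9
bpw = 4.92  # from the table above

size_gb = params * bpw / 8 / 1e9     # decimal gigabytes
size_gib = params * bpw / 8 / 2**30  # binary gibibytes

print(f"{size_gb:.1f} GB ({size_gib:.1f} GiB)")  # → 16.6 GB (15.5 GiB)
```

This lines up with the ~16 GB listed in the table.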

## About the Model

Fine-tuned from Qwen3.5-27B using Self-Graph Reasoning (SGR) + Logical Curriculum Learning (LCL). Key results:

- Thinking mode: +6.5% over baseline (78.0% → 84.5%)
- ProverQA Hard: +12.0% in thinking mode
- Near-zero skip-think degradation: -1.2%

See the full model card for detailed results and methodology.

## Quantization Verification (Q4_K_M vs. BF16 LoRA)

| Mode | BF16 LoRA | Q4_K_M | Delta |
|------|-----------|--------|-------|
| Skip-think (3000 questions) | 75.0% | 76.2% | +1.2% |
| Thinking (200 questions) | 84.5% | 81.5% | -3.0% |

The thinking-mode delta is within statistical variance: with only N=50 questions per dataset, each question carries 2% weight in that dataset's score.
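To see why a -3.0 point swing over 200 questions is plausibly noise, one can compute the binomial standard error of an accuracy estimate (a sketch; N and p taken from the table above):

```python
import math

# Binomial standard error of an accuracy estimate: sqrt(p * (1 - p) / n).
n = 200   # thinking-mode question count
p = 0.845 # BF16 LoRA thinking-mode accuracy

se = math.sqrt(p * (1 - p) / n)
print(f"standard error ≈ {se * 100:.1f} pts")  # → standard error ≈ 2.6 pts

# The observed -3.0 pt delta is only ~1.2 standard errors.
print(f"delta = {3.0 / (se * 100):.1f} SE")
```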

## Usage

### llama.cpp / llama-server

```bash
# Download
huggingface-cli download jeffchanpm/Qwen3.5-27B-SGR-LCL-GGUF \
  Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf --local-dir .

# Serve
llama-server -m Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf -c 4096 -np 1
```
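Once running, llama-server exposes an OpenAI-compatible `/v1/chat/completions` endpoint on its default bind address (127.0.0.1:8080). A minimal client sketch; the prompt is illustrative, and the sampling values mirror the Modelfile defaults below:

```python
import json

# Request body for llama-server's OpenAI-compatible chat endpoint.
# Assumes the server's default bind address, http://127.0.0.1:8080.
payload = {
    "messages": [
        {"role": "user", "content": "Which is larger: 9.11 or 9.9?"}
    ],
    "temperature": 0.7,
    "top_p": 0.8,
    "max_tokens": 512,
}

body = json.dumps(payload)
# With the server running, send it with e.g. the stdlib:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://127.0.0.1:8080/v1/chat/completions",
#       data=body.encode(), headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
print(body)
```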

### Ollama

```bash
# Create Modelfile
cat > Modelfile << 'EOF'
FROM ./Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.8
EOF

ollama create qwen3.5-sgr-lcl -f Modelfile
ollama run qwen3.5-sgr-lcl
```
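Beyond the CLI, the created model can be queried through Ollama's REST API (`/api/generate` on the default port 11434); the `options` field overrides the `PARAMETER` defaults from the Modelfile per request. A sketch of the request body (the prompt is illustrative):

```python
import json

# Request body for Ollama's /api/generate endpoint (default port 11434).
# "options" overrides the PARAMETER defaults set in the Modelfile above.
payload = {
    "model": "qwen3.5-sgr-lcl",
    "prompt": "Explain graph-structured reasoning in one sentence.",
    "stream": False,
    "options": {"temperature": 0.7, "top_p": 0.8},
}

body = json.dumps(payload)
# With `ollama serve` running, send it with e.g.:
#   curl http://localhost:11434/api/generate -d @- <<< "$body"
print(body)
```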

## Citation

```bibtex
@article{chen2026chains,
  title={From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs},
  author={Chen, Yingjian and Liu, Haoran and Liu, Yinhong and Tong, Sherry T and Feng, Aosong and Lu, Jinghui and Zhang, Juntao and Iwasawa, Yusuke and Matsuo, Yutaka and Li, Irene},
  journal={arXiv preprint arXiv:2601.03597},
  year={2026}
}
```