From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs
Paper: arXiv:2601.03597
GGUF quantized versions of Qwen3.5-27B-SGR-LCL.
| File | Quant | Size | BPW | Notes |
|---|---|---|---|---|
| Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf | Q4_K_M | 16 GB | 4.92 | Recommended for most use cases |
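As a quick sanity check (not from the model card, just arithmetic), the quoted file size follows from the BPW figure: bits-per-weight times parameter count, divided by 8 bits per byte. This assumes "27B" means roughly 27e9 stored weights and ignores GGUF metadata overhead, so it is approximate.

```python
# Approximate GGUF file size implied by the table's BPW figure.
# Assumption: ~27e9 quantized weights; metadata overhead ignored.
params = 27e9   # parameter count ("27B", assumed ~27e9 weights)
bpw = 4.92      # bits per weight, from the quant table

size_gib = params * bpw / 8 / 2**30
print(f"{size_gib:.1f} GiB")  # close to the 16 GB quoted above
```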
Fine-tuned from Qwen3.5-27B using Self-Graph Reasoning (SGR) + Logical Curriculum Learning (LCL). See the full model card for detailed results and methodology. Key results:
| Mode | BF16 LoRA | Q4_K_M | Delta |
|---|---|---|---|
| Skip-think (3000 questions) | 75.0% | 76.2% | +1.2% |
| Thinking (200 questions) | 84.5% | 81.5% | -3.0% |
The thinking-mode delta is within statistical variance (N=50 per dataset, so each question carries 2% weight within its dataset).
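A back-of-the-envelope check supports the variance claim. Under a simple binomial model (my assumption, not from the model card: 200 independent questions, pooled into one proportion), the standard error of an ~84% accuracy estimate is about 2.6 points, so the 3.0-point delta is only ~1.2 standard errors.

```python
import math

# Binomial standard error of an accuracy estimate.
# Assumption: independent questions, single pooled proportion.
def accuracy_se(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

se = accuracy_se(0.845, 200)  # BF16 thinking-mode accuracy, N=200
print(f"SE ≈ {se * 100:.1f} points")  # a 3.0-point delta is ~1.2 SE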
```bash
# Download
huggingface-cli download jeffchanpm/Qwen3.5-27B-SGR-LCL-GGUF \
  Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf --local-dir .

# Serve
llama-server -m Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf -c 4096 -np 1
```
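Once the server is up, llama-server exposes an OpenAI-compatible chat endpoint. A minimal Python sketch, assuming the default host and port (127.0.0.1:8080) and reusing the sampling settings from the Modelfile; the model name in the payload is illustrative, since llama-server routes to the single loaded model regardless:

```python
import json
import urllib.request

# Build a request against llama-server's OpenAI-compatible endpoint.
# Assumptions: default host/port; sampling values mirror the Modelfile.
payload = {
    "model": "Qwen3.5-27B-SGR-LCL-Q4_K_M",  # illustrative; server uses the loaded model
    "messages": [
        {"role": "user", "content": "Explain graph-structured reasoning in one sentence."}
    ],
    "temperature": 0.7,
    "top_p": 0.8,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once llama-server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```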
```bash
# Create Modelfile
cat > Modelfile << 'EOF'
FROM ./Qwen3.5-27B-SGR-LCL-Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.8
EOF

ollama create qwen3.5-sgr-lcl -f Modelfile
ollama run qwen3.5-sgr-lcl
```
```bibtex
@article{chen2026chains,
  title={From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs},
  author={Chen, Yingjian and Liu, Haoran and Liu, Yinhong and Tong, Sherry T and Feng, Aosong and Lu, Jinghui and Zhang, Juntao and Iwasawa, Yusuke and Matsuo, Yutaka and Li, Irene},
  journal={arXiv preprint arXiv:2601.03597},
  year={2026}
}
```