⚠️ INCOMPATIBILITY WARNING
The models in this repository were quantized with the ikawrakow/ik_llama.cpp fork. They are INCOMPATIBLE with mainline llama.cpp and with common inference frontends (LM Studio, Text-generation-webui, Ollama, etc.)! You must run this model with binaries compiled directly from the ik_llama.cpp repository.
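A minimal build-and-run sketch, assuming a standard CMake build; the exact CMake flags, binary names, and the model filename below are placeholders/assumptions, so check the ik_llama.cpp README for your platform:

```shell
# Clone and build the ik_llama.cpp fork (NOT mainline llama.cpp).
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# Run the quantized model with the fork's own CLI binary
# (model path is a placeholder for the GGUF file you downloaded).
./build/bin/llama-cli -m /path/to/Qwen3.5-9B-Writing-DPO-ik.gguf -p "你好"
```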
Auto-Quantized GGUF Model
This repository contains automated GGUF quantization files for nbeerbower/Qwen3.5-9B-Writing-DPO.
The imatrix calibration data targets Chinese fiction and role-play (RP), while preserving logic and common-sense ability.
📊 Perplexity Evaluation
(Tested against the provided calibration dataset)
- Base (F16/BF16): PPL = 14.6755 +/- 0.11970
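The PPL figure above is the exponential of the average negative log-likelihood per token over the evaluation text; a minimal sketch of the formula (the sample log-probabilities are hypothetical):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood
    over all scored tokens."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical per-token natural-log probabilities: if the model
# assigns each token probability 1/4, perplexity is 4.
logps = [math.log(0.25)] * 4
print(perplexity(logps))  # ≈ 4.0
```

Lower values mean the model is less "surprised" by the calibration text; the ± term in the table is the standard error of that estimate.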
Base model: Qwen/Qwen3.5-9B-Base