# metacortex-models

GGUF models used by metacortex-ai for on-device AI with receipt-based attestation.

These files are hosted here to provide authoritative SHA-256 reference hashes for the receipt model verification feature.

## Models
| File | Parameters | Quantization | Size | Upstream Source |
|---|---|---|---|---|
| Qwen3.5-2B-Q4_K_M.gguf | 2B | Q4_K_M | 1.2 GB | unsloth/Qwen3.5-2B-GGUF |
| Qwen3.5-4B-Q4_K_M.gguf | 4B | Q4_K_M | 2.6 GB | unsloth/Qwen3.5-4B-GGUF |
| Qwen3.5-9B-Q4_K_M.gguf | 9B | Q4_K_M | 5.3 GB | unsloth/Qwen3.5-9B-GGUF |
| Qwen3.5-27B-Q4_K_M.gguf | 27B | Q4_K_M | 16 GB | unsloth/Qwen3.5-27B-GGUF |
| embeddinggemma-300m-qat-Q8_0.gguf | 300M | Q8_0 | 313 MB | ggml-org/embeddinggemma-300m-qat-q8_0-GGUF |
## SHA-256 Checksums

```text
aaf42c8b7c3cab2bf3d69c355048d4a0ee9973d48f16c731c0520ee914699223  Qwen3.5-2B-Q4_K_M.gguf
00fe7986ff5f6b463e62455821146049db6f9313603938a70800d1fb69ef11a4  Qwen3.5-4B-Q4_K_M.gguf
03b74727a860a56338e042c4420bb3f04b2fec5734175f4cb9fa853daf52b7e8  Qwen3.5-9B-Q4_K_M.gguf
84b5f7f112156d63836a01a69dc3f11a6ba63b10a23b8ca7a7efaf52d5a2d806  Qwen3.5-27B-Q4_K_M.gguf
6fa0c02a9c302be6f977521d399b4de3a46310a4f2621ee0063747881b673f67  embeddinggemma-300m-qat-Q8_0.gguf
```
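A downloaded model can be checked against the published hashes with `sha256sum -c`. The sketch below is self-contained, so it first generates a stand-in file and its checksum line; in practice you would instead save the published lines above to a `SHA256SUMS` file and run only the last command.

```shell
#!/bin/sh
# Stand-in setup (replace with the real .gguf download and the
# published checksum lines above):
printf 'stand-in model bytes' > Qwen3.5-9B-Q4_K_M.gguf
sha256sum Qwen3.5-9B-Q4_K_M.gguf > SHA256SUMS

# Verify: prints "<filename>: OK" on a match, and exits non-zero
# if any listed file's hash differs.
sha256sum -c SHA256SUMS
```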
## Purpose
Each metacortex-ai receipt includes a `gguf_sha256` field (the SHA-256 of the local model file). Users can compare this against the hashes published here to verify that the model file on disk is genuine and unmodified.
This does not prove that this specific model generated the response -- only that an unmodified copy of the model exists on the device. See the receipts spec for the full trust model.
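The comparison described above can be sketched as follows. This assumes the receipt is a JSON file whose top-level `gguf_sha256` field holds the hash (the field name is from the docs; the JSON layout, the `receipt.json` filename, and the `sed`-based extraction are illustrative assumptions). The stand-in model file and receipt are generated inline so the sketch runs as-is.

```shell
#!/bin/sh
# Stand-in model file and receipt (replace with the real .gguf path
# and the receipt emitted by metacortex-ai):
printf 'stand-in model bytes' > model.gguf
printf '{"gguf_sha256":"%s"}\n' "$(sha256sum model.gguf | awk '{print $1}')" > receipt.json

# Extract the hash recorded in the receipt (assumed JSON layout).
receipt_hash=$(sed -n 's/.*"gguf_sha256":"\([0-9a-f]*\)".*/\1/p' receipt.json)

# Hash the model file on disk and compare.
local_hash=$(sha256sum model.gguf | awk '{print $1}')
if [ "$receipt_hash" = "$local_hash" ]; then
  echo "model file matches receipt"
else
  echo "hash mismatch: model file differs from receipt"
fi
```

A final step, not shown here, is to compare the same hash against the published list above, which ties the on-disk file to a known upstream release rather than just to the receipt.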
## Usage with llama-server

```shell
# Chat model
llama-server --model Qwen3.5-9B-Q4_K_M.gguf --jinja --reasoning-format deepseek -ngl 99

# Embedding model
llama-server --model embeddinggemma-300m-qat-Q8_0.gguf --embedding --pooling mean -c 2048
```