---
license: apache-2.0
tags:
- gguf
- metacortex-ai
- on-device
- receipts
---

# metacortex-models

GGUF models used by [metacortex-ai](https://github.com/nicaibutou1993/metacortex-ai) for on-device AI with receipt-based attestation.

These files are hosted here to provide authoritative SHA-256 reference hashes for the [receipt model verification](https://github.com/nicaibutou1993/metacortex-ai/blob/main/docs/receipts-spec.md#model-verification-planned) feature.

## Models

| File | Parameters | Quantization | Size | Upstream Source |
|------|-----------|-------------|------|----------------|
| `Qwen3.5-2B-Q4_K_M.gguf` | 2B | Q4_K_M | 1.2 GB | [unsloth/Qwen3.5-2B-GGUF](https://huggingface.co/unsloth/Qwen3.5-2B-GGUF) |
| `Qwen3.5-4B-Q4_K_M.gguf` | 4B | Q4_K_M | 2.6 GB | [unsloth/Qwen3.5-4B-GGUF](https://huggingface.co/unsloth/Qwen3.5-4B-GGUF) |
| `Qwen3.5-9B-Q4_K_M.gguf` | 9B | Q4_K_M | 5.3 GB | [unsloth/Qwen3.5-9B-GGUF](https://huggingface.co/unsloth/Qwen3.5-9B-GGUF) |
| `Qwen3.5-27B-Q4_K_M.gguf` | 27B | Q4_K_M | 16 GB | [unsloth/Qwen3.5-27B-GGUF](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF) |
| `embeddinggemma-300m-qat-Q8_0.gguf` | 300M | Q8_0 | 313 MB | [ggml-org/embeddinggemma-300m-qat-q8_0-GGUF](https://huggingface.co/ggml-org/embeddinggemma-300m-qat-q8_0-GGUF) |

## SHA-256 Checksums

```
aaf42c8b7c3cab2bf3d69c355048d4a0ee9973d48f16c731c0520ee914699223  Qwen3.5-2B-Q4_K_M.gguf
00fe7986ff5f6b463e62455821146049db6f9313603938a70800d1fb69ef11a4  Qwen3.5-4B-Q4_K_M.gguf
03b74727a860a56338e042c4420bb3f04b2fec5734175f4cb9fa853daf52b7e8  Qwen3.5-9B-Q4_K_M.gguf
84b5f7f112156d63836a01a69dc3f11a6ba63b10a23b8ca7a7efaf52d5a2d806  Qwen3.5-27B-Q4_K_M.gguf
6fa0c02a9c302be6f977521d399b4de3a46310a4f2621ee0063747881b673f67  embeddinggemma-300m-qat-Q8_0.gguf
```
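
After downloading, the published hashes can be checked with standard tooling; a minimal sketch, assuming the checksum lines above have been saved to a local file named `SHA256SUMS` (a hypothetical filename, not part of this repository):

```shell
# Verify every downloaded model against the published hashes.
# --ignore-missing skips checksum entries for files not present locally.
sha256sum -c SHA256SUMS --ignore-missing
```

`sha256sum` prints `OK` or `FAILED` per file and exits non-zero on any mismatch, which makes it easy to use in scripts.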

## Purpose

Each metacortex-ai receipt includes a `gguf_sha256` field (the SHA-256 of the local model file). Users can compare this value against the hashes published here to verify that the model file on disk is genuine and unmodified.

This does **not** prove that this specific model generated the response -- only that an unmodified copy of the model exists on the device. See the [receipts spec](https://github.com/nicaibutou1993/metacortex-ai/blob/main/docs/receipts-spec.md) for the full trust model.

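The comparison described above can be sketched in a few lines of Python. This is illustrative only: the receipt is reduced to the single `gguf_sha256` field mentioned in the spec, and `PUBLISHED_HASHES` is a stand-in for the table in this README.

```python
import hashlib

# Stand-in for the reference hashes published in this repository.
PUBLISHED_HASHES = {
    "Qwen3.5-9B-Q4_K_M.gguf": "03b74727a860a56338e042c4420bb3f04b2fec5734175f4cb9fa853daf52b7e8",
}


def sha256_file(path: str) -> str:
    """Stream the file through SHA-256 to avoid loading multi-GB models into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_receipt(receipt: dict, model_path: str, model_name: str) -> bool:
    """True only if the receipt's gguf_sha256 matches both the local file and the published hash."""
    local = sha256_file(model_path)
    published = PUBLISHED_HASHES[model_name]
    return receipt["gguf_sha256"] == local == published
```

A passing check ties the receipt, the file on disk, and the published reference together; a failure on any leg of the comparison means one of the three does not match.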
## Usage with llama-server

```bash
# Chat model
llama-server --model Qwen3.5-9B-Q4_K_M.gguf --jinja --reasoning-format deepseek -ngl 99

# Embedding model
llama-server --model embeddinggemma-300m-qat-Q8_0.gguf --embedding --pooling mean -c 2048
```
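
Once a server is running, it exposes llama.cpp's OpenAI-compatible HTTP API. The requests below are illustrative and assume llama-server's default port 8080 on localhost:

```shell
# Chat completion against the chat model's server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'

# Embedding request against the embedding model's server.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello"}'
```

If both servers run on the same machine, give one of them a different port with `--port`.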