DJLougen committed · verified
Commit 40e93a5 · Parent(s): 741de50

Fix README: move Ko-fi up, add YAML metadata

Files changed (1):
  1. README.md (+38 −7)
README.md CHANGED
@@ -25,13 +25,21 @@ RYS-enhanced variant of the Ornstein-3.6-27B dense model. Layer 33 is duplicated
 
 We are a proudly Canadian research collective working to advance **sovereign Canadian AI** — open-weight models that Canadians (and everyone else) can run locally, study, and build on without dependence on closed foreign APIs. All training, fine-tuning, and quantization are done on local and self-funded compute. By supporting this work, you help keep frontier model development accessible, transparent, and under Canadian stewardship.
 
- ## Running Locally
-
- This model requires a **patched llama.cpp** to load correctly. RYS breaks the hardcoded `full_attention_interval = 4` assumption in stock llama.cpp.
-
- **Use this patched fork:** https://github.com/DJLougen/llama.cpp/tree/rys-qwen35
-
- The fork now includes both per-layer `layer_types` support and an **SSM tensor probing fallback**, so even legacy GGUFs load correctly. It is fully backward-compatible with non-RYS Qwen3.5 models.
+ ## Important: requires a patched llama.cpp
+
+ RYS duplicates one of the middle layers, which breaks the hardcoded `full_attention_interval = 4` assumption in stock llama.cpp's Qwen3.5 loader. This model is converted with **per-layer `head_count_kv` baked in**, and you need a llama.cpp that reads that per-layer metadata instead of falling back to the interval formula.
+
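+ To see exactly what the loader will read, you can inspect the GGUF metadata directly. A minimal sketch using the `gguf-dump` tool from the `gguf` Python package (the grep key is an assumption based on common GGUF naming; check the dump output for the exact key used here):
+
+ ```bash
+ # Hypothetical check: dump metadata only and look for a per-layer
+ # KV-head array rather than a single scalar value.
+ pip install gguf
+ gguf-dump --no-tensors ornstein-3.6-27b-rys-q4_k_m.gguf | grep -i head_count_kv
+ ```
+
+ A stock loader that ignores the per-layer array and applies the fixed every-four-layers rule computes the wrong KV-head count for the duplicated layer, which is what surfaces as the `check_tensor_dims` failure mentioned below.
+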
+ **Patched fork:** [https://github.com/DJLougen/llama.cpp](https://github.com/DJLougen/llama.cpp) (default branch `rys-qwen35`, fully backward-compatible).
+
+ Stock llama.cpp, Ollama, LM Studio, and any other inference runtime built on stock llama.cpp will currently fail to load this model with a `check_tensor_dims` error — this is expected until/unless the patch is upstreamed.
+
+ ## Support This Work
+
+ Our training compute is entirely self-funded. If this model is useful to you, consider supporting the lab:
+
+ **[Support on Ko-fi](https://ko-fi.com/djlougen)**
+
+ * * *
 
 ## Model Details
 
@@ -41,8 +49,31 @@ The fork now includes both per-layer `layer_types` support and an **SSM tensor p
 * **Context length:** 131,072 tokens
 * **License:** Apache-2.0
 
- ## Support This Work
-
- Our training compute is entirely self-funded. If this model is useful to you, consider supporting the lab:
-
- **[Support on Ko-fi](https://ko-fi.com/djlougen)**
+ ## Usage
+
+ ### Build the patched llama.cpp
+
+ ```bash
+ git clone https://github.com/DJLougen/llama.cpp.git
+ cd llama.cpp
+ git checkout rys-qwen35
+ cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
+ cmake --build build -j
+ ```
+
+ Drop `-DGGML_CUDA=ON` for a CPU-only build. The patch touches only the GGUF loader, so backend selection is independent of it.
+
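+ Before wiring anything else up, it is worth confirming that the patched build actually loads the model. A minimal smoke test with `llama-cli` (built alongside `llama-server`; the prompt and model path are placeholders):
+
+ ```bash
+ # If the loader patch is working, this prints a short completion instead of
+ # aborting with a check_tensor_dims error.
+ ./build/bin/llama-cli -m ornstein-3.6-27b-rys-q4_k_m.gguf \
+   -p "Hello from Canada:" -n 32 --n-gpu-layers 99
+ ```
+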
+ ### Download + run
+
+ ```bash
+ ./build/bin/llama-server \
+   -m ornstein-3.6-27b-rys-q4_k_m.gguf \
+   --host 0.0.0.0 --port 8080 \
+   --n-gpu-layers 99 --ctx-size 131072 \
+   --flash-attn on --jinja \
+   -ctk q4_0 -ctv q4_0
+ ```
+
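+ Once the server is up, it exposes llama.cpp's OpenAI-compatible HTTP API. A minimal sketch of a chat request (the prompt is a placeholder; the `--jinja` flag above enables the model's chat template for this route):
+
+ ```bash
+ # Query the local server via the OpenAI-compatible chat completions route.
+ curl http://localhost:8080/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "messages": [{"role": "user", "content": "Say hello in one sentence."}],
+     "max_tokens": 64
+   }'
+ ```
+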
+ ## License
+
+ Apache 2.0