| --- |
| base_model: GestaltLabs/Ornstein-3.6-27B |
| language: |
| - en |
| license: apache-2.0 |
| pipeline_tag: text-generation |
| tags: |
| - qwen3.5 |
| - qwen3.6 |
| - rys |
| - text-generation |
| - transformers |
| - safetensors |
| - canada |
| - sovereign-ai |
| --- |
| [ |
| # Ornstein-3.6-27B-RYS |
|
|
RYS-enhanced variant of the Ornstein-3.6-27B dense model. Layer 33 is duplicated using the **Repeat Your Self (RYS)** self-merge method, improving reasoning and instruction-following performance at the cost of only a single duplicated layer's worth of extra weights.
|
|
| > **GGUF quantizations:** [GestaltLabs/Ornstein-3.6-27B-RYS-GGUF](https://huggingface.co/GestaltLabs/Ornstein-3.6-27B-RYS-GGUF) |
|
|
## About Gestalt Labs
|
|
We are a proudly Canadian research collective working to advance **sovereign Canadian AI**: open-weight models that Canadians (and everyone else) can run locally, study, and build on without depending on closed foreign APIs. All training, fine-tuning, and quantization are done on local, self-funded compute. By supporting this work, you help keep frontier model development accessible, transparent, and under Canadian stewardship.
|
|
| ## Important: requires a patched llama.cpp |
|
|
RYS duplicates one of the middle layers, which breaks the hardcoded `full_attention_interval = 4` assumption in stock llama.cpp's Qwen3.5 loader. This model is converted with **per-layer `head_count_kv` metadata baked in**, and loading it requires a llama.cpp build that reads that per-layer metadata instead of falling back to the interval formula.
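To verify that a given GGUF actually carries the per-layer metadata, you can dump its keys with the `gguf` Python package that ships alongside llama.cpp. A minimal sketch; the exact key name is an assumption based on the usual `<arch>.attention.head_count_kv` naming convention:

```bash
pip install gguf
# An array-valued head_count_kv (one entry per layer) means per-layer metadata
# is present; a single scalar means the loader would fall back to the interval formula.
gguf-dump ornstein-3.6-27b-rys-q4_k_m.gguf | grep -i head_count_kv
```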
|
|
| **Patched fork:** [https://github.com/DJLougen/llama.cpp](https://github.com/DJLougen/llama.cpp) (default branch `rys-qwen35`, fully backward-compatible). |
|
|
Stock llama.cpp, and any runtime built on it (Ollama, LM Studio, etc.), will currently fail to load this model with a `check_tensor_dims` error. This is expected until the patch is upstreamed.
|
|
| ## Support This Work |
|
|
| Our training compute is entirely self-funded. If this model is useful to you, consider supporting the lab: |
|
|
| **[Support on Ko-fi](https://ko-fi.com/djlougen)** |
|
|
| * * * |
|
|
| ## Model Details |
|
|
| * **Architecture:** Qwen3.5 dense |
* **Parameters:** ~27B (dense; all parameters active at inference)
| * **Layers:** 65 (64 original + 1 RYS-duplicated layer 33) |
| * **Context length:** 131,072 tokens |
| * **License:** Apache-2.0 |
|
|
| ## Usage |
|
|
| ### Build the patched llama.cpp |
|
|
```bash
# Clone the patched fork; rys-qwen35 is its default branch
git clone https://github.com/DJLougen/llama.cpp.git
cd llama.cpp
git checkout rys-qwen35
# Release build with CUDA offload; see note below for CPU-only builds
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
```
|
|
| Drop `-DGGML_CUDA=ON` for a CPU-only build. The patch touches the GGUF loader; backend selection is independent. |
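As a quick sanity check that you are running the patched build, the standard `--version` flag in llama.cpp's common argument parser prints the commit the binaries were built from, which should point at the `rys-qwen35` branch:

```bash
# Prints build number and commit hash
./build/bin/llama-server --version
```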
|
|
| ### Download + run |
|
|
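If you don't already have a quant locally, `huggingface-cli` (from the `huggingface_hub` package) can fetch one from the GGUF repo. The filename below assumes the Q4_K_M quant referenced in the run command; check the repo for the exact filenames available:

```bash
pip install -U huggingface_hub
# Download a single quant into the current directory
huggingface-cli download GestaltLabs/Ornstein-3.6-27B-RYS-GGUF \
  ornstein-3.6-27b-rys-q4_k_m.gguf --local-dir .
```

Then launch the server with the patched build: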
| ```bash |
| ./build/bin/llama-server \ |
| -m ornstein-3.6-27b-rys-q4_k_m.gguf \ |
| --host 0.0.0.0 --port 8080 \ |
| --n-gpu-layers 99 --ctx-size 131072 \ |
| --flash-attn on --jinja \ |
| -ctk q4_0 -ctv q4_0 |
| ``` |
|
|
| ## License |
|
|
| Apache 2.0 |
|
|