---
base_model: GestaltLabs/Ornstein-3.6-27B
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- qwen3.5
- qwen3.6
- rys
- text-generation
- transformers
- safetensors
- canada
- sovereign-ai
---
![Ornstein-3.6-27B-RYS](Ornstein3.6-27B-RYS.png)
# Ornstein-3.6-27B-RYS

RYS-enhanced variant of the Ornstein-3.6-27B dense model. Layer 33 is duplicated using the **Repeat Your Self (RYS)** self-merge method, improving reasoning and instruction-following performance with no additional training and only a modest increase in inference compute (one extra layer).
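
As an illustration, here is a minimal sketch of the kind of layer-duplication self-merge RYS performs. This is not the exact script used to build the release; it assumes the base architecture loads through `AutoModelForCausalLM` in 🤗 Transformers.

```python
import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "GestaltLabs/Ornstein-3.6-27B"

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Insert a deep copy of decoder layer 33 right after the original, so the
# same transformation runs twice in sequence: 64 layers -> 65 layers.
layers = model.model.layers
layers.insert(34, copy.deepcopy(layers[33]))

# Keep the config consistent with the new depth.
model.config.num_hidden_layers = len(layers)

model.save_pretrained("Ornstein-3.6-27B-RYS")
tokenizer.save_pretrained("Ornstein-3.6-27B-RYS")
# Reload from disk before running inference: per-layer attributes such as
# `layer_idx` (used for KV-cache bookkeeping) are rebuilt on load.
```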

> **GGUF quantizations:** [GestaltLabs/Ornstein-3.6-27B-RYS-GGUF](https://huggingface.co/GestaltLabs/Ornstein-3.6-27B-RYS-GGUF)

## About Gestalt Lab

We are a proudly Canadian research collective working to advance **sovereign Canadian AI**: open-weight models that Canadians (and everyone else) can run locally, study, and build on without dependence on closed foreign APIs. All training, fine-tuning, and quantization are done on local, self-funded compute. By supporting this work, you help keep frontier model development accessible, transparent, and under Canadian stewardship.

## Important: requires a patched llama.cpp

RYS duplicates one of the middle layers, which breaks the hardcoded `full_attention_interval = 4` assumption in stock llama.cpp's Qwen3.5 loader. This model is converted with **per-layer `head_count_kv` baked in**, and you need a llama.cpp that reads that per-layer metadata instead of falling back to the interval formula.
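
You can verify whether a particular GGUF file carries the per-layer values with the `gguf` Python package that ships with llama.cpp. The key name below follows llama.cpp's usual `<arch>.attention.head_count_kv` convention and is an assumption for this architecture:

```python
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("ornstein-3.6-27b-rys-q4_k_m.gguf")

for field in reader.fields.values():
    if field.name.endswith("attention.head_count_kv"):
        # A per-layer conversion stores an array (one entry per layer);
        # a stock conversion stores a single scalar.
        values = [int(field.parts[idx][0]) for idx in field.data]
        print(f"{field.name}: {len(values)} value(s) -> {values}")
```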

**Patched fork:** [https://github.com/DJLougen/llama.cpp](https://github.com/DJLougen/llama.cpp) (default branch `rys-qwen35`, fully backward-compatible).

Stock llama.cpp, as well as Ollama, LM Studio, and any other inference runtime built on it, will currently fail to load this model with a `check_tensor_dims` error; this is expected until/unless the patch is upstreamed.

## Support This Work

Our training compute is entirely self-funded. If this model is useful to you, consider supporting the lab:

**[Support on Ko-fi](https://ko-fi.com/djlougen)**

* * *

## Model Details

* **Architecture:** Qwen3.5 dense
* **Parameters:** ~27B (dense, all parameters active)
* **Layers:** 65 (64 original + 1 RYS-duplicated layer 33)
* **Context length:** 131,072 tokens
* **License:** Apache-2.0

## Usage
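
### Transformers (safetensors)

The safetensors weights in this repo should load through 🤗 Transformers in the usual way. A minimal sketch, assuming your installed transformers version supports the base architecture (repo id inferred from this card's model name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GestaltLabs/Ornstein-3.6-27B-RYS"  # repo id assumed from the model name
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Name three Canadian provinces."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```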

### Build the patched llama.cpp

```bash
git clone https://github.com/DJLougen/llama.cpp.git
cd llama.cpp
git checkout rys-qwen35
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
```

Drop `-DGGML_CUDA=ON` for a CPU-only build. The patch touches the GGUF loader; backend selection is independent.

### Download + run

```bash
./build/bin/llama-server \
    -m ornstein-3.6-27b-rys-q4_k_m.gguf \
    --host 0.0.0.0 --port 8080 \
    --n-gpu-layers 99 --ctx-size 131072 \
    --flash-attn on --jinja \
    -ctk q4_0 -ctv q4_0
```
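
`llama-server` exposes an OpenAI-compatible HTTP API. Once the server is up, any OpenAI-style client can talk to it; a minimal Python example (the prompt is illustrative):

```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explain the RYS method in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```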

## License

Apache 2.0