How to use from SGLang
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
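# The SGLang docs also describe an extended install, pip install "sglang[all]",
# for optional backends (check the current docs; extras vary by release).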
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "GestaltLabs/Ornstein-3.6-27B-RYS" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "GestaltLabs/Ornstein-3.6-27B-RYS",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
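Before moving on, you can confirm the server is up and see what it is serving; SGLang's OpenAI-compatible layer includes the standard model-listing route (a minimal check, assuming the host and port configured above):

# List the model(s) the server is currently serving:
curl "http://localhost:30000/v1/models"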
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "GestaltLabs/Ornstein-3.6-27B-RYS" \
        --host 0.0.0.0 \
        --port 30000
# Call the server with the same curl command shown in the pip section above.
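On first start the container has to download the weights, so the server can take a while to come up; a simple readiness loop against the same /v1/models route avoids sending requests too early (a sketch, assuming the port mapping above):

# Block until the server starts answering:
until curl -sf "http://localhost:30000/v1/models" > /dev/null; do sleep 2; done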
Quick Links

Ornstein-3.6-27B-RYS

RYS-enhanced variant of the Ornstein-3.6-27B dense model. Layer 33 is duplicated using the Repeat Your Self (RYS) method, improving reasoning and instruction-following performance without increasing active parameter count at inference time.

GGUF quantizations: GestaltLabs/Ornstein-3.6-27B-RYS-GGUF

About Gestalt Lab

We are a proudly Canadian research collective working to advance sovereign Canadian AI: open-weight models that Canadians (and everyone else) can run locally, study, and build on without dependence on closed foreign APIs. All training, fine-tuning, and quantization are done on local, self-funded compute. By supporting this work, you help keep frontier model development accessible, transparent, and under Canadian stewardship.

Important: requires a patched llama.cpp

RYS duplicates one of the middle layers, which breaks the hardcoded full_attention_interval = 4 assumption in stock llama.cpp's Qwen3.5 loader. This model is converted with per-layer head_count_kv baked in, and you need a llama.cpp that reads that per-layer metadata instead of falling back to the interval formula.
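
To check that a given GGUF file actually carries the per-layer metadata, you can inspect its header with the gguf-dump tool from the gguf Python package (a sketch; the exact key names depend on the architecture string baked into the file):

pip install gguf
# Dump header metadata only and look for the per-layer KV-head counts:
gguf-dump --no-tensors ornstein-3.6-27b-rys-q4_k_m.gguf | grep -i head_count_kv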

Patched fork: https://github.com/DJLougen/llama.cpp (default branch rys-qwen35, fully backward-compatible).

Stock llama.cpp, Ollama, LM Studio, and any other inference runtime built on stock llama.cpp will currently fail to load this model with a check_tensor_dims error; this is expected behavior unless and until the patch is upstreamed.

Support This Work

Our training compute is entirely self-funded. If this model is useful to you, consider supporting the lab:

Support on Ko-fi


Model Details

  • Architecture: Qwen3.5 dense
  • Parameters: ~27B active
  • Layers: 65 (64 original + 1 RYS-duplicated layer 33)
  • Context length: 131,072 tokens
  • License: Apache-2.0

Usage

Build the patched llama.cpp

git clone https://github.com/DJLougen/llama.cpp.git
cd llama.cpp
git checkout rys-qwen35
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j

Drop -DGGML_CUDA=ON for a CPU-only build. The patch touches the GGUF loader; backend selection is independent.
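
To confirm the build succeeded and that you are running the binary you just built, llama.cpp executables report their build commit via --version (a quick sanity check, assuming the build directory layout above):

./build/bin/llama-server --version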

Download + run
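
The quantized file can be fetched from the GGUF repo with the huggingface-cli tool from the huggingface_hub package (a sketch; the filename matches the run command below, but check the GGUF repo for the exact files it publishes):

pip install -U "huggingface_hub[cli]"
huggingface-cli download GestaltLabs/Ornstein-3.6-27B-RYS-GGUF \
    ornstein-3.6-27b-rys-q4_k_m.gguf --local-dir .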

./build/bin/llama-server \
    -m ornstein-3.6-27b-rys-q4_k_m.gguf \
    --host 0.0.0.0 --port 8080 \
    --n-gpu-layers 99 --ctx-size 131072 \
    --flash-attn on --jinja \
    -ctk q4_0 -ctv q4_0
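
The -ctk q4_0 / -ctv q4_0 flags quantize the KV cache, which is what makes the full 131,072-token context practical in VRAM. Once up, llama-server speaks the same OpenAI-compatible chat API used in the SGLang examples above, so a plain curl request works (a sketch; a single-model llama-server does not require the "model" field):

curl -X POST "http://localhost:8080/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'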

License

Apache 2.0

Safetensors: 28B params (tensor types F32 and BF16)
Model tree for GestaltLabs/Ornstein-3.6-27B-RYS

  • Base model: Qwen/Qwen3.6-27B (this model is a finetune of it)
  • Quantizations: GestaltLabs/Ornstein-3.6-27B-RYS-GGUF