Upload README.md with huggingface_hub
README.md
CHANGED
@@ -19,6 +19,8 @@ language:
 - en
 ---
 
+.png)
+
 # Ornstein-3.6-27B-RYS
 
 **Permanent RYS layer-duplication** of [Ornstein-3.6-27B](https://huggingface.co/GestaltLabs/Ornstein-3.6-27B), the dense multimodal member of the Qwen 3.6 family with hybrid linear + full attention (Gated Delta Net).

@@ -131,7 +133,7 @@ Grab a quant from the [GGUF repo](https://huggingface.co/GestaltLabs/Ornstein-3.
 
 ## Support This Work
 
-
+I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.
 
 **[Support on Ko-fi](https://ko-fi.com/djlougen)**
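For context, "layer-duplication" merges of this kind are commonly produced with mergekit's `passthrough` merge method, which stitches together slices of a model's layer stack. Below is a minimal sketch of such a config; the layer ranges and duplicated span are illustrative assumptions only, not the actual recipe used for Ornstein-3.6-27B-RYS.

```yaml
# Hypothetical mergekit passthrough config for RYS-style layer duplication.
# Layer ranges are placeholders, NOT the recipe used for this model.
slices:
  - sources:
      - model: GestaltLabs/Ornstein-3.6-27B
        layer_range: [0, 40]        # keep the full original stack
  - sources:
      - model: GestaltLabs/Ornstein-3.6-27B
        layer_range: [20, 40]       # re-append a slice, duplicating layers 20-39
merge_method: passthrough
dtype: bfloat16
```

Run with `mergekit-yaml config.yml ./output-model` to materialize the enlarged checkpoint; the duplicated layers make the "permanent" part of the RYS edit, since the result is saved as a standalone model rather than patched at load time.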