---
license: other
license_name: neotoi-coder-community-license
language:
- en
- vi
base_model: Qwen/Qwen3-Coder-14B
tags:
- rust
- dioxus
- dioxus-0.7
- rsx
- fine-tuned
- raft
- code
- mlx
- gguf
- wcag
- accessibility
- tailwind
- unsloth
- qwen3
- local-llm
- continue-dev
pipeline_tag: text-generation
model-index:
- name: neotoi-coder-v2
results:
- task:
type: text-generation
metrics:
- type: custom
name: Dioxus 0.7 Weighted Exam (100Q)
value: 96.8
---

# Neotoi Coder v2.0
A Rust/Dioxus 0.7 specialist LLM: 96.8% on a 100-question weighted exam. Built with RAFT on a homelab RTX 3090 Ti. No cloud GPUs.

Read the whole story on RockyPod.com. The companion GitHub repo has benchmarks and integration guides.
## Direct Download

No account or approval required.
| File | Size | Format |
|---|---|---|
| neotoi-coder-v2.0-q4_k_m.gguf | 8.4GB | GGUF Q4_K_M |
| mlx/ weights | 7.8GB | MLX 4-bit |
## Quick Start

### Apple Silicon (mlx_lm)

```bash
pip install mlx-lm
mlx_lm server --model /path/to/neotoi-v2.0-mlx --port 8081
```
### Linux (Ollama)

```bash
ollama create neotoi-coder-v2 -f Modelfile
ollama run neotoi-coder-v2
```
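The `Modelfile` referenced above is not included in this card. A minimal sketch of what it might contain, assuming the GGUF sits in the same directory and borrowing the 0.2 temperature from the llama.cpp example below (the path and parameter values are illustrative assumptions, not shipped config):

```
# Hypothetical Modelfile sketch — path and parameters are assumptions.
FROM ./neotoi-coder-v2.0-q4_k_m.gguf
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
```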
### LM Studio

Download neotoi-coder-v2.0-q4_k_m.gguf above, then see the LM Studio setup guide.
### Continue.dev

See the Continue.dev config.
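The full config lives in the companion repo. As an illustration, a Continue.dev `config.json` model entry pointing at the Ollama model above could look like this (field names follow Continue's documented schema; the title string is arbitrary):

```json
{
  "models": [
    {
      "title": "Neotoi Coder v2",
      "provider": "ollama",
      "model": "neotoi-coder-v2"
    }
  ]
}
```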
### Zed Editor

See the Zed setup guide.
## Exam Results

| Tier | Score | Max | Status |
|---|---|---|---|
| T1 Fundamentals | 11/12 | 12 | ✅ |
| T2 RSX Syntax | 10/12 | 12 | ✅ |
| T3 Signal Hygiene | 12/12 | 12 | ✅ Perfect |
| T4 WCAG/ARIA | 14/14 × 1.5 | 21 | ✅ Perfect |
| T5 use_resource | 8/8 × 1.5 | 12 | ✅ Perfect |
| T6 Hard Reasoning | 10/10 × 2.0 | 20 | ✅ Perfect |
| T7 Primitives+CSS | 11/12 × 1.5 | 18 | ✅ |
| T8 GlobalSignal/i18n | 8/8 × 1.5 | 12 | ✅ Perfect |
| T9 Static Navigator | 6/6 × 1.5 | 9 | ✅ Perfect |
| T10 Dioxus 0.7.4 | 6/6 × 2.0 | 12 | ✅ Perfect |
| Total | 135.5/140 | 140 | 96.8% |
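The weighted total in the table can be reproduced with a few lines of arithmetic (tier weights taken from the Score column):

```python
# Reproduce the weighted exam total from the table above.
# Unweighted tiers: T1 (11), T2 (10), T3 (12).
unweighted = 11 + 10 + 12
# 1.5x tiers: T4 (14), T5 (8), T7 (11), T8 (8), T9 (6).
weighted_15 = 1.5 * (14 + 8 + 11 + 8 + 6)
# 2.0x tiers: T6 (10), T10 (6).
weighted_20 = 2.0 * (10 + 6)

total = unweighted + weighted_15 + weighted_20
maximum = 12 + 12 + 12 + 21 + 12 + 20 + 18 + 12 + 9 + 12
print(total, maximum, round(100 * total / maximum, 1))  # 135.5 140 96.8
```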
## What It Knows

- Dioxus 0.7 RSX brace syntax, never function-call style
- `use_signal` / `use_resource` with the correct three-arm match
- `r#for` on label elements only, never inputs
- `GlobalSignal`: `.write()`, not `.set()`, for statics
- WCAG 2.2 AAA: tooltip always in DOM, listbox/option nesting, `aria_labelledby` on all role containers
- dioxus-primitives: no manual ARIA on managed components
- `styles!()` macro for CSS modules
- Tailwind v4 utility classes and semantic tokens
- EN/VI i18n via pre-`rsx!` `let` bindings
- Dark mode via `document::eval` + CSS custom properties
- Static content navigation with `use_memo` filtering
- `use_context` panics without a provider, never returns `None`
- `WritableResultExt` from Dioxus 0.7.4
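The "three-arm match" above refers to the loading/success/error states of an async resource. A framework-free sketch of the pattern in plain Rust, over the `Option<Result<..>>` shape that Dioxus's `use_resource` yields once polled (the `render` function and its strings are illustrative, not the model's or Dioxus's API):

```rust
// Three-arm match over an async resource's state:
//   None          -> still loading
//   Some(Ok(v))   -> data arrived
//   Some(Err(e))  -> the fetch failed
fn render(state: &Option<Result<Vec<String>, String>>) -> String {
    match state {
        None => "Loading...".to_string(),
        Some(Ok(items)) => format!("Loaded {} items", items.len()),
        Some(Err(e)) => format!("Error: {e}"),
    }
}

fn main() {
    assert_eq!(render(&None), "Loading...");
    assert_eq!(render(&Some(Ok(vec!["a".into(), "b".into()]))), "Loaded 2 items");
    assert_eq!(render(&Some(Err("timeout".into()))), "Error: timeout");
}
```

Collapsing any two arms (e.g. treating `None` and `Some(Err(_))` the same) is the failure mode the exam tier checks for.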
## What It Does Not Know

- Playwright/E2E testing (out of scope)
- Non-Dioxus web frameworks
- Real-world WebSocket `Stream`+`Sink` patterns (v2.1 target)
## Enabling Thinking Mode

### LM Studio

| Field | Value |
|---|---|
| Before System | `<\|im_start\|>system\n` |
| After System | `<\|im_end\|>\n` |
| Before User | `<\|im_start\|>user\n` |
| After User | `<\|im_end\|>\n` |
| Before Assistant | `<\|im_start\|>assistant\n<think>\n` |
### llama.cpp

```bash
./llama-cli \
  -m neotoi-coder-v2.0-q4_k_m.gguf \
  -ngl 99 \
  --temp 0.2 \
  -p "<|im_start|>user\nYour question<|im_end|>\n<|im_start|>assistant\n<think>"
```
## Model Details
- Base model: Qwen3-Coder-14B
- Method: RAFT (Retrieval-Augmented Fine-Tuning)
- Dataset: 4,185 curated Dioxus 0.7 examples
- Training: 4 epochs, RTX 3090 Ti, ~4 hours
- Train loss: 0.3727 (from a clean Qwen3-14B base)
- Quantization: Q4_K_M (8.4 GB) and MLX 4-bit (7.8 GB)
## License

Neotoi Coder Community License v1.0. Commercial use of outputs is permitted; redistribution of the weights is prohibited.
## Credits
Built with Unsloth, Qwen3-Coder-14B, MLX, and Claude Code.