
# Neotoi Coder v2.0

A Rust/Dioxus 0.7 specialist fine-tuned from Qwen3-Coder-14B using RAFT (Retrieval-Augmented Fine-Tuning). Optimized for production-quality Dioxus 0.7 components with Tailwind v4, WCAG 2.2 AAA accessibility, GlobalSignal state management, i18n, dark mode, and static content navigation.

## What's New in v2.0

Significant improvements over v1.0 across every tier:

- New Tier 8 (GlobalSignal & i18n): correct `.write()` vs `.set()` semantics, pre-`rsx!` i18n bindings, dark mode via `document::eval`, sticky nav with `use_hook`
- New Tier 9 (Static Content Navigator): `use_memo` filtering, tag-based routing, deterministic LLM intent mapping over static content
- New Tier 10 (Dioxus 0.7.4 APIs): `WritableResultExt`, `use_context` panic behavior, WebSocket Stream+Sink, `consume_context` vs `use_context`
- T6 Hard Reasoning: up from 2/5 passes to a perfect 10/10
- T4 WCAG/ARIA: a perfect 14/14; tooltip always in the DOM, listbox/option nesting, `aria_labelledby` on all role containers
- T5 use_resource: a perfect 8/8; three-arm match, no `.ok()` wrapper, signal read inside the closure
- MLX format included for Apple Silicon (Ollama 0.19+, mlx_lm)
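The `.write()` vs `.set()` distinction called out above can be illustrated without Dioxus at all. The sketch below uses an invented std-only stand-in (`FakeSignal` is NOT the Dioxus API): `write()` hands back a guard for mutating the existing value in place, while `set()` swaps in a whole new value.

```rust
use std::cell::{Ref, RefCell, RefMut};

/// Invented stand-in for a signal (not the Dioxus API); it exists only
/// to show the semantic split between `write()` and `set()`.
struct FakeSignal<T>(RefCell<T>);

impl<T> FakeSignal<T> {
    fn new(value: T) -> Self {
        FakeSignal(RefCell::new(value))
    }
    /// Hand back a guard for mutating the existing value in place.
    fn write(&self) -> RefMut<'_, T> {
        self.0.borrow_mut()
    }
    /// Drop the old value and store a brand-new one.
    fn set(&self, value: T) {
        *self.0.borrow_mut() = value;
    }
    fn read(&self) -> Ref<'_, T> {
        self.0.borrow()
    }
}

fn main() {
    let locales = FakeSignal::new(vec!["en"]);
    locales.write().push("vi"); // in-place edit through the guard
    assert_eq!(*locales.read(), ["en", "vi"]);
    locales.set(vec!["en"]); // wholesale replacement
    assert_eq!(*locales.read(), ["en"]);
}
```

Per the model card's own rule of thumb, it is the guard-returning `.write()` form that applies to `GlobalSignal` statics.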

## Exam Results

### v2.0: 100-Question Weighted Exam

| Tier | Questions | Weight | Score | Max | Status |
|---|---|---|---|---|---|
| T1 Fundamentals | Q1–12 | 1.0 | 11/12 | 12 | ✅ |
| T2 RSX Syntax | Q13–24 | 1.0 | 10/12 | 12 | ✅ |
| T3 Signal Hygiene | Q25–36 | 1.0 | 12/12 | 12 | ✅ Perfect |
| T4 WCAG/ARIA | Q37–50 | 1.5 | 14/14 | 21 | ✅ Perfect |
| T5 use_resource | Q51–58 | 1.5 | 8/8 | 12 | ✅ Perfect |
| T6 Hard Reasoning | Q59–68 | 2.0 | 10/10 | 20 | ✅ Perfect |
| T7 Primitives+CSS | Q69–80 | 1.5 | 11/12 | 18 | ✅ |
| T8 GlobalSignal/i18n | Q81–88 | 1.5 | 8/8 | 12 | ✅ Perfect |
| T9 Static Navigator | Q89–94 | 1.5 | 6/6 | 9 | ✅ Perfect |
| T10 Dioxus 0.7.4 | Q95–100 | 2.0 | 6/6 | 12 | ✅ Perfect |
| **Overall** | Q1–100 | | 135.5/140 | 140 | ✅ 96.8% |
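As a sanity check, the weighted totals in the table fall out of a short calculation. Every number below is copied from the table; the helper names are invented for this sketch.

```rust
/// (raw score, raw max, weight) per tier, copied from the exam table.
const TIERS: [(f64, f64, f64); 10] = [
    (11.0, 12.0, 1.0), // T1 Fundamentals
    (10.0, 12.0, 1.0), // T2 RSX Syntax
    (12.0, 12.0, 1.0), // T3 Signal Hygiene
    (14.0, 14.0, 1.5), // T4 WCAG/ARIA
    (8.0, 8.0, 1.5),   // T5 use_resource
    (10.0, 10.0, 2.0), // T6 Hard Reasoning
    (11.0, 12.0, 1.5), // T7 Primitives+CSS
    (8.0, 8.0, 1.5),   // T8 GlobalSignal/i18n
    (6.0, 6.0, 1.5),   // T9 Static Navigator
    (6.0, 6.0, 2.0),   // T10 Dioxus 0.7.4
];

fn weighted_score() -> f64 {
    TIERS.iter().map(|(score, _, weight)| score * weight).sum()
}

fn weighted_max() -> f64 {
    TIERS.iter().map(|(_, max, weight)| max * weight).sum()
}

fn main() {
    let (score, max) = (weighted_score(), weighted_max());
    // Matches the Overall row: prints "135.5/140 = 96.8%"
    println!("{score}/{max} = {:.1}%", 100.0 * score / max);
}
```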

## Version History

| Version | Score | Exam | Status |
|---|---|---|---|
| v1.0 | 51/60 (85%) | 60Q standard | Published |
| v2.0 | 135.5/140 (96.8%) | 100Q weighted | Published |

## Model Details

- Base model: Qwen3-Coder-14B
- Method: RAFT (Retrieval-Augmented Fine-Tuning)
- Dataset: 4,185 curated Dioxus 0.7 examples (3 training runs)
- Scope: Rust + Dioxus 0.7 + Tailwind v4 + WCAG 2.2 AAA
- Quantization: Q4_K_M (8.4 GB) and MLX 4-bit (7.8 GB)
- Author: Kevin Miller, Jr.

## Read the Full Story

Detailed writeup on the training methodology, exam results, and infrastructure:

Neotoi Coder v2.0 on RockyPod.com


## Files

| File | Format | Size | Use case |
|---|---|---|---|
| `neotoi-coder-v2-q4_k_m.gguf` | GGUF Q4_K_M | 8.4 GB | LM Studio, llama.cpp, Ollama (Linux) |
| `mlx/` | MLX 4-bit | 7.8 GB | Ollama 0.19+ (Apple Silicon), mlx_lm |
| `neotoi-coder-v1-q4_k_m_final.gguf` | GGUF Q4_K_M | 8.4 GB | v1.0 (legacy) |

## Enabling Thinking Mode

### LM Studio

In the chat interface, open the prompt template settings and configure:

| Field | Value |
|---|---|
| Before System | `<\|im_start\|>system` |
| After System | `<\|im_end\|>` |
| Before User | `<\|im_start\|>user` |
| After User | `<\|im_end\|>` |
| Before Assistant | `<\|im_start\|>assistant\n<think>` |
| After Assistant | `<\|im_end\|>` |

### Ollama (Linux / GGUF)

```
FROM neotoi-coder-v2-q4_k_m.gguf
PARAMETER temperature 0.2
PARAMETER num_predict 4096
PARAMETER repeat_penalty 1.15
PARAMETER stop "<|im_end|>"
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
<think>
"""
SYSTEM You are Neotoi, an expert Rust and Dioxus 0.7 developer. Always think step-by-step before answering.
```

Save this as `Modelfile`, then build and run it, e.g. `ollama create neotoi-v2 -f Modelfile` followed by `ollama run neotoi-v2` (the model name is your choice).

### mlx_lm server (Apple Silicon / Ollama 0.19+)

```shell
mlx_lm server \
  --model path/to/mlx/ \
  --host 127.0.0.1 \
  --port 8080
```

Then point any OpenAI-compatible client at http://localhost:8080.

### llama.cpp

```shell
./llama-cli \
  -m neotoi-coder-v2-q4_k_m.gguf \
  -ngl 99 \
  --temp 0.2 \
  -p "<|im_start|>user\nYour question<|im_end|>\n<|im_start|>assistant\n<think>"
```

## What It Knows

- Dioxus 0.7 RSX brace syntax (never function-call style)
- `use_signal`, and `use_resource` with the correct three-arm match
- `r#for` on label elements only, never inputs
- `GlobalSignal`: `.write()`, not `.set()`, for statics
- WCAG 2.2 AAA: tooltip always in the DOM, listbox/option nesting, `aria_labelledby` on all role containers
- dioxus-primitives: no manual ARIA on managed components
- `styles!()` macro for CSS modules
- Tailwind v4 utility classes and semantic tokens
- EN/VI i18n via pre-`rsx!` `let` bindings
- Dark mode via `document::eval` + CSS custom properties
- Static content navigation with `use_memo` filtering
- `use_context` panics without a provider; it never returns `None`
- `WritableResultExt` from Dioxus 0.7.4
## What It Does Not Know

- Playwright/E2E testing (out of scope)
- Non-Dioxus web frameworks
- WebSocket Stream+Sink (simulated only; targeted for v2.1)

## License

Neotoi Coder Community License v1.0 (see the LICENSE file). Commercial use of model outputs is permitted; redistribution of the weights is prohibited; mental-health deployments require written permission.

## Credits

Built with:

- Unsloth: 2x faster fine-tuning
- TRL: SFTTrainer
- Qwen3-Coder-14B: base model
- MLX: Apple Silicon inference
- Claude Code: dataset pipeline and training infrastructure
- Dioxus: the framework this model specializes in