ARO Coder

A fine-tuned code generation model specialised in the ARO (Action Result Object) programming language.

ARO is a domain-specific language where every statement follows the pattern: Verb the <Result> preposition [the] <Object>.

Base model: mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit
Quantization: 4-bit (MLX)
Language: ARO
Training samples: 861
Syntax pass rate: 47%
Source label: dpo

Quick Start

MLX (Apple Silicon)

from mlx_lm import load, generate

model, tokenizer = load("ARO-Lang/aro-coder-4bit")

messages = [
    {"role": "system", "content": "You are an expert ARO programmer."},
    {"role": "user", "content": "Write an ARO feature set that retrieves a user by ID and returns an OK response."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)

MLX Server (OpenAI-compatible API)

python -m mlx_lm.server --model ARO-Lang/aro-coder-4bit --port 8080

curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "aro-coder", "messages": [{"role": "user", "content": "Write hello world in ARO"}]}'
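The same request can be issued from Python. This sketch only builds the OpenAI-style payload; actually POSTing it (e.g. with requests or urllib) requires the server started above to be running, and the helper name is illustrative:

```python
import json

def chat_request(content, model="aro-coder", system=None):
    """Build an OpenAI-compatible chat completion payload
    for the MLX server. `system` adds an optional system prompt."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": content})
    return {"model": model, "messages": messages}

payload = json.dumps(chat_request("Write hello world in ARO"))
# POST `payload` to http://localhost:8080/v1/chat/completions
# with Content-Type: application/json
```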

Ollama

ollama run aro-coder

Example Output

Prompt: Write an ARO Application-Start that starts an HTTP server.

(Application-Start: My API) {
    Log "Starting server..." to the <console>.
    Start the <http-server> with <contract>.
    Keepalive the <application> for the <events>.
    Return an <OK: status> for the <startup>.
}

What is ARO?

ARO is a DSL for expressing business features as Action-Result-Object statements. Every program is a directory of .aro files with event-driven feature sets:

(getUser: User API) {
    Extract the <id> from the <pathParameters: id>.
    Retrieve the <user> from the <user-repository> where id = <id>.
    Return an <OK: status> with <user>.
}

Key features:

  • Contract-first HTTP — routes defined in openapi.yaml, feature sets match operationId
  • Event-driven — feature sets triggered by events, not direct calls
  • Immutable bindings — every transformation produces a new name
  • Happy-path only — no error handling code; the runtime manages errors
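The statement shape described above can be sketched as a tiny recognizer. This is an illustrative, loose regex for the Verb the <Result> preposition [the] <Object> pattern, not the official ARO grammar:

```python
import re

# Loose sketch of the ARO statement shape; the real grammar is richer
# (string literals, where-clauses, etc.).
STATEMENT = re.compile(
    r"^(?P<verb>[A-Z][a-z]+)\s+"           # capitalized verb
    r"(?:an?\s+|the\s+)?"                  # optional article
    r"<(?P<result>[^>]+)>\s+"              # <Result>
    r"(?P<prep>from|to|with|for|where)\b"  # preposition
    r".*<(?P<object>[^>]+)>"               # <Object> (last <...> on the line)
    r".*\.$"                               # statements end with a period
)

def parse_statement(line: str):
    """Return the verb/result/prep/object of an ARO-shaped statement,
    or None if the line does not match the sketch."""
    m = STATEMENT.match(line.strip())
    return m.groupdict() if m else None
```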

Training

This model was trained with the ARO training pipeline:

  1. Corpus collection — 861 samples from Examples, Book, Wiki, Proposals, and real-world ARO applications
  2. Supervised fine-tuning — LoRA across code generation, debugging, Q&A, and explanation tasks
  3. DPO preference training — using aro check validation to build chosen/rejected pairs
  4. Iterative self-improvement — multiple rounds of generate-validate-retrain
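Step 3 above can be sketched as follows. The function name is illustrative, and `validate` stands in for running aro check on a candidate program:

```python
def build_preference_pairs(prompt_to_candidates, validate):
    """Pair each valid completion (chosen) with an invalid one
    (rejected) per prompt, yielding DPO-style training records.
    `validate` is a stand-in for an `aro check` call."""
    pairs = []
    for prompt, candidates in prompt_to_candidates.items():
        passed = [c for c in candidates if validate(c)]
        failed = [c for c in candidates if not validate(c)]
        for chosen in passed:
            for rejected in failed:
                pairs.append(
                    {"prompt": prompt, "chosen": chosen, "rejected": rejected}
                )
    return pairs
```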

License

This model and the ARO language are open source under the MIT License.
