# Llama-3.2-1B-nl2bash

A fine-tuned version of `mlx-community/Llama-3.2-1B-Instruct-4bit` for converting natural language descriptions into bash commands.

Fine-tuned using LoRA via mlx-lm on Apple Silicon.

## Training

| Property | Value |
|---|---|
| Base model | Llama-3.2-1B-Instruct (4-bit) |
| Method | LoRA (rank 8, 16 layers) |
| Dataset | `devinxx/nl2bash-combined` |
| Training examples | 27,748 |
| Iterations | 2,000 |
| Learning rate | 1e-4 (cosine decay) |
| Hardware | Apple Silicon (MLX) |
| Peak memory | ~2.9 GB |
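The hyperparameters above map onto mlx-lm's LoRA training tool, which accepts a YAML config. The fragment below is a rough sketch, not the actual config used for this card: key names should be checked against your installed mlx-lm version, the data path is a placeholder, and values not listed in the table (batch size, LoRA alpha, the exact cosine-decay settings) are omitted rather than guessed.

```yaml
# Sketch of an mlx_lm.lora config matching the table above (assumed keys).
model: mlx-community/Llama-3.2-1B-Instruct-4bit
train: true
data: data/nl2bash-combined   # placeholder: directory with train/valid JSONL
iters: 2000
learning_rate: 1e-4           # the card notes cosine decay; mlx-lm exposes
                              # schedules via its lr_schedule config option
num_layers: 16                # number of layers LoRA is applied to
lora_parameters:
  rank: 8
```

Under these assumptions, training would be launched with `mlx_lm.lora --config <file>.yaml`.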

## Usage

### With mlx-lm

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("devinxx/Llama-3.2-1B-nl2bash")

messages = [
    {"role": "system", "content": "You are a bash expert. Convert the user's natural language description into a single bash command. Output only the bash command, no explanation."},
    {"role": "user", "content": "find all python files modified in the last 7 days"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=False)
print(response)
# find . -name "*.py" -mtime -7
```

### CLI

```bash
mlx_lm.generate \
  --model devinxx/Llama-3.2-1B-nl2bash \
  --prompt "find all log files larger than 100MB"
```

## Example Outputs

| Natural language | Command |
|---|---|
| find all python files modified in the last week | `find . -name "*.py" -mtime -7` |
| list all running processes sorted by memory | `ps aux --sort=-%mem` |
| count lines in all csv files recursively | `find . -name "*.csv" \| xargs wc -l` |
| show disk usage of current directory sorted by size | `du -sh * \| sort -rh` |
| compress a folder to tar.gz | `tar -czf archive.tar.gz folder/` |

## Dataset

Training data: `devinxx/nl2bash-combined`

Combined from:

## Limitations

- Primarily covers common Unix/Linux file, text, and process operations
- Limited coverage of Docker, Kubernetes, and cloud CLIs, since the training data does not include them
- May produce incorrect flags or syntax for edge cases
- Always review generated commands before running them, especially those involving `rm`, `chmod`, or other destructive operations
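Given the last point, it can help to screen the model's output before executing anything. The sketch below is one illustrative approach, not part of this model card: the `DESTRUCTIVE` set and the `needs_review` helper are names invented here, and the set should be extended to match your own threat model.

```python
import shlex

# Program names whose presence warrants manual review before execution.
# Illustrative assumption only; extend as needed.
DESTRUCTIVE = {"rm", "chmod", "chown", "dd", "mkfs"}

def needs_review(command: str) -> bool:
    """Return True if a generated bash command mentions a potentially
    destructive program, or cannot be tokenized at all."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return True  # unparseable output: treat as unsafe
    return any(tok in DESTRUCTIVE for tok in tokens)
```

For example, `needs_review('find . -name "*.py" -mtime -7')` passes, while anything containing `rm` is flagged for a human to inspect. Note this is a coarse token check: it does not catch destructive flags of otherwise benign tools (e.g. `find ... -delete`).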

## License

Llama 3.2 Community License; see Meta's license terms.
