# TinyLlama-1.1B-Chat MLX LoRA — Smart Contract Vulnerabilities (OWASP)
This is a LoRA adapter trained to help spot common smart contract issues (Solidity). It produces short, explainable findings. It runs locally on Apple Silicon using MLX.
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- This repo: Adapter weights only (load on top of the base model)
- Goal: Fast, low-cost assistant for audits, code review triage, and compliance notes
- Not a replacement for expert review or production sign-off
## Why it matters (security + compliance)
- Clear, short findings you can store and review.
- Runs on your Mac (Apple Silicon), so data stays local.
- Cheap and repeatable fine-tuning as your policy and code evolve.
- Outputs follow a simple structure (Category, Explanation, PoC, Vulnerable) for easy checklists and audit trails.
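Because findings follow the Category / Explanation / PoC / Vulnerable structure described above, they can be parsed into checklist rows. A minimal sketch (the field names come from this card, but the model's exact output formatting may vary, so treat this as best-effort parsing):

```python
import re

FIELDS = ("Category", "Explanation", "PoC", "Vulnerable")

def parse_finding(text: str) -> dict:
    """Best-effort parse of a model finding into a dict keyed by
    the card's four fields, for audit trails and checklists."""
    result = {}
    for field in FIELDS:
        # Capture everything after "Field:" up to the next known field or end of text.
        pattern = rf"{field}:\s*(.*?)(?=\n(?:{'|'.join(FIELDS)}):|\Z)"
        m = re.search(pattern, text, flags=re.S)
        result[field] = m.group(1).strip() if m else ""
    return result

finding = parse_finding(
    "Category: Reentrancy\n"
    "Explanation: External call before state update.\n"
    "PoC: Attacker contract re-enters withdraw().\n"
    "Vulnerable: Yes"
)
```

Each parsed dict can then be appended to a CSV or JSONL audit log for later review.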
## Quick start (MLX)

### Option A — Download the adapter from the Hub

```python
from huggingface_hub import snapshot_download
from mlx_lm import load, generate

adapter_dir = snapshot_download("SatyamSinghal/tinyllama-mlx-lora-scv-1b")
model, tokenizer = load("TinyLlama/TinyLlama-1.1B-Chat-v1.0", adapter_path=adapter_dir)

prompt = (
    "You are a smart contract security auditor. Analyze the following Solidity code "
    "and list vulnerabilities, explanations, and PoC if relevant.\n"
    "Contract:\n"
    "function withdraw() public { require(balances[msg.sender] > 0); "
    "(bool success,) = msg.sender.call.value(balances[msg.sender])(\"\"); "
    "require(success); balances[msg.sender] = 0; }"
)
print(generate(model, tokenizer, prompt, max_tokens=200))
```
### Option B — Use a local adapter folder

```python
from mlx_lm import load, generate

model, tokenizer = load("TinyLlama/TinyLlama-1.1B-Chat-v1.0", adapter_path="out/tinyllama_scv_lora")
print(generate(model, tokenizer, "List two reentrancy risks in withdraw patterns.", max_tokens=120))
```
## Training at a glance

- Framework: Apple MLX + mlx-lm
- Hardware: Apple Silicon (on-device)
- Data: darkknight25/Smart_Contract_Vulnerability_Dataset
  - Malformed lines skipped safely
  - Reformatted to instruction pairs:
    - prompt: "You are a smart contract security auditor… Contract: …"
    - completion: Category, Explanation, PoC, Vulnerable
- 1,998 usable examples (1,898 train / 100 valid)
- LoRA params trained: 0.417% (4.6M of 1.1B)
- Hyperparams: iters=400, batch_size=8, lr=2e-4
- Peak memory: ~7.9 GB
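The dataset reformatting described above (instruction pairs, malformed lines skipped) can be sketched as follows. The row field names (`code`, `category`, `explanation`, `poc`, `vulnerable`) are assumptions about the dataset schema, not confirmed by this card:

```python
import json

PROMPT_HEADER = (
    "You are a smart contract security auditor. Analyze the following "
    "Solidity code and list vulnerabilities, explanations, and PoC if "
    "relevant.\nContract:\n"
)

def to_instruction_pairs(jsonl_lines):
    """Turn raw dataset lines into prompt/completion training pairs,
    silently skipping malformed JSON or rows missing required fields.
    Field names are assumed, not taken from the dataset docs."""
    pairs = []
    for line in jsonl_lines:
        try:
            row = json.loads(line)
        except json.JSONDecodeError:
            continue  # malformed lines skipped safely
        if not all(k in row for k in ("code", "category", "explanation")):
            continue  # incomplete rows are also dropped
        pairs.append({
            "prompt": PROMPT_HEADER + row["code"],
            "completion": (
                f"Category: {row['category']}\n"
                f"Explanation: {row['explanation']}\n"
                f"PoC: {row.get('poc', 'N/A')}\n"
                f"Vulnerable: {row.get('vulnerable', 'Yes')}"
            ),
        })
    return pairs

pairs = to_instruction_pairs([
    '{"code": "function f() {}", "category": "Reentrancy", "explanation": "x"}',
    'not valid json',
])
```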
Reproduce (CLI):

```shell
python -m mlx_lm lora \
  --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
  --train \
  --data /path/to/data/scv/train \
  --batch-size 8 \
  --iters 400 \
  --learning-rate 2e-4 \
  --adapter-path out/tinyllama_scv_lora
```
## Intended use
- Draft findings for audits and reviews.
- Triage code to highlight likely issues.
- Create compliance notes with short, consistent explanations.
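For the triage use case, one way to batch a codebase is to build one audit prompt per Solidity file and feed each to `generate()`. A minimal sketch (the directory layout and prompt wording are illustrative assumptions):

```python
import tempfile
from pathlib import Path

AUDIT_PREFIX = (
    "You are a smart contract security auditor. Analyze the following "
    "Solidity code and list vulnerabilities, explanations, and PoC if "
    "relevant.\nContract:\n"
)

def triage_prompts(source_dir: str):
    """Yield (filename, prompt) pairs for every .sol file under
    source_dir, ready to pass to mlx_lm's generate()."""
    for path in sorted(Path(source_dir).rglob("*.sol")):
        yield path.name, AUDIT_PREFIX + path.read_text()

# Demo with a throwaway directory and a hypothetical contract file.
with tempfile.TemporaryDirectory() as d:
    Path(d, "Vault.sol").write_text("function withdraw() public {}")
    results = list(triage_prompts(d))
```

Each resulting prompt can then be passed to `generate(model, tokenizer, prompt, ...)` as in the quick-start examples.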
## Limits and important notes
- Not for final security approval.
- Can miss issues or be wrong (hallucinations happen).
- Always keep a human in the loop.
## Responsible use
- Keep data local when you can.
- Save prompts/outputs for traceability.
- Document any human decisions made from the model’s suggestions.
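One lightweight way to keep prompts and outputs traceable is an append-only JSONL log with a timestamp and content hash. A sketch, assuming a local log file (the record format is not prescribed by this card):

```python
import datetime
import hashlib
import json
import os
import tempfile

def log_interaction(prompt: str, output: str, log_path: str = "audit_log.jsonl") -> dict:
    """Append one prompt/output record to a JSONL log so findings
    can be traced back to the exact model interaction later."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        # Hash lets reviewers verify the record was not edited after the fact.
        "sha256": hashlib.sha256((prompt + output).encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Demo writes to a throwaway path in the system temp directory.
tmp = os.path.join(tempfile.gettempdir(), "demo_audit_log.jsonl")
rec = log_interaction("List two reentrancy risks.", "Category: Reentrancy ...", log_path=tmp)
```

A human decision column (accepted / rejected / escalated) can be added to the same record to satisfy the review-documentation point above.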
## Credits
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Dataset: darkknight25/Smart_Contract_Vulnerability_Dataset
## Changelog
- v0.1: First public release (LoRA adapter, MLX, Apple Silicon)