# Nero1-0.5B
Nero1-0.5B is a specialized, lightweight coding model developed by NeuronicL. It is a full fine-tune of Qwen/Qwen2.5-Coder-0.5B-Instruct on the smirki/Agentic-Coding-Tessa dataset, optimized for agentic workflows, tool use, and complex code generation tasks.
## Model Description
Unlike parameter-efficient fine-tuning methods such as LoRA, Nero1-0.5B underwent a full parameter update. This allows the model to deeply integrate the agentic reasoning patterns found in the Tessa dataset, making it exceptionally capable of:
- Writing functional, production-ready code.
- Understanding and executing multi-step agentic instructions.
- Maintaining high performance in low-latency environments (Edge/Local development).
## Key Specifications
- **Base Model:** Qwen/Qwen2.5-Coder-0.5B-Instruct
- **Training Data:** smirki/Agentic-Coding-Tessa
- **Fine-tuning Method:** Full Parameter Fine-tuning (Full FT)
- **Parameters:** 0.49 billion
- **Context Length:** 32,768 tokens
## Usage
You can use Nero1-0.5B with the Hugging Face `transformers` library. Given its Qwen2.5-Coder backbone, it follows the standard ChatML-style prompt template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NeuronicL/Nero1-0.5B"

# Load the model and tokenizer; device_map="auto" places weights
# on a GPU when one is available, otherwise on the CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant specialized in agentic tasks."},
    {"role": "user", "content": "Write a Python script to scrape news headlines and save them to a JSON file."}
]

# Render the ChatML-style prompt and tokenize it on the model's device.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated response is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
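For reference, the ChatML-style template that `apply_chat_template` renders wraps each turn in `<|im_start|>role … <|im_end|>` markers. A minimal, dependency-free sketch of that format (the `to_chatml` helper is illustrative, not part of the library; the exact template shipped with the tokenizer is authoritative):

```python
def to_chatml(messages, add_generation_prompt=True):
    # Wrap each turn in ChatML markers, as used by Qwen2.5-style models.
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Leave an open assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write hello world in Python."},
])
print(prompt)
```

Seeing the raw string is useful when debugging prompts served outside `transformers` (e.g. through a llama.cpp or vLLM endpoint that expects pre-formatted text).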