Verilog Specialist 32B Adapters
A domain-specialized PEFT/LoRA adapter for Verilog and RTL generation, built on top of unsloth/qwen2.5-coder-32b-instruct-bnb-4bit.
This model is intended to improve large language model performance on hardware-description-language tasks such as:
- synthesizable Verilog generation
- RTL completion
- FSM generation
- Verilog bug fixing and rewriting
- HDL-oriented assistance inside multi-agent silicon design workflows
It is designed as a specialist model layer for AgentIC, a full-stack autonomous silicon engineering framework.
Model Details
Model Description
vxkyyy/verilog-specialist-32b-adapters is a lightweight adapter checkpoint trained to specialize a 32B coder model for Verilog / RTL-oriented text generation.
This repository contains adapter weights only and is not a standalone base model.
The adapter is best used when a workflow needs stronger HDL generation quality than a general-purpose coding model can provide, especially for:
- leaf RTL module generation
- synthesizable code rewrites
- hardware-style code completion
- iterative repair in verification/debug loops
Developed by: vxkyyy
Shared by: vxkyyy
Project: AgentIC / Buildstack Lab
Model type: PEFT LoRA adapter for causal language modeling / text generation
Language(s): English, Verilog, HDL-style technical text
License: -
Finetuned from model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
Model Sources
- Repository: https://huggingface.co/vxkyyy/verilog-specialist-32b-adapters
- Base model: https://huggingface.co/unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
- Project context: AgentIC (private / Buildstack Lab project context)
Uses
Direct Use
This model is intended for:
- Verilog RTL generation
- synthesizable HDL drafting
- FSM-oriented code generation
- hardware module completion
- rewriting buggy or low-quality RTL into cleaner synthesizable code
Example prompts:
- "Write synthesizable Verilog for a UART transmitter with active-low reset."
- "Generate a 2-process FSM for an AXI-lite write controller."
- "Rewrite this Verilog so it is synthesizable and avoids latches."
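Prompts like the ones above work best when they explicitly pin down reset style, clocking, and synthesis constraints. As a minimal illustration, here is a small prompt-builder helper (hypothetical, not part of this repository) that assembles such constrained requests:

```python
def build_rtl_prompt(module_desc, reset="active-low asynchronous",
                     clock="single posedge clk", extra=None):
    """Build a constrained Verilog-generation prompt (illustrative helper)."""
    constraints = [
        f"- reset: {reset}",
        f"- clocking: {clock}",
        "- synthesizable Verilog-2001 only (no delays, no initial blocks)",
    ]
    if extra:
        constraints += [f"- {c}" for c in extra]
    return (
        "Write synthesizable Verilog.\n"
        f"Task: {module_desc}\n"
        "Constraints:\n" + "\n".join(constraints)
    )

print(build_rtl_prompt("UART transmitter", extra=["8N1 framing"]))
```

The helper names and defaults are illustrative; the point is that every request should state reset polarity, clock discipline, and the synthesizability requirement up front rather than leaving them implicit.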
Downstream Use
This adapter is especially useful when plugged into larger systems such as:
- AgentIC
- RTL generation pipelines
- verification-aware HDL assistants
- code repair loops for hardware design
- educational and research workflows in digital design
Within AgentIC, this model is best used for:
- RTL leaf-node generation
- Verilog-specialized rewrite loops
- code-generation stages inside react_agent.py or designer.py
Out-of-Scope Use
This model is not intended to replace:
- simulation
- formal verification
- timing analysis
- CDC signoff
- synthesis signoff
- physical design tools
- tapeout decision-making without external validation
It should not be treated as a guaranteed-correct chip design engine.
Bias, Risks, and Limitations
This model inherits limitations from the underlying base model and from adapter-based specialization.
Known limitations include:
- It may generate syntactically valid but logically incomplete RTL.
- It does not guarantee synthesizability or protocol correctness.
- It may require multiple repair iterations for complex modules.
- It may perform better on smaller leaf modules than on full-system hardware design.
- It should not be used as the sole basis for fabrication decisions.
Because it is adapter-based, output quality is also bounded by the capabilities of the base model.
Recommendations
Users should:
- validate outputs with Verilator / Icarus Verilog
- run Yosys synthesis checks
- use formal or simulation-based verification where possible
- prefer constrained prompts that specify reset style, clocking behavior, and synthesis requirements
- use this model as part of a larger verification-aware workflow such as AgentIC
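As a lightweight first gate before running real tools, a heuristic pre-lint can flag obviously non-synthesizable constructs in generated RTL. The sketch below (an assumption of a sensible pattern list, not an official checker, and no substitute for Verilator/Icarus and Yosys) illustrates the idea:

```python
import re

# Constructs that usually indicate testbench-only or non-synthesizable code.
# Heuristic only -- a real flow must still run simulation and synthesis checks.
NON_SYNTH_PATTERNS = {
    "delay control": r"#\s*\d",
    "initial block": r"\binitial\b",
    "system task": r"\$\w+",
    "fork/join": r"\bfork\b",
}

def quick_synth_lint(verilog_src):
    """Return the names of suspected non-synthesizable constructs found."""
    return [name for name, pattern in NON_SYNTH_PATTERNS.items()
            if re.search(pattern, verilog_src)]

bad = "initial begin #10 q = 0; $display(q); end"
print(quick_synth_lint(bad))  # flags several testbench-style constructs
```

A check like this is cheap enough to run inside a repair loop after every generation, with flagged outputs sent back to the model for rewriting before the heavier Verilator/Yosys passes.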
How to Get Started with the Model
This repository contains adapter weights only.
Load it on top of the declared base model.
Transformers + PEFT
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "unsloth/qwen2.5-coder-32b-instruct-bnb-4bit"
adapter_id = "vxkyyy/verilog-specialist-32b-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype="auto",
)

# Attach the LoRA adapter weights on top of the 4-bit base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = """
Write synthesizable Verilog for a synchronous FIFO with:
- width = 8
- depth = 16
- active-low reset
- full and empty flags
- no non-synthesizable constructs
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
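Because the base model is an instruct-tuned Qwen2.5 checkpoint, prompts are usually wrapped in its ChatML-style chat template rather than sent raw. In real code, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`; the manual sketch below assumes the standard Qwen2.5 template and is shown only to make the format explicit:

```python
def to_chatml(user_msg, system_msg="You are a Verilog RTL assistant."):
    """Format a single-turn prompt in Qwen's ChatML style.

    Illustrative only -- prefer tokenizer.apply_chat_template(...) so the
    template always matches the tokenizer's own configuration.
    """
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

chat_prompt = to_chatml("Write synthesizable Verilog for a 4-bit Gray counter.")
print(chat_prompt)
```

The trailing `<|im_start|>assistant\n` leaves the template open so generation continues as the assistant's reply.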