Model Card for Qwen3-Coder-30B-A3B-Instruct (Ilograph fine-tuned, merged model)

A LoRA fine-tuned and merged version of Qwen/Qwen3-Coder-30B-A3B-Instruct, specialized for generating Ilograph Diagram Language (IDL) specifications from natural-language instructions.

The repository is intended to include the same IDL schema JSON and system prompt used in training, so you can reproduce the prompting setup for Ilograph diagrams.

Model Details

  • Developed by: Chris Mijangos (AI student architect at BYU)
  • Shared by: Brigham Young University (BYU)
  • Model type: Causal language model (decoder-only), fine-tuned Qwen/Qwen3-Coder-30B-A3B-Instruct with LoRA then merged
  • Language(s): Primarily English plus programming languages; capabilities depend on the base model and fine-tuning data
  • License: Same as base model; verify Qwen/Qwen3-Coder-30B-A3B-Instruct license terms before use
  • Finetuned from: Qwen/Qwen3-Coder-30B-A3B-Instruct

Model Sources

  • Repository: This model card and weights are shared via the associated Hugging Face repo
  • Demo: N/A (under construction)

Uses

Direct Use

Use this model to generate Ilograph (IDL) diagram specifications from natural-language instructions.
Pair it with the system prompt and IDL schema JSON included in the repository.

The model is intended for:

  • Creating IDL diagrams that describe resources, relationships, and sequences
  • Iterative, conversational refinement of diagrams (chat-style usage)
  • Code-like structured YAML outputs following the Ilograph Diagram Language
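For orientation, a minimal IDL specification might look like the following. This is a hand-written sketch: the authoritative field names and structure are defined by the schema JSON in the repository, not by this example.

```yaml
resources:
- name: Web App
  subtitle: Frontend
  children:
  - name: UI
- name: API
- name: Database

perspectives:
- name: Request flow
  relations:
  - from: UI
    to: API
    label: calls
  - from: API
    to: Database
    label: reads/writes
```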

Out-of-Scope Use

This model is not intended for:

  • Generating misleading or harmful content
  • Any use that violates the base model’s license or applicable laws

Bias, Risks, and Limitations

As with other large language models:

  • Outputs may reflect biases present in the base model and the fine-tuning data.
  • The model can produce incorrect or malformed IDL; diagrams should be validated before use.
  • The model is primarily trained for Ilograph diagrams, not general-purpose conversation.

Because the fine-tuning data is narrowly focused, even at 30B parameters the model may still struggle with:

  • Very complex, large-scale system diagrams
  • Highly custom or unusual IDL constructs outside what it has seen in training

Recommendations

  • Validate all generated IDL against the schema and your own checks.
  • Evaluate the model on your own tasks before deployment.
  • Keep a human in the loop when using outputs for critical documentation or system design.
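As a starting point for the first recommendation, a few cheap structural checks can be run on the parsed output before (not instead of) full schema validation. This is a hedged sketch: `basic_idl_checks` is a hypothetical helper, and real validation should check the parsed document against the schema JSON shipped in the repository (e.g. with a JSON Schema validator).

```python
def basic_idl_checks(doc):
    """Cheap sanity checks on a parsed IDL document (a Python dict,
    e.g. the result of yaml.safe_load on the model's output)."""
    errors = []
    if not isinstance(doc, dict):
        return ["top level must be a YAML mapping"]
    resources = doc.get("resources")
    if not isinstance(resources, list) or not resources:
        errors.append("expected a non-empty top-level 'resources' list")
    else:
        for i, res in enumerate(resources):
            if not isinstance(res, dict) or "name" not in res:
                errors.append(f"resources[{i}] is missing a 'name'")
    return errors

# A well-formed document passes; malformed ones are flagged.
print(basic_idl_checks({"resources": [{"name": "API"}]}))  # []
print(basic_idl_checks({"resources": "oops"}))
```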

How to Get Started with the Model

Load the merged fine-tuned model directly from the Hugging Face repo:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Brigham-Young-University/Qwen3-Coder-30B-A3B-Ilograph-Instruct"

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

# This is an instruct model, so format the request with the chat template
# rather than passing raw text to the tokenizer.
messages = [{"role": "user", "content": "Create a diagram with 3 resources"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Ilograph (IDL) system prompt and schema

Include the IDL schema JSON inside a system prompt and then append your instruction. For example:

You are an expert in the Ilograph Diagram Language (IDL). You have been trained on data that is formatted in the following way:

<insert the schema JSON here>

Your task is to create a valid IDL specification for the diagram described in the instruction you are given.

CRITICAL RULES:
- NEVER use JSON format
- NEVER use Mermaid syntax
- NEVER use any format except ilograph YAML
- Use YAML syntax with proper indentation

Here is the instruction:

The schema file (e.g., idl-2025-11-03.schema.json) should be included in the repository; inject its contents where indicated above, then add your instruction after “Here is the instruction:”.
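Putting the pieces together, the schema and instruction can be assembled into a chat-format request. The prompt text below is the template from this card; the `build_messages` helper itself is illustrative.

```python
SYSTEM_TEMPLATE = """You are an expert in the Ilograph Diagram Language (IDL). \
You have been trained on data that is formatted in the following way:

{schema}

Your task is to create a valid IDL specification for the diagram described in \
the instruction you are given.

CRITICAL RULES:
- NEVER use JSON format
- NEVER use Mermaid syntax
- NEVER use any format except ilograph YAML
- Use YAML syntax with proper indentation

Here is the instruction:"""

def build_messages(schema_json: str, instruction: str) -> list:
    """Fill the schema into the system prompt and pair it with the instruction."""
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE.format(schema=schema_json)},
        {"role": "user", "content": instruction},
    ]
```

The resulting list can then be passed to `tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")` before calling `model.generate`.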

Evaluation

No formal benchmark results are reported for this release.
Users are encouraged to evaluate the model on their own Ilograph workflows (e.g., diagram complexity, correctness, and edit iterations).
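One lightweight metric for such an evaluation is the fraction of generations that pass a validity check. In this sketch, `is_valid` stands in for whatever check you use (e.g. YAML parsing plus schema validation); the trivial check in the example is only a placeholder.

```python
def valid_output_rate(outputs, is_valid):
    """Fraction of generated IDL specs that pass a user-supplied validity check."""
    if not outputs:
        return 0.0
    return sum(1 for text in outputs if is_valid(text)) / len(outputs)

# Placeholder check: the output contains a top-level 'resources:' key.
rate = valid_output_rate(
    ["resources:\n- name: API", "sorry, I can't help with that"],
    lambda text: "resources:" in text,
)
print(rate)  # 0.5
```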

Model Card Authors

  • Chris Mijangos (BYU)

Model Card Contact

For questions about this model card or the model, please open an issue on the associated Hugging Face repository or contact through BYU.

Framework versions

  • PEFT 0.18.1