Model Card for GPT-OSS-20B (Ilograph fine-tuned, merged model)
A LoRA fine-tuned and merged version of unsloth/gpt-oss-20b, specialized for generating Ilograph Diagram Language (IDL) specifications from natural-language instructions. The model was trained with LoRA using Unsloth and then merged into a standalone checkpoint. It can be used directly as a regular Transformers causal language model.
Model Details
- Developed by: Chris Mijangos (AI student architect at BYU)
- Shared by: Brigham Young University (BYU)
- Model type: Causal language model (decoder-only), fine-tuned GPT-OSS-20B with LoRA then merged
- Language(s): Primarily English; capabilities depend on base model and fine-tuning data
- License: Same as base model; verify unsloth/gpt-oss-20b license terms before use
- Finetuned from: unsloth/gpt-oss-20b
Model Sources
- Repository: Brigham-Young-University/gpt-oss-20b-ilograph-instruct on Hugging Face (this model card and the merged weights)
- Demo: not yet available (under construction)
Uses
Direct Use
Load the model to generate Ilograph (IDL) diagram specifications from instructions. Use the system prompt and IDL schema in the repository (see “How to Get Started” below). The model is intended for:
- Creating IDL diagrams with resources, relationships, and sequences
- Iterative, conversational refinement of diagrams (chat-style usage)
- Structured YAML outputs following the Ilograph Diagram Language
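For orientation, here is a minimal sketch of the kind of IDL YAML the model targets. Field names follow the public Ilograph documentation (resources with `name`, `subtitle`, `description`, `color`; perspectives with `relations` using `from`/`to`/`label`); verify the exact fields against the schema JSON in this repository:

```yaml
resources:
- name: Web App
  subtitle: Frontend
  description: Serves the user interface
  color: CornflowerBlue
- name: API
  description: Handles requests from the web app

perspectives:
- name: Request flow
  relations:
  - from: Web App
    to: API
    label: calls
```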
Out-of-Scope Use
This model is not intended for high-risk or safety-critical applications without further evaluation. Do not use for generating misleading, harmful, or illegal content. Users are responsible for complying with applicable laws and the base model’s license.
Bias, Risks, and Limitations
As with other language models, this model may reflect biases present in the base model and in the fine-tuning data. Outputs should be validated for your use case. No formal bias or safety evaluation is provided with this release.
Due to limited and focused training data, the model is primarily suited for relatively simple Ilograph diagrams centered on resources, relationships, and sequences. For more complex, large-scale, or highly customized diagram structures, the model may not perform as well and additional fine-tuning or a larger base model may be required.
Recommendations
Users should evaluate the model on their own data and tasks and be aware of potential biases and limitations before deployment.
How to Get Started with the Model
Load the merged fine-tuned model directly from the Hugging Face repo:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Brigham-Young-University/gpt-oss-20b-ilograph-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)

# gpt-oss is a chat model, so format the request with its chat template.
messages = [
    {"role": "user", "content": "Create 3 resources with icons, descriptions, and colors"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
Ilograph (IDL) system prompt and schema
The repository includes a system prompt and an IDL schema (JSON). Use the schema to fill in the placeholder in the prompt, then append your instruction. Example system prompt:
You are an expert in the Ilograph Diagram Language (IDL). You have been trained on data that is formatted in the following way:
<insert the schema JSON here>
Your task is to create a valid IDL specification for the diagram. You will be given an instruction describing what to create.
CRITICAL RULES:
- NEVER use JSON format
- NEVER use Mermaid syntax
- NEVER use any format except ilograph YAML
- Use YAML syntax with proper indentation
Here is the instruction:
The schema is provided in the repository; inject its contents (e.g. as formatted JSON) where indicated above, then add your diagram instruction after “Here is the instruction:”.
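The assembly step above can be sketched as follows. This is a minimal example, not the repository's exact code: the template text mirrors the example prompt in this card, and the toy schema is a stand-in for the real schema JSON you should load from the repo:

```python
import json


def build_system_prompt(schema: dict) -> str:
    """Fill the schema placeholder in the system prompt template.

    The template below mirrors the example prompt shown in this card;
    copy the exact prompt text from the repository for real use.
    """
    schema_json = json.dumps(schema, indent=2)
    return (
        "You are an expert in the Ilograph Diagram Language (IDL). "
        "You have been trained on data that is formatted in the following way:\n"
        f"{schema_json}\n"
        "Your task is to create a valid IDL specification for the diagram. "
        "You will be given an instruction describing what to create.\n"
        "CRITICAL RULES:\n"
        "- NEVER use JSON format\n"
        "- NEVER use Mermaid syntax\n"
        "- NEVER use any format except ilograph YAML\n"
        "- Use YAML syntax with proper indentation\n"
        "Here is the instruction:"
    )


# Toy schema stand-in; load the real schema JSON from the repository instead.
toy_schema = {"resources": ["name", "description"], "perspectives": ["relations"]}
prompt = build_system_prompt(toy_schema)

# Pair the system prompt with your diagram instruction as chat messages.
messages = [
    {"role": "system", "content": prompt},
    {"role": "user", "content": "Create a diagram with a web app, an API, and a database"},
]
```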
Evaluation
No formal evaluation results are provided with this release. Users are encouraged to evaluate the model on their own Ilograph workflows and benchmarks.
Model Card Authors
- Chris Mijangos (BYU)
Model Card Contact
For questions about this model card or the model, please open an issue on the associated Hugging Face repository or contact through BYU.
Framework versions
- PEFT 0.18.1
- Transformers 4.56.2 (used for training; verify compatibility for your environment)