# Qwen3.5-9B-Danish-Instruct

Danish instruction-tuned version of `unsloth/Qwen3.5-9B`, fine-tuned on the `kobprof/skolegpt-instruct` dataset.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("AndersMK/Qwen3.5-9B-Danish-Instruct")
processor = AutoProcessor.from_pretrained("AndersMK/Qwen3.5-9B-Danish-Instruct")

messages = [
    # "You are a helpful assistant."
    {"role": "system", "content": [{"type": "text", "text": "Du er en hjælpsom assistent."}]},
    # "What is koldskål?"
    {"role": "user", "content": [{"type": "text", "text": "Hvad er koldskål?"}]},
]

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    enable_thinking=False, return_tensors="pt", return_dict=True,
)

outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
input_len = inputs["input_ids"].shape[1]
print(processor.decode(outputs[0][input_len:], skip_special_tokens=True))
```
## Training

- Base model: `unsloth/Qwen3.5-9B`
- Dataset: `kobprof/skolegpt-instruct`
- Method: LoRA fine-tuning with Unsloth
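The training script is not included in this card, but the LoRA method listed above has a simple core idea: the frozen base weight matrix `W` is augmented with a trainable low-rank update `B @ A`, scaled by `alpha / r`. The NumPy sketch below illustrates that arithmetic only; it is not the actual Unsloth training code, and all names and values (`d`, `r`, `alpha`) are illustrative.

```python
# Illustrative sketch of the LoRA update, not the actual training code.
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2        # hidden size and LoRA rank (toy values)
alpha = 16         # LoRA scaling factor

W = rng.standard_normal((d, d))          # frozen base weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialised

def lora_forward(x):
    # Base output plus the low-rank update scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d)
# With B zero-initialised, the adapted model starts identical to the base model
assert np.allclose(lora_forward(x), W @ x)
```

Because `B` starts at zero, fine-tuning begins from the base model's behaviour and only gradually moves the output, which is part of why LoRA is stable to train.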
## Acknowledgments

- Base model authors
- Dataset authors