# Qwen3-0.6B Recipe Chef

A fine-tuned version of Qwen/Qwen3-0.6B trained on 70,000 recipes from the RecipeNLG dataset. Give it a list of ingredients and it generates a complete recipe with a title, ingredient quantities, and step-by-step directions.

## Model Details

| Property | Value |
| --- | --- |
| Base model | Qwen/Qwen3-0.6B |
| Training data | RecipeNLG (70k samples) |
| Fine-tune method | LoRA (r=64, alpha=128) |
| Epochs | 2 |
| Training loss | 0.86 |
| Framework | Unsloth + TRL |
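To give a feel for what the LoRA settings in the table mean: the adapter update is scaled by `alpha / r` (here 128 / 64 = 2.0), and each adapted weight matrix gains two small trainable matrices of rank `r`. The sketch below illustrates the arithmetic only; the 1024-dimensional projection size is an assumption for illustration, not a value taken from this model's config.

```python
# Illustration of the LoRA hyperparameters above (r=64, alpha=128).
# The 1024x1024 matrix shape is a hypothetical example, not this model's config.

def lora_scaling(alpha: int, r: int) -> float:
    """The low-rank update B @ A is scaled by alpha / r before being added."""
    return alpha / r

def lora_params_per_matrix(d_in: int, d_out: int, r: int) -> int:
    """A rank-r adapter on a (d_in x d_out) weight adds A (r x d_in) plus B (d_out x r)."""
    return r * d_in + d_out * r

print(lora_scaling(128, 64))                    # -> 2.0
print(lora_params_per_matrix(1024, 1024, 64))   # -> 131072 extra trainable params
```

Because `r` is small relative to the hidden size, only a tiny fraction of the 0.6B base parameters is trained.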

## How to Use

### Option 1: With Unsloth (recommended, faster)

```python
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name     = "Aniq-63/qwen3-0.6B-recipe-finetuned",
    max_seq_length = 1024,
    load_in_4bit   = True,
)

# Switch the model into Unsloth's optimized inference mode.
FastLanguageModel.for_inference(model)

@torch.inference_mode()
def generate_recipe(ingredients: str) -> str:
    messages = [
        {
            "role": "system",
            "content": (
                "You are a professional chef assistant. "
                "When given a list of ingredients, generate a complete recipe with "
                "a title, structured ingredient list with quantities, and clear "
                "step-by-step directions."
            )
        },
        {
            "role": "user",
            "content": ingredients
        }
    ]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=False,  # skip Qwen3's thinking mode; answer directly
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens = 400,
        temperature    = 0.7,
        top_p          = 0.9,
        do_sample      = True,
        use_cache      = True,  # KV cache makes autoregressive decoding much faster
    )
    # Strip the prompt tokens and decode only the newly generated text.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(generate_recipe("chicken, garlic, onion, olive oil, tomato"))
```

### Option 2: With standard Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Aniq-63/qwen3-0.6B-recipe-finetuned",
    torch_dtype = torch.float16,
    device_map  = "auto",
)
tokenizer = AutoTokenizer.from_pretrained("Aniq-63/qwen3-0.6B-recipe-finetuned")

messages = [
    {
        "role": "system",
        "content": (
            "You are a professional chef assistant. "
            "When given a list of ingredients, generate a complete recipe with "
            "a title, structured ingredient list with quantities, and clear "
            "step-by-step directions."
        )
    },
    {
        "role": "user",
        "content": "chicken, garlic, onion, olive oil, tomato"
    }
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # skip Qwen3's thinking mode, as in Option 1
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens = 400,
    temperature    = 0.7,
    top_p          = 0.9,
    do_sample      = True,
)

# Strip the prompt tokens and decode only the newly generated text.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```
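The model returns the recipe as free text. If you want structured fields, a small heuristic parser can split it into title, ingredients, and directions. Note this is a sketch under an assumption: that generations label their sections "Ingredients" and "Directions". Actual outputs can vary between samples, so callers should handle missing keys.

```python
import re

def split_recipe(text: str) -> dict:
    """Heuristically split a generated recipe into title / ingredients / directions.

    Assumes the model emits "Ingredients" and "Directions" section headers on
    their own lines; this is an assumption about the output format, not a
    guarantee, so missing sections are returned as empty strings.
    """
    sections = {"title": "", "ingredients": "", "directions": ""}
    parts = re.split(r"(?im)^\s*(ingredients|directions)\s*:?\s*$", text.strip())
    sections["title"] = parts[0].strip()
    # re.split keeps the captured header before each section body.
    for label, body in zip(parts[1::2], parts[2::2]):
        sections[label.lower()] = body.strip()
    return sections

sample = (
    "Garlic Tomato Chicken\n"
    "Ingredients:\n- 2 chicken breasts\n"
    "Directions:\n1. Sear the chicken."
)
print(split_recipe(sample)["title"])  # -> Garlic Tomato Chicken
```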

## Training Details

- Dataset: RecipeNLG
- Fine-tune method: LoRA (Unsloth)
- Epochs: 2
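For readers who want to reproduce a similar fine-tune, below is a minimal configuration sketch with Unsloth + TRL using the hyperparameters listed above (r=64, alpha=128, 2 epochs). Everything else, including the target modules, batch size, learning rate, and the `recipe_dataset` variable, is a placeholder assumption and was not taken from this model's actual training run.

```python
# Configuration sketch only -- not this model's actual training script.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name     = "Qwen/Qwen3-0.6B",
    max_seq_length = 1024,
    load_in_4bit   = True,
)

# Attach LoRA adapters with the r/alpha values from the model card.
# Target modules are a typical choice for Qwen-style models (assumption).
model = FastLanguageModel.get_peft_model(
    model,
    r          = 64,
    lora_alpha = 128,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model         = model,
    train_dataset = recipe_dataset,  # chat-formatted RecipeNLG samples (placeholder)
    args = SFTConfig(
        num_train_epochs            = 2,
        per_device_train_batch_size = 8,     # placeholder
        learning_rate               = 2e-4,  # placeholder
        output_dir                  = "outputs",
    ),
)
trainer.train()
```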