This repository contains a fine-tuned and fully merged version of Qwen/Qwen3-0.6B, trained on the official Union Budget 2026–27 speech document.

The model has been instruction-tuned to answer questions related to:

  • Fiscal policy
  • Capital expenditure
  • Sector-wise allocations
  • Taxation changes
  • Government initiatives and reforms

This is a standalone model; no separate LoRA adapters or base-model downloads are required.


🔍 Model Details

  • Base model: Qwen/Qwen3-0.6B
  • Fine-tuning method: QLoRA → merged into base model
  • Model type: Causal Language Model
  • Language: English
  • Task: Instruction following, document-based Q&A
  • Model size: ~0.6B parameters
  • Deployment: CPU / GPU compatible

Training Dataset

The model was trained on an instruction-formatted dataset derived from the Union Budget 2026–27 PDF.

🔗 Dataset:
https://huggingface.co/datasets/your-username/qwen3-budget-2026-instructions

Dataset Structure

Each training sample includes:

  • instruction: Task description
  • input: Chunk of budget text
  • output: Target response
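A record in this shape might look like the following sketch (the field contents are invented for illustration, not actual dataset rows):

```python
import json

# Illustrative record in the instruction/input/output shape described
# above; the text in each field is an invented example.
sample = {
    "instruction": "Answer the question using only the budget excerpt below.",
    "input": "Excerpt: the proposed capital expenditure outlay for 2026-27 ...",
    "output": "According to the excerpt, the proposed capital expenditure ...",
}

# One JSONL line, a common storage format for instruction datasets.
line = json.dumps(sample, ensure_ascii=False)
print(line)
```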

Recommended Uses

  • Budget and policy question answering
  • Government document assistants
  • Educational and research use
  • RAG + fine-tuning hybrid systems
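In a RAG + fine-tuning hybrid, a retrieved budget passage plays the role of the `input` field from the training format above. A minimal sketch of composing such a prompt (the template and retrieved text are assumptions, not specified by this card):

```python
def build_prompt(instruction: str, context: str) -> str:
    """Compose a prompt in the instruction/input shape the model was
    trained on (the exact join format is an assumption)."""
    return f"{instruction}\n\n{context}"

# Hypothetical chunk returned by a retriever over the budget document.
chunk = "Capital expenditure outlay proposed for 2026-27 ..."
prompt = build_prompt("Summarise the allocation described below.", chunk)
print(prompt)
```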

Not Recommended Uses

  • Legal advice
  • Financial or investment advice
  • Real-time policy interpretation

How to Use

Load the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen3-0.6b-budget-2026"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto"
)
```
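Once loaded, a question can be posed through the tokenizer's built-in chat template. A minimal generation sketch (the question text is illustrative, and the repo id is the placeholder from above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen3-0.6b-budget-2026"  # placeholder id as above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the question with the model's chat template.
messages = [
    {"role": "user",
     "content": "What does the budget propose for capital expenditure?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
answer = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(answer)
```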
