Ambuj Tripathi - Indian Legal Llama 3.2 3B (QLoRA Adapter)

Fine-tuned Indian Legal AI | by Ambuj Kumar Tripathi

QLoRA fine-tuned adapter for unsloth/Llama-3.2-3B-Instruct trained on 14,543 Indian Legal QA pairs covering IPC, CrPC, and the Constitution of India.


⚠️ Version 1.0 - Full Epoch Training on Free Compute

🚨 Please read before using this model.

This model was trained to validate the complete QLoRA → GGUF pipeline on zero-cost infrastructure (Kaggle free GPU: 2x Tesla T4).

What works ✅

  • Domain locking - the model stays on Indian legal queries
  • IPC / CrPC / Constitution query format understood
  • Full 1,820 steps - a complete epoch over all 14,543 examples
  • Loss converged: 4.18 → 0.695

Known Limitations ❌

  • Factual hallucination is possible - always verify section numbers against official sources
  • 14,543 QA pairs is a small dataset - production systems need lakhs (hundreds of thousands) of examples
  • Do not use for actual legal proceedings or legal advice


📊 Training Details

  • Base Model: unsloth/Llama-3.2-3B-Instruct
  • Method: QLoRA (4-bit base + LoRA adapters)
  • LoRA Rank: 16
  • Trainable Params: 24.3M of 3.24B (0.75%)
  • Training Steps: 1,820 (full epoch)
  • Final Loss: 0.695
  • Dataset: 14,543 Indian Legal QA pairs
  • GPU: 2x Tesla T4 (Kaggle free tier)
  • Cost: ₹0
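As a sanity check on the trainable-parameter figure, the ~24.3M adapter size can be reproduced from the LoRA formula r × (d_in + d_out) per adapted weight matrix, assuming Llama-3.2-3B's published dimensions (28 layers, hidden size 3072, grouped-query KV dim 1024, MLP size 8192) and LoRA applied to all attention and MLP projections:

```python
# LoRA adds r * (d_in + d_out) trainable params per adapted matrix.
r = 16
hidden, kv, mlp, layers = 3072, 1024, 8192, 28  # Llama-3.2-3B dims (assumed)

per_layer = (
    2 * r * (hidden + hidden)   # q_proj, o_proj
    + 2 * r * (hidden + kv)     # k_proj, v_proj (grouped-query attention)
    + 3 * r * (hidden + mlp)    # gate_proj, up_proj, down_proj
)
total = per_layer * layers
print(f"{total / 1e6:.1f}M trainable params")  # ≈ 24.3M, matching the table
```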

πŸ“ Related Models


💻 How to Use

from unsloth import FastLanguageModel

# Load the 4-bit base model with this LoRA adapter applied.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "invincibleambuj/llama-3.2-3b-legal-india-qlora-v2",
    max_seq_length = 2048,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

# Build a Llama 3 chat-formatted prompt (system + user turns, open assistant header).
inputs = tokenizer(
    "<|start_header_id|>system<|end_header_id|>\n\nYou are an Indian Legal AI Assistant built by Ambuj Kumar Tripathi.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nWhat is IPC Section 302?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n",
    return_tensors="pt"
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=300, repetition_penalty=1.15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
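The prompt above is the raw Llama 3 chat template written out by hand. A small helper (hypothetical, not part of this repo) can assemble the same string from a list of turns instead of concatenating special tokens manually:

```python
def format_llama3_prompt(messages):
    """Build a raw Llama 3 chat prompt from (role, content) pairs,
    ending with an open assistant header for generation."""
    parts = [
        f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>\n"
        for role, content in messages
    ]
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([
    ("system", "You are an Indian Legal AI Assistant built by Ambuj Kumar Tripathi."),
    ("user", "What is IPC Section 302?"),
])
```

With transformers, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` achieves the same using the model's built-in template.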
To export the adapter for ONNX runtimes, Microsoft Olive's convert-adapters command can be used:

olive convert-adapters --adapter_path .\model\ --output_path legal-india --dtype float16

Output: Exported adapter weights to onnx.onnx_adapter

Important Notice

This model was fine-tuned and deployed by Ambuj Kumar Tripathi for educational, research, and skill-development purposes only.

The training data used in this project was collected from publicly available legal-learning resources and a Kaggle dataset used strictly for learning, experimentation, and non-commercial fine-tuning. This repository and its releases are not intended to provide legal advice, are not offered as a commercial legal product, and should be treated as an experimental AI learning project.

Correct Attribution

  • Fine-tuned and deployed by: Ambuj Kumar Tripathi
  • If the model output mentions any other creator name (including names surfaced from legacy training data), treat it as an incorrect model artifact, not as the correct attribution.

Known Limitations

  • The model may occasionally produce incorrect creator-name references due to legacy training-data artifacts.
  • The model may occasionally output formatting artifacts such as special tokens in some local GGUF runtimes.
  • Outputs may contain hallucinations or inaccuracies and should always be independently verified.

GGUF / Local Runtime Note

If you are running the GGUF model locally in tools like LM Studio, some raw model behaviors may still appear depending on the prompt template and runtime settings. For best results, use a strict system prompt and low-temperature preset.
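As an illustration of such a preset (all names and values here are assumptions, not settings shipped with the model), a strict low-temperature configuration might look like:

```python
# Hypothetical low-temperature preset for local GGUF runtimes.
# Parameter names follow common llama.cpp-style samplers; map them
# to your tool's settings (e.g. LM Studio's preset fields).
STRICT_LEGAL_PRESET = {
    "system_prompt": (
        "You are an Indian Legal AI Assistant. Answer only Indian legal "
        "questions and verify section numbers cautiously."
    ),
    "temperature": 0.2,      # low temperature reduces drift and token artifacts
    "top_p": 0.9,
    "repeat_penalty": 1.15,  # mirrors repetition_penalty from the usage example
    "max_tokens": 300,
}
```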


βš–οΈ Legal Disclaimer

⚠️ This model is provided strictly for educational, research, and skill-development purposes only. It does not constitute legal advice and should not be used for real-world legal compliance or proceedings. Outputs must be independently verified. Always consult a qualified advocate for specific legal matters.


🔗 Developer

  • 🌐 Portfolio: ambuj-portfolio-v2.netlify.app
  • 🤗 Hugging Face: invincibleambuj
  • 💻 GitHub: Ambuj123-lab