Nyaya-Llama-3.1-8B-Indian-Legal ⚖️🇮🇳

Nyaya-Llama is a specialized legal language model fine-tuned on Indian Legal Judgments. It is based on Meta Llama 3.1 8B and trained using Unsloth for efficient fine-tuning.

  • Nyaya (เคจเฅเคฏเคพเคฏ): Sanskrit/Hindi word for Justice.
  • Focus: Designed to understand, analyze, and summarize Indian legal documents, case laws, and reasoning.

📊 Model Details

  • Base Model: unsloth/Meta-Llama-3.1-8B-Instruct
  • Training Data: OpenNyAI Judgments (~12,000 Indian High Court & Supreme Court judgments).
  • Training Method: QLoRA (4-bit quantization) via Unsloth.
  • Epochs: 1 full epoch over the judgment subset.
  • Context Window: 8192 tokens.
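
Because the context window is 8192 tokens, long judgments may need truncation before inference. A rough pre-check (a sketch using the common ~4 characters-per-token heuristic, not the model's actual tokenizer, which gives exact counts) can flag oversized inputs:

```python
# Rough pre-check that a judgment fits the 8192-token context window.
# Uses the common ~4 characters-per-token heuristic as an approximation;
# for exact counts, run the text through the model's tokenizer instead.
MAX_CONTEXT_TOKENS = 8192
CHARS_PER_TOKEN = 4  # rough average for English legal prose

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_context(text: str, reserved_for_output: int = 512) -> bool:
    """True if the prompt likely fits, leaving room for generation."""
    return estimate_tokens(text) <= MAX_CONTEXT_TOKENS - reserved_for_output
```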

🚀 Usage

Installation

pip install unsloth
pip install --no-deps "xformers<0.0.26" "trl<0.9.0" peft accelerate bitsandbytes

Inference Code

from unsloth import FastLanguageModel
import torch

model_name = "Raazi29/Nyaya-Llama-3.1-8B-Indian-Legal"

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_name,
    max_seq_length = 8192,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Analyze this Indian legal judgment and extract the key reasoning.

### Input:
[Paste Legal Judgment Text Here]

### Response:
"""

inputs = tokenizer([prompt], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 512, repetition_penalty = 1.2)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
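
Rather than editing the template string by hand, the Alpaca-style prompt above can be assembled with a small helper (a hypothetical convenience function, not part of the model repo):

```python
# Build the Alpaca-style prompt format the model was fine-tuned on.
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""

def build_prompt(instruction: str, judgment_text: str) -> str:
    """Fill the template with a task description and the judgment text."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=judgment_text)
```

The result can be passed to the tokenizer exactly as in the snippet above.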

โš ๏ธ Limitations & Disclaimers

  • Legal Advice: This model is for research and development purposes only. Do not use it as a substitute for professional legal advice.
  • Citation formatting: The model may mimic the style of judgments by appending case citations to answers. Use string processing to clean outputs if needed.
  • Accuracy: While trained on real data, LLMs can hallucinate. Always verify citations against official reporters.
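
As noted under Citation formatting, trailing case citations can be stripped with simple string processing. One possible sketch (the regex pattern is an assumption covering common Indian reporter styles such as `AIR 1973 SC 1461` or `(2017) 10 SCC 1`, and will not catch every format):

```python
import re

# Hypothetical cleanup for trailing citations the model appends to answers.
# Matches common Indian reporter styles like "AIR 1973 SC 1461" and
# "(2017) 10 SCC 1"; extend the pattern for other reporters as needed.
CITATION_RE = re.compile(
    r"\s*(?:AIR\s+\d{4}\s+[A-Z]+\s+\d+|\(\d{4}\)\s+\d+\s+SCC\s+\d+)\s*$"
)

def strip_trailing_citation(text: str) -> str:
    """Remove one trailing citation, if present, from a model answer."""
    return CITATION_RE.sub("", text).rstrip()
```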

๐Ÿ› ๏ธ Training

Trained with Unsloth on NVIDIA GPUs.
