---
license: apache-2.0
base_model: "Qwen/Qwen2.5-0.5B-Instruct"
tags: ["legal-ai", "lora", "peft", "qlora", "indian-law", "legal-simplification", "accessibility", "irac", "transformers", "unsloth"]
language: ["en"]
pipeline_tag: text-generation
datasets: ["joyboseroy/inIRAC"]
library_name: transformers
---
# NyayaSaar-LoRA
NyayaSaar-LoRA is a lightweight LoRA adapter fine-tuned to simplify structured Indian legal reasoning into plain English.
The project focuses on making legal reasoning more accessible to non-lawyers interacting with the Indian legal system.
The model was trained with parameter-efficient fine-tuning (LoRA/QLoRA) on structured IRAC-style legal reasoning examples derived from the inIRAC dataset.
---
# Motivation
Indian legal documents are often difficult for ordinary citizens to understand because of:
* archaic terminology
* procedural complexity
* dense sentence structures
* formal legal phrasing
NyayaSaar-LoRA explores whether small language models can learn to preserve legal meaning while simplifying language and improving readability.
The project prioritizes:
* accessibility
* explainability
* low-resource adaptation
* efficient fine-tuning
---
# Example
## Input
```text
Issue: Whether the detention order violates Article 22.
Rule: Preventive detention laws require procedural safeguards.
Application: The petitioner argued safeguards were not followed.
Conclusion: The detention order is quashed.
```
## Output
```text
The court examined whether the detention was legal under the Constitution.
The law says preventive detention must follow proper safeguards.
The petitioner argued these safeguards were ignored.
The court agreed and cancelled the detention order.
```
---
# Technical Details
## Base Model
* Qwen2.5-0.5B-Instruct (4-bit quantized)
## Fine-Tuning Method
* LoRA / QLoRA
* PEFT
* Unsloth
* Supervised Fine-Tuning (SFT)
## Training Environment
* Google Colab
* Single GPU
* Low-resource fine-tuning setup
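
As a back-of-the-envelope illustration of why LoRA suits this low-resource setup, the sketch below compares trainable-parameter counts for a full update of a square weight matrix versus a rank-`r` LoRA update. The hidden size 896 matches Qwen2.5-0.5B; the rank `r=16` is a hypothetical choice, not necessarily the one used for this adapter.

```python
# LoRA replaces a full d x d weight update with two low-rank factors
# A (r x d) and B (d x r), so trainable parameters drop from d*d to 2*r*d.

def full_params(d: int) -> int:
    """Trainable parameters for a full update of one d x d matrix."""
    return d * d

def lora_params(d: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA update of the same matrix."""
    return 2 * r * d

d, r = 896, 16  # hidden size of Qwen2.5-0.5B; r=16 is a hypothetical rank
print(full_params(d))      # 802816
print(lora_params(d, r))   # 28672 -- roughly 3.6% of the full update
```

With 4-bit quantization of the frozen base weights (QLoRA), only these small adapter matrices are kept in full precision and updated, which is what makes single-GPU Colab training feasible.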
---
# Dataset
Training examples were derived from the inIRAC dataset:
https://huggingface.co/datasets/joyboseroy/inIRAC
The dataset contains structured Indian legal reasoning in IRAC format:
* Issue
* Rule
* Application
* Conclusion
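
Records in this format can be rendered into the prompt layout shown in the Example section above. A minimal sketch follows; the lowercase field names (`issue`, `rule`, `application`, `conclusion`) are assumptions about the dataset schema, so check the dataset card for the actual column names.

```python
def format_irac(record: dict) -> str:
    """Render one IRAC record into the four-line prompt layout.

    The field names here are assumed, not confirmed from the dataset card.
    """
    return (
        f"Issue: {record['issue']}\n"
        f"Rule: {record['rule']}\n"
        f"Application: {record['application']}\n"
        f"Conclusion: {record['conclusion']}"
    )

example = {
    "issue": "Whether the detention order violates Article 22.",
    "rule": "Preventive detention laws require procedural safeguards.",
    "application": "The petitioner argued safeguards were not followed.",
    "conclusion": "The detention order is quashed.",
}
print(format_irac(example))
```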
---
# Intended Use
This project is intended for:
* legal accessibility research
* plain-English legal explanation
* educational demonstrations
* low-resource legal NLP experimentation
* research into explainable legal AI
---
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_model = "joyboseroy/nyayasaar-lora"

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_model)

# Structured IRAC input, as in the Example section above.
prompt = """
Issue: Whether the detention order violates Article 22.
Rule: Preventive detention laws require procedural safeguards.
Application: The petitioner argued safeguards were not followed.
Conclusion: The detention order is quashed.
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
# Limitations
This model:
* does not provide legal advice
* may oversimplify nuanced legal reasoning
* may omit procedural subtleties
* should not be used in high-stakes legal decision-making
Outputs must always be reviewed by qualified legal professionals before practical use.
---
# Future Work
Potential future directions include:
* multilingual Indian legal simplification
* Hindi and Bengali adaptation
* graph-grounded legal reasoning
* retrieval-augmented legal explanation
* evaluation using readability and semantic-preservation metrics
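
As a sketch of what readability evaluation could look like, the snippet below implements the classic Flesch Reading Ease formula with a deliberately rough vowel-group syllable heuristic. A dedicated library (e.g. `textstat`) would be more accurate; this is only an illustration, not the project's evaluation pipeline.

```python
import re

def count_syllables(word: str) -> int:
    # Very rough heuristic: count runs of vowels; not linguistically exact.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent, n_words = max(1, len(sentences)), max(1, len(words))
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)

legal = ("The impugned detention order is hereby quashed for non-compliance "
         "with the procedural safeguards enumerated thereunder.")
simple = "The court cancelled the detention order."

# The simplified sentence should score substantially higher (easier to read).
print(flesch_reading_ease(legal), flesch_reading_ease(simple))
```

Semantic preservation would need a separate check, e.g. embedding similarity between input and output, since readability alone does not catch meaning loss.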
---
# Related Work
## Dataset
inIRAC Dataset:
https://huggingface.co/datasets/joyboseroy/inIRAC
## Research
Falkor-IRAC: Graph-Constrained Generation for Verified Legal Reasoning in Indian Judicial AI
https://arxiv.org/abs/2605.14665
---
# Citation
```bibtex
@misc{bose2026nyayasaar,
  title={NyayaSaar-LoRA: Simplifying Indian Legal Reasoning using PEFT},
  author={Joy Bose},
  year={2026},
  howpublished={Hugging Face Model Repository}
}
```
---
# Author
Joy Bose
Senior Data Scientist and Researcher
Research interests:
* Legal AI
* Explainable AI
* Graph Reasoning
* Ethical AI
* Efficient LLM Adaptation
---