Nepal Legal Mistral 7B πŸ‡³πŸ‡΅ βš–οΈ

A Mistral 7B model fine-tuned specifically for Nepal legal system queries, with focus on the National Penal Code 2017.

Model Description

This model is based on Mistral-7B-v0.1 and has been fine-tuned using LoRA (Low-Rank Adaptation) on 6,400 Nepal legal instruction-following examples. It can explain legal provisions, answer questions about Nepal law, and provide summaries of legal concepts in simple language.

🀝 Community Adoption

This model was adopted by the open-source community shortly after release.

  • Re-quantized into multiple GGUF formats by a well-known community maintainer
  • Distributed in Q2–Q8, IQ, and f16 variants for CPU-only inference
  • Enabled broader usage across low-resource and free-hardware environments

This demonstrates the model’s robustness, compatibility, and real-world usability.

Training Data

  • Dataset Size: 6,400 examples
  • Sources:
    • National Penal Code 2017
    • Nepal legal provisions
    • Legal Q&A pairs
  • Languages: Primarily English, with Nepal legal terminology

Training Details

  • Base Model: mistralai/Mistral-7B-v0.1
  • Method: LoRA fine-tuning
  • Trainable Parameters: 41.9M (1.11% of total)
  • Training Loss: 0.260600
  • Validation Loss: 0.390246
  • Training Time: ~3 hours on Kaggle T4 GPU
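
The card does not state the LoRA rank or target modules, so the exact breakdown of the 41.9M trainable parameters is unknown. As a rough illustration of where such a count comes from: each adapted weight matrix of shape (d_out, d_in) gains two low-rank factors contributing rank × (d_in + d_out) parameters. A minimal sketch (the rank and target modules below are assumptions for illustration, not this model's actual configuration):

```python
def lora_trainable_params(shapes, rank):
    """Trainable parameters added by LoRA adapters.

    Each adapted weight of shape (d_out, d_in) gains two low-rank
    factors: A of shape (rank, d_in) and B of shape (d_out, rank),
    i.e. rank * (d_in + d_out) parameters per matrix.
    """
    return sum(rank * (d_in + d_out) for (d_out, d_in) in shapes)

# Hypothetical example: Mistral-7B attention projections (q/k/v/o) across 32 layers
per_layer = [(4096, 4096), (1024, 4096), (1024, 4096), (4096, 4096)]  # q, k, v, o
total = lora_trainable_params(per_layer * 32, rank=32)
print(f"{total / 1e6:.1f}M trainable parameters")  # → 27.3M trainable parameters
```

At rank 32 over the attention projections alone this yields about 27.3M parameters; the card's 41.9M suggests a higher rank or additional target modules (e.g. the MLP projections) were used.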

Usage

Quick Start

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer in FP16 (the checkpoint is ~13.5 GB; loading in
# full FP32 roughly doubles the memory footprint)
model = AutoModelForCausalLM.from_pretrained(
    "yamraj047/nepal-legal-mistral-7b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("yamraj047/nepal-legal-mistral-7b")

# Ask a legal question
prompt = """### Instruction:
Explain this legal provision in simple language.

### Input:
What is the punishment for theft under National Penal Code 2017?

### Response:
"""

# Tokenize on the model's device; do_sample=True is required for
# temperature to take effect during generation
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract just the answer
answer = response.split("### Response:")[-1].strip()
print(answer)
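
Note that the final split in the snippet above returns the entire decoded string if the model never emits the response marker. A small defensive helper avoids that edge case (the function name and fallback behavior are my own, not part of the model's API):

```python
RESPONSE_MARKER = "### Response:"

def extract_answer(decoded: str) -> str:
    """Return the text after the last response marker, or an empty
    string if the marker is absent (so the caller can retry)."""
    marker_pos = decoded.rfind(RESPONSE_MARKER)
    if marker_pos == -1:
        return ""
    return decoded[marker_pos + len(RESPONSE_MARKER):].strip()

print(extract_answer("### Response:\nTheft is punishable by imprisonment..."))
```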

Example Questions

The model can answer questions like:

  • "What is the punishment for theft under Nepal law?"
  • "Explain the scope of National Penal Code 2017"
  • "What are the elements of the offense of obstructing a public servant?"
  • "What are fundamental rights in Nepal?"

Prompt Format

For best results, use this format:

### Instruction:
[What you want the model to do - e.g., "Explain this in simple language"]

### Input:
[Your specific legal question]

### Response:
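
The format above can be assembled programmatically; a minimal sketch (the helper name is mine, not part of the model card):

```python
def build_prompt(instruction: str, question: str) -> str:
    """Assemble the Instruction/Input/Response prompt format the model expects."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{question}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "Explain this in simple language",
    "What is the punishment for theft under National Penal Code 2017?",
)
```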

Example Outputs

Q: What is the punishment for theft under National Penal Code 2017?

The punishment for theft under National Penal Code 2017 is imprisonment for up to seven years and a fine of up to seventy thousand rupees. This means that if someone steals something, they could end up spending time in prison and having to pay a large fine.

Q: Explain the scope and jurisdiction of National Penal Code 2017

National Penal Code 2017 is the primary criminal law of Nepal, which sets out the criminal offenses and their corresponding punishments. The code applies to all crimes committed within the territory of Nepal, including those committed by Nepalese citizens or foreigners...

Limitations and Ethical Considerations

⚠️ IMPORTANT DISCLAIMERS:

  1. Not Legal Advice: This model provides general legal information but is NOT a substitute for professional legal advice. Always consult qualified legal professionals for legal matters.

  2. Training Data Limitations:

    • Trained primarily on National Penal Code 2017
    • May not reflect recent legal changes
    • Limited coverage of civil law, constitutional law, etc.
  3. Language: Primarily trained on English text with Nepal legal terminology. It may not perform well on queries written in Nepali.

  4. Accuracy: While the model performs well on test cases, it can still make mistakes or hallucinate information. Always verify important legal information.

  5. Bias: May reflect biases present in the training data.

Intended Use Cases

βœ… Appropriate Uses:

  • Legal education and learning
  • Quick reference for legal concepts
  • Understanding basic Nepal legal provisions
  • Research assistance
  • Explaining laws in simpler terms

❌ Inappropriate Uses:

  • Making legal decisions without professional advice
  • Representing clients in court
  • Drafting legal documents without lawyer review
  • Any situation requiring certified legal expertise

Model Performance

  • Training Loss: 0.260600
  • Validation Loss: 0.390246
  • Test Cases: Successfully answered questions about:
    • Theft and punishment
    • National Penal Code scope
    • Obstructing public servants
    • And other legal provisions

Technical Specifications

  • Architecture: Mistral 7B with LoRA adapters
  • Parameters: 7B total (41.9M trainable)
  • Context Length: 256 tokens (training)
  • Precision: FP16
  • Model Size: ~13.5 GB
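
The ~13.5 GB figure follows from parameter count and precision: FP16 stores 2 bytes per parameter. A rough sizing sketch (7.0B is a round-number assumption; the exact parameter count differs slightly):

```python
def checkpoint_size_gib(num_params: float, bytes_per_param: int) -> float:
    """Approximate size of the model weights in GiB."""
    return num_params * bytes_per_param / 2**30

fp16 = checkpoint_size_gib(7.0e9, 2)   # FP16: 2 bytes per parameter
fp32 = checkpoint_size_gib(7.0e9, 4)   # FP32: 4 bytes per parameter
print(f"FP16: ~{fp16:.1f} GiB, FP32: ~{fp32:.1f} GiB")
```

A 7B checkpoint in full FP32 roughly doubles to ~26 GiB, which is why FP16 weights or the community GGUF quantizations are typically used on commodity hardware.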

Citation

If you use this model in your research or application, please cite:

@misc{nepal-legal-mistral-7b,
  author = {yamraj047},
  title = {Nepal Legal Mistral 7B},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/yamraj047/nepal-legal-mistral-7b}
}

πŸš€ Live Demos

πŸ”Ή Self-Hosted Web Interface

This deployment runs the model in a self-hosted environment, demonstrating how the system can be deployed independently without relying on external APIs.

πŸ‘‰ https://huggingface.co/spaces/yamraj047/penal-legal-assistant


πŸ”Ή Fast Hosted API / Inference Demo

This version provides optimized, fast inference through a hosted Hugging Face Space.
It represents the API-style deployment, suitable for rapid testing, integrations, and demonstrations.

πŸ‘‰ https://huggingface.co/spaces/yamraj047/nepal-legal-assistant-fast


🎯 Why This Model Matters

  • First Nepal-focused legal LLM fine-tuned on national legal texts
  • Designed for practical legal Q&A and educational use
  • Successfully reused and redistributed by the community within 24 hours of release

License

This model is released under the Apache 2.0 license, the same license as the base Mistral model.

Acknowledgments

  • Base model: Mistral-7B-v0.1 by Mistral AI
  • Training platform: Kaggle
  • Framework: Hugging Face Transformers + PEFT

Contact

For questions, issues, or feedback about this model, please open an issue on the model repository.


Disclaimer: This is an AI model for educational and research purposes. It does not provide legal advice and should not be used as a substitute for consultation with qualified legal professionals.
