Gemma 3 12B IT - German Legal QA LoRA Adapter

This is a LoRA (Low-Rank Adaptation) adapter for the Gemma 3 12B Instruct model, fine-tuned on GerLayQA (paraphrased version), a German legal question-answering dataset.

Model Details

  • Base Model: unsloth/gemma-3-12b-it
  • Model Type: LoRA Adapter
  • Language: German (de)
  • Domain: Legal Question-Answering
  • Training Dataset: GerLayQA (paraphrased version)

LoRA Configuration

  • Rank (r): 16
  • Alpha: 32
  • Dropout: 0.05
  • Target Modules: Attention and MLP layers (q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj)
  • Task Type: Causal Language Modeling

Training Details

  • Sequence Length: 4096
  • Batch Size: 32
  • Learning Rate: 2e-05
  • Epochs: 7
  • Checkpoint: 154
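In practice an effective batch size like 32 is usually realized as per-device batch size × gradient accumulation steps (× number of GPUs). The decomposition below is purely hypothetical; the card only states the effective value:

```python
# Hypothetical decomposition of the effective batch size of 32;
# none of these individual values are stated in the card.
per_device_batch = 4
grad_accum_steps = 8
num_gpus = 1

effective_batch = per_device_batch * grad_accum_steps * num_gpus
print(effective_batch)  # 32
```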

Usage

To use this adapter with the base model:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-12b-it",
    device_map="auto",
    torch_dtype="auto"
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-12b-it")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "DomainLLM/gemma-3-12b-it-german-geralayqa-paraphrased-lora")

# Generate text
prompt = "Was ist ein Werkvertrag nach deutschem Recht?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
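Gemma 3 IT checkpoints are chat-tuned, so wrapping the question in the tokenizer's chat template generally produces better answers than feeding a raw prompt. A minimal sketch, assuming `model` and `tokenizer` are loaded as in the snippet above and the standard transformers `apply_chat_template` API:

```python
# Build the conversation in the role/content format expected by
# tokenizer.apply_chat_template.
messages = [
    {"role": "user", "content": "Was ist ein Werkvertrag nach deutschem Recht?"}
]

# With `model` and `tokenizer` loaded as shown above, uncomment:
# inputs = tokenizer.apply_chat_template(
#     messages, add_generation_prompt=True, return_tensors="pt"
# ).to(model.device)
# outputs = model.generate(inputs, max_new_tokens=512)
# # Decode only the newly generated tokens, skipping the prompt.
# print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```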

Merging the Adapter (Optional)

If you want to merge the LoRA weights into the base model:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-12b-it",
    device_map="auto",
    torch_dtype="auto"
)

# Load and merge LoRA adapter
model = PeftModel.from_pretrained(base_model, "DomainLLM/gemma-3-12b-it-german-geralayqa-paraphrased-lora")
merged_model = model.merge_and_unload()

# Save merged model, along with the tokenizer so the directory is self-contained
merged_model.save_pretrained("./merged-gemma-german-legal")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-12b-it")
tokenizer.save_pretrained("./merged-gemma-german-legal")

Applications

This model is designed for:

  • German legal question answering
  • Legal document understanding
  • Legal reasoning and analysis
  • German civil law (BGB) questions
  • Legal paragraph interpretation

Limitations

  • The adapter is specialized for the German legal domain
  • Performance may vary on non-legal or non-German texts
  • Outputs should not be used as a substitute for professional legal advice

Citation

If you use this model, please cite:

@misc{gemma-german-legal-lora,
  title={Gemma 3 12B IT - German Legal QA LoRA Adapter},
  author={DomainLLM},
  year={2025},
  howpublished={\url{https://huggingface.co/DomainLLM/gemma-3-12b-it-german-geralayqa-paraphrased-lora}}
}

License

This adapter is released under the Apache 2.0 License. Note that use of the underlying Gemma 3 base model remains subject to Google's Gemma Terms of Use.
