GRUPO # 8

Members:

  • Sharon Alejandra Calcina

GDPR Q&A – Qwen2.5 LoRA Model

This repository contains LoRA adapters fine-tuned on a GDPR Question–Answering dataset derived from Regulation (EU) 2016/679 (GDPR).

The model is intended for educational and informational purposes only and does not provide legal advice.

Academic Information

  • Course / Practice: Fine-tuning & Distillation (GDPR QA)
  • Group: Grupo 8
  • Students:
    • Sharon Alejandra Calcina
  • Organization: umsa-v1

Base Model

  • Base model: Qwen/Qwen2.5-0.5B-Instruct
  • Fine-tuning method: Supervised Fine-Tuning (SFT)
  • Adaptation: LoRA (PEFT)
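The LoRA adapters in this repository add a small low-rank update on top of the frozen base weights rather than replacing them. The idea can be sketched in plain Python; the matrices, rank, and scaling below are toy values for illustration, not the actual training configuration:

```python
# Toy illustration of the LoRA update: the adapter stores two small
# matrices A (r x d_in) and B (d_out x r), and the effective weight
# becomes W_eff = W + (alpha / r) * (B @ A). Only A and B are trained.

def matmul(X, Y):
    """Plain-Python matrix multiply for small dense matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A without modifying W."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * ba for w, ba in zip(w_row, ba_row)]
            for w_row, ba_row in zip(W, BA)]

# Frozen 2x2 base weight, rank-1 adapter (r = 1, alpha = 2).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]    # r x d_in = 1 x 2
B = [[0.5], [0.5]]  # d_out x r = 2 x 1
print(lora_effective_weight(W, A, B, alpha=2.0, r=1))
# → [[2.0, 1.0], [1.0, 2.0]]
```

Because only A and B are stored, the adapter repository stays far smaller than a full copy of the 0.5B-parameter base model.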

Dataset

The model was trained using the following dataset:

https://huggingface.co/datasets/umsa-v1/dataset_regulations-eu_grupo8-SharonCalcina_final

The dataset contains GDPR-related question–answer pairs with paraphrased variants.
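For illustration, a record in such a QA dataset pairs a GDPR question with a reference answer plus paraphrased variants of the question. The field names below are an assumption for the sketch, not the dataset's actual schema; check the dataset card for the real column names:

```python
# Hypothetical record layout for a GDPR QA pair with paraphrases.
# The field names ("question", "answer", "paraphrases") are illustrative
# assumptions, not the dataset's verified schema.
example = {
    "question": "What rights does a data subject have under GDPR?",
    "answer": (
        "GDPR grants data subjects rights including access, rectification, "
        "erasure, restriction of processing, data portability, and objection."
    ),
    "paraphrases": [
        "Which rights does the GDPR give to individuals?",
        "What can a person demand regarding their personal data under GDPR?",
    ],
}

# Each paraphrase becomes its own (question, answer) training pair,
# so one record yields several supervised examples.
pairs = [(example["question"], example["answer"])] + [
    (p, example["answer"]) for p in example["paraphrases"]
]
print(len(pairs))  # → 3
```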

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",
    device_map="auto"
)

model = PeftModel.from_pretrained(
    base_model,
    "umsa-v1/model_regulations-eu_grupo8-SharonCalcina_final"
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Qwen2.5-Instruct expects chat-formatted input, so apply the chat template.
messages = [{"role": "user", "content": "What rights does a data subject have under GDPR?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Model Card for qwen2_5_lora_grupo8

This model is a fine-tuned version of Qwen/Qwen2.5-0.5B-Instruct. It has been trained using TRL.

Training procedure

This model was trained using Supervised Fine-Tuning (SFT) with a Low-Rank Adaptation (LoRA) approach on top of the Qwen/Qwen2.5-0.5B-Instruct base model.
The training data consists of GDPR-related question–answer pairs with paraphrased variants.
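In SFT on QA pairs, the loss is typically computed only on the answer tokens: prompt positions in the label sequence are masked with -100 so the cross-entropy loss ignores them. A dependency-free sketch of that masking (the token IDs are made up for illustration):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def build_sft_labels(prompt_ids, answer_ids):
    """Concatenate prompt + answer; mask the prompt in the labels so
    the model is only trained to predict the answer tokens."""
    input_ids = list(prompt_ids) + list(answer_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return input_ids, labels

# Made-up token IDs standing in for a tokenized question and answer.
prompt_ids = [101, 2054, 2916]
answer_ids = [3446, 4567, 102]
input_ids, labels = build_sft_labels(prompt_ids, answer_ids)
print(labels)  # → [-100, -100, -100, 3446, 4567, 102]
```

Libraries such as TRL handle this masking internally; the sketch just makes the mechanism explicit.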

Framework versions

  • PEFT: 0.18.1
  • TRL: 0.27.2
  • Transformers: 5.1.0
  • PyTorch: 2.1.0
  • Datasets: 4.5.0
  • Tokenizers: 0.22.2

📖 Citations

If you use TRL, please cite:

@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall
                  and Edward Beeching and Tristan Thrush and Nathan Lambert
                  and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year         = 2022,
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}