Instructions for using `youssefbelghmi/MNLP_M3_mcqa_model` with libraries, inference providers, and local apps.

## Transformers

How to use youssefbelghmi/MNLP_M3_mcqa_model with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="youssefbelghmi/MNLP_M3_mcqa_model")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("youssefbelghmi/MNLP_M3_mcqa_model")
model = AutoModelForCausalLM.from_pretrained("youssefbelghmi/MNLP_M3_mcqa_model")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
## vLLM

How to use youssefbelghmi/MNLP_M3_mcqa_model with vLLM:

Install from pip and serve the model:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "youssefbelghmi/MNLP_M3_mcqa_model"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "youssefbelghmi/MNLP_M3_mcqa_model",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
## SGLang

How to use youssefbelghmi/MNLP_M3_mcqa_model with SGLang:

Install from pip and serve the model:

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "youssefbelghmi/MNLP_M3_mcqa_model" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "youssefbelghmi/MNLP_M3_mcqa_model",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Or use the official Docker image:
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "youssefbelghmi/MNLP_M3_mcqa_model" \
    --host 0.0.0.0 \
    --port 30000
```

The server can then be called with the same curl request as above.

## Docker Model Runner
How to use youssefbelghmi/MNLP_M3_mcqa_model with Docker Model Runner:
```bash
docker model run hf.co/youssefbelghmi/MNLP_M3_mcqa_model
```
---
base_model: tocico28/MNLP_M3_dpo_model
datasets: youssefbelghmi/MNLP_M3_mcqa_dataset
library_name: transformers
model_name: MNLP_M3_dpo_mcqa_model
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# MNLP M3 MCQA Model
This model is a fine-tuned version of [tocico28/MNLP_M3_dpo_model](https://huggingface.co/tocico28/MNLP_M3_dpo_model) on the [youssefbelghmi/MNLP_M3_mcqa_dataset](https://huggingface.co/datasets/youssefbelghmi/MNLP_M3_mcqa_dataset), a large-scale collection of multiple-choice questions designed for evaluating and training models in **STEM** domains (science, math, engineering, medicine, etc.).
The [tocico28/MNLP_M3_dpo_model](https://huggingface.co/tocico28/MNLP_M3_dpo_model) is itself a fine-tuned version of **Qwen/Qwen3-0.6B-Base** using a dataset of preference-labeled STEM response pairs collected through a collaborative classroom annotation effort.
It has been trained using [TRL](https://github.com/huggingface/trl) as part of the final milestone of the **CS-552: Modern NLP** course at EPFL (Spring 2025).
## Task
**Multiple-Choice Question Answering (MCQA):** Given a question and four answer options (A–D), the model must complete the prompt with the correct option letter only (e.g., `A`, `B`, `C`, or `D`). It was trained with rationales during supervision but outputs only the letter during inference, making it compatible with evaluation frameworks such as LightEval.
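As an illustration of letter-only prediction, one can compare the model's next-token scores for the four option letters, which is roughly how likelihood-based MCQA harnesses such as LightEval evaluate it. This is a minimal sketch; the prompt below is a simplified stand-in, not the exact template used in training or evaluation:

```python
# Minimal sketch of letter-level scoring; the prompt is illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "youssefbelghmi/MNLP_M3_mcqa_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

# Score the single-token completions " A" .. " D" (the leading space matters).
scores = {
    letter: next_token_logits[
        tokenizer(f" {letter}", add_special_tokens=False)["input_ids"][0]
    ].item()
    for letter in "ABCD"
}
print(max(scores, key=scores.get))  # predicted option letter
```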
## Training Dataset
- **Dataset:** [`youssefbelghmi/MNLP_M3_mcqa_dataset`](https://huggingface.co/datasets/youssefbelghmi/MNLP_M3_mcqa_dataset).
- ~30,000 questions from SciQ, OpenBookQA, MathQA, ARC, and MedMCQA.
- Each sample includes (see the loading sketch after this list):
  - the question,
  - four answer choices (A–D),
  - the correct answer as a letter,
  - a short explanation (`support`) to guide learning.
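A quick way to inspect these fields is to load the dataset directly. The column names below are assumptions based on the description above (only `support` is confirmed by this card), so check the actual schema:

```python
# Peek at the MCQA training data. Field names other than `support`
# are assumptions; inspect ds.column_names for the actual schema.
from datasets import load_dataset

ds = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset", split="train")
print(ds.column_names)

sample = ds[0]
print(sample["question"])  # question text (assumed field name)
print(sample["choices"])   # the four options A-D (assumed field name)
print(sample["answer"])    # correct letter, e.g. "B" (assumed field name)
print(sample["support"])   # short explanation used during supervision
```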
## Training Setup
- **Base model:** [`tocico28/MNLP_M3_dpo_model`](https://huggingface.co/tocico28/MNLP_M3_dpo_model) (itself fine-tuned from `Qwen/Qwen3-0.6B-Base`).
- **Method:** Supervised Fine-Tuning (SFT) with `trl` and `SFTTrainer`.
- **Tokenizer:** AutoTokenizer (with `eos_token` used as padding).
## Training Prompt Format
During fine-tuning, each training example is converted into a prompt-completion pair. The prompt includes both the question and an explanation to guide the model’s reasoning:
```text
The following is a multiple-choice question (with answers) about knowledge and skills in advanced master's-level STEM fields.
You will be provided with an explanation to help you understand the correct answer.
Select the correct answer by replying with the option letter (A, B, C, or D) only.
Question: <question_text>
A. <option_A>
B. <option_B>
C. <option_C>
D. <option_D>
Explanation: <support_text>
Answer:
```
The completion is a single token: `" A"`, `" B"`, `" C"`, or `" D"` (note the leading space), corresponding to the correct answer.
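A sketch of this conversion is shown below. The exact preprocessing code is not published, so the function name and the dataset field names (`question`, `choices`, `answer`, `support`) are illustrative assumptions:

```python
# Hypothetical helper that turns one dataset row into a prompt-completion
# pair following the template above; field names are assumptions.
HEADER = (
    "The following is a multiple-choice question (with answers) about "
    "knowledge and skills in advanced master's-level STEM fields.\n"
    "You will be provided with an explanation to help you understand the correct answer.\n"
    "Select the correct answer by replying with the option letter (A, B, C, or D) only.\n"
)

def to_prompt_completion(sample: dict) -> dict:
    options = "\n".join(
        f"{letter}. {choice}"
        for letter, choice in zip("ABCD", sample["choices"])
    )
    prompt = (
        f"{HEADER}\nQuestion: {sample['question']}\n{options}\n"
        f"Explanation: {sample['support']}\nAnswer:"
    )
    # The completion is the single letter token, with its leading space.
    return {"prompt": prompt, "completion": f" {sample['answer']}"}
```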
## Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 2e-5
- num_train_epochs: 1
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- gradient_accumulation_steps: 4
- gradient_checkpointing: true
- eval_strategy: steps
- eval_steps: 100
- logging_steps: 100
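Under these settings, the training loop can be reconstructed roughly as follows. This is a minimal sketch, reusing the `to_prompt_completion` helper sketched in the prompt-format section; the `output_dir`, the `validation` split name, and other unlisted arguments are assumptions:

```python
# Minimal reconstruction of the SFT setup; unlisted arguments are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_id = "tocico28/MNLP_M3_dpo_model"  # DPO checkpoint used as the base
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # eos_token used as padding (see above)
model = AutoModelForCausalLM.from_pretrained(base_id)

ds = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset")
train_ds = ds["train"].map(to_prompt_completion)      # helper sketched earlier
eval_ds = ds["validation"].map(to_prompt_completion)  # split name assumed

config = SFTConfig(
    output_dir="MNLP_M3_mcqa_model",  # assumed
    learning_rate=2e-5,
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    eval_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)

trainer = SFTTrainer(
    model=model,
    args=config,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    processing_class=tokenizer,
)
trainer.train()
```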
## Training Results
| Epoch | Training Loss | Validation Loss |
|--------:|----------------:|------------------:|
| 0.08 | 0.3363 | 0.2766 |
| 0.15 | 0.2938 | 0.2719 |
| 0.23 | 0.2817 | 0.2751 |
| 0.31 | 0.2688 | 0.2604 |
| 0.38 | 0.2692 | 0.2640 |
| 0.46 | 0.2611 | 0.2571 |
| 0.54 | 0.2431 | 0.2433 |
| 0.61 | 0.2495 | 0.2439 |
| 0.69 | 0.2489 | 0.2384 |
| 0.77 | 0.2321 | 0.2376 |
| 0.84 | 0.2363 | 0.2353 |
| 0.92 | 0.2106 | 0.2358 |
| 0.99 | 0.2091 | 0.2340 |
- **Final validation accuracy:** ~92.0%
### Framework versions
- TRL: 0.17.0
- Transformers: 4.53.0.dev0
- PyTorch: 2.7.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
## Author
Developed by [**Youssef Belghmi**](https://huggingface.co/youssefbelghmi)
CS-552: Modern NLP – EPFL, Spring 2025