# GigaVerbo-v2-ablation-NonEDU-1.5B
## Model Summary
GigaVerbo-v2-ablation-NonEDU-1.5B is a decoder-only Transformer natively pretrained in Portuguese. It is part of an ablation study measuring the impact of our educational data filtering/augmentation strategy on the downstream performance of models trained with GigaVerbo-v2 and GigaVerbo-v2-synth. GigaVerbo-v2-ablation-NonEDU-1.5B was trained on ~46 billion tokens drawn exclusively from the non-educational portion of GigaVerbo-v2 (i.e., samples with an Edu Score < 3). The model has 1.5 billion parameters and a context length of 4096 tokens.
## Details
- Architecture: a Transformer-based model (Llama)
- Size: 1,510,066,176 parameters
- Context length: 4096 tokens
- Dataset(s):
- Polygl0t/gigaverbo-v2 (non-educational subset, Edu Score < 3)
- Language(s): Portuguese
- Batch size: 2,097,152 tokens
- Number of steps: 22,000
- GPU: 16 NVIDIA A40 (48 GB)
- Training time: ~ 97 hours
- Emissions: 181 kgCO2 (Germany)
- Total energy consumption: 477 kWh
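The ~46 billion token budget follows directly from the batch size and step count listed above. A quick sanity check, assuming the 2,097,152-token batch size is the number of tokens consumed per optimizer step:

```python
# Sanity check: tokens seen = tokens per optimizer step × number of steps
# (assumes the 2,097,152-token batch size is per optimizer step)
batch_size_tokens = 2_097_152  # 2**21
steps = 22_000

total_tokens = batch_size_tokens * steps
print(f"{total_tokens:,} tokens (~{total_tokens / 1e9:.1f}B)")  # 46,137,344,000 tokens (~46.1B)
```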
This repository contains the source code used to train this model. The complete training configuration is available in the following config file:
- Single stage (linear warmup with cosine decay): training_config.yaml
The main branch of this repository contains the final checkpoint saved at step 22,000. All other checkpoints are available as separate branches. To load a specific checkpoint, you can use the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Polygl0t/GigaVerbo-v2-ablation-NonEDU-1.5B"
revision = "step-2000"  # Change this to the desired checkpoint branch

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, revision=revision)
```
Alternatively, you can list all available revisions of the model with the following code snippet:
```python
from huggingface_hub import list_repo_refs

out = list_repo_refs("Polygl0t/GigaVerbo-v2-ablation-NonEDU-1.5B")
branches = [b.name for b in out.branches]
print(branches)
```
## Intended Uses
The primary intended use of this model is to serve as a baseline for evaluating the impact of data quality and filtering on Portuguese language model performance. Researchers and practitioners can use this model as a reference point for further ablation studies or for comparison with other models trained on different data mixtures.
## Basic usage
```python
import torch
from transformers import GenerationConfig, TextGenerationPipeline, AutoTokenizer, AutoModelForCausalLM

# Specify the model and tokenizer
model_id = "Polygl0t/GigaVerbo-v2-ablation-NonEDU-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Specify the generation parameters as you like
generation_config = GenerationConfig(
    do_sample=True,
    max_new_tokens=150,
    renormalize_logits=True,
    repetition_penalty=1.2,
    temperature=0.1,
    top_k=50,
    top_p=1.0,
    use_cache=True,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
generator = TextGenerationPipeline(model=model, task="text-generation", tokenizer=tokenizer, device=device)

# Generate text
prompt = "A capital de Portugal é"
completion = generator(prompt, generation_config=generation_config)
print(completion[0]["generated_text"])
```
## Evaluations
The table below compares our ablation models with checkpoints from the first Tucano series. Tucano models are a natural point of comparison because they were trained on Portuguese data of a similar nature and provide multiple checkpoints across different stages of training. To ensure a fair comparison, we select Tucano checkpoints that are closest to our ablation models in terms of both the number of training tokens seen (31B and 52B vs. 46B) and model size (1.1B and 2.4B parameters). We also include additional models for which reliable information on training data volume and model size is available and whose sizes are comparable to our ablation models. Performance is summarized using the NPM (Normalized Performance Metric), which provides a balanced aggregate view across tasks by normalizing each task’s score relative to its random baseline, thereby accounting for differences in task difficulty.
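The exact NPM formula is defined in the accompanying paper; a common way to compute such a baseline-normalized aggregate (and the definition assumed here, which may differ in details from the one actually used) rescales each task's score so that the random baseline maps to 0 and a perfect score to 100, then averages:

```python
def npm(scores, baselines):
    """Assumed NPM definition: mean of per-task scores rescaled so that
    the random baseline maps to 0 and a perfect score to 100.
    `scores` and `baselines` are parallel lists of accuracies in [0, 1]."""
    return 100 * sum((s - b) / (1 - b) for s, b in zip(scores, baselines)) / len(scores)

# Hypothetical example: two 4-way multiple-choice tasks (baseline 0.25)
# and one binary-choice task (baseline 0.5)
print(round(npm([0.35, 0.45, 0.70], [0.25, 0.25, 0.5]), 3))  # 26.667
```

This makes a 0.70 on a binary task and a 0.45 on a 4-way task comparable, since both are measured as progress above chance rather than raw accuracy.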
| Model | NPM | ARC Challenge | Calame | Global PIQA | HellaSwag | Lambada |
|---|---|---|---|---|---|---|
| GigaVerbo-v2 (EDU) | 39.306 | 0.328 | 0.579 | 0.82 | 0.449 | 0.377 |
| Curio-1.1b (1T + 150B) | 39.156 | 0.304 | 0.592 | 0.75 | 0.495 | 0.467 |
| Curio-1.1b (1T + 100B) | 38.88 | 0.309 | 0.599 | 0.74 | 0.489 | 0.468 |
| Curio-1.1b (1T + 50B) | 38.057 | 0.294 | 0.589 | 0.74 | 0.48 | 0.469 |
| GigaVerbo-v2 (EDU+Synth) | 37.49 | 0.344 | 0.579 | 0.75 | 0.46 | 0.39 |
| Curio-edu-1b1 (1T + 20B) | 34.774 | 0.322 | 0.549 | 0.69 | 0.463 | 0.429 |
| GigaVerbo-v2 (Synth) | 33.864 | 0.326 | 0.561 | 0.72 | 0.439 | 0.339 |
| Tucano-2b4 (500B) | 33.551 | 0.304 | 0.503 | 0.73 | 0.488 | 0.324 |
| Tucano-1b1 (250B) | 29.124 | 0.301 | 0.489 | 0.68 | 0.441 | 0.284 |
| Llama-3.2-1B (9T) | 28.315 | 0.317 | 0.5 | 0.55 | 0.453 | 0.456 |
| GigaVerbo-v2 (NonEDU) | 28.049 | 0.256 | 0.565 | 0.65 | 0.383 | 0.352 |
| Tucano-2b4 (52B) | 27.433 | 0.274 | 0.456 | 0.71 | 0.412 | 0.248 |
| GlorIA-1.3B (35B) | 27.274 | 0.264 | 0.547 | 0.64 | 0.364 | 0.367 |
| Carvalho_pt-gl-1.3B (26B + 5B) | 26.746 | 0.27 | 0.534 | 0.63 | 0.385 | 0.336 |
| Tucano-1b1 (52B) | 24.927 | 0.284 | 0.464 | 0.64 | 0.401 | 0.257 |
All individual benchmark scores and their evolution over the course of training can be found in the .plots folder.
## Cite as 🤗
```bibtex
@misc{correa2026tucano2cool,
  title = {{Tucano 2 Cool: Better Open Source LLMs for Portuguese}},
  author = {Nicholas Kluge Corr{\^e}a and Aniket Sen and Shiza Fatimah and Sophia Falk and Lennard Landgraf and Julia Kastner and Lucie Flek},
  year = {2026},
  eprint = {2603.03543},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2603.03543},
}
```
## Acknowledgments
Polyglot is a project funded by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MWK) as part of TRA Sustainable Futures (University of Bonn) and the Excellence Strategy of the federal and state governments.
We also gratefully acknowledge access to the Marvin cluster hosted by the University of Bonn, along with the support provided by its High Performance Computing & Analytics Lab.
## License
This model is licensed under the Apache License, Version 2.0. For more details, see the LICENSE file.
