Aitana-7B-S-base-1.0
Aitana-7B-S-base-1.0 is a generative language model from the Aitana family, developed by the GPLSI (Language and Information Systems Group) at the University of Alicante. It is based on BSC-LT/salamandra-7b and has been continually pre-trained on multilingual data (Valencian, Spanish, and English) to improve its representation of Valencian and Catalan.
Model Description
| Property | Value |
|---|---|
| Base Model | BSC-LT/salamandra-7b |
| Architecture | Transformer decoder-only |
| Parameters | ~7.77B |
| Languages | Valencian, Spanish, English |
| License | Apache 2.0 |
Aitana-7B-S-base-1.0 extends the multilingual Salamandra foundation with additional training on domain-specific Valencian, Spanish, and English data. The training emphasizes administrative, legal, and tourism domains.
Training Data
This model was trained on the following ALIA datasets:
| Dataset ID | Name | Language | Source |
|---|---|---|---|
| dc8 | dogv_va_2025 | Valencian | gplsi/alia_dogv |
| dc9 | dogv_es_2025 | Spanish | gplsi/alia_dogv |
| dc10 | corts_es_va_2025 | Spanish/Valencian | gplsi/alia_les_corts |
| dc11 | amic_va_2025 | Valencian | gplsi/alia_amic |
| dc12 | boua_va_2025 | Valencian | gplsi/alia_boua |
| dc13 | boua_es_2025 | Spanish | gplsi/alia_boua |
| dc14 | tourism_va_2025 | Valencian | gplsi/alia_tourism |
| dc15 | tourism_es_2025 | Spanish | gplsi/alia_tourism |
| dc16 | tourism_en_2025 | English | gplsi/alia_tourism |
Data Sources
- DOGV (Diari Oficial de la Generalitat Valenciana): Official communications of the Valencian Community including laws and public sector communications
- Les Corts Valencianes: Transcripts from the Valencian Parliament plenary sessions and committee meetings
- AMIC: Valencian language corpus
- BOUA (Butlletí Oficial de la Universitat d'Alacant): Official University of Alicante documents including grants, regulations, and resolutions
- Tourism: Multilingual tourism domain content
Intended Uses
This model can be used for:
- Text generation in Valencian, Spanish, and English
- Fine-tuning for specific downstream tasks
- Domain adaptation for administrative, legal, or tourism applications
Note: Due to the formal register of training data (administrative and legal domains), generated text tends toward formal language.
How to Use
Transformers
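The example below assumes the required libraries are already installed; `accelerate` is needed for `device_map="auto"`. A typical setup (package names only, pin versions as appropriate for your environment):

```shell
# Install the libraries used by the generation example
pip install transformers torch accelerate
```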
```python
import torch
from transformers import pipeline, AutoTokenizer

model_id = "gplsi/Aitana-7B-S-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 weights with automatic device placement (requires accelerate)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Valencian example
text = "Les corts valencianes han pres la decisió de"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]["generated_text"])

# Spanish example
text = "El turismo en la Comunidad Valenciana"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]["generated_text"])
```
Evaluation
The following tables compare results on benchmarks from lm-evaluation-harness against Salamandra-7B, the model used as the starting point for continual pre-training. All results were obtained with the pre-trained model as-is; no instruction tuning or fine-tuning of any kind was performed.
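As a sketch only (not the exact commands used to produce the tables below), evaluations like these can be run with the lm-evaluation-harness CLI; the task name shown is illustrative, and the available task identifiers depend on your harness version:

```shell
# Illustrative lm-evaluation-harness run; the --tasks value is an example,
# not the exact benchmark set used for the tables below.
lm_eval --model hf \
  --model_args pretrained=gplsi/Aitana-7B-S-base-1.0,dtype=bfloat16 \
  --tasks xnli_es \
  --batch_size auto
```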
Normalized score per language
| Language | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|
| Spanish | 0.255 | 0.252 |
| Catalan | 0.373 | 0.378 |
| English | 0.329 | 0.364 |
| Valencian | 0.614 | 0.614 |
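For a quick view of where continual pre-training helped, the per-language deltas implied by the table above can be computed directly (scores copied verbatim from the table):

```python
# Normalized scores from the table above: (Salamandra-7B, Aitana-7B-S-base-1.0)
scores = {
    "Spanish":   (0.255, 0.252),
    "Catalan":   (0.373, 0.378),
    "English":   (0.329, 0.364),
    "Valencian": (0.614, 0.614),
}

# Positive delta means Aitana-7B-S-base-1.0 improves over the base model
for lang, (base, aitana) in scores.items():
    print(f"{lang}: {aitana - base:+.3f}")
```

The largest gain is in English, with Catalan slightly up and Spanish essentially unchanged.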
Valencian
Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| XNLI | va | Natural Language Inference | acc | 0.50 | 0.50 |
Generation Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Cocoteros | va | Reading Comprehension | bleu | 12.01 | 16.19 |
| Phrases ca-va | ca-va | Translation - Adaptation | bleu | 86.80 | 85.33 |
| Phrases va-ca | va-ca | Translation - Adaptation | bleu | 94.71 | 80.00 |
| Phrases va-es | va-es | Translation | bleu | 79.74 | 80.59 |
| Phrases es-va | es-va | Translation | bleu | 66.42 | 69.78 |
| Truthfulqa_va | va | Truthfulness | bleu_acc | 0.33 | 0.37 |
Catalan
Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Belebele cat_Latn | ca | Reading Comprehension | acc | 0.51 | 0.54 |
| COPA | ca | Commonsense Reasoning | acc | 0.80 | 0.82 |
| XStoryCloze | ca | Commonsense Reasoning | acc | 0.75 | 0.77 |
| OpenBookQA | ca | Question Answering | acc | 0.38 | 0.38 |
| PAWS | ca | Paraphrasing | acc | 0.62 | 0.62 |
| PiQA | ca | Question Answering | acc | 0.71 | 0.72 |
| SiQA | ca | Question Answering | acc | 0.49 | 0.51 |
| ARC Easy | ca | Question Answering | acc | 0.73 | 0.73 |
| ARC Challenge | ca | Question Answering | acc | 0.47 | 0.46 |
| XNLI | ca | Natural Language Inference | acc | 0.51 | 0.50 |
| Teca | ca | Natural Language Inference | acc | 0.53 | 0.53 |
| WNLI | ca | Natural Language Inference | acc | 0.59 | 0.62 |
| CatCoLA | ca | Linguistic Acceptability | acc | 0.73 | 0.73 |
| CatCoLA | ca | Linguistic Acceptability | mcc | 0.29 | 0.15 |
| CatalanQA | ca | Question Answering | F1 | 0.82 | 0.83 |
| CatalanQA | ca | Question Answering | exact match | 0.62 | 0.65 |
| MGSM Direct | ca | Math | exact match | 0.07 | 0.09 |
| XQuAD | ca | Question Answering | exact match | 0.49 | 0.51 |
| XQuAD | ca | Question Answering | F1 | 0.71 | 0.73 |
Generation Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Cabreu abstractive | ca | Summarization | bleu | 8.73 | 11.32 |
| Cabreu extractive | ca | Summarization | bleu | 44.55 | 41.80 |
| Cabreu extreme | ca | Summarization | bleu | 10.66 | 12.54 |
Spanish
Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Belebele | es | Reading Comprehension | acc | 0.493 | 0.561 |
| PAWS | es | Paraphrasing | acc | 0.608 | 0.591 |
| XNLI | es | Natural Language Inference | acc | 0.468 | 0.462 |
| WNLI | es | Natural Language Inference | acc | 0.465 | 0.437 |
| XStoryCloze | es | Commonsense Reasoning | acc | 0.745 | 0.756 |
| EsCoLA | es | Linguistic Acceptability | acc | 0.706 | 0.678 |
| EsCoLA | es | Linguistic Acceptability | mcc | 0.295 | 0.146 |
| OpenBookQA | es | Question Answering | acc | 0.406 | 0.382 |
| MGSM Direct | es | Math | exact match | 0.068 | 0.080 |
| XQuAD | es | Question Answering | exact match | 0.501 | 0.505 |
| XQuAD | es | Question Answering | F1 | 0.711 | 0.719 |
Generation Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| Cocoteros | es | Reading Comprehension | bleu | 13.68 | 17.51 |
| XLSum | es | Summarization | bleu | 3.59 | 5.75 |
English
Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-7B | Aitana-7B-S-base-1.0 |
|---|---|---|---|---|---|
| ARC Challenge | en | Question Answering | acc | 0.527 | 0.526 |
| ARC Easy | en | Question Answering | acc | 0.824 | 0.814 |
| Belebele | en | Reading Comprehension | acc | 0.549 | 0.573 |
| PAWS | en | Paraphrasing | acc | 0.633 | 0.615 |
| XNLI | en | Natural Language Inference | acc | 0.483 | 0.476 |
| XStoryCloze | en | Commonsense Reasoning | acc | 0.795 | 0.793 |
| OpenBookQA | en | Question Answering | acc | 0.356 | 0.362 |
| PiQA | en | Question Answering | acc | 0.797 | 0.799 |
| Social IQa | en | Question Answering | acc | 0.513 | 0.512 |
| WNLI | en | Natural Language Inference | acc | 0.479 | 0.606 |
| MGSM Direct | en | Math | exact match | 0.280 | 0.564 |
| TriviaQA | en | Question Answering | exact match | 0.597 | 0.602 |
| CoLA | en | Linguistic Acceptability | mcc | 0.412 | 0.361 |
Additional Information
Author
The model has been developed by the Language and Information Systems Group (GPLSI) and the Centro de Inteligencia Digital (CENID), both part of the University of Alicante (UA), as part of their ongoing research in Natural Language Processing (NLP).
Part of the Aitana Family
This model is part of the Aitana model family developed by the GPLSI research group, which includes:
- gplsi/Aitana-2B-S - Valencian-focused 2B model
- gplsi/Aitana-2B-S-base-1.0 - Base version (1.0) of the 2B model
- gplsi/Aitana-6.3B - Larger 6.3B parameter model
- gplsi/Aitana-TA-2B-S - Translation model (Spanish ↔ Valencian)
- gplsi/Aitana-2B-S-LF - 2B Text Generation variant
- gplsi/Aitana-2B-S-tourism-base-1.0 - Domain-specific base model focused on Tourism
- gplsi/Aitana-tourism-mb-encoder-1.0 - Tourism domain Fill-Mask/Encoder model
- gplsi/Aitana-FraudDetection-R-1.0 - Text Classification model for Fraud Detection
Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
Acknowledgments
We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.
Special thanks to:
- Language Technologies Laboratory at Barcelona Supercomputing Center
- Centro Vasco de Tecnología de la Lengua (HiTZ)
- Centro Singular de Investigación en Tecnologías Inteligentes (CiTIUS)
- Sistemas Inteligentes de Acceso a la Información (SINAI)
- Instituto Universitario de Investigación Informática (IUII)
- Leonardo HPC System
- European supercomputing ecosystem (EUROHPC)
We also acknowledge the financial, technical, and scientific support of the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA, whose contribution has been essential to the completion of this research.
License
Disclaimer
This model is intended for general purposes and is available under a permissive Apache License 2.0. Be aware that the model may have biases and/or undesirable outputs. Users deploying systems based on this model are responsible for mitigating risks and complying with applicable AI regulations.
Reference
```bibtex
@misc{gplsi-aitana-7B-S-base-1.0,
  author = {Estevanell-Valladares, Ernesto L. and Yáñez-Romero, Fabio and Sepúlveda-Torres, Robiert and Consuegra-Ayala, Juan Pablo and Galiano, Santiago and Miró Maestre, María and Martínez-Murillo, Iván and Grande, Eduardo and Canal-Esteve, Miquel and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title = {Aitana 7B base: Continually pre-trained on Valencian},
  year = {2026},
  institution = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-7B-S-base-1.0}},
  note = {Accessed: 2026-04-08}
}
```
Copyright © 2026 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.