# Aitana-2B-S-tourism-base-1.0
Aitana-2B-S-tourism-base-1.0 is a generative language model from the Aitana family, developed by the Language and Information Systems Group (GPLSI) at the University of Alicante. It is based on gplsi/Aitana-2B-S-base-1.0 and has been further trained on tourism-domain data to improve performance on tourism-related text generation.
## Model Description
| Property | Value |
|---|---|
| Base Model | gplsi/Aitana-2B-S-base-1.0 |
| Architecture | Transformer decoder-only |
| Parameters | ~2.25B |
| Languages | Valencian, Spanish, English |
| License | Apache 2.0 |
Aitana-2B-S-tourism-base-1.0 extends the Aitana-2B-S-base-1.0 foundation with additional training on tourism domain data. This specialized training makes it particularly well-suited for tourism-related applications in Valencian, Spanish, and English.
## Training Data
This model was trained on the following tourism-domain datasets:
| Dataset ID | Name | Language | Source |
|---|---|---|---|
| dc7 | tourism_va_2025 | Valencian | gplsi/alia_tourism |
| dc7 | tourism_es_2025 | Spanish | gplsi/alia_tourism |
| dc7 | tourism_en_2025 | English | gplsi/alia_tourism |
### Data Source
- Tourism: Multilingual tourism domain content covering tourist information, destinations, accommodations, cultural sites, and travel-related text in Valencian, Spanish, and English.
## Intended Uses
This model can be used for:
- Tourism text generation in Valencian, Spanish, and English
- Travel content creation and assistance
- Fine-tuning for specific tourism downstream tasks
- Domain adaptation for hospitality and travel applications
Note: This model is specifically optimized for tourism domain content. For general-purpose or administrative/legal text, consider using other models in the Aitana family.
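As a sketch of the fine-tuning use case above, parameter-efficient adaptation with the PEFT library might look like the following. This is a hypothetical illustration, not a recipe validated against this checkpoint: the `target_modules` names and the LoRA hyperparameters are assumptions that must be matched to the actual architecture.

```python
def build_lora_model(model_id: str = "gplsi/Aitana-2B-S-tourism-base-1.0"):
    """Wrap the base model with LoRA adapters for parameter-efficient fine-tuning.

    Imports are kept local so this sketch can be read (and the function defined)
    without torch/transformers/peft installed.
    """
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained(model_id)
    config = LoraConfig(
        r=8,                                   # assumption: adapter rank
        lora_alpha=16,                         # assumption: scaling factor
        target_modules=["q_proj", "v_proj"],   # assumption: attention projection names
        task_type="CAUSAL_LM",
    )
    return get_peft_model(base, config)


if __name__ == "__main__":
    model = build_lora_model()
    model.print_trainable_parameters()
```

The resulting model can then be trained on tourism-specific downstream data with the standard `transformers` `Trainer` or a custom training loop.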
## How to Use

### Transformers
```python
import torch
from transformers import pipeline, AutoTokenizer

model_id = "gplsi/Aitana-2B-S-tourism-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tourism example in Spanish
text = "El turismo en la Comunidad Valenciana ofrece"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]["generated_text"])

# Tourism example in Valencian
text = "Les platges de la Costa Blanca són"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]["generated_text"])

# Tourism example in English
text = "The best beaches in Valencia include"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]["generated_text"])
```
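The `pipeline` examples can also be written against the lower-level `AutoModelForCausalLM` API when you need direct control over `generate` arguments. A minimal sketch (the prompt is illustrative, and imports are kept local so the function can be defined without the libraries installed):

```python
MODEL_ID = "gplsi/Aitana-2B-S-tourism-base-1.0"


def generate_text(prompt: str, max_new_tokens: int = 100) -> str:
    """Sample a continuation using the model/tokenizer API directly."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, do_sample=True, top_k=10, max_new_tokens=max_new_tokens
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Tourism example in Valencian
    print(generate_text("Els museus de la ciutat de València"))
```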
### GGUF for LM Studio
This repository includes a GGUF version for use with LM Studio, Ollama, and other llama.cpp-based tools.
| File | Precision | Size |
|---|---|---|
| Aitana-s2b-c0dc7-f16.gguf | F16 | ~4.5 GB |
#### Using with llama-cpp-python
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gplsi/Aitana-2B-S-tourism-base-1.0",
    filename="Aitana-s2b-c0dc7-f16.gguf",
)

output = llm("El turismo en Valencia ofrece", max_tokens=100)
print(output["choices"][0]["text"])
```
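The same GGUF file can also be loaded into Ollama through a Modelfile. A minimal sketch, assuming the file has been downloaded locally; the model name `aitana-tourism` and the temperature value are arbitrary choices, not part of this release:

```
FROM ./Aitana-s2b-c0dc7-f16.gguf
PARAMETER temperature 0.7
```

Then `ollama create aitana-tourism -f Modelfile` registers the model, and `ollama run aitana-tourism "El turismo en Valencia ofrece"` generates from it.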
## Additional Information

### Author
The model has been developed by the Language and Information Systems Group (GPLSI) and the Centro de Inteligencia Digital (CENID), both part of the University of Alicante (UA), as part of their ongoing research in Natural Language Processing (NLP).
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
### Acknowledgments
We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.
Special thanks to:
- Language Technologies Laboratory at Barcelona Supercomputing Center
- Centro Vasco de Tecnología de la Lengua (HiTZ)
- Centro Singular de Investigación en Tecnologías Inteligentes (CiTIUS)
- Sistemas Inteligentes de Acceso a la Información (SINAI)
- Instituto Universitario de Investigación Informática (IUII)
- Leonardo HPC System
- European supercomputing ecosystem (EUROHPC)
We also acknowledge the financial, technical, and scientific support of the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU within the framework of the project Desarrollo de Modelos ALIA, whose contribution has been essential to the completion of this research.
### License

This model is distributed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Disclaimer
This model is made available for general use under the permissive Apache License 2.0. Be aware that it may produce biased and/or undesirable outputs. Users deploying systems based on this model are responsible for mitigating these risks and for complying with applicable AI regulations.
### Reference
```bibtex
@misc{gplsi-aitana-2B-S-base-1.0,
  author       = {Estevanell-Valladares, Ernesto L. and Yáñez-Romero, Fabio and Sepúlveda-Torres, Robiert and Galeano, Santiago and Consuegra-Ayala, Juan Pablo and Miró Maestre, María and Martínez-Murillo, Iván and Grande, Eduardo and Canal-Esteve, Miquel and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title        = {Aitana 2B base: Continually pre-trained on Valencian},
  year         = {2025},
  institution  = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-2B-S-base-1.0}},
  note         = {Accessed: 2025-12-12}
}
```
Copyright © 2026 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.