---
license: cc-by-nc-sa-4.0
language:
- srd
- it
- es
- pt
- ca
multilinguality:
- multilingual
task_categories:
- text-generation
size_categories:
- 10K<n<100K
pretty_name: LLiMba Pretraining Corpus
tags:
- sardinian
- limba-sarda-comuna
- lsc
- logudorese
- campidanese
- low-resource
- endangered-language
- romance
- pretraining
- continued-pretraining
- corpus
extra_gated_heading: Access to the LLiMba pretraining corpus
extra_gated_description: >
  This corpus aggregates Sardinian text from heterogeneous sources, including
  literary translations made available with explicit permissions for
  non-commercial use. By requesting access you acknowledge and agree to the
  conditions below. Requests are reviewed manually; approval typically takes a
  few days.
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: country
  Intended use:
    type: select
    options:
    - Academic research
    - Language preservation work
    - Education and teaching
    - Personal study
    - Other (describe in the prompt below)
  Brief description of intended use: text
  I agree to use the corpus for non-commercial purposes only: checkbox
  I agree to license any derivative works under CC-BY-NC-SA: checkbox
  I will not redistribute the corpus in raw form to third parties: checkbox
extra_gated_button_content: Request access
configs:
- config_name: default
  data_files:
  - split: train
    path: corpus.jsonl
---
# LLiMba Pretraining Corpus
The pretraining corpus used for Stage 1 (continued pretraining) of the LLiMba project: 13,925,803 tokens across 18,270 documents, predominantly Sardinian (~11.5M tokens, 17,618 documents), with ~2.4M tokens of related Romance text used as replay data to mitigate catastrophic forgetting.
To our knowledge, this is the largest openly released corpus of Sardinian text. The material was gathered from heterogeneous sources covering all three main written variants (LSC, Logudorese, Campidanese) and spanning encyclopedic, journalistic, literary, poetic, and musical registers.
⚠️ **Heterogeneous source rights.** The corpus combines material from sources with different licensing and permission statuses: public Wikipedia content, web scrapes from Sardinian-language sites, GlotCC CommonCrawl extracts, and professionally translated literary works (with permissions secured for non-commercial redistribution). The aggregate non-commercial license reflects the most restrictive terms among these sources. Per-source provenance is documented below; users wishing to use specific subsets under more permissive terms should consult the original sources directly.

🔒 **Access is gated.** Because this corpus carries forward conditions that the maintainer accepted from the rights holders, downstream access is granted only after manual review of the request form above. Approval typically takes a few days. The form asks for your name, affiliation, country, and intended use, and requires explicit agreement to the non-commercial-use and ShareAlike-derivative conditions.
## Dataset structure
Each row is a single document:
```json
{
  "text": "Aposentu illatiadu de grogu. Isposu - (Intrende) O ma'. Mama - Ite? ...",
  "source": "04_epub_books",
  "tier": 1,
  "tokens": 2126,
  "url": "epub:isposoriu_de_sambene_fg_lorca.epub"
}
```
Schema:
- `text` (string): the document content. Lengths vary widely, from short Wikipedia stubs (~300 tokens) to long book chapters (>20K tokens).
- `source` (string): source identifier; see the Composition section below for the mapping.
- `tier` (int): quality tier from 1 (highest) to 4 (lowest). See the Quality tiers table below for the source-to-tier mapping. Useful for filtering or weighting during downstream training.
- `tokens` (int): approximate token count of the `text` field, computed via a character-based heuristic during pipeline build. Actual token count under any specific tokenizer may differ by 10-20%.
- `url` (string): provenance pointer. Either a real URL, a Wikipedia article reference, or a synthetic identifier for non-URL sources (e.g., `epub:filename.epub`).
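The exact character-based heuristic behind the `tokens` field is not spelled out here; a common rule of thumb for Romance-language text is roughly four characters per token. A minimal sketch under that assumption (the 4.0 divisor is illustrative, not the pipeline's actual constant):

```python
def approx_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate a token count from character length.

    NOTE: chars_per_token=4.0 is an illustrative assumption; the
    pipeline's actual constant is not documented in this card. Real
    BPE tokenizers (e.g. Qwen2.5's) may differ by 10-20%.
    """
    return max(1, round(len(text) / chars_per_token))

print(approx_tokens("Aposentu illatiadu de grogu."))  # rough estimate only
```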
## Composition
After deduplication and language filtering, the final published corpus has 18,270 documents and 13,925,803 tokens:
| Source ID | Description | Documents | Tokens |
|---|---|---|---|
| `01_wikipedia` | Sardinian Wikipedia articles | 6,309 | 2,581,155 |
| `02_glotcc` | GlotCC CommonCrawl, filtered | 2,270 | 1,773,569 |
| `03_web_scrape` | Sardinian-language websites (see below) | 8,110 | 4,900,558 |
| `04_epub_books` | Literary translations in EPUB | 151 | 712,174 |
| `05_md_books` | Literary translations in Markdown | 7 | 10,831 |
| `06_pdf_books` | Literary translations in PDF | 251 | 1,290,294 |
| `07_pdf_bilingual` | Bilingual literary text (Italian/Sardinian) | 1 | 20,177 |
| `08_pdf_poetry` | Poetry anthologies (regional, 1400-1900) | 436 | 175,743 |
| `09_songs_lyrics` | Sardinian song lyrics | 83 | 18,972 |
| **Sardinian total** | | 17,618 | 11,483,473 |
| `00_replay_data` | Romance replay (it/es/pt/ca Wikipedia) | 652 | 2,442,330 |
| **Combined total** | | 18,270 | 13,925,803 |
### Quality tiers
Sources are grouped into four tiers reflecting curation level and prose register. The corpus is assembled in tier order during the build step, so the model sees tier 1 content first.
| Tier | Sources | Rationale |
|---|---|---|
| 1 | `01_wikipedia`, `04_epub_books`, `06_pdf_books` | Curated encyclopedic and literary prose |
| 2 | `05_md_books`, `03_web_scrape`, `07_pdf_bilingual` | Editorial and structured prose, including bilingual works |
| 3 | `08_pdf_poetry`, `09_songs_lyrics`, `00_replay_data` | Verse, song lyrics, and Romance replay |
| 4 | `02_glotcc` | Bulk CommonCrawl (noisier) |
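The tier-ordered assembly described above amounts to a stable sort on the `tier` field, which puts tier-1 documents first while preserving within-tier order. A minimal sketch (the document dicts are illustrative):

```python
# Sketch of the tier-ordered build step: stable sort by quality tier.
docs = [
    {"source": "02_glotcc", "tier": 4, "text": "..."},
    {"source": "01_wikipedia", "tier": 1, "text": "..."},
    {"source": "03_web_scrape", "tier": 2, "text": "..."},
    {"source": "08_pdf_poetry", "tier": 3, "text": "..."},
    {"source": "04_epub_books", "tier": 1, "text": "..."},
]

# Python's sorted() is stable, so documents within a tier keep their order.
ordered = sorted(docs, key=lambda d: d["tier"])
print([d["source"] for d in ordered])
# → ['01_wikipedia', '04_epub_books', '03_web_scrape', '08_pdf_poetry', '02_glotcc']
```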
### Web scrape sources (`03_web_scrape`)
Verified live as of early 2026:
- salimbasarda.net - news, culture, sport, politics in LSC and Logudorese
- istorias.it - online newspaper entirely in Sardinian
- sardumatica.net - technology articles in LSC and Campidanese
- limbasardasudsardigna.it - provincial institutional content in Campidanese
- lacanas.it - Sardinian cultural and news content
### Literary translations (`04_epub_books`, `05_md_books`, `06_pdf_books`)
Professional Sardinian translations of world literature, including works by Kafka (Sa Metamòrfosi), Orwell (1984), Joyce (Dublinesos), García Márquez, García Lorca (Isposòriu de Sàmbene), Cervantes (Don Chisciotte), Stevenson (Jekyll e Hyde), and Goethe (Sos Patimentos). These represent the corpus's highest-quality literary prose and were included with explicit permission for non-commercial use.
### Poetry (`08_pdf_poetry`)
PDFs from the Sardigna in Limba anthology series (2008), covering regional poetry from 1400-1900 across different Sardinian provinces. Verse line breaks were preserved during extraction to retain poetic structure.
### Romance replay (`00_replay_data`)
Approximately 2.4M tokens of Italian, Spanish, Portuguese, and Catalan Wikipedia content. Italian dominates (~1M tokens), with smaller shares of the other three. The replay text carries no language tag; the model learns language identity from the text itself, matching inference-time conditions. Without this replay component, the resulting model representationally blurs Sardinian and Italian; with it, the model maintains separate internal representations.
## Languages
- **Sardinian** (sc/srd) is the primary content (~11.5M of ~13.9M tokens). The corpus deliberately spans all three main written variants:
  - **LSC** (Limba Sarda Comuna), the standardized written form codified in 2006
  - **Logudorese**, used in the central-north
  - **Campidanese**, used in the south
- **Italian** (it) appears as Romance replay and as occasional embedded text in mixed-language web sources (navigation, headers, footers retained for context).
- **Spanish** (es), **Portuguese** (pt), and **Catalan** (ca) appear only as Romance replay.
Note that standard language detection tools (e.g., langdetect) do not recognize Sardinian and classify it variably as Italian, Portuguese, Spanish, or Catalan. The pipeline exploits this rather than fights it: the language filter retains documents detected as any of those four, removing only documents flagged as English, German, or French (true noise).
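That retain/drop decision can be sketched as a small predicate, assuming a detector that returns ISO 639-1 codes (as langdetect does). The behavior for codes outside both sets is an assumption here, since the card documents only the retained and removed languages:

```python
# Sketch of the language-filter decision, not the pipeline's actual code.
RETAIN = {"it", "pt", "es", "ca"}  # Sardinian is typically misread as one of these
REMOVE = {"en", "de", "fr"}        # treated as true noise

def keep_document(detected_lang: str) -> bool:
    """Return True if a document with this detected language code is kept.

    Codes outside both sets are dropped in this sketch; how the real
    pipeline handles them is not documented in this card.
    """
    if detected_lang in REMOVE:
        return False
    return detected_lang in RETAIN

print(keep_document("it"))  # True  (likely Sardinian misread as Italian)
print(keep_document("en"))  # False (genuine noise)
```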
## Pipeline
The corpus was built through a six-step pipeline. Starting from 20,258 raw collected documents:
1. **Audit** - per-source statistics and sample inspection
2. **Clean** - Unicode normalization, whitespace collapse, removal of documents under 100 characters (20,258 → 20,251)
3. **Scrub** - boilerplate stripping (Wikipedia disclaimers, navigation headers, Italian footer markers); in-document cleanup only, so the document count is unchanged (20,251 → 20,251)
4. **Dedup** - MinHash LSH with Jaccard threshold 0.7, 128 permutations, 5-gram shingles. Removed near-duplicates, primarily GlotCC duplicates of web-scrape content (20,251 → 18,385)
5. **Filter** - language filter retaining IT/PT/ES/CA classifications, removing EN/DE/FR (18,385 → 18,270)
6. **Build** - quality-ordered corpus assembly: tier 1 (Wikipedia and prose books) first, then tier 2 (web scrape, Markdown books, bilingual texts), tier 3 (poetry, lyrics, Romance replay), and finally tier 4 (GlotCC), so the model sees the highest-quality Sardinian during the most impactful early training steps
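The quantities at the heart of the dedup step can be illustrated without the MinHash machinery: exact Jaccard similarity over 5-gram shingles, compared against the 0.7 threshold. Word-level shingling is an assumption here (the shingling unit is not specified above), and the real pipeline approximates Jaccard with MinHash LSH at 128 permutations for scalability rather than computing it exactly:

```python
def shingles(text: str, n: int = 5) -> set[str]:
    """Word-level n-gram shingles (n=5, matching the dedup step)."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Exact Jaccard similarity; MinHash LSH approximates this at scale."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two near-identical documents differing in one word.
doc_a = "su pipiu est andadu a iscola custu mangianu chitzo meda"
doc_b = "su pipiu est andadu a iscola custu mangianu chitzo oe"

sim = jaccard(shingles(doc_a), shingles(doc_b))
print(f"{sim:.2f}", "near-duplicate" if sim >= 0.7 else "distinct")
# → 0.71 near-duplicate
```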
The full pipeline is reproducible from github.com/lballore/LLiMba.
## Usage
Load with the `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("lballore/llimba-corpus", split="train")
print(ds[0]["text"][:500])
print(ds[0]["source"], "tier", ds[0]["tier"], "-", ds[0]["tokens"], "tokens")
```
Filter by source or tier:

```python
# Sardinian only (exclude Romance replay)
sardinian_only = ds.filter(lambda x: x["source"] != "00_replay_data")

# Highest-quality tier only (Wikipedia + prose books)
tier1 = ds.filter(lambda x: x["tier"] == 1)

# Specific source
wikipedia_only = ds.filter(lambda x: x["source"] == "01_wikipedia")
```
Or stream the JSONL directly, without the `datasets` dependency:

```python
import json

with open("corpus.jsonl") as f:
    for line in f:
        doc = json.loads(line)
        # doc keys: text, source, tier, tokens, url
```
## Quality notes and limitations
**Mixed-language documents are retained.** Many Sardinian websites frame their Sardinian content with Italian navigation, introductions, or footers. The Sardinian body text is valuable; the Italian wrapper is minimal and reflects how Sardinian text actually appears in the wild. Users wanting strictly Sardinian-only data should filter or post-process accordingly.

**Heterogeneous quality across sources.** Tier 1 (Wikipedia, EPUB and PDF prose books) provides the highest-quality Sardinian: curated encyclopedic content and literary translations. Tier 2 (Markdown books, web scrape, bilingual works) is editorially produced. Tier 3 (poetry, song lyrics, Romance replay) is high-quality but stylistically distinct (verse, fragmented, or non-Sardinian). Tier 4 (GlotCC bulk crawl) is the noisiest. Use the `tier` and `source` fields to weight or filter as appropriate.

**Single-speaker review.** Sample-based quality review (~150 documents) was performed by a single native speaker of the Nuorese variant. The review confirmed acceptable overall quality but is not exhaustive across sources or variants.

**Token counts are approximate.** The `tokens` field uses a character-based heuristic from the pipeline build, not a real tokenizer pass. Counts under any specific tokenizer (e.g., Qwen2.5's BPE) may differ.

**No structural metadata for books.** Long literary works are stored either as continuous text or chapter-split into multiple documents. Section/chapter boundaries are not consistently marked.
## License
Released under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC-BY-NC-SA-4.0). See the LICENSE file for full terms.
The aggregate license is non-commercial because some constituent sources (notably the literary translations in 04_epub_books, 05_md_books, and 06_pdf_books) were included with explicit permissions limited to non-commercial use. Individual source materials may have more permissive licenses (e.g., Wikipedia content is CC-BY-SA-4.0; GlotCC is CC-BY-4.0); users wishing to use specific subsets under more permissive terms should consult the original sources directly.
## Citation
```bibtex
@misc{llimba2026,
  title = {LLiMba: Sardinian on a Single GPU - Adapting a 3B Language Model to a Vanishing Romance Language},
  author = {Luca Ballore},
  year = {2026},
  eprint = {2605.},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2605.09015}
}

@misc{llimba-corpus,
  title = {LLiMba Pretraining Corpus},
  author = {Luca Ballore},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/lballore/llimba-corpus}
}
```
## Acknowledgements
Native speaker review by the author. Source web texts come from salimbasarda.net, istorias.it, sardumatica.net, limbasardasudsardigna.it, and lacanas.it. Sardinian Wikipedia editors, the contributors to the Sardigna in Limba poetry anthologies, and the translators of literary works into Sardinian made this corpus possible. Romance replay data drawn from the Italian, Spanish, Portuguese, and Catalan Wikipedias.