Glossapi Greek Nanochat Pretraining Dataset
This dataset is a unified Greek pretraining corpus assembled from the local GlossAPI collection plus four selected external sources:

- HuggingFaceFW/finewiki (el)
- HuggingFaceFW/finepdfs-edu (ell_Grek)
- AI-team-UoA/greek_legal_code
- OPUS/OpenSubtitles-el-v2018 (monolingual Greek subtitles)
The default dataset view is the broad canonical corpus under data/*.parquet. This repo also carries builder-facing extras such as dedup metadata, planning manifests, and reconstruction scripts for nanochat-style subset construction.
Scope
- Total rows: 810638
- Included source datasets: 19
- Explicitly excluded from the canonical corpus: `95k_deigma_ellinikis`
Dedup Metadata
This repo also carries builder-facing dedup metadata under:

- `dedup_metadata/latest.json`
- `dedup_metadata/full_publish_aws_strip_20260328T101312Z/`
These artifacts are optional downstream builder inputs. They do not mutate the default dataset rows in data/*.parquet.
The intended builder flow is:
- load base rows from `data/*.parquet`
- resolve the current dedup bundle from `dedup_metadata/latest.json`
- apply a builder-time dedup policy such as `annotate`, `drop_intra`, or `drop_intra_and_inter`
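The builder flow above can be sketched in Python. The dedup bundle's JSON schema is not documented in this card, so the `intra_duplicate_ids` key and the row shape below are illustrative assumptions, not the real format:

```python
def apply_dedup_policy(rows, bundle, policy="annotate"):
    """Apply a builder-time dedup policy to base rows.

    `bundle` stands in for the parsed contents of dedup_metadata/latest.json;
    the `intra_duplicate_ids` key is a hypothetical schema, used only to
    illustrate the annotate/drop flow.
    """
    dup_ids = set(bundle.get("intra_duplicate_ids", []))
    if policy == "annotate":
        # Keep every row, but mark the duplicates.
        return [{**r, "is_duplicate": r["source_doc_id"] in dup_ids} for r in rows]
    if policy in ("drop_intra", "drop_intra_and_inter"):
        # Inter-source dedup would consult additional bundle keys; elided here.
        return [r for r in rows if r["source_doc_id"] not in dup_ids]
    raise ValueError(f"unknown policy: {policy}")

rows = [{"source_doc_id": "doc-a"}, {"source_doc_id": "doc-b"}]
bundle = {"intra_duplicate_ids": ["doc-b"]}
print(apply_dedup_policy(rows, bundle, policy="drop_intra"))
# → [{'source_doc_id': 'doc-a'}]
```

The point of the sketch is the policy split: `annotate` preserves all rows and defers the decision, while the drop policies materialize a smaller subset at build time.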
The configs block in this card keeps default loading scoped to data/*.parquet, so the extra dedup parquet files do not become part of the default dataset row view.
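That scoping rule is simple to state: only files matching `data/*.parquet` belong to the default view. A minimal stdlib sketch over a flat list of repo paths (the shard name below is hypothetical):

```python
from fnmatch import fnmatch

def default_view_files(repo_files):
    """Keep only the default dataset view: parquet shards under data/."""
    return [p for p in repo_files if fnmatch(p, "data/*.parquet")]

repo_files = [
    "data/train-00000.parquet",     # hypothetical shard name
    "dedup_metadata/latest.json",
    "scripts/text_dedup.py",
    "row_counts.csv",
]
print(default_view_files(repo_files))
# → ['data/train-00000.parquet']
```

Any parquet file that lives under `dedup_metadata/` fails the pattern, which is exactly why the builder extras never leak into the default row view.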
Canonical Columns
`source_dataset`, `source_doc_id`, `text`, `title`, `author`, `source_metadata_json`, `is_historical_or_polytonic`, `contains_math`, `contains_latex`, `greek_percentage`, `latin_percentage`, `polytonic_ratio`, `table_ratio`, `greek_badness_score`, `mojibake_badness_score`, `needs_ocr`, `is_empty`, `filter`, `ocr_success`, `quality_method`, `reevaluated_at`
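A small helper for sanity-checking that a loaded row carries the canonical schema; the column list is copied from above, and the row-dict shape is an assumption about how rows are consumed downstream:

```python
CANONICAL_COLUMNS = [
    "source_dataset", "source_doc_id", "text", "title", "author",
    "source_metadata_json", "is_historical_or_polytonic", "contains_math",
    "contains_latex", "greek_percentage", "latin_percentage",
    "polytonic_ratio", "table_ratio", "greek_badness_score",
    "mojibake_badness_score", "needs_ocr", "is_empty", "filter",
    "ocr_success", "quality_method", "reevaluated_at",
]

def missing_columns(row):
    """Return the canonical columns absent from a row dict."""
    return [c for c in CANONICAL_COLUMNS if c not in row]
```

A complete row yields an empty list; anything else names exactly which canonical fields a source or join failed to populate.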
Metadata Notes
- `title` and `author` are the only normalized source metadata fields promoted to top-level canonical columns.
- Pipeline-generated filenames and synthetic processing identifiers are excluded from `source_metadata_json`.
- Some datasets have weak metadata or unresolved joins. Good text is still kept even when metadata are sparse.
- `contains_math` and `contains_latex` are content-analysis flags, not source metadata.
Prepared-Source Notes
- `row_counts.csv` and `validation_summary.csv` describe the broad canonical corpus under `data/*.parquet`.
- `prepare_manifest.json` is a builder-oriented planning artifact with file-level text metrics for a prepared/source-ready subset view.
- Because `prepare_manifest.json` is builder-oriented, its totals may differ from the broad-corpus totals reported in `row_counts.csv`.
Quality Notes
- The canonical upload is intentionally broad. CLI-side filtering is expected for training subsets.
- For `openarchives.gr`, stricter filtering such as `needs_ocr == false` plus `greek_badness_score < 25` is a downstream mix choice, not a canonical-upload exclusion.
- `greek_percentage` is derived where needed, typically from existing Latin-percentage signals.
- Rust badness scores are more reliable for modern Greek cleanliness than for older/polytonic/liturgical corpora.
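The `needs_ocr == false` plus `greek_badness_score < 25` example above translates directly into a downstream filter. A sketch of that mix choice over row dicts carrying the canonical columns:

```python
def stricter_subset(rows, max_badness=25):
    """Downstream mix choice described in the card: keep rows that do not
    need OCR and score below the badness threshold. This is a
    mixture-design knob, not a canonical-upload rule."""
    return [
        r for r in rows
        if not r["needs_ocr"] and r["greek_badness_score"] < max_badness
    ]

rows = [
    {"needs_ocr": False, "greek_badness_score": 3.0,  "text": "clean"},
    {"needs_ocr": True,  "greek_badness_score": 3.0,  "text": "scan"},
    {"needs_ocr": False, "greek_badness_score": 40.0, "text": "noisy"},
]
print([r["text"] for r in stricter_subset(rows)])
# → ['clean']
```

Because the canonical upload stays broad, looser or stricter variants of this filter can be swapped in per training mix without re-publishing the dataset.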
Included Source Row Counts
| source_dataset | row_count | file_count |
|---|---|---|
| 1000_prwta_xronia_ellhnikhs | 1016 | 1 |
| AI-team-UoA/greek_legal_code | 47563 | 1 |
| Apothetirio_Kallipos | 4827 | 1 |
| Apothetirio_Pergamos | 15241 | 1 |
| Ekklisiastika_Keimena | 675 | 1 |
| Ellinika_Keimena_Project_Gutenberg | 214 | 1 |
| HuggingFaceFW/finepdfs-edu | 209039 | 1 |
| HuggingFaceFW/finewiki | 242517 | 1 |
| OPUS/OpenSubtitles-el-v2018 | 143441 | 1 |
| Sxolika_vivlia | 123 | 1 |
| Wikisource_Greek_texts | 5394 | 1 |
| dimodis_logotexnia | 11 | 1 |
| ellinika_dedomena_europaikou_koinovouliou | 28723 | 1 |
| eurlex-greek-legislation | 22694 | 1 |
| greek_phd | 37229 | 2 |
| klasikh_arx_ell_grammateia | 815 | 1 |
| openarchives.gr | 46000 | 1 |
| openbook_gr | 3719 | 1 |
| opengov.gr-diaboyleuseis | 1397 | 1 |
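The per-source counts in the table sum to the corpus total reported under Scope; a quick consistency check (all numbers copied from this card):

```python
row_counts = {
    "1000_prwta_xronia_ellhnikhs": 1016,
    "AI-team-UoA/greek_legal_code": 47563,
    "Apothetirio_Kallipos": 4827,
    "Apothetirio_Pergamos": 15241,
    "Ekklisiastika_Keimena": 675,
    "Ellinika_Keimena_Project_Gutenberg": 214,
    "HuggingFaceFW/finepdfs-edu": 209039,
    "HuggingFaceFW/finewiki": 242517,
    "OPUS/OpenSubtitles-el-v2018": 143441,
    "Sxolika_vivlia": 123,
    "Wikisource_Greek_texts": 5394,
    "dimodis_logotexnia": 11,
    "ellinika_dedomena_europaikou_koinovouliou": 28723,
    "eurlex-greek-legislation": 22694,
    "greek_phd": 37229,
    "klasikh_arx_ell_grammateia": 815,
    "openarchives.gr": 46000,
    "openbook_gr": 3719,
    "opengov.gr-diaboyleuseis": 1397,
}
assert len(row_counts) == 19               # "Included source datasets: 19"
assert sum(row_counts.values()) == 810638  # "Total rows: 810638"
```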
Intended Use
- Greek pretraining experiments
- Corpus filtering and mixture design
- Text-only export for nanochat-style training pipelines
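The text-only export can be sketched with the standard library; the JSONL shape (one `{"text": ...}` object per line) is an assumption about what a nanochat-style pipeline consumes, not a documented format of this repo:

```python
import json
import os
import tempfile

def export_text_jsonl(rows, path):
    """Write a text-only JSONL view, skipping rows with empty text."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            if row.get("text"):
                f.write(json.dumps({"text": row["text"]}, ensure_ascii=False) + "\n")

rows = [{"text": "Καλημέρα κόσμε"}, {"text": ""}, {"text": "Δεύτερο έγγραφο"}]
path = os.path.join(tempfile.mkdtemp(), "greek_text.jsonl")
export_text_jsonl(rows, path)
with open(path, encoding="utf-8") as f:
    print(sum(1 for _ in f))
# → 2 (the empty-text row is skipped)
```

`ensure_ascii=False` keeps the Greek text readable in the output file instead of escaping it to `\uXXXX` sequences.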
Experiment Reconstruction
The repo also includes builder-facing scripts under:
- `scripts/prepare_glossapi_greek_experiment_data.py`
- `scripts/summarize_glossapi_greek_experiment_data.py`
- `scripts/text_dedup.py`
- `glossapi_corpus_cli/pipeline.py`
- `rust_reevaluate_pdf_datasets.py`
These scripts are convenience snapshots for reconstruction and inspection. The dedup bundle itself remains versioned under dedup_metadata/.
Limitations
- This dataset mixes modern, historical, polytonic, legal, academic, educational, repository, and subtitle text.
- Source licenses are mixed and remain governed by the upstream datasets and repositories.
- Not every source has equally strong metadata quality or equally strong OCR/noise diagnostics.
Rebuild
See BUILD_REPLICATION.md in this repo for the broad-corpus staging flow and the builder-facing overlay notes.